Evaluation of China's OFDI Efficiency in Key Countries along "the Belt and Road" Based on DEA Model and Malmquist Index

With the continuous promotion and development of "the Belt and Road" construction, economic cooperation and exchanges between China and the countries along "the Belt and Road" have gradually deepened, and outward foreign direct investment (OFDI) is one of the important channels of economic exchange between China and other countries. The efficiency of outward investment is therefore receiving more and more attention from all walks of life. Based on panel data on China's direct investment in 26 key countries along "the Belt and Road" from 2014 to 2020, this paper uses the DEA model and the Malmquist index to conduct a comprehensive analysis and evaluation of China's OFDI efficiency. The DEA results show that China's OFDI efficiency in key countries along "the Belt and Road" is low to medium, and that cross-country differences in investment efficiency are mainly caused by low scale efficiency (SE). The Malmquist index results show that the efficiency of China's investment in key countries along "the Belt and Road" was on a declining trend over 2014-2020, and that investment efficiency decreased significantly in 2019-2020 due to the impact of the COVID-19 pandemic. Therefore, China should further improve the scale efficiency of OFDI, strive to develop advanced technology, encourage innovation, improve the technical progress index, optimize the choice of investment regions, and stimulate investment vitality.

Introduction and Literature Review

In 2013, President Xi Jinping put forward the strategic concept of innovating cooperation models and jointly building "the Silk Road Economic Belt" and "the 21st Century Maritime Silk Road", aiming to build a new pattern of regional economic cooperation between China and Central Asian countries with "the Silk Road Economic Belt" as the central axis. This strategic concept has received great attention from the international community. The continuous development of "the Belt and Road" in recent years has driven the advancement of China's outward foreign direct investment (OFDI). Since "the Belt and Road" initiative was proposed, China has gradually increased its direct investment in countries along the route. According to the 2020 Statistical Bulletin of China's Outward Foreign Direct Investment, China's net outward foreign direct investment reached USD 153.71 billion in 2020, up 12.3% year-on-year. Specifically, by the end of 2020, Chinese domestic investors had set up overseas enterprises in 63 countries along "the Belt and Road", achieving USD 22.54 billion of direct investment in that year, a year-on-year increase of 20.6%, accounting for 14.7% of China's OFDI flow in the same period and 1 percentage point higher than the previous year. From 2013 to 2020, China's cumulative direct investment in countries along the route amounted to USD 139.85 billion and covered a wide range of destinations, including 61 countries and regions such as Singapore, Indonesia, the United Arab Emirates and Algeria, which shows that "the Belt and Road" initiative has made outstanding contributions to the economic development of China and the countries along the route. With China having surpassed Japan to become the world's second largest economy, and now also the world's second largest OFDI country, the role of China's OFDI will become more prominent, and China's direct investment in countries along "the Belt and Road" has become an important driver of these countries' economic development.
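As a quick check of the bulletin figures just cited (my own arithmetic, not part of the original statistics), the 14.7% share can be recomputed directly from the two flow numbers:

```python
# Sanity check of the 2020 bulletin arithmetic quoted in the introduction.
bri_flow_2020 = 22.54      # USD billion, flow to Belt and Road countries
total_flow_2020 = 153.71   # USD billion, total net OFDI flow

share = bri_flow_2020 / total_flow_2020
print(f"Belt and Road share of 2020 OFDI flow: {share:.1%}")  # -> 14.7%

# The reported 20.6% year-on-year increase implies a 2019 flow of roughly:
implied_2019 = bri_flow_2020 / 1.206
print(f"Implied 2019 flow: {implied_2019:.2f} USD billion")
```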
Therefore, China's OFDI efficiency in countries along "the Belt and Road" has high research value and has received wide attention from academia, and many scholars have conducted in-depth research on it from various angles. In addition, since the outbreak of the COVID-19 pandemic in 2019, China's OFDI projects have been affected to a certain extent. Therefore, in this special period, how to cope with the challenges that the pandemic poses to OFDI and how to improve the efficiency of OFDI have become new issues of common concern for policy makers and academic researchers.

Academic research on OFDI along "the Belt and Road" has been fruitful, and the research directions and approaches are rich and extensive. At present, the research on China's OFDI to countries along "the Belt and Road" can be divided into the following strands. The first consists of studies of the efficiency of China's outward investment in countries along "the Belt and Road" using the stochastic frontier gravity model, which has been widely used in related studies. For example, Li Kaiwen and Zhou Ji [1] selected panel data for 52 countries along "the Belt and Road" from 2003 to 2014, established a stochastic frontier gravity model and conducted an empirical analysis, concluding that the efficiency of China's OFDI to countries along "the Belt and Road" is low and there is still much room for improvement. Song Lin, Xie Wei and Zheng Wen [2] used a heterogeneous stochastic frontier gravity model to measure the potential and efficiency of Chinese OFDI, and found that there is serious underinvestment in "the Belt and Road" region despite its high investment potential, and that the loss of investment efficiency in the "Silk Road Economic Belt" is more serious. By constructing a host-country financial ecology evaluation system and a stochastic frontier gravity model, Hu Bing and Wang Xiaofang [3] pointed out that OFDI suffers efficiency losses in the region, and that improvements in a host country's economic, political and credit environment expose investors to more market competition, whose negative effect is significantly greater than the positive effect of the financial environment on OFDI efficiency. Yan Jiajia, Liu Yongfu and He Yi [4] argued that the efficiency of China's direct investment is at a low level, that the efficiency of investment in developing countries is higher than in developed countries, and that it shows a positive time-varying effect, with smooth growth and spatial convergence, while the overall level is gradually improving. Xiong Bin and Fan Yaya [5] examined the efficiency and potential of China's investment in countries along "the Belt and Road" and its influencing factors from the two aspects of investment stock and flow, and showed that the efficiency of China's investment in the countries along "the Belt and Road" during 2003-2016 was highly imbalanced. The second strand evaluates and analyzes the efficiency of China's OFDI to the countries along "the Belt and Road" using the DEA model. For example, Ni Kun and Wang Lei [6] measured the efficiency of OFDI based on the DEA-Malmquist index and concluded that China's overall investment efficiency and technical efficiency are low and need to be improved. Li Jigang and Ma Yong [7] found that since "the Belt and Road" initiative was proposed, the overall investment efficiency of listed companies along the route has been on the rise, and the policy has an obvious positive effect on the investment efficiency of listed companies along the route.
Tian Ze and Xu Dongmei [8] used the super-efficiency DEA and Malmquist index method to comprehensively evaluate investment efficiency and its changes, and their research indicates that China's investment in the countries along the route is not highly efficient in general and that there are large differences between countries; the returns to scale of investment in most countries are at the effective or increasing stage. The third strand focuses on the efficiency of OFDI in certain countries or regions along "the Belt and Road". For example, Tian Ze, Wang Yili and Jin Shuiying [9] argued that the efficiency of China's investment in key energy countries in Africa is not high: investment in only a few countries reaches an effective level, while investment efficiency in most countries is at a low to medium level. In terms of investment risk, Liu Jiaguo, Ye Zhening and Ding Jingjing [10] found that China's investment efficiency in countries along the "21st Century Maritime Silk Road" is significantly negatively correlated with investment risk, and that both high-risk and higher-risk countries show diminishing returns to scale, non-optimal investment efficiency and irrational investment. In addition, a small number of scholars have chosen research directions or methods that differ from the above-mentioned literature. For example, Li Bing and Tian Shihui [11] used a DEA model to measure the efficiency of IFDI in countries along the route, and concluded that the efficiency of IFDI varied greatly from country to country and that, after "the Belt and Road" initiative was proposed, investment efficiency improved significantly. Besides, Zu Yu and Li Zongming [12] analyzed panel data through a gravity model and showed that Chinese enterprises have a positive impact on the governance of host countries along "the Belt and Road".

In summary, current research on the efficiency of China's investment in countries along "the Belt and Road" mostly relies on stochastic frontier gravity models, while less of the literature applies the DEA model. Most studies are based on data from the years when "the Belt and Road" initiative had just been proposed and have not been updated in recent years. In addition, the outbreak of the COVID-19 pandemic in 2019 has brought many uncertainties to China's OFDI, which has been affected to a certain extent. However, little attention has been paid to the data on, and efficiency changes of, China's OFDI to countries along "the Belt and Road" after the outbreak of the pandemic. Therefore, in the context of the comprehensive promotion of "the Belt and Road" initiative and the outbreak of the COVID-19 pandemic, studying how to improve the efficiency of China's direct investment in countries along the route is of great significance for breaking the bottleneck of China's OFDI and smoothly promoting "the Belt and Road" initiative. Therefore, drawing on the existing literature, this paper uses the DEA and Malmquist index methods to comprehensively evaluate the efficiency of OFDI and its changes based on panel data on China's direct investment in 26 key countries along "the Belt and Road" from 2014 to 2020, analyzes the impact of the COVID-19 pandemic on China's direct investment along "the Belt and Road", and puts forward corresponding suggestions.
It aims to provide a reference for further related research through the comprehensive evaluation of OFDI efficiency, and to offer effective suggestions for improving the efficiency of China's direct investment in countries along "the Belt and Road" and driving the common development of "the Belt and Road" economic zone.

Theoretical Basis of Static Efficiency Measurement - Data Envelopment Analysis (DEA) Model

The Data Envelopment Analysis (DEA) model is a non-parametric estimation method for evaluating the relative effectiveness of decision-making units (DMUs) of the same type in the same period, and is often used to measure efficiency. The DEA model uses data on input and output indicators to derive the production frontier, and determines whether a DMU is DEA effective by comparing the distance between the decision-making unit and the production frontier. The basic DEA models are of two types, the CCR and BCC models; the CCR model measures the efficiency of each decision-making unit (DMU) under the assumption of constant returns to scale. However, returns to scale often do not remain constant in real life, but may be increasing or decreasing. Therefore, Banker, Charnes, and Cooper improved the CCR model in 1984 by establishing a model that measures efficiency under variable returns to scale; this is the BCC model used in this paper, which takes the following form (based on input orientation):

min [ θ − ε( Σ_i s_i^− + Σ_r s_r^+ ) ]

s.t.  Σ_{j=1}^{n} λ_j x_j + s^− = θ x_0,
      Σ_{j=1}^{n} λ_j y_j − s^+ = y_0,
      Σ_{j=1}^{n} λ_j = 1,
      λ_j ≥ 0, s^− ≥ 0, s^+ ≥ 0,

where x_j and y_j are the input vector and output vector of the jth decision-making unit respectively, θ is the efficiency evaluation value of the unit under evaluation (with inputs x_0 and outputs y_0), λ_j is the planning decision variable, s^+ and s^− are the output slack variable and the input slack variable respectively, and ε is the non-Archimedean infinitesimal. When θ = 1 and s^+ = s^− = 0, the decision-making unit is DEA effective, and input minimization and output maximization are achieved in the current decision-making unit system. When θ = 1 and s^+ > 0 or s^− > 0, the decision-making unit is weakly DEA effective: it is effective in terms of pure technical efficiency, but the scale of inputs and outputs is not matched, and the scale needs to be increased or decreased. When θ < 1, the decision-making unit is not DEA effective and is non-optimal, and there is still room for improvement. In the BCC model, Technical Efficiency (TE) can be decomposed into Pure Technical Efficiency (PTE) and Scale Efficiency (SE), and the relationship between the three can be expressed as TE = PTE × SE. In summary, considering the reality that returns to scale vary in China's OFDI, the BCC model is chosen to better approximate the real efficiency of China's OFDI, and the decomposition of overall OFDI efficiency helps identify the composition of efficiency and provide guidance for future development.
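To make the model above concrete, here is a minimal sketch of how the input-oriented BCC and CCR programmes can be solved with scipy, so that TE, PTE and SE = TE/PTE are recovered for each DMU. This is an illustration only (the paper itself uses the DEAP software), and the input/output matrix is made up, not the paper's data.

```python
# Minimal input-oriented DEA sketch (illustrative data, not the paper's).
# PTE comes from the BCC programme (variable returns to scale),
# TE from the CCR programme (constant returns to scale), and SE = TE / PTE.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2., 3., 4., 5.],   # inputs:  rows = input indicators, cols = DMUs
              [1., 2., 3., 2.]])
Y = np.array([[3., 5., 6., 7.],   # outputs: rows = output indicators, cols = DMUs
              [2., 4., 4., 6.]])

def efficiency(k, vrs):
    """Input-oriented efficiency of DMU k; vrs=True gives BCC, False gives CCR."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1); c[0] = 1.0                    # minimise theta
    A_ub = np.zeros((m + s, n + 1)); b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, k]; A_ub[:m, 1:] = X           # sum lam*x <= theta * x_k
    A_ub[m:, 1:] = -Y; b_ub[m:] = -Y[:, k]             # sum lam*y >= y_k
    A_eq = b_eq = None
    if vrs:
        A_eq = np.zeros((1, n + 1)); A_eq[0, 1:] = 1.0 # sum lam = 1 (BCC only)
        b_eq = [1.0]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[0]

for k in range(X.shape[1]):
    te, pte = efficiency(k, vrs=False), efficiency(k, vrs=True)
    print(f"DMU {k}: TE={te:.3f}  PTE={pte:.3f}  SE={te/pte:.3f}")
```

Dropping the convexity constraint Σλ_j = 1 turns the BCC programme into the CCR programme, which is exactly why SE can be read off as the ratio of the two scores.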
Basis of Dynamic Efficiency Exploration - Malmquist Index

The Malmquist index was first proposed by Malmquist in 1953 and then applied to measuring productivity by Caves et al. in 1982. The advantage of the Malmquist index is its ability to measure the total factor productivity of a decision-making unit from a dynamic perspective. Based on the aforementioned BCC and CCR models, the Malmquist index can be constructed to analyze the dynamic changes of China's OFDI efficiency in the countries along "the Belt and Road" during 2014-2020. The Malmquist index M(x^{t+1}, y^{t+1}, x^t, y^t) from period t to period t+1 is expressed by the following equation:

M(x^{t+1}, y^{t+1}, x^t, y^t) = [ (D^t(x^{t+1}, y^{t+1}) / D^t(x^t, y^t)) × (D^{t+1}(x^{t+1}, y^{t+1}) / D^{t+1}(x^t, y^t)) ]^{1/2},   (2)

where (x^{t+1}, y^{t+1}) and (x^t, y^t) denote the input and output vectors in periods t+1 and t respectively, and D^t and D^{t+1} denote the distance functions of the objects examined with the technology levels of periods t and t+1 as the reference. In equation (2), when M < 1, total factor productivity decreases from period t to t+1; when M = 1, total factor productivity is constant from period t to t+1; when M > 1, total factor productivity increases from period t to t+1. Further decomposition of the above equation leads to the EFC index and the TEC index, whose expressions are

EFC = D^{t+1}(x^{t+1}, y^{t+1}) / D^t(x^t, y^t),

TEC = [ (D^t(x^{t+1}, y^{t+1}) / D^{t+1}(x^{t+1}, y^{t+1})) × (D^t(x^t, y^t) / D^{t+1}(x^t, y^t)) ]^{1/2},

where EFC is the technical efficiency index: when EFC < 1, technical efficiency decreases and the distance between the DMU and the production frontier grows; when EFC = 1, technical efficiency remains unchanged, and so does the distance between the DMU and the production frontier; when EFC > 1, technical efficiency improves and the DMU is drawn closer to the production frontier. TEC is the technical progress index. When TEC < 1, technology declines and the production possibility boundary of the whole industry moves inward; when TEC = 1, technology remains unchanged and the production possibility boundary remains unchanged; when TEC > 1, technology advances and the production possibility boundary moves outward.

Evaluation Object

Based on the current situation of China's FDI in countries along "the Belt and Road", this paper strictly follows the sample-size requirements of the DEA model and takes into account the availability and validity of data: the sample size should be at least twice the product of the numbers of input and output indicators (two input indicators and four output indicators are selected in this paper). In selecting the evaluation objects, reference is made to the research results of Zhong Feiteng et al. [13], and the host countries ranking in the top 26 in terms of OFDI stock along "the Belt and Road" at the end of 2020 have been selected as the evaluation objects (DMUs), which are strongly representative. The data cover the period from 2014 to the end of 2020, over which the OFDI efficiency of the 26 host countries is measured and analyzed.

Evaluation Indicators

Based on the principle that the selected indicators should be feasible and practically meaningful, the study selects China's direct investment stock in the host country (USD million) and the total labor force of the host country (thousand people) as input indicators for the analysis of China's OFDI efficiency. Host country GDP (USD million), GDP per capita (USD), fiscal revenue (USD million), and total import and export trade (USD million) are selected as output indicators. The relevant data are obtained from the Statistical Bulletin of China's Outward Foreign Direct Investment, the World Bank database, and the IMF database. The specific indicators are described as follows. 1. The stock of China's direct investment in the host country reflects the input of China's OFDI from the perspective of capital. 2. The total labor force of the host country reflects the host country's inputs from a labor force perspective. 3. Host country GDP and GDP per capita measure the economic development of each country, indicating the total economic volume and economic strength of the host country and thus reflecting the output level of OFDI. 4. Host country fiscal revenue indicates government revenue, and thus the impact of OFDI on government revenue output.
5. The total import and export trade of the host country reflects the host country's foreign trade and measures the efficiency of OFDI from a macroeconomic perspective.

Evaluation of Investment Efficiency Based on DEA Model

Using the input-oriented BCC model, the collected data for each indicator are fed into the DEAP software to calculate the technical efficiency (TE), pure technical efficiency (PTE), scale efficiency (SE) and returns to scale (RTS) of China's OFDI to the 26 key countries along "the Belt and Road" from 2014 to 2020. The results are shown in Table 3. Based on these data, the analysis is as follows. 1. The overall OFDI efficiency of China in key countries along "the Belt and Road" is at a low to medium level. According to the technical efficiency (TE) analysis, only eight countries, namely the UAE, Czech Republic, Kuwait, Singapore, Israel, Saudi Arabia, Turkey and India, have an OFDI efficiency of 1, that is, are DEA effective, accounting for 30.8% of the 26 countries. Apart from these, China's investment efficiency in the other countries remains at a low to medium level, and they account for nearly 70% of the total. In the study of Tian Ze and Xu Dongmei [8], the overall investment efficiency of China in countries along "the Belt and Road" from 2008 to 2014 was also at a low to medium level, which indicates that the OFDI efficiency of China in key countries along "the Belt and Road" has not developed significantly in recent years and needs to be further improved. 2. According to the pure technical efficiency (PTE) analysis, a total of 10 countries, such as the UAE, Czech Republic, and Kuwait, have remained DEA effective (efficiency value = 1), accounting for 38%, a higher share than for technical efficiency (TE). Among the remaining countries, only the PTE of China's OFDI to Egypt and Vietnam increased significantly in 2014-2020. The PTE of China's OFDI to Laos, Malaysia, and Uzbekistan is decreasing instead of increasing, while that of the other countries is steadily increasing but at a slower growth rate, and investment efficiency still remains at a low to medium level. 3. According to the scale efficiency (SE) analysis, the scale efficiency of China's OFDI to the UAE, Czech Republic, Kuwait, Singapore and Israel has remained DEA effective; these countries account for about 19% of the 26. In addition, the scale efficiency of China's investments in Saudi Arabia, Turkey and India improved significantly over the study period and reached DEA effectiveness in 2020. However, the scale efficiency of China's OFDI to 7 countries is decreasing, which accounts for 27% of the 26 countries. 4. China's OFDI efficiency in key countries along "the Belt and Road" is on a growing trend, but growing slowly. China's OFDI efficiency in the 26 key countries along "the Belt and Road" shows a growing trend during 2014-2020: only three countries, Kazakhstan, Malaysia and Tajikistan, show a decreasing trend, accounting for 11.5% of the 26 countries, while OFDI efficiency in the remaining 23 countries shows a growing trend. But among the growing countries, only a few, such as Egypt, Saudi Arabia, Turkey, and India, are increasing rapidly, and they account for only 15% of the 26 countries. The OFDI efficiency of the remaining 19 countries is increasing only slowly, and investment efficiency still remains at a low to medium level. 5. The share of countries that are DEA effective in terms of PTE is much greater than the share that are DEA effective in terms of SE.
According to the formula TE = PTE × SE, we can infer that the reason why China's OFDI efficiency in many countries cannot reach DEA effectiveness is that scale efficiency does not reach 1. Therefore, China should pay more attention to the scale efficiency of OFDI and try to improve it in its future foreign investment.

Efficiency Dynamization Analysis Based on Malmquist Index

The Malmquist index can dynamically reflect the changing trend of China's OFDI efficiency in countries along "the Belt and Road" by year and by region, and the DEAP software is used to analyze the data on China's OFDI to the 26 host countries along "the Belt and Road" from 2014 to 2020. The evaluation results are as follows.

Analysis of the Average Change in Overall Investment Efficiency

The average value of the Malmquist index of China's OFDI to the 26 countries along "the Belt and Road" from 2014 to 2020 is 0.955, which indicates that the efficiency of China's OFDI to countries along "the Belt and Road" in 2020 decreased compared with 2014. By decomposing the Malmquist index, we get M = EFC × TEC; that is, the Malmquist index is decomposed into the product of the technical efficiency index (EFC) and the technical progress index (TEC). The average value of the technical efficiency index is 1.131 and that of the technical progress index is 0.906. We can therefore conclude that the decrease in the efficiency of China's investment in key countries along "the Belt and Road" in 2014-2020 is due to the dynamic change in the rate of technical progress, which decreased by 9.4% over 2014-2020, while technical efficiency increased by 13.1%; the improvement in technical efficiency thus slowed, to a certain extent, the decline in the efficiency of China's investment in the key countries along "the Belt and Road". The above analysis shows that the technical efficiency of China's OFDI in 2014-2020 was enhanced under the existing technology, with factor allocation optimized and the factor utilization rate improved. However, the rate of technical progress is on a decreasing trend, which implies that technical progress is still a key factor restricting the development and improvement of OFDI efficiency, and China needs to further improve the rate of technical progress.
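The decomposition just used can be made concrete in a few lines of code. The sketch below is a minimal illustration (mine, not the paper's DEAP workflow) of how M, EFC and TEC combine from the four distance-function values that a pair of CCR runs would produce; the numerical inputs are made up for illustration.

```python
# Sketch of the Malmquist decomposition M = EFC * TEC.
# D_t_t   = D^t(x^t, y^t)        D_t_t1  = D^t(x^{t+1}, y^{t+1})
# D_t1_t  = D^{t+1}(x^t, y^t)    D_t1_t1 = D^{t+1}(x^{t+1}, y^{t+1})
# The values below are illustrative; in practice they come from DEA runs.
from math import sqrt

def malmquist(D_t_t, D_t_t1, D_t1_t, D_t1_t1):
    M = sqrt((D_t_t1 / D_t_t) * (D_t1_t1 / D_t1_t))
    EFC = D_t1_t1 / D_t_t                                # catch-up effect
    TEC = sqrt((D_t_t1 / D_t1_t1) * (D_t_t / D_t1_t))    # frontier shift
    return M, EFC, TEC

M, EFC, TEC = malmquist(0.80, 0.90, 0.85, 0.88)
print(f"M={M:.3f}  EFC={EFC:.3f}  TEC={TEC:.3f}  EFC*TEC={EFC*TEC:.3f}")
```

Multiplying out EFC × TEC recovers M exactly, which is the identity the yearly analysis below relies on.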
Analysis of the Average Change in Annual Investment Efficiency

Calculation shows that the OFDI efficiency of China in key countries along "the Belt and Road" from 2014 to 2020 demonstrates a fluctuating trend, with an upward trend from 2014 to 2017, a decline from 2017 to 2018, a renewed rise from 2018 to 2019, and a significant decline from 2019 to 2020. The specific values and changes are shown in Table 3 and Figure 1. A detailed analysis of the Malmquist index and its decomposition follows. 1. Investment efficiency increased year by year from 2014 to 2017, with increases of 9.1% and 6.7%, both due to increases in the technical progress index (TEC), which rose by 10% and 33.1% respectively, while technical efficiency (EFC) decreased by 20.1% and 30.5% respectively, which largely hindered the improvement of foreign investment efficiency. 2. Investment efficiency declined by 4% in 2017-2018, due to a 13.1% decrease in the rate of technical progress, in contrast to a 6.8% increase in technical efficiency. 3. Investment efficiency rose again in 2018-2019, and its growth of 4.6% was the peak over the investigation period, as technical efficiency rose by 4.3% while the rate of technical progress fell by 0.5%. 4. Investment efficiency plunged by 12.6% in 2019-2020, falling approximately back to the 2014-2015 level, mainly because of a significant decline in the rate of technical progress, down by 25.8%, while technical efficiency rose by 24.6%, providing a certain buffer against the decline in investment efficiency. It can be concluded that TEC is the key factor affecting the change in investment efficiency in most cases. Therefore, attaching importance to technical progress and improving the technical progress index is a crucial choice for China to improve the efficiency of OFDI.

Trend Analysis of Investment Efficiency Changes in Each Region

The Malmquist index and its decomposition for the efficiency of China's investment in each region of the 26 key countries along "the Belt and Road" from 2014 to 2020 are shown in Table 4. The analysis is as follows. 1. The efficiency of China's investment in Southeast Asia is on an upward trend, rising by 0.9% due to a 10.1% increase in technical efficiency, despite a 6.2% decrease in the rate of technical progress. 2. China's investment efficiency in South Asia, West Asia and North Africa, East and Central Asia, and Central and Eastern Europe is on a downward trend in all cases, with investment efficiency declining by 6% in South Asia, 8.8% in West Asia and North Africa, 5.5% in East and Central Asia, and 4.6% in Central and Eastern Europe. While the technical efficiency of investment in these regions is on an upward trend, the reason for the decline in investment efficiency is the decline in the rate of technical progress. From the above analysis, we can see that the focus of China's investment in countries along "the Belt and Road" has shifted from developed countries to developing countries; investment efficiency in Southeast Asia was in a rising stage from 2014 to 2020, with greater investment potential. However, the efficiency of China's investment in West Asia and North Africa has declined significantly, indicating that investment there has a long history and a mature scale, so that, coupled with insufficient technical progress, investment efficiency is on a downward trend. Overall, the technical efficiency of China's direct investment in these five regions is on an upward trend, rising by 10.1%, 37.8%, 14.3%, 3.9% and 5.8% respectively, which contributes to the growth of investment efficiency. However, the rate of technical progress in all five regions is on a downward trend, which implies that technical progress has become an important factor hindering the growth of direct investment efficiency in countries along "the Belt and Road" and needs further attention. In the future, China should encourage technological innovation and strive to improve the rate of technical progress, which can effectively solve the current problems and further improve the efficiency of China's OFDI to the countries along "the Belt and Road".

Analysis of Investment Efficiency Changes after the COVID-19 Pandemic

At the beginning of 2020, the novel coronavirus swept the world, dealing a serious blow to people's health and negatively affecting global economic development, yet little literature has paid attention to the impact of COVID-19 on China's OFDI.
Therefore, the following analysis studies the efficiency of China's OFDI in the key countries along "the Belt and Road" before and after the outbreak of the COVID-19 pandemic. The analysis of Table 3 and Figure 1 reveals that the efficiency of China's investment in key countries decreased significantly in 2019-2020, down by 12.6% and almost back to the 2014-2015 level, the main reason being that the rate of technical progress decreased by 25.8%. Although this is due to a variety of reasons, the COVID-19 pandemic is one of the main influencing factors. First, the pandemic has exacerbated geopolitical risks, increased global economic uncertainty and restricted the flow of production factors, which has led to the suspension or termination of China's OFDI projects with a significant reduction in investment, and a serious impact on the operations of enterprises that have invested abroad in host countries. Data released by the Ministry of Commerce show that China's outward investment in 2020 achieved overall growth, with annual outward foreign direct investment of USD 132.94 billion, equivalent to RMB 916.97 billion, up 3.3% year-on-year. Although the amount of China's outward investment grew against the trend, the growth rate was still lower than in previous years. Secondly, after the outbreak, China's economy entered a phase of "inward development": the domestic market needs to be rebuilt after the epidemic, with strong demand for investment, and the government will appropriately guide a contraction of resources invested abroad and slow down the pace of economic "going out". Finally, the COVID-19 pandemic has brought huge losses to the global economy and to scientific and technological development, restricting technical progress and innovation and resulting in the 25.8% decline in the rate of technical progress, which directly affects the efficiency of China's foreign investment. At present, the COVID-19 pandemic is still prevalent worldwide. In the face of this new environment, how China's outward-investing enterprises can improve the current investment environment, optimize the allocation of resources, and strive to improve the rate of technical progress and the efficiency of foreign investment in the context of the pandemic has become one of the top issues that they need to solve.

Conclusions and Suggestions

This paper uses the DEA model and the Malmquist index to make a comprehensive evaluation of China's OFDI efficiency and its trends for the 26 key countries along "the Belt and Road", and the conclusions are as follows: 1. From the results of the DEA model, the OFDI efficiency of China in the key countries along "the Belt and Road" is low to medium, and the country differences in investment efficiency are mainly caused by low scale efficiency (SE). 2. From the results of the Malmquist index, China's overall investment efficiency in key countries along "the Belt and Road" is on a decreasing trend, which is due to the dynamic decrease in the rate of technical progress, while the increase in technical efficiency has slowed the decrease in China's OFDI efficiency in countries along "the Belt and Road" to a certain extent. 3. China's OFDI efficiency in Southeast Asia is on an increasing trend, but its investment efficiency in South Asia, West Asia and North Africa, East and Central Asia, and Central and Eastern Europe is decreasing, with the largest decrease in West Asia and North Africa.
In addition, the rate of technical progress in all five regions is decreasing, which indicates that technical progress has become an important factor hindering the growth of China's OFDI efficiency in countries along "the Belt and Road" and needs further attention. 4. The efficiency of investment in key countries along "the Belt and Road" was affected by the COVID-19 pandemic in 2019-2020, with a significant decline of 12.6%. In view of the current situation, the suggestions are as follows: 1. China should further improve the scale efficiency of OFDI, so as to enhance the efficiency of China's OFDI to countries along "the Belt and Road" in general. 2. China needs to strive to develop advanced technology, encourage innovation, and improve the technical progress index to enhance the efficiency of outward investment. 3. It is suggested to optimize the regional choice of investment, seize investment opportunities in Southeast Asia, expand the scale of investment and tap the potential for cooperation, while stimulating investment dynamism in other regions, especially West Asia and North Africa. 4. China should strengthen global cooperation in fighting the pandemic, curb the spread of the virus, improve infrastructure development, and explore investment opportunities to stimulate economic vitality in the context of the COVID-19 pandemic.
The growth exponent for planar loop-erased random walk

We give a new proof of a result of Kenyon that the growth exponent for loop-erased random walks in two dimensions is 5/4. The proof uses the convergence of LERW to Schramm-Loewner evolution with parameter 2, and is valid for irreducible bounded symmetric random walks on any two-dimensional discrete lattice.

Overview

Let S be a random walk on a discrete lattice Λ ⊂ R^d, started at the origin. The loop-erased random walk (LERW) Ŝ_n is obtained by running S up to the first exit time of the ball of radius n and then chronologically erasing its loops. The LERW was introduced by Lawler [9] in order to study the self-avoiding walk, but it was soon found that the two processes are in different universality classes. Nevertheless, LERW is extensively studied in statistical physics for two reasons. First of all, LERW is a model that exhibits many similarities to other interesting models: there is a critical dimension above which its behavior is trivial, it satisfies a domain Markov property, and it has a conformally invariant scaling limit. Furthermore, LERWs are often easier to analyze than these other models because properties of LERWs can often be deduced from facts about random walks. The other reason why LERWs are studied is that they are closely related to certain models in statistical physics like the uniform spanning tree (through Wilson's algorithm, which allows one to generate uniform spanning trees from LERWs [30]), the abelian sandpile model [6] and the b-Laplacian random walk [10] (LERW is the case b = 1). Let Gr(n) be the expected number of steps of a d-dimensional LERW Ŝ_n. Then the d-dimensional growth exponent α_d is defined to be such that

Gr(n) ≈ n^{α_d}.

For d ≥ 4, it was shown by Lawler [10,11] that α_d = 2 (roughly speaking, in these dimensions, random walks do not produce many loops and LERWs have the same growth exponent as random walks). For d = 3, numerical simulations suggest that α_3 is approximately 1.62 [1], but neither the existence of α_3 nor its exact value has been determined rigorously (it is not expected to be a rational number). In the two-dimensional case, it was shown by Kenyon [7] that α_2 exists for simple random walk on the integer lattice Z² and is equal to 5/4. His proof uses domino tilings to compute asymptotics for the number of uniform spanning trees of rectilinear regions of R² and then uses the relation between uniform spanning trees and LERW to conclude that α_2 = 5/4. In this paper, we give a substantially different proof that α_2 = 5/4. Namely, we prove

Theorem 1.1. Let S be an irreducible bounded symmetric random walk on a two-dimensional discrete lattice, started at the origin, and let σ_n be the first exit time of the ball of radius n. Let Ŝ_n be the loop-erasure of S[0, σ_n] and Gr(n) be the expected number of steps of Ŝ_n. Then Gr(n) ≈ n^{5/4}.

The proof of Theorem 1.1 uses the fact that LERW has a conformally invariant scaling limit called radial SLE_2. Radial Schramm-Loewner evolution with parameter κ ≥ 0 is a continuous random process from the unit circle to the origin in D. It was introduced by Schramm [23] as a candidate for the scaling limit of various discrete models from statistical physics. Indeed, he showed that if LERW has a conformally invariant scaling limit, then that limit must be SLE_2. In the later paper by Lawler, Schramm and Werner [20], the convergence of LERW to SLE_2 was proved.
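The chronological loop-erasing procedure is straightforward to simulate, and the growth exponent can be observed numerically. The following sketch (an illustration of the definition, not code from the paper) runs a simple random walk on Z² until it first exits the ball of radius n, erases its loops chronologically, and averages the number of remaining steps; by Theorem 1.1 these averages should grow roughly like n^{5/4}.

```python
# Monte Carlo illustration of Gr(n) ~ n^(5/4) for LERW on Z^2 (not the paper's code).
import random

def loop_erase(path):
    """Chronological loop erasure L(path): erase each loop as it closes."""
    erased, pos = [], {}
    for p in path:
        if p in pos:
            # A loop has closed at p: remove everything after p's first visit.
            for q in erased[pos[p] + 1:]:
                del pos[q]
            del erased[pos[p] + 1:]
        else:
            pos[p] = len(erased)
            erased.append(p)
    return erased

def lerw_steps(n, rng):
    """Number of steps of the loop-erasure of a walk run until |S| >= n."""
    x, y = 0, 0
    path = [(0, 0)]
    while x * x + y * y < n * n:
        dx, dy = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x, y = x + dx, y + dy
        path.append((x, y))
    return len(loop_erase(path)) - 1

rng = random.Random(0)
for n in (10, 20, 40, 80):
    mean = sum(lerw_steps(n, rng) for _ in range(200)) / 200
    print(f"n={n:3d}  Gr(n) estimate={mean:8.1f}  ratio to n^1.25={mean / n**1.25:.2f}")
```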
Other models known to scale to SLE include the uniform spanning tree Peano curve (κ = 8, Lawler, Schramm and Werner [20]), the interface of the Ising model at criticality (κ = 16/3, Smirnov [26]), the harmonic explorer (κ = 4, Schramm and Sheffield [24]), the interface of the discrete Gaussian free field (κ = 4, Schramm and Sheffield [25]), and the interface of critical percolation on the triangular lattice (κ = 6, Smirnov [27] and Camia and Newman [4,5]). There is also strong evidence to suggest that the self-avoiding walk converges to SLE_{8/3}, but so far, attempts to prove this have been unsuccessful [21]. One of the reasons to show convergence of discrete models to SLE is that properties and exponents for SLE are usually easier to derive than those for the corresponding discrete model. It is also widely believed that the discrete model will share the exponents of its corresponding SLE scaling limit. However, the equivalence of exponents between the discrete models and their scaling limits is not immediate. For instance, Lawler and Puckette [17] showed that the exponent associated to the non-intersection of two random walks is the same as that for the non-intersection of two Brownian motions. In the case of discrete models converging to SLE, different techniques must be used, since the convergence is weaker than the convergence of random walks to Brownian motion. To the author's knowledge, the derivation of arm exponents for critical percolation from disconnection exponents for SLE_6 by Lawler, Schramm and Werner [19] and Smirnov and Werner [28] is the only other example of exponents for a discrete model being derived from those for its SLE scaling limit. There are three main reasons for giving a new proof that α_2 = 5/4. The first is to give another example where an exponent for a discrete model is derived from its corresponding SLE scaling limit. The second reason is that the convergence of LERW to SLE_2 holds for a general class of random walks on a broad set of lattices. This allows us to establish the exponent 5/4 for irreducible bounded symmetric random walks on discrete lattices of R², thereby generalizing Kenyon's result, which holds only for simple random walks on Z². Finally, in the course of the proof we establish some facts about LERWs that are of interest on their own. Indeed, in a forthcoming paper with Martin Barlow [2], we use a number of the intermediary results in this paper to obtain second moment estimates for the growth exponent. There are two properties of SLE_2 that suggest that α_2 = 5/4. The first is that the Hausdorff dimension of the SLE curves was established by Beffara [3], and is equal to 5/4 for SLE_2. However, we have not found a proof that uses this fact directly. Instead, we use the fact that the probability that a complex Brownian motion from the origin to the unit circle does not intersect an independent SLE_2 curve from the unit circle to the circle of radius 0 < r < 1 is comparable to r^{3/4}. This and other exponents for SLE were established by Lawler, Schramm and Werner [18]. We use this fact to show that the probability that a random walk and an independent LERW started at the origin and stopped at the first exit time of the ball of radius n do not intersect is logarithmically asymptotic to n^{-3/4}. We then relate this intersection exponent 3/4 to the growth exponent α_2 and show that α_2 = 5/4.
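Before turning to the outline, it may help to record the back-of-the-envelope relation between the two exponents (this is exactly the heuristic that the next section expands on): each of the ≍ n² lattice points of B_n lies on the loop-erasure with probability comparable to Es(n), so

```latex
\mathrm{Gr}(n)\;\asymp\; n^{2}\,\mathrm{Es}(n)\;\approx\; n^{2}\cdot n^{-3/4}\;=\;n^{5/4},
\qquad\text{hence}\qquad \alpha_2 \;=\; 2-\tfrac{3}{4}\;=\;\tfrac{5}{4}.
```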
Let Es(n) be the probability that a LERW and an independent random walk started at the origin do not intersect each other up to leaving B_n, the ball of radius n. As we mentioned in the previous section, the fact that Gr(n) ≈ n^{5/4} follows from the fact that Es(n) ≈ n^{-3/4}. Intuitively, this is not difficult to see. Let z be a point in B_n that is not too close to the origin or the boundary. In order for z to be on the LERW path, it must first be on the random walk path; the expected number of times the random walk path goes through z is of order 1. Then, in order for z to be on the LERW path, it cannot be part of a loop that gets erased; this occurs if and only if the random walk path from z to ∂B_n does not intersect the loop-erasure of the random walk path from 0 to z. This is comparable to Es(n). Therefore, since there are on the order of n² points in B_n, Gr(n) is comparable to n² Es(n), and so it suffices to show that Es(n) ≈ n^{-3/4}. The above heuristic does not work for points close to the origin or to the circle of radius n, and so the actual details are a bit more complicated. Given l ≤ m ≤ n, decompose the LERW path Ŝ_n as Ŝ_n = η_1 ⊕ η* ⊕ η_2, where η_1 is the initial part up to the circle of radius l, η* the middle part crossing the annulus between radii l and m, and η_2 the final part out to radius n.

[Figure 1: Decomposition of a LERW path into η_1, η_2 and η*.]

Define Es(m, n) to be the probability that a random walk started at the origin leaves the ball B_n before intersecting η_2. Notice that Es(m, n) is the discrete analog of the probability that a Brownian motion from the origin to the unit circle does not intersect an independent SLE_2 curve from the unit circle to the circle of radius m/n. As mentioned in the previous section, the latter probability is comparable to (m/n)^{3/4} [18]. Therefore, using the convergence of LERW to SLE_2 and the strong approximation of Brownian motion by random walks, one can show that there exists C < ∞ such that the following holds (Theorem 5.6): for all 0 < r < 1, there exists N such that for all n > N,

C^{-1} r^{3/4} ≤ Es(rn, n) ≤ C r^{3/4}.   (1)

Unfortunately, N in the previous statement depends on r, so one cannot simply take r → 0 to recover Es(n). Therefore, one has to relate Es(n) to Es(m, n). This is not as easy as it sounds because the probability that a random walk avoids a LERW is highly dependent on the behavior of the LERW near the origin. Nevertheless, we show (Propositions 5.2 and 5.3) that there exists C < ∞ such that

C^{-1} Es(m) Es(m, n) ≤ Es(n) ≤ C Es(m) Es(m, n).   (2)

It is then straightforward to combine (1) and (2) to deduce that Es(n) ≈ n^{-3/4} (Theorem 5.7). To prove (2), we let l = m/4 in the decomposition given in Figure 1. Then in order for a random walk S and a LERW Ŝ_n not to intersect up to leaving B_n, they must first reach the circle of radius l without intersecting; this has probability Es(l). Next, we show that with probability bounded below by a constant, η* is contained in a fixed half-wedge (Corollary 3.8). We then use a separation lemma (Theorem 4.7), which states that, conditioned on the non-intersection event defining Es(l), with probability bounded below by a constant S and Ŝ_n are at least a distance cl apart at the circle of radius l. This allows us to conclude that, conditioned on the event defining Es(l), with a probability bounded below by a constant, S will not intersect η*. Finally, we use the fact that η_1 and η_2 are "independent up to constants" (Proposition 4.6) to deduce that

Es(n) ≥ c Es(l) Es(m, n).

Formula (2) then follows because m = 4l and thus Es(l) is comparable to Es(m).

Structure of the paper

In Chapter 2, we give precise definitions of random walks, LERWs and SLE and state some of the basic facts and properties that we require. In Chapter 3, we prove some technical lemmas about random walks.
Section 3.1 establishes some estimates for Green's functions and for the probability that a random walk hits a set K_1 before another set K_2. Section 3.2 examines the behavior of random walks conditioned to avoid certain sets. Finally, in Section 3.3 we prove Proposition 3.12, which states the following: for a fixed continuous curve α in the unit disc D, the probability that a random walk on the lattice δΛ exits D before hitting α tends, as δ → 0, to the probability that a Brownian motion exits D before hitting α. Furthermore, if one fixes r, then the convergence is uniform over all curves whose diameter is larger than r. Chapter 4 is devoted to proving two results for LERW that are central to the main proof of the paper. The first is Proposition 4.6, which states that if 4l ≤ m ≤ n then η_1 and η_2 are independent up to a multiplicative constant (see Figure 1). The second result is a separation lemma for LERW. This key lemma states the following intuitive fact about LERW: there exist positive constants c_1 and c_2 so that, conditioned on the event that a random walk and a LERW do not intersect up to leaving the ball B_n, the probability that the random walk and the LERW are at least distance c_1 n apart when they exit the ball B_n is bounded below by c_2. Separation lemmas like this one are often quite useful in establishing exponents; a separation lemma was used in [12] to establish the existence of the intersection exponent for two Brownian motions and in [28] to derive arm exponents for critical percolation. In Chapter 5, we prove that the growth exponent α_2 = 5/4. To do this, we first relate the non-intersection of a random walk and a LERW to the non-intersection of a Brownian motion and an SLE_2 curve. Using the fact that the exponent for the latter is 3/4, we deduce the same result for the former (Theorem 5.7). Finally, we show how this implies that the growth exponent α_2 for LERW is 5/4 (Theorem 1.1).

Acknowledgements

I would like to thank Wendelin Werner for suggesting this problem to me. This work was done while I was a graduate student at the University of Chicago and I am very grateful to my advisors Steve Lalley and Greg Lawler for all their patient help and guidance.

2 Definitions and background

2.1 Irreducible bounded symmetric random walks

Throughout this paper, Λ will be a two-dimensional discrete lattice of R². In other words, Λ is an additive subgroup of R², not generated by a single element, such that there exists an open neighborhood of the origin whose intersection with Λ is just the origin. It can be shown (see for example [16, Proposition 1.3.1]) that Λ is isomorphic as a group to Z². Now suppose that V ⊂ Λ \ {0} is a finite generating set for Λ with the property that the first nonzero component of every x ∈ V is positive. Suppose that κ : V → (0, 1] satisfies Σ_{x∈V} κ(x) = 1, and define a step distribution p on Λ by p(x) = p(−x) = κ(x)/2 for x ∈ V and p(x) = 0 otherwise. Let S_j = X_1 + ⋯ + X_j, where the random variables X_k are independent with distribution p. Then S is a symmetric, irreducible random walk with bounded increments. It is a Markov chain with transition probabilities p(x, y) = p(y − x). If X = (X_1, X_2) has distribution p, then

Γ = E[X X^T]

is the covariance matrix associated to S. There exists a unique symmetric positive definite matrix A such that Γ = A². Therefore, if S̃_j = A^{-1} S_j, then S̃ is a random walk on the discrete lattice A^{-1}Λ with covariance matrix the identity.
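The construction of S and its normalization by A^{-1} can be checked numerically; the conclusion drawn from it is in the next paragraph. The sketch below uses illustrative choices of V and κ (not taken from the paper): it builds the step distribution p(x) = p(−x) = κ(x)/2, computes Γ = E[X Xᵀ], and verifies empirically that the walk normalized by A^{-1} has covariance close to the identity.

```python
# Sketch of the (V, kappa) construction and the A^{-1} normalization.
import numpy as np

# Illustrative generating set V and weights kappa (not from the paper):
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
kappa = np.array([0.4, 0.4, 0.2])            # sums to 1

steps = np.vstack([V, -V])                    # p(x) = p(-x) = kappa(x) / 2
probs = np.concatenate([kappa, kappa]) / 2.0

# Covariance matrix Gamma = E[X X^T] of a single step:
Gamma = sum(p * np.outer(s, s) for p, s in zip(probs, steps))

# Unique symmetric positive definite A with Gamma = A^2, via the spectral theorem:
w, U = np.linalg.eigh(Gamma)
A_inv = U @ np.diag(w ** -0.5) @ U.T

# The normalized steps A^{-1} X have (empirical) identity covariance:
rng = np.random.default_rng(0)
X = steps[rng.choice(len(steps), size=200_000, p=probs)]
print(np.cov((X @ A_inv.T).T, bias=True))     # ~ [[1, 0], [0, 1]]
```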
Since a linear transformation of a circle is an ellipse, it is clear that if we can show that the growth exponent α_2 is 5/4 for random walks whose covariance matrix is the identity, then α_2 will be 5/4 for random walks with arbitrary covariance matrix. Therefore, to simplify notation and proofs, throughout the paper S will denote a symmetric, irreducible random walk on a discrete lattice Λ with bounded increments and covariance matrix equal to the identity.

A note about constants

For the entirety of the paper, we will use the letters c and C to denote constants that may change from line to line but will only depend on the random walk S (which will be fixed throughout). Given two functions f(n) and g(n), we write f(n) ≈ g(n) if

lim_{n→∞} log f(n) / log g(n) = 1,

and f(n) ≍ g(n) if there exists 0 < C < ∞ such that for all n,

C^{-1} g(n) ≤ f(n) ≤ C g(n).

If f(n) → ∞ and g(n) → ∞ then f(n) ≍ g(n) implies that f(n) ≈ g(n), but the converse does not hold.

Subsets of C and Λ

Recall that our discrete lattice Λ and our random walk S with distribution p are fixed throughout. Given z ∈ C, let D_r(z) = {w ∈ C : |w − z| < r} be the open disk of radius r centered at z in C, and B_n(z) = {x ∈ Λ : |x − z| < n} be the ball of radius n centered at z in Λ. We write D_r for D_r(0), B_n for B_n(0), and let D = D_1 be the unit disk in C. We use the symbol ∂ to denote both the usual boundary of subsets of C and the outer boundary of subsets of Λ, where the outer boundary of a set K ⊂ Λ (with respect to the distribution p) is

∂K = {x ∈ Λ \ K : there exists y ∈ K such that p(y, x) > 0}.

The context will make it clear whether we are considering a given set as a subset of C or of Λ. We will also sometimes consider the inner boundary ∂_i K = {x ∈ K : there exists y ∈ Λ \ K such that p(x, y) > 0}. We let K̄ = K ∪ ∂K and K° = K \ ∂_i K. A path with respect to the distribution p is a sequence of points ω = [ω_0, ω_1, . . . , ω_k] ⊂ Λ such that p(ω_j, ω_{j+1}) > 0 for j = 0, . . . , k − 1. We say that a set K ⊂ Λ is connected (with respect to the distribution p) if for any pair of points x, y ∈ K, there exists a path ω ⊂ K connecting x and y. Given l ≤ m ≤ n, let Ω_l be the set of paths ω = [0, ω_1, . . . , ω_k] ⊂ Λ such that ω_j ∈ B_l, j = 1, . . . , k − 1, and ω_k ∈ ∂B_l. Let Ω_{m,n} be the set of paths λ = [λ_0, λ_1, . . . , λ_{k′}] such that λ_0 ∈ ∂B_m, λ_j ∈ A_{m,n}, j = 0, 1, . . . , k′ − 1, and λ_{k′} ∈ ∂B_n, where A_{m,n} denotes the annulus B_n \ B_m.

Basic facts about Brownian motion and random walks

Throughout this paper, W_t, t ≥ 0, will denote a standard complex Brownian motion. Given a set K ⊂ Λ, let

σ_K = min{j ≥ 1 : S_j ∉ K}, σ̄_K = min{j ≥ 0 : S_j ∉ K}

be first exit times of the set K. We also let

ξ_K = min{j ≥ 1 : S_j ∈ K}, ξ̄_K = min{j ≥ 0 : S_j ∈ K}

be the first hitting times of the set K. We let σ_n = σ_{B_n} and use a similar convention for σ̄_n, ξ_n and ξ̄_n. We also define the following stopping times for Brownian motion: for a domain A ⊂ C,

τ_A = inf{t ≥ 0 : W_t ∈ ∂A}.

Depending on whether the Brownian motion is started inside or outside D, τ_D will be either an exit time or a hitting time. Suppose that X is a Markov chain on Λ and that K ⊂ Λ. Let σ^X_K = min{j ≥ 0 : X_j ∉ K}. For x, y ∈ K, we let

G^X_K(x, y) = E^x[ Σ_{j=0}^{σ^X_K − 1} 1{X_j = y} ]

denote the Green's function for X in K. We will sometimes write G^X(x, y; K) for G^X_K(x, y) and also abbreviate G^X_K(x) for G^X_K(x, x). When X = S is a random walk, we will omit the superscript S. Recall that a function f defined on K̄ ⊂ Λ is discrete harmonic (with respect to the distribution p) if for all z ∈ K,

Σ_y p(z, y) f(y) = f(z).

For any two disjoint subsets K_1 and K_2 of Λ, it is easy to verify that the function

h(z) = P^z{ξ_{K_1} < ξ_{K_2}}

is discrete harmonic on Λ \ (K_1 ∪ K_2). The following important theorem concerning discrete harmonic functions, a discrete Harnack inequality, will be used repeatedly in the sequel [16, Theorem 6.3.9]: there exists C = C(A) < ∞ such that if f is positive and discrete harmonic on nA ∩ Λ, then

f(x) ≤ C f(y)

for all x, y ∈ nA ∩ Λ. Suppose that X is a Markov chain with hitting times ξ^X_K = min{j ≥ 0 : X_j ∈ K}.
Given two disjoint subsets K_1 and K_2 of Λ, let Y be X conditioned to hit K_1 before K_2 (as long as this event has positive probability). Then if we let h(z) = P^z{ξ^X_{K_1} < ξ^X_{K_2}}, Y is the Doob h-transform of X, with transition probabilities p_Y(x, y) = p_X(x, y) h(y)/h(x). Using this fact, the following lemma follows readily.

Lemma 2.2. Suppose that X is a Markov chain and let Y be X conditioned to hit K_1 before K_2, and let K = Λ \ (K_1 ∪ K_2). Then for any x, y ∈ K,

G^Y_K(x, y) = (h(y)/h(x)) G^X_K(x, y).

Finally, we recall an important theorem concerning the intersections of random walks and Brownian motion with continuous curves (a Beurling-type estimate).

1. There exists a constant C < ∞ such that the following holds. Suppose that α : [0, t_α] → C is a continuous curve such that α(0) = 0 and α(t_α) ∈ ∂D_r. Then if z ∈ D_r,

P^z{W[0, τ_{D_r}] ∩ α[0, t_α] = ∅} ≤ C (|z|/r)^{1/2}.

2. There exists a constant C < ∞ such that the following holds. Suppose that ω is a path from the origin to ∂B_n. Then if z ∈ B_n,

P^z{S[0, σ_n] ∩ ω = ∅} ≤ C (|z|/n)^{1/2}.

Proof. The statement about Brownian motion can be found, for example, in [14, Theorem 3.76]. The statement about random walks was originally proved in [8]; a formulation that is closer to the one given above can be found in [15].

Loop-erased random walk

Given a path λ = [λ_0, λ_1, . . . , λ_m] in Λ, its chronological loop-erasure L(λ) is defined as follows. Let s_0 = max{j : λ_j = λ_0} and, recursively, s_{i+1} = max{j : λ_j = λ_{s_i + 1}}. Let n = inf{i : s_i = m}. Then L(λ) = [λ_{s_0}, λ_{s_1}, . . . , λ_{s_n}]. Note that one may obtain a different result if one performs the loop-erasing procedure backwards instead of forwards. In other words, if we let λ^R = [λ_m, . . . , λ_0], then in general, L(λ^R) ≠ L(λ)^R. However, if λ has the distribution of a random walk, then L(λ^R) has the same distribution as L(λ)^R [10, Lemma 7.2.1]. Now suppose that S is a random walk on Λ and K is a proper subset of Λ. We define the LERW Ŝ^K to be the process

Ŝ^K = L(S[0, σ_K]).

In other words, we run S up to the first exit time of K and then erase loops. We write Ŝ_n for Ŝ^{B_n}. We also define the following stopping times: given A ⊂ K, we let σ̂^K_A be the first time Ŝ^K hits A. If either A or K is a ball B_n, we replace A or K by n in the subscript or superscript. Different sets K will produce different LERWs Ŝ^K, but one can define an "infinite LERW" as follows. For ω ∈ Ω_l and n > l, let µ_{l,n}(ω) = P{Ŝ_n[0, σ̂^n_l] = ω}. The limits µ_l of the µ_{l,n} as n → ∞ exist and are consistent, and therefore there exists a measure µ on infinite self-avoiding paths. We call the associated process the infinite LERW and denote it by Ŝ. In this paper, we will consider both the infinite LERW Ŝ and LERWs Ŝ^K obtained by stopping a random walk at the first exit time of K and then erasing loops. Suppose that X is a Markov chain and ω = [ω_0, . . . , ω_k] is a path in Λ with respect to p_X. One can write down an exact formula for the probability that the first k steps of the loop-erased process X̂^K are equal to ω: letting A_j = {ω_0, . . . , ω_j}, j = 0, . . . , k, A_{−1} = ∅, and G^X(·;·) be the Green's function for X, one obtains an explicit product formula, which we refer to as formula (5) [13]. We can use this formula to show that while LERW is certainly not a Markov chain, it does satisfy the following "domain Markov property": for any Markov chain X, if we condition the initial part of X̂^K to be equal to ω, the rest of X̂^K can be obtained by running X conditioned to avoid ω and then loop-erasing.

Lemma 2.4 (Domain Markov Property). Let X be a Markov chain, K ⊂ Λ and ω = [ω_0, ω_1, . . . , ω_k] be a path in K (with respect to p_X). Define a new Markov chain Y to be X started at ω_k conditioned on the event that X leaves K before hitting {ω_0, . . . , ω_{k−1}}. Then, conditioned on the event that the first k steps of X̂^K are equal to ω, the remainder of X̂^K has the same distribution as the loop-erasure of Y stopped at its first exit of K.

Proof. Let G^X(·;·) and G^Y(·;·) be the Green's functions for X and Y respectively. The result then follows from formula (5) together with Lemma 2.2.

Schramm-Loewner evolution

In this subsection, we give a brief description of Schramm-Loewner evolution. For a much more thorough introduction to SLE, see for instance [14] or [29]. Suppose that γ : [0, ∞] → D̄ is a simple continuous curve such that γ(0) ∈ ∂D, γ(0, ∞] ⊂ D and γ(∞) = 0.
Then by the Riemann mapping theorem, for each t ≥ 0, there exists a unique conformal map g_t : D \ γ(0, t] → D such that g_t(0) = 0 and g′_t(0) > 0. The quantity log g′_t(0) is called the capacity of D \ γ(0, t] from 0. By the Schwarz Lemma, g′_t(0) is increasing in t, and therefore one can reparametrize γ so that g′_t(0) = e^t; this is the capacity parametrization of γ. For each t ≥ 0, one can verify that

U_t = g_t(γ(t))

exists and is continuous as a function of t. Also, g_t and U_t satisfy Loewner's equation

∂_t g_t(z) = g_t(z) (U_t + g_t(z)) / (U_t − g_t(z)), g_0(z) = z.   (6)

Therefore, given a simple curve γ as above, one produces a curve U_t on the unit circle satisfying (6). One calls U_t the driving function of γ. The idea behind the Schramm-Loewner evolution is to start with a driving function U_t and use that to generate the curve γ. Indeed, given a continuous curve U : [0, ∞] → ∂D and z ∈ D, one can solve the ODE (6) up to the first time T_z that g_t(z) = U_t. If we let K_t = {z ∈ D : T_z ≤ t}, then one can show that g_t is a conformal map from D \ K_t onto D such that g_t(0) = 0 and g′_t(0) = e^t. We note that there does not necessarily exist a curve γ such that K_t = γ[0, t], as was the case above. The radial Schramm-Loewner evolution arises from the special choice of driving function U_t = e^{i√κ B_t}, where κ ≥ 0 and B_t is a standard one-dimensional Brownian motion. The resulting random maps g_t and sets K_t are called radial SLE_κ. It is possible to show that with probability 1, there exists a curve γ such that for each t, D \ K_t is the connected component of D \ γ[0, t] containing 0 (see [22] for the case κ ≠ 8 and [20] for κ = 8). In [22] it was shown that if κ ≤ 4 then γ is a.s. a simple curve, and if κ > 4, γ is a.s. not a simple curve. One refers to γ as the radial SLE_κ curve. One defines radial SLE_κ in other simply connected domains so that SLE_κ is conformally invariant. Given a simply connected domain D′ ≠ C, z ∈ D′ and w ∈ ∂D′, there exists a unique conformal map f : D → D′ such that f(0) = z and f(1) = w. Then SLE_κ in D′ from w to z is defined to be the image under f of radial SLE_κ in D from 1 to 0. We will focus on the case κ = 2; throughout, γ will denote the radial SLE_2 curve in D from 1 to 0. We conclude this section with precise statements of the two facts about SLE_2 that were mentioned in the introduction: the intersection exponent for SLE_2 and the weak convergence of LERW to SLE_2.

Theorem 2.5 (Lawler, Schramm, Werner [18]). Let γ be radial SLE_κ from 1 to 0 in D and, for 0 < r < 1, let τ_r be the first time γ enters the disk of radius r. Let W be an independent complex Brownian motion started at 0. Then there exists ν = ν(κ) > 0 such that

P{γ[0, τ_r] ∩ W[0, τ_D] = ∅} ≍ r^ν.

In particular, ν = 3/4 for SLE_2.

In order to state the convergence of LERW to SLE_2 we require some notation. Let Γ denote the set of continuous curves α : [0, t_α] → D̄ (we allow t_α to be ∞) such that α(0) ∈ ∂D, α(0, t_α] ⊂ D and α(t_α) = 0. We can make Γ into a metric space as follows: for α, β ∈ Γ, let

d(α, β) = inf_θ sup_{0 ≤ t ≤ t_α} |α(t) − β(θ(t))|,

where the infimum is taken over all continuous, increasing bijections θ : [0, t_α] → [0, t_β]. Note that d is a pseudo-metric on Γ, and is a metric if we consider two curves to be equivalent if they are the same up to reparametrization. Extend Ŝ_n to a continuous curve by linear interpolation, so that the time reversal of n^{-1} Ŝ_n is in Γ.

Theorem 2.6 (Lawler, Schramm, Werner [20]). Let γ be radial SLE_2 in D from 1 to 0. Then for every bounded continuous function f on Γ,

lim_{n→∞} E[f(ρ_n)] = E[f(γ)],

where ρ_n denotes the time reversal of n^{-1} Ŝ_n.
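For readers who want to see the Loewner dynamics in action, here is a crude numerical sketch (my own illustration, not the paper's construction; the step size, grid and swallowing threshold are arbitrary choices, and a naive Euler scheme is inaccurate near the singularity). It samples the driving function U_t = exp(i√κ B_t) with κ = 2 and integrates equation (6) for a grid of points, recording which points are swallowed by time T; these approximate the hull K_T. As a consistency check, a probe point near 0 verifies the capacity parametrization g′_t(0) = e^t.

```python
# Crude Euler discretization of the radial Loewner equation (6) with kappa = 2.
import numpy as np

rng = np.random.default_rng(1)
kappa, T, dt = 2.0, 1.0, 1e-3
n = int(T / dt)

# Driving function U_t = exp(i * sqrt(kappa) * B_t):
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
U = np.exp(1j * np.sqrt(kappa) * B)

# Points to evolve: a small probe near 0 (to check g'_t(0) = e^t) plus a grid.
xs = np.linspace(-0.95, 0.95, 61)
grid = (xs[:, None] + 1j * xs[None, :]).ravel()
grid = grid[np.abs(grid) < 0.95]
g = np.concatenate([[1e-3 + 0j], grid])
alive = np.ones(len(g), dtype=bool)

for k in range(n):
    idx = np.where(alive)[0]
    near = np.abs(U[k] - g[idx]) < 5e-3        # numerically swallowed by the hull
    alive[idx[near]] = False
    idx = np.where(alive)[0]
    # Euler step for dg/dt = g (U + g) / (U - g):
    g[idx] += dt * g[idx] * (U[k] + g[idx]) / (U[k] - g[idx])

print("grid points swallowed by time T (approximate hull K_T):", (~alive)[1:].sum())
print("capacity check |g_T(eps)|/eps vs e^T:", abs(g[0]) / 1e-3, np.exp(T))
```

Since the SLE_2 trace is a simple curve of zero area, only grid points passing numerically close to the curve are flagged, so a coarse grid may capture very few of them; the capacity check is the more robust diagnostic.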
We begin by showing that for any K ⊂ Λ, z ∈ Λ \ K and y ∈ ∂ i K, To prove this, we proceed as in the proof of [10, Lemma 2.1.1]. Let Note that τ is not a stopping time. However, since τ < ξ K , Applying the previous equality to K = K 1 ∪ K 2 , we get that By reversing paths, one sees that Thus, However, by reversing paths yet again, which completes the proof of the lemma. 1. There exists c > 0 and N such that for all l ≥ N the following holds. Suppose that K ⊂ Λ contains a path connecting 0 to ∂B l . Then for any x ∈ B l , 2. There exists c > 0 and N such that for all N ≤ 2l < n, the following holds. Suppose that K ⊂ Λ contains a path connecting ∂B 2l to ∂B n . Then for any Proof. Proof of (1): We assume that N is sufficiently large so that for all l ≥ N , each of the steps below works. First of all, we may assume that z ∈ B l/4 since if z ∈ B l , If p is the distribution of the random walk S, let m = max{|x| : p(x) > 0}. Since K connects 0 to ∂B l , there exists a subset K ′ of K such that for each i = 1, . . . , ⌊l/m⌋, there is exactly one point It is clear that if the lemma holds for K ′ then it will hold for K. Therefore, we assume that K has this property. By [16, Proposition 6.3.5], there exists a constant C such that if z ∈ B l , Therefore, if y, z ∈ B l with |z − y| < l/2, and l is large enough, Let V be the number of visits to K before leaving B 2l . Then for any z ∈ B l/4 , since there are at least l/(4m) points within distance l/2 from z, Also, since there are at most 2j/m points in K within distance j from z ∈ B l , Therefore, for any x ∈ B l , Proof of (2): We again let N be large enough so that if l ≥ N the following steps work. For x ∈ ∂B 2l , there exists c > 0 such that for all l large enough, Therefore, we may assume that n > 4l. We will show that if K ⊂ Λ contains a path connecting ∂B 2l to ∂B 4l , then It suffices to show that for all z, y ∈ B 4l \ B 2l , For if we can show (7), then we can proceed as in the proof of (1). To prove the left inequality, we note that for z ∈ ∂B l/4 (y), and by approximation by Brownian motion, one can bound the latter from below by a uniform constant. We now prove the right inequality in (7). By the monotone convergence theorem, However, since B m \ B l is a finite set, we can apply [16, Proposition 4.6.2] which states that where a denotes the potential kernel. By [16,Theorem 4.4.3], Therefore, However, because |z| < 4l, a standard estimate [16, Proposition 6.4.1] shows that Therefore, Lemma 3.3. There exists C < ∞ and N such that for all N ≤ 2l ≤ n, the following holds. Suppose that K ⊂ Λ contains a path connecting ∂B 2l to ∂B n . Then for any Proof. Without loss of generality, we may assume that K ⊂ Λ \ B 2l . In that case, σ 2l < ξ K ∧ σ n for all walks started in B l and therefore, However, by Lemma 3.2, for any w ∈ ∂B 2l , Lemma 3.4. There exists c > 0 and N such that for N ≤ 2l ≤ n the following holds. Suppose K ⊂ Λ \ B 2l contains a path connecting ∂B 2l to ∂B n . Then for z ∈ B l , Proof. To begin with, we claim that it suffices to show that for z ∈ ∂B l such that To see this, note that Therefore it suffices to show that for all z ∈ B l , However, for z ∈ B l , Furthermore, by the discrete Harnack inequality, for any y, y ′ ∈ ∂B l , Therefore, the lemma will follow once we prove (8). Let z ∈ ∂B l be such that Then, By Lemma 3.2, for any w ∈ ∂B 2l , Thus, which completes the proof. Random walks conditioned to avoid certain sets Proposition 3.5. There exist constants N and c > 0 such that for all n ≥ N the following holds. 
Suppose that K ⊂ Λ \ B n (n, 0) where B n (n, 0) denotes the ball of radius n centered at (n, 0) (see Figure 2). Then, where W denotes standard two-dimensional Brownian motion. Then h is the solution to the Dirichlet problem with boundary value 1 [−π/4,π/4] . Therefore, we can express h as is the Poisson kernel for the unit disk. One can compute h (it is easier to consider the problem on H and then map back via a conformal transformation): We now establish three basic facts about h that we will use below. 1. Let D 1 (1) be the disk of radius 1 centered at the point 1. We claim that for all (1)). Thus, to prove the claim, it suffices to show that takes its maximal value at t = π for 2π/3 ≤ t ≤ 4π/3. Since one has an explicit formula for h, this is left as an exercise for the reader or the reader's Calculus students. These results can also be obtained from the explicit formula for h. Assume that n is large enough so that B rn ⊂ nD where r is as in the previous paragraph. We let h n (z) = h(z/n) which is harmonic in nD. Then for z ∈ B rn , define Then h n is discrete harmonic in B rn and agrees with h n on ∂B rn . A natural question to ask is how close does the discrete harmonic solution h n approximate the continuous harmonic solution h n ? By [16,Corollary 6.2.4], for all z ∈ B rn , By Taylor's theorem, for any C 4 function f and z ∈ Λ, where R is the range of the walk S and M 4 (f ) is the L ∞ norm of the sum of the fourth derivatives of f in the disk D R (z). Since the random walk S has covariance matrix the identity (we have been assuming that S has this property but this is the first place we use it), one can show that L is actually a multiple of the continuous Laplacian. Thus, Lh n = 0. Furthermore, since the fourth derivatives of h are bounded on rD, M 4 (h n ) is bounded by Cn −4 in B rn . Therefore, combining all the previous remarks (and letting CR 4 = C since R depends only on the random walk S which we've fixed), we obtain that for z ∈ B rn , We now have all the pieces we need to prove the proposition. Let z be any point in B rn \ B n (n, 0), and fix x ∈ Λ such that Re(x) > 0. Then by Taylor's theorem and our previous observations about h, if n is large enough so that x is in B rn , where M 2 (h) is the L ∞ norm of the sum of the second order derivatives of h in rD. Since ∂h ∂x (0) > 0, it is clear that for n sufficiently large, Thus, This implies that for n sufficiently large, since h(0) = 1/4. Recall that r was defined so that for all z such that r < |z| < 1, and |arg(z)| > π/3, h(z) < 1/8. Therefore, Since x is independent of K and n, and hence, Finally, Lemma 3.6. For 0 < θ < π, there exist c(θ), N (θ) and α(θ) such that the following holds. For n > N , and z ∈ Λ with N < |z| < n, let W be the wedge Then, Remark By comparison with Brownian motion, one expects that α(θ) = π/θ would be the optimal constant. However, in this paper we will only need the existence of α and not its exact value. Proof. It is clear that we can make α(θ) non-increasing in θ, therefore, without loss of generality, take θ < π/2. Also, without loss of generality, assume arg(z) = 0. Let W be the cone We define a random sequence of points {z k } ⊂ W as follows. We let z 0 = z. Then, given z k , we let B k be the largest ball centered at z k such that B k ⊂ W , r k be the radius of B k and let z k+1 = S(σ B k ) where S is a random walk starting at z k . We note that z j = z k for all j ≥ k if and only if z k ∈ ∂ i W . We make N (θ) large enough to ensure that if |z| > N then z / ∈ ∂ i W . 
In this case, there exists c ′ (θ) > 0 such that r 0 ≥ c ′ (θ) |z|. Let E k denote the event that z k+1 = z k and that On the event E k , r k+1 ≥ (1 + 2 sin(θ/4))r k = c(θ)r k , and |z k+1 | 2 ≥ |z k | 2 + r 2 k (we use the fact that θ < π/2 for the second assertion). Therefore, if E 0 , . . . , E j all hold, then r k ≥ c k r 0 ≥ c k c ′ |z| for k = 1, . . . , j. Therefore, Since c > 1, it follows that if we let j be the smallest integer such that Finally, by the invariance principle, there exists a constant c ′′ (θ) and N such that for n ≥ N , P (E k ) ≥ c ′′ for all k. Therefore, Corollary 3.7. Fix θ 1 , θ 2 ∈ (0, π/2). There exist N , α and c > 0 depending only on θ 1 + θ 2 such that the following holds. Let N ≤ l < m < n, and z ∈ ∂B m . Let W be the half-wedge 1. Let r = min{m sin θ 1 , m sin θ 2 , m − l}. Then for any K ⊂ B m , 2. Let r ′ = min{m sin θ 1 , m sin θ 2 , n − m}. There exists β = β(θ 1 + θ 2 , l/m) such that for any K ⊂ Λ \ B m , Notice that in both cases, the right hand side depends only on θ 1 , θ 2 , and the ratios l/n and m/n. Proof. Both parts of the corollary are proved similarly. We prove 1 in detail, and indicate the modifications needed to prove 2. Without loss of generality, assume that arg(z) = 0. The quantity r defined in the statement of the corollary is the radius of the largest ball with center z whose closure is contained in the half-infinite wedge We can apply Proposition 3.5 to the ball B = B(z, r), to obtain that there exists a constant c > 0 such that Let y be any point on ∂B such that |arg(y − z)| ≤ π/4, and let B = B(y, r/2). Note that B ⊂ W \ B m . There exists a point w ∈ ∂ B such that y is on the bisector of the angle formed from the lines joining w to the two outermost "corners" of W , and let W be the wedge with vertex w, radius s and such that x 1 and x 2 are on ∂ W . The wedge W will have aperture θ ≥ (θ 1 + θ 2 )/2 and y will be on the axis of symmetry of W . Therefore, by Lemma 3.6, . To finish the proof of 1, let c * = c r n α . Then, The proof of 2 is similar. In this case, the angle θ of the wedge W will be such that This is why β will also depend on l/m. Besides this observation, the proof of 2 is identical to the proof of 1. The following corollary is similar to the previous one, except that here we are conditioning to avoid sets that are on either side of the half-wedge. Suppose that K 1 ⊂ B n contains a path connecting ∂B an to ∂ i B n , and K 2 ⊂ Λ \ B 4n contains a path connecting ∂B 4n to ∂B 4bn . Let K = K 1 ∪ K 2 . Then for any z ∈ ∂B n , y ∈ ∂B 4n with |arg(z)| < θ/2, |arg(y)| < θ/2, one obtains that there exists a constant c = c(a, θ) such that Now suppose that w ∈ ∂B 2n ∩ W * . Then by Lemma 3.1, However, for w ∈ ∂B 2n ∩ W * , G(w; W \ ({y} ∪ K)) ≥ G(w; B c(θ)n (w)), and therefore by Lemma 3.3, Thus, By the strong Markov property, However, by Lemma 3.4, there exists c(θ, b) such that for x ∈ ∂ i B 3n ∩ W * , Furthermore, by the discrete Harnack inequality, there exists c > 0 such that for all Similarly, Therefore using part 2 of Corollary 3.7, Random walk approximations to hitting probabilities of curves by Brownian motion Given a random walk S on a discrete lattice Λ, we can make S into a continuous curve S t by linear interpolation and define S (n) t = n −1 S n 2 t . Now fix a continuous curve α : [0, t α ] → D. 
In this section, we will compare the probability that a Brownian motion W t started at the origin leaves the unit disk before hitting α to the probability that S (n) t started at the origin leaves the unit disk before hitting α. By the invariance principle, one can show that as n tends to infinity, the latter probability approaches the former. What is more difficult is to show that this convergence is uniform in α as long as the diameter of α is sufficiently large. This is Proposition 3.12 and is the main result of the section. For 0 < δ < 1, let A δ denote the annulus Given a curve α : Recall that D δ (z) is the disk of radius δ centered at z. We construct a continuous curve ω as follows. Given any z = e iθ ∈ ∂D, let We let ω(0) = α(t 1 ), then follow the curve α from α(t 1 ) to α(t 2 ) (we might be following the curve backwards), then the ray r δ (z 2 ) from α(t 2 ) to z 2 , then ∂D clockwise from z 2 to z 1 , and finally the ray r δ (z 1 ) back to α(t 1 ). See Figure 6. From the definition of the t k and z k , and the fact that α is simple, ω is a closed simple curve. Therefore, by the Jordan curve theorem, ω separates the plane into two disjoint connected components. Furthermore, because θ 2 −θ 1 < 2π, the winding number of ω with respect to 0 is 0. Therefore, 0 is in the unbounded component defined by Now suppose that θ 2 − θ 1 ≥ 2π. Let z 1 = e iθ 1 and z 2 = e iθ 2 . In order to prove the lemma for this case, we claim that it suffices to show that either r δ (z 1 ) ∩ α[0, t α ] or r δ (z 2 ) ∩ α[0, t α ] contains two points whose Argument differs by a nonzero multiple of 2π. For suppose that w 1 = |w 1 | e iθ 1 = α(s 1 ) and w 2 = |w 2 | e iθ 1 = α(s 2 ) are such that Arg(w 2 ) − Arg(w 1 ) = 2kπ, k = 0, and w 1 and w 2 are chosen so that arg(α(t)) = θ 1 C δ z 1 z 2 Figure 5: The set C δ and the points z 1 , z 2 and α(t 1 ), α(t 2 ) ω Figure 6: The curve ω in the case θ 2 − θ 1 < 2π for t between s 1 and s 2 . Also, without loss of generality, |w 1 | < |w 2 |. Then we can consider the curve ω that starts at w 1 , follows α from w 1 to w 2 , and then returns to w 1 along the ray r δ (z 1 ) (see Figure 7). By construction, ω is a closed simple curve whose winding number is nonzero. Therefore, ω contains 0, and since ω ⊂ D, it disconnects 0 from ∂D. This shows that r δ (z 1 ) ∪ α[0, t α ] ∪ r δ (z 2 ) disconnects 0 from ∂D, from which the lemma follows. In order to show that either r δ (z 1 ) ∩ α[0, t α ] or r δ (z 2 ) ∩ α[0, t α ] contains two points whose Argument differs by a nonzero multiple of 2π, we let r k and t k be such that r k = sup{|α(t)| : Arg(α(t)) = θ k } and α(t k ) = r k e iθ k , k = 1, 2. We assume to the contrary that both {re iθ 1 : r 1 < r ≤ 1} ∩ α[0, t α ] = ∅ and {re iθ 2 : r 2 < r ≤ 1} ∩ α[0, t α ] = ∅. Then we can define two curves ω 1 and ω 2 as follows. ω 1 starts at r 1 e iθ 1 , travels along α to r 2 e iθ 2 , follows the ray r δ (z 2 ) to z 2 , then travels along ∂D clockwise to z 1 , and finally returns to r 1 e iθ 1 along r δ (z 1 ). We define ω 2 in the same way except that we travel along ∂D clockwise. Then by our assumptions, ω 1 and ω 2 are both closed simple curves with nonzero winding number, and hence both contain the origin, a contradiction. Proof. Let T be large enough so that P 0 {τ D > T } < ǫ. 
By the strong approximation of Brownian motion by random walk [16, Theorem 3.5.1], there exists a sequence S n of random walks defined on the same probability space as W so that if S (n) t = n −1 S n n 2 t is defined as above, then almost surely, Therefore, we can let N be such that for n ≥ N , where C is the larger of the constants in the Beurling estimates (Theorem 2.3). Now fix n ≥ N and let τ * = τ α ∧ τ D and σ * = ξ α ∧ σ D . Suppose first that τ * < σ * , and let W τ * = w, S (n) τ * = z. Suppose further that τ D < T and that Then on this event, |z − w| ≤ C −2 ǫ 2 r. Since both α and ∂D are continuous curves, by the Beurling estimates, letting D = D r (w), The case where σ * < τ * is proved in the same way, using the Beurling estimates for Brownian motion. 2. For all ǫ > 0, there exists δ > 0 and N such that for all n ≥ N and α : [0, t α ] → D, Proof. By Lemma 3.9, there exist z 1 , z 2 ∈ ∂D such that However, by rotational symmetry of Brownian motion, is the same for all z ∈ ∂D. Since, D δ (z) shrinks to a single point as δ tends to 0, the right-hand side above can be made to be less than ǫ by making δ small enough. This proves 1. The proof of 2 is the same as 1, except that we cannot use any sort of rotational symmetry. Therefore, we must show that there exists δ > 0 and N such that for all n ≥ N and z ∈ ∂D, Let δ > 0 be small enough so that for all z ∈ ∂D, where τ 2 is the hitting time of the circle of radius 2 by the Brownian motion W . We now apply Lemma 3.10 to obtain that there exists N such that for all n ≥ N , there exists a simple random walk S, defined on the same probability space as W such that This implies that Proof. By Lemmas 3.10 and 3.11 , there exists δ > 0 and N such that for all n ≥ N , the following holds. There exists a Brownian motion W and a random walk S defined on the same probability space such that for all continuous curves α : where τ * = τ α ∧ τ D and σ * = ξ α ∧ σ D . We will show that the proposition holds with this choice of N . Note that We will show that P 0 (E) < ǫ. The proof that P 0 (F ) < ǫ is entirely similar. Recall that D 1−δ denotes the ball of radius 1 − δ and that A δ denotes the annulus D \ D 1−δ . Then, However by (9), P 0 (E 1 ) < ǫ, and by (11), P 0 (E 2 ) < ǫ and P 0 (E 3 ) < ǫ. Some results for loop-erased random walks 4.1 Up to constant independence of the initial and terminal parts of a LERW path For this section only, we no longer restrict our random walks to be two-dimensional. When it is necessary to specify what dimension we are in, we will denote the dimension by d. Although we have avoided using it up to now, it will be convenient to use "big-O" notation in this section. Recall that f (n) = O(a(n)) if there exists C < ∞ such that f (n) ≤ Ca(n). Here, C can depend on the dimension but on no other quantity. We will also write Recall that for a natural number l, Ω l denotes the set of paths ω = [0, ω 1 , . . . , ω k ] such that ω j ∈ B l , j = 0, 1, . . . , k − 1 and ω k ∈ ∂B l . Given a set K such that B l ⊂ K, and such that we define µ l,K on Ω l to be the measure obtained by running a random walk up to the first exit time σ K of K, loop-erasing and restricting to B l . More precisely, for ω ∈ Ω l , If B l ⊂ K 1 and B l ⊂ K 2 are such that for either i = 1 or i = 2, we define a measure µ l,K 1 ,K 2 on Ω l as follows. Let X denote random walk conditioned to leave K 1 before K 2 (as long as this has positive probability; if not, µ l,K 1 ,K 2 is not defined). 
Then for ω ∈ Ω l , we let This is the measure on Ω l obtained by running X up to σ X K 1 , loop-erasing and restricting to B l . Note that µ l,K is equal to µ l,K,Λ . In this section, we establish some relations between the measures defined above. In fact we will show that for n ≥ 4 and any K 1 and K 2 such that B nl ⊂ K 1 and B nl ⊂ K 2 , ( Proposition 4.4) This implies that if B 4l ⊂ K 1 and B 4l ⊂ K 2 then (recall that the symbol ≍ means that each side is bounded by a constant multiple of the other side, the constant depending on the random walk S and on nothing else). We use these facts to prove that for a LERW S n , η 1 l ( S n ) and η 2 4l,n ( S n ) (see the definitions in section 2.3) are independent up to constants (Proposition 4.6). For ω ∈ Ω l and y ∈ ∂B l , Proof. Let y 0 be such that We will show that P y 0 {σ nl < ξ ω } ≤ C log n which will clearly imply the result for all y ∈ ∂B l . Proof. Let K = K 1 ∩ K 2 . Let X be a random walk conditioned to exit K 1 before K 2 . Then by formula (5), The function h is harmonic in B nl and ω k ∈ B l . Therefore, by the difference estimates for harmonic functions [16,Theorem 6.3.8], and thus, Hence, it suffices to show that Let y 0 ∈ ω be such that Then Therefore, and hence A similar argument shows that Now let y be any point on the path ω. Then since B nl is a subset of both K 1 and K 2 , Let y 1 be such that and y 2 be such that Then by (12) and (13), if d = 2, The lower bound and the case d ≥ 3 follows in the same way. We now define a measure on unrooted loops in Λ. See [16,Chapter 9] for more details. A rooted loop η = [η 0 , η 1 , . . . , η k ] is a path in Λ such that η 0 = η k ; η 0 is called the root of the loop. We say that two rooted loops η and η ′ are equivalent if η ′ = [η j , η j+1 , . . . , η k−1 , η 0 , . . . , η j ] for some j. We call the equivalence classes under this relation unrooted loops. We will denote by η the unrooted loop corresponding to the rooted loop η. Recall the notation Notice that this does not depend on the root of η and therefore p( η) is well defined for unrooted loops η. We define a measure m on the set of unrooted loops as follows. Given an unrooted loop η, let α( η) be the number of distinct rooted representatives of η. Then we define where | η| denotes the number of steps of a representative of η. Any two representatives of η have the same number of steps so that m is well defined. Proof. By Formula (5), for any ω ∈ Ω l , Let e(n) = (log n) −1 if d = 2 and e(n) = n 2−d if d ≥ 3. Let ω ′ = [ω ′ 0 , . . . , ω ′ k ′ ] be any other path in Ω l . We will show that and For this will imply that One then gets the other bound by reversing the roles of K 1 and K 2 . We first show (15). Since B nl ⊂ K 1 If d ≥ 3, then [16, Proposition 6.4.2] for z ∈ ∂B nl , One gets a similar formula with K 2 replacing K 1 and ω ′ replacing ω, from which (15) follows for the case d ≥ 3. To prove (15) for the case d = 2, we first note that [10, Lemma 2.1.2] Furthermore, for z ∈ ∂B nl , By applying Lemma 4.1 and [10, Lemma 2.1.2] again we get that for y ∈ ∂ i B l , Thus, Therefore, and hence, with a similar lower bound. We get similar bounds with ω ′ replacing ω and K 2 replacing K 1 from which (15) follows. Let η * be such that | η * | =< η > and such that Suppose first that d = 2. Then for j ≤ l/2 and z ∈ ∂B j , where the exponent 1/2 comes from the Beurling estimates (Theorem 2.3). If l/2 < j ≤ l, then Therefore, The case d ≥ 3 is easier. In this case, Thus, Corollary 4.5. Recall that S denotes an infinite LERW. 
Suppose that n ≥ 4, K is such that B nl ⊂ K, and ω ∈ Ω l . Then, In particular, Proof. This follows immediately from Proposition 4.4 and the definition of the infinite LERW S: We conclude this section with the proof that η 1 and η 2 are independent (up to constants) for the LERW S n . 1 + O( l m ) P η 1 l S n = ω P η 2 m,n S n = λ d ≥ 3. Proof. We fix l, m and n throughout and let η 1 = η 1 l , η 2 = η 2 m,n . Let X be a random walk started at 0 conditioned to leave B n before returning to 0. Then X and S n have the same distribution. Let Y be a random walk started on ∂B n according to harmonic measure from 0 and conditioned to hit 0 before returning to ∂B n . By reversing paths, for all z ∈ ∂B n , Therefore, X and Y R (the time-reversal of Y ) have the same distribution. Recall that one obtains the same distribution on LERW by erasing loops from random walks forwards or backwards. Therefore, if ω and λ are as above, Now let Z be a random walk starting at λ 0 , conditioned to hit 0 before leaving B n \ λ. Then by the domain Markov property for LERW (Lemma 2.4), However, by again reversing paths as above, and noting that the loop-erasure of a random walk starting at 0 and conditioned to avoid 0 after the first step has the same distribution as the loop-erasure of an unconditioned random walk, The separation lemma Throughout this section S will be a random walk and S will be an independent infinite LERW. Let F k denote the σ-algebra generated by For positive integers j and k, let A k be the event and T k j be the integer valued random variable The goal of this section is to prove the following separation lemma which states that, conditioned on the event A k that the random walk S and the infinite LERW S do not intersect up to the circle of radius k, the probability that they are further than some fixed distance apart from each other at the circle of radius k (D k ≥ c 1 ) is bounded from below by a constant c 2 > 0. Theorem 4.7 (Separation Lemma). There exist constants c 1 , c 2 > 0 such that for all k, The proof of Theorem 4.7 depends on two lemmas. Lemma 4.8 roughly states that the probability that S and S stay close together without intersecting each other is very small. More precisely, the probability that T j−1 ≥ (1 + cj 2 2 −j )T j and that the paths don't intersect is less than 2 −βj 2 . Lemma 4.9 states that if S and S are separated, then there is a substantial probability that they stay separated and don't intersect. To wit, if {T j > k} and A T j hold, then the probability that A 2k and {D 2k ≥ 2 −j } hold is greater than 2 −αj . The proof of the separation lemma then combines the two lemmas to show that then conditioned on A 2k , there is a probability bounded below that S and S separate to some fixed distance before leaving the ball of radius 2k no matter how close the two paths were upon leaving the ball of radius k. Proof. We let j 0 be such that for all j ≥ j 0 , cj 2 2 −j < 1/2. Since k is fixed we will write T j for T k j from now on. We suppose that S[0, σ(T j )] and S[0, σ(T j )] are any paths such that T j ≤ 3k 2 holds. We also assume that D T j < 2 −j+1 or else there is nothing to prove. Now consider K := S[0, σ((1 + cj 2 2 −j )T j )] and let ρ = inf{n ≥ σ(T j ) : dist(S n , K) ≤ 2 −j+1 |S n |}. Notice that even though we assume that D T j < 2 −j+1 , ρ is not necessarily equal to σ(T j ). If ρ > σ((1 + 4 · 2 −j )T j ) then this means that T j−1 < (1 + 4 · 2 −j )T j . 
However, if ρ ≤ σ((1 + 4 · 2 −j )T j ), then by the Beurling estimates for random walk (Theorem 2.3), there exists c ′ < 1 such that The same estimate will hold starting at T j + 8k2 −j , k = 0, 1, . . . , ⌊cj 2 /8⌋. Therefore, Lemma 4.9. There exists α < ∞ and c > 0 such that for all j and k, Proof. Since k is fixed, we will omit the superscript k from now on. Let z 1 = S(σ T j ) and z 2 = S( σ T j ). Without loss of generality, we may assume that T j < 2k (or else there is nothing to prove) and also that arg(z 2 ) < arg(z 1 ). Note that |z 1 | = |z 2 | = T j and k ≤ T j ≤ 2k. Suppose that A T j holds. By definition of T j , there exists c > 0 and half-wedges Using Lemma 2.4 and Proposition 3.5, it is easy to verify that there exists a global constant c ′ such that Now consider the half-wedges Applying Lemma 2.4 and Corollary 3.7 to W ′ 1 and W ′ 2 , one obtains that for any z ′ 1 ∈ ∂W 1 such that |z ′ The result then follows since W ′ 1 and W ′ 2 are distance c2 −j T j apart and S and S are independent. Proof of Theorem 4.7. We again fix k and let where c is chosen so that s ≤ 3/2. We also let j 0 be such that for j ≥ j 0 , 2 −βj 2 +αj < 1, where α and β = β(c) are as in Lemmas 4.8 and 4.9. To prove the theorem, it suffices to show that for all m, By Lemma 4.9, it is enough to find a constant c ′ 2 such that P T j 0 ≤ 3k/4 A k ; 2 −m ≤ D k/2 < 2 −m+1 ≥ c ′ 2 . In fact, we will show that Then, However, Therefore, by Lemmas 4.8 and 4.9, Using the same techniques, one can prove a "reverse" separation lemma. Let S be a random walk started uniformly on the circle ∂B n and conditioned to hit 0 before leaving B n . Let X be the time reversal of S n (so that X is also a process from ∂B n to 0). As before, for k ≤ n, let Then, Theorem 4.10 (Reverse Separation Lemma). There exists c 1 , c 2 > 0 such that 5 The growth exponent Introduction Recall that W t denotes standard complex Brownian motion and γ denotes radial SLE 2 in D started uniformly on ∂D. In this chapter we will consider random walks and independent LERWs. We will view them as being defined on different probability spaces so that P {.} and E [.] denote probabilities and expectations with respect to the LERW, while P {.} and E [.] will denote probabilities and expectations with respect to the random walk. For m ≤ n, we define Es(m, n), Es(n) and Es(n) as follows. Es Es(m, n) is the probability that a random walk from the origin to ∂B n and the terminal part of an independent LERW from m to n do not intersect. Es(n) is the probability that a random walk from the origin to ∂B n and the loop-erasure of an independent random walk from the origin to ∂B n do not intersect. Es(n) is the probability that a random walk from the origin to ∂B n and an infinite LERW from the origin to ∂B n do not intersect. In section 5.2, we prove that for m < n, Es(n) can be decomposed as Es(n) ≍ Es(m) Es(m, n). In section 5.3, we use the convergence of LERW to SLE 2 (Theorem 2.6) and the intersection exponent 3/4 for SLE 2 (Theorem 2.5) to show that We then combine these two results to show that Es(n) ≈ n −3/4 . Finally, in section 5.4, we show how the fact that Es(n) ≈ n −3/4 implies that Gr(n) ≈ n 5/4 . Before proceeding, we prove the following lemma which shows that Es(n) and Es(4n) are on the same order of magnitude. Proof. By Corollary 4.5, it suffices to show that It is clear that the left hand side is greater than or equal to the right hand side. To prove the other direction, we will use the separation lemma (Theorem 4.7). 
Given a point z ∈ ∂B n , let W (z) be the half-wedge where c 1 is as in the statement of the separation lemma. We also let By the strong Markov property for random walk, By Lemma 2.4 and Corollary 3.7, Finally, by the separation lemma, and therefore, Proof. Let l = ⌊m/4⌋ and fix η 1 = η 1 l and η 2 = η 2 m,n . For any path η in Ω n , Intersection exponents for SLE 2 and LERW In this section, we use the convergence of LERW to SLE 2 to show that for 0 < r < 1, Es(rn, n) ≍ r 3/4 . We combine this result with the decomposition Es(n) ≍ Es(rn) Es(rn, n) from the previous section to obtain that Es(n) ≈ n −3/4 . We recall the notation introduced in Section 2.6. Let Γ denote the set of continuous curves α : [0, t α ] → D (we allow t α to be ∞) such that α(0) ∈ ∂D, α(0, t α ] ⊂ D and α(t α ) = 0. We can make Γ into a metric space as follows. If α, β ∈ Γ, we let where the infimum is taken over all continuous, increasing bijections θ : [0, t α ] → [0, t β ]. Note that d is a pseudo-metric on Γ, and is a metric if we consider two curves to be equivalent if they are the same up to reparametrization. Recall (Theorem 2.6) that LERW converges weakly to SLE 2 on the space (Γ, d). We want to apply this result to the functions f r defined as follows. Given 0 < r < 1 and α ∈ Γ, we let where ρ r = inf{t : |α(t)| = r}. We also define f r to be identically 1 for r ≥ 1 (think of ρ r = 0 in that case, so that the above probability is 1). Recall that Theorem 2.5 states that if γ is SLE 2 then Unfortunately, the f r are not continuous on the space (Γ, d). However, the following lemma shows that they can be approximated by continuous functions. Lemma 5.4. For all 0 < r < 1, there exists a function f r that is uniformly continuous on the space (Γ, d) such that for all α ∈ Γ f r/2 (α) ≤ f r (α) ≤ f 2r (α). The latter can be made arbitrarily small by choosing δ small enough. By reversing the roles of α and β, one gets a similar lower bound, proving that f r is uniformly continuous. Lemma 5.5. There exists C < ∞ such that the following holds. Given a random walk S and an independent LERW S n , we extend them to continuous curves S t and S n t by linear interpolation. Then for all 0 < r < 1, there exists N = N (r) such that for n ≥ N , 1 C r 3/4 ≤ E P S[0, σ n ] ∩ η 2 rn,n ( S n ) = ∅ ≤ Cr 3/4 . By Lemma 5.4, f r (n −1 S n ) ≤ f 2r (n −1 S n ), and f 2r is continuous in the metric (Γ, d). Therefore, by the weak convergence of LERW to SLE 2 described at the beginning of this section, there exists N 2 such that for n ≥ N 2 , E f 2r (n −1 S n ) ≤ E f 2r (γ) + r 3/4 where γ denotes SLE 2 . Furthermore, applying first Lemma 5.4, and then Theorem 2.5, Therefore, the upper bound holds for N = max{N 1 , N 2 }. The lower bound is proved in the same fashion. We now prove the analogue of the previous lemma for the case where S and S n are discrete processes. The reason why the discrete case does not follow immediately from the continuous case is that we allow random walks that "jump", and therefore it's possible for two realizations of S and S n to avoid each other on the lattice Λ but to intersect after they are made continuous curves by linear interpolation. Theorem 5.6. There exists a constant C such that the following holds. For all 0 < r < 1, there exists N = N (r) such that for n ≥ N , 1 C r 3/4 ≤ Es(rn, n) ≤ Cr 3/4 . Proof. Fix 0 < r < 1. The lower bound follows immediately from Lemma 5.5 and the fact that if the discrete processes intersect each other so too will the continuous curves. 
To prove the upper bound we introduce some notation that will be used only in this proof. Let $S[0, \ldots, \sigma_n]$ denote the discrete set of points in $\Lambda$ visited by $S$ between $S_0$ and $S(\sigma_n)$. We will write $S[0, \sigma_n]$ to denote the continuous set of points in $\mathbb{C}$ visited by the continuous curve $S_t$ from $S_0$ to $S(\sigma_n)$. We use similar notation for $\widehat{S}^n$. In addition, we let $\eta^2 = \eta^2_{rn,n}\big(\widehat{S}^n[0, \ldots, \sigma_n]\big)$ be the terminal part of the discrete LERW curve and $\tilde{\eta}^2 = \eta^2_{rn,n}\big(\widehat{S}^n[0, \sigma_n]\big)$ be the terminal part of the continuous LERW curve. As in the proof of Lemma 3.11, one can choose $\delta > 0$ small enough so that for all $n$ sufficiently large, and for all $z \in \partial B_n$, $P^0\{S[0, \sigma_n] \cap B_{\delta n}(z) = \emptyset\} < r^{3/4}$. Furthermore, given such a $\delta$, we can choose $\varepsilon > 0$ and $N$ such that for all $n \geq N$, and all $z \in \partial B_n$, the following holds. Let $y \in \Lambda$ be the closest point to $(1 - \varepsilon)z$. Then, Lemma 5.8. Fix $z \in B_n$. Let $S$ be a random walk and let $X$ be an independent random walk started at $z$ conditioned to hit $0$ before leaving $B_n$. Then
$$P\{z \in \widehat{S}^n[0, \sigma_n]\} = G_n(0, z)\, P\big\{L(X[0, \xi^X_0]) \cap S[1, \sigma_n] = \emptyset\big\}.$$
We finally have all the tools needed to prove our main theorem. Hence, as before, the last quantity is comparable to Es($r$). Therefore, for all $z$ such that $n/4 \leq |z| \leq 3n/4$,
$$P\{z \in \widehat{S}^n[0, \sigma_n]\} \geq c\, G_n(0, z)\, \mathrm{Es}(r).$$
This proves the lower bound since $\varepsilon$ was arbitrary.
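As a computational aside (ours, not the paper's): the loop-erasure procedure of Section 2 and the escape probability Es(n) studied in this section are easy to simulate. The Python sketch below implements forward loop-erasure and a crude Monte Carlo estimate of the probability that a simple random walk from the origin to $\partial B_n$ avoids an independent loop-erased walk after its first step (the shared origin is excluded, as in Lemma 5.8). The nearest-neighbour square lattice, the function names, and the trial counts are our choices; by the results above one expects the estimate to decay roughly like $n^{-3/4}$.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # simple random walk on Z^2

def walk_until_exit(n, rng):
    """Run a simple random walk from the origin until it leaves B_n."""
    x, y = 0, 0
    path = [(0, 0)]
    while x * x + y * y < n * n:
        dx, dy = rng.choice(STEPS)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def loop_erase(path):
    """Forward (chronological) loop-erasure L(path): each time a vertex
    reappears, erase the loop it closes, keeping the first visit."""
    erased, pos = [], {}           # pos: vertex -> index in `erased`
    for v in path:
        if v in pos:               # v closes a loop; chop back to it
            keep = pos[v] + 1
            for w in erased[keep:]:
                del pos[w]
            erased = erased[:keep]
        else:
            pos[v] = len(erased)
            erased.append(v)
    return erased

def estimate_es(n, trials=500, seed=0):
    """Monte Carlo estimate of Es(n): the chance that an independent walk
    S avoids the loop-erasure of a walk stopped on exiting B_n, where S's
    starting point (the shared origin) is not counted."""
    rng = random.Random(seed)
    avoided = 0
    for _ in range(trials):
        lerw = set(loop_erase(walk_until_exit(n, rng)))
        s = walk_until_exit(n, rng)
        if all(v not in lerw for v in s[1:]):
            avoided += 1
    return avoided / trials

if __name__ == "__main__":
    for n in (8, 16, 32):
        print(n, estimate_es(n))
```

Each walk takes on the order of $n^2$ steps, so this is practical only for modest $n$; the point is illustrative rather than a serious numerical estimate of the exponent.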
2009-10-27T04:32:22.000Z
2008-06-02T00:00:00.000
{ "year": 2008, "sha1": "e169e22ef9a4cd53434460e9d46cad4acd409840", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1214/ejp.v14-651", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "e169e22ef9a4cd53434460e9d46cad4acd409840", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
265322578
pes2o/s2orc
v3-fos-license
Breathless Revelation: Unmasking Acute Myeloid Leukemia Through Acute Respiratory Failure Establishing a diagnosis of acute myeloid leukemia (AML) in a patient presenting with acute respiratory failure is rare. Here, we present a case of AML initially appearing as hypoxemic respiratory failure linked to presumed community-acquired pneumonia. This case report unravels the intricate diagnostic odyssey of an atypical AML presentation masquerading as an acute respiratory failure, accentuating the multifaceted challenges clinicians encounter in discerning the actual underlying pathology amidst the haze of mimicry. Upon meticulous diagnostic expedition, infection was ruled out as a cause of respiratory failure, and the patient underwent a malignancy workup, ultimately culminating in the diagnosis. This case underscores the importance of broader diagnostic vigilance. Comprehensive assessments, combined with interdisciplinary collaboration, emerged as crucial for accurate diagnosis, emphasizing the need to consider hematologic pathologies despite seemingly unrelated clinical presentations. Introduction Acute myeloid leukemia (AML) is the most common leukemia among the adult population and accounts for about 80% of all cases. It is characterized by clonal expansion of immature "blast cells" in the peripheral blood and bone marrow, resulting in ineffective erythropoiesis and bone marrow failure [1]. Symptomatic leukemic infiltration of the lung is the least common cause of pulmonary infiltrates in patients with acute leukemia [2]. A retrospective study reported leukemic infiltration to account for 3.8% of pulmonary complications in AML. Infection was the most common cause of pulmonary complications in AML; although most cases represent infectious pneumonia, with bacteria (28.3%) and fungi (26.5%) as the most prevalent pathogens, noninfectious etiologies such as pulmonary embolism (7.5%), pneumothorax (3.8%), and cardiac disease (9.4%) must also be taken into account [3]. AML can be diagnosed incidentally or may present with non-specific constitutional symptoms (e.g., fatigue and pallor secondary to anemia). This case chronicles the diagnostic journey of a 60-year-old former firefighter involved in 9/11 rescue operations. His initial presentation of shortness of breath, suggestive of community-acquired pneumonia, resulted in hemodynamic instability requiring ICU-level supportive care, eventually leading to the unexpected diagnosis of AML. Case Presentation A 60-year-old male, with a past medical history of hypertension, hyperlipidemia, and exposure while working as a firefighter in the aftermath of the terror attack in New York City on September 11, 2001 (for which he received annual cancer screening), presented to the hospital for a one-month history of generalized malaise. Before admission, he was diagnosed with sinusitis and started a course of azithromycin. During his treatment, he developed a papular, non-painful, non-pruritic rash along with flat, flesh-colored patches on his trunk, sparing his face, neck, and extremities, followed by a five-day history of fevers and chills, measuring a maximum of 103°F at home, and shortness of breath, prompting his presentation to the emergency department. On admission, the patient presented with normal vitals: BP of 124/64 mmHg, heart rate of 81 bpm, respiratory rate of 17 breaths per minute, and oxygen saturation of 95% breathing ambient air. The initial workup included an ECG, demonstrating sinus rhythm with a low-voltage QRS and shortened PR interval (Figure 1).
Given the dyspnea, there was an initial concern for a pulmonary embolism. While the initial computed tomography (CT) angiogram of the chest (Figure 2) demonstrated no pulmonary embolism, it did reveal numerous patchy, bilateral, lower lobe predominant opacities/nodules and multiple prominent indeterminate upper abdominal lymph nodes. The patient's complete blood count throughout admission is listed in the table below (Table 1). His complete metabolic panel was notable for transaminitis, demonstrating an ALP of 143 IU/L, AST of 44 IU/L, and ALT of 55 IU/L. Procalcitonin was notable at 1.17 ng/mL, and C-reactive protein was elevated at 425.5 mg/L. Initial urinalysis and urine culture were negative. Troponins drawn at the time of admission were <0.1 ng/mL. Given the patient's fever, relatively low oxygen saturation, and CT findings, the patient was diagnosed with pneumonia and admitted to the general medicine service, where he was started on empiric antibiotic treatment with vancomycin, cefepime, and azithromycin. Despite continued treatment, including completion of an empiric course of antibiotics, his respiratory and mental status both acutely worsened. Diagnostic evaluation for infectious etiology was negative, including blood cultures, respiratory viral panel, and urinary antigens for Legionella and Pneumococcus. By hospital day five, the patient required an ICU upgrade for worsening dyspnea on noninvasive mechanical ventilation. His lymphocytosis was noted in the initial blood work; however, as the WBC count was down-trending, it was attributed to infection versus an adverse reaction to the outpatient antibiotics that the patient had been taking for sinusitis. However, the persistence of abnormal blood counts raised suspicion for a hematological disorder. Hematology/oncology was consulted for leukocytosis and thrombocytopenia. Peripheral blood flow cytometry demonstrated blasts of 12% and an increase in monocytes (57%) with immature forms. A repeat chest CT performed on hospital day seven for worsening respiratory status (Figure 3) demonstrated increased bilateral ground glass and consolidative opacities concerning for worsening pneumonia. On hospital day nine, a bone marrow biopsy was performed, revealing a hypercellular (>95%) marrow replaced mainly by blasts/blast equivalents with minimal normal trilineage hematopoiesis. Concurrent flow cytometry (76-FL-23-5902) revealed an aberrant myeloblast population (19%) and a monoblastic/monocytic population (21%). PML-RARA FISH (76-FH-23-98572) was negative. These findings confirmed the diagnosis of AML with monocytic differentiation. The patient was eventually transferred to another hospital to initiate therapy for AML. Discussion The clinical trajectory of this patient highlights the diagnostic challenges when AML presents with acute respiratory failure suggestive of pneumonia. The overlap of symptoms between AML and respiratory distress mandated a nuanced diagnostic approach. Considering that the patient's medical history did not include any hematologic disease, his ongoing condition was initially attributed to an infectious process, and all interventions were focused on managing an infectious disease and its complications.
Initially, the patient's presentation with fever, leukocytosis, thrombocytopenia, transaminitis, and hemodynamic instability naturally steered the diagnostic focus toward infection. However, as the patient's condition failed to improve despite aggressive antimicrobial treatments and a proper history elicited his involvement in 9/11 rescue operations, a shift in diagnostic direction became imperative. The persistence of constitutional symptoms despite antimicrobial interventions strengthened the suspicion of malignancy, prompting a reevaluation of the working diagnoses. The comprehensive infectious workup, while necessary to eliminate microbial sources, also played a role in redirecting the investigative pathway. Pulmonary infiltration by leukemia cells is commonly observed in the advanced stages of both acute and chronic leukemia [4]. Nevertheless, the occurrence of symptomatic pulmonary infiltrates as the primary presentation of acute leukemia is infrequent [5]. Acute monocytic leukemia is associated with a high risk of leukemic pulmonary infiltration. Azoulay et al. reported 20 patients with acute respiratory failure related to leukemic pulmonary involvement from leukostasis or leukemic infiltration. All 20 patients had acute respiratory failure as the presenting manifestation of acute leukemia, and all had the same type of acute myeloid leukemia involving monocytic cells [5]. Conclusions The early involvement of hematologic-oncologic expertise, prompted by evolving leukocytosis and monocytosis, allowed a bone marrow biopsy and flow cytometry to be obtained prior to discharge, ultimately establishing the diagnosis. The trajectory of AML presenting as an infectious syndrome underlines the need for meticulous diagnostic exploration, especially when clinical manifestations belie the deeper pathology. Interdisciplinary collaboration, combining clinical acumen with various diagnostic tools, emerges as the linchpin in deciphering such enigmatic clinical scenarios. FIGURE 1: EKG on admission, demonstrating sinus rhythm with a short PR interval, low-voltage QRS, and a possible inferior infarct of undetermined age. FIGURE 2: CTA scan of the chest performed on hospital day two to rule out acute pulmonary embolism, demonstrating patent central airways and numerous patchy, bilateral, lower lobe predominant nodules/opacities. FIGURE 3: Repeat CT chest performed on hospital day seven to evaluate the patient's respiratory status, which demonstrated increased bilateral ground glass and consolidative opacities, concerning for worsening pneumonia.
2023-11-22T16:07:02.889Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "8768cc91654ce9a292af343c2c2a414e03643911", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/187314/20231120-964-ya4fmp.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "317d0f5da06a1902f8c226a858a5c379fd02c5ef", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237708883
pes2o/s2orc
v3-fos-license
THE EFFECT OF DIFFERENT POLLINATORS ON FRUIT SET AND SOME FRUIT CHARACTERISTICS IN APPLE Apple is one of the fruit species in which self-incompatibility is seen. For economical apple production, pollination and fertilization are required. This study aimed to determine the effect of different pollinators on the fruit set ratios and some fruit characteristics of apple varieties and apple genotypes originating from Kyrgyzstan, using the hybridization breeding method, in 2020. According to the results, the highest fruit set ratio was obtained with combination number 54 × 36 at 7.37%, while the lowest value was determined for combination number 54 × 56 at 1.88%. The precipitation that occurred during the fruit set period in particular affected the results negatively. Among the fruit characteristics, combination number 54 × 36 gave better results than the other combinations for fruit length and fruit width. On the other hand, the 21 × Elstar combination produced the best result in terms of WSDM (water-soluble dry matter), and significant differences occurred between combinations in terms of seed number. The findings obtained may serve as a guide for producers, especially in apple breeding studies and in new orchards to be established. INTRODUCTION Apple (Malus communis L.) is widely grown both naturally and economically due to its wide adaptability and the richness of its species and varieties throughout the world. The homeland regions of the apple are considered to be East Asia, Central Asia, West Asia-Europe, and North America (Uzun et al., 2019). These regions include Kyrgyzstan and Turkey, which harbour many species and varieties in nature. Malus sieversii (Ledeb.) is a species of wild apple found in the mountainous regions of Kyrgyzstan (Yan et al., 2008; Volk et al., 2009). As in many fruit species, there is a relationship between yield and fruit set in apple. Most apple varieties have a self-incompatibility mechanism, and this prevents fertilization, which is among the basic stages of fruit setting (Nettancourt, 2001; Broothaerts, 2003; Liu et al., 2018). When varieties carry the same S-allele structures, this incompatibility disrupts fertilization. When an orchard is established with a single variety, or with varieties that share the same S-allele, harvest disruptions and economic losses follow (Schneider et al., 2001; Garratt, 2014; Shogo et al., 2018). As a result of this incompatibility, a pollinator variety is needed to obtain good yields. The pollinator variety affects not only fruit set but also fruit quality. The effect of the pollinator variety on fruit quality has been investigated in different species such as tangerine (Citrus reticulata L.) (Yıldız and Kaplankıran, 2017), cherry (Prunus avium L.) (Cırtlık and Beyhan, 2012), and apricot (Prunus armeniaca L.) (Yaman and Uzun, 2020). In this study, we aimed to determine the effect of different pollinators on fruit set and some fruit characteristics in standard apple varieties and apple genotypes of Kyrgyzstan origin. MATERIALS AND METHODS The study was carried out in 2020 using apple genotypes and standard varieties originating from Kyrgyzstan in the Apple Genetic Resources Collection Parcel of Erciyes University Faculty of Agriculture, Department of Horticulture. The plants in the study material were 5-6 years old and grafted on M111 rootstock.
Routine cultural practices (irrigation, soil tillage, pruning) were performed as needed. In Kayseri province, where the study was carried out, the continental climatic conditions prevailing in the Central Anatolia Region are seen. Meteorological data of the region, which are especially influential on fruit set, are given in Table 1. When these values are examined, it is seen that the precipitation occurring especially at fruit set time had a negative effect on fruit set. Determination of Fruit Set Ratios Pollen of the selected cultivars was gathered from unopened flower buds at the balloon stage and rubbed onto emasculated flowers with the aid of a watercolor paint brush, and hybridization procedures were performed accordingly (Yaman and Uzun, 2020). Hybridization was performed on a different number of flowers for each combination. Fruit set rates were determined by dividing the number of harvested fruits by the number of pollinated flowers and multiplying the result by 100. Determination of Fruit Characteristics Parameters such as fruit length, fruit width, WSDM (water-soluble dry matter), and seed number were investigated in the fruits that set after the hybridization process. Experimental data were subjected to statistical analyses with the aid of SPSS 15.0 software (IBM Company, USA); significant means were compared with Duncan's multiple range test at the P < .05 significance level, and the values of the varieties are presented as mean ± standard deviation (SD). RESULTS AND DISCUSSION Fruit Set Ratios In the apple hybridization studies, pollination was performed on different numbers of flowers, ranging from 95 (54 × 36) to 210 (56 × Box), for each combination. As a result of these processes, fruit set numbers varying between 3 (54 × 56, 56 × Elstar) and 9 (56 × Chest) were determined among the combinations. Depending on the number of fruits set, fruit set rates ranged from a low of 1.88% (54 × 56) to a high of 7.37% (54 × 36) (Table 2). Pollinators have positive effects on fruit set in apples. In studies conducted by different researchers on apples, fruit set values ranged from 10% to 33% (Akkurt et al., 2020), and in another study from 4% to 35%, depending on the varieties and pollinators used (Maklad et al., 2020). In addition to the pollinator, climatic factors such as precipitation and humidity can also affect fruit set in apple. In the current study, the precipitation that occurred during pollination and fertilization under Kayseri ecological conditions negatively affected fruit set, which is why the results differ from those reported in the literature. Some Fruit Characteristics In the results obtained to determine the effect of the pollinator variety on fruit quality, statistically significant differences were observed in all of the parameters examined. The highest fruit length was obtained from combination 54 × 36 with 62.91 mm, and the lowest value was found in combination 54 × 55 with 30.81 mm. The best result in terms of fruit width was again the 54 × 36 combination, with 73.25 mm. The WSDM value is an important fruit criterion for apple, as in most fruit species. Among the combinations, the highest WSDM values were determined for the 56 × Elstar and 56 × 36 combinations, at 21.00% and 20.80%, respectively.
For the last parameter examined, the number of seeds ranged between 1.66 and 13.60 across combinations. Due to the carpel structure of the apple, there should be a minimum of 5 seeds in order to produce quality fruit (Childers et al., 1995). In a study conducted to determine pollinator efficiency in the Vista Bella apple variety, WSDM values varied between 9.99% and 13.96% (Akkurt et al., 2020). In another study, fruit length values varied between 18 mm and 89.5 mm in fruits obtained from open pollination of apple species originating from Kyrgyzstan (Uzun et al., 2018). The results of the present study were partly similar to and partly different from these reports in the literature. Differences in study material, the effects of ecology, and the effects of hybridization on fruit quality may explain this divergence. CONCLUSIONS In this study, we aimed to determine the effect of different pollinators on the fruit set ratios and some fruit characteristics of apple varieties and apple genotypes originating from Kyrgyzstan, using the hybridization breeding method, in 2020. In apple, where the self-incompatibility mechanism is observed, economic product losses arise from failures in pollination and fertilization. Even when pollination and fertilization occur normally, adverse effects on product quality occasionally arise in connection with environmental conditions and annual cultural practices. The results of the current study revealed that the pollen parent had positive effects, especially in combinations created with the same maternal parent and different pollen parents.
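As a quick arithmetic check (ours, not the authors'), the fruit set formula from the Materials and Methods reproduces the extreme rates quoted in the Results. In the short Python snippet below, the 7 fruits for 54 × 36 and the 160 pollinated flowers for 54 × 56 are inferred from the quoted figures (95 flowers at 7.37%; 3 fruits at 1.88%) rather than stated directly in the text, and the helper name is ours.

```python
def fruit_set_rate(fruits_set, flowers_pollinated):
    """Fruit set (%) = harvested fruits / pollinated flowers * 100."""
    return 100.0 * fruits_set / flowers_pollinated

# 54 x 36: 95 flowers pollinated, 7 fruits set (7 inferred from 7.37%)
print(round(fruit_set_rate(7, 95), 2))    # -> 7.37, the highest rate
# 54 x 56: 3 fruits set, 160 flowers (160 inferred from 1.88%)
print(round(fruit_set_rate(3, 160), 2))   # -> 1.88, the lowest rate
```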
2021-09-09T20:44:25.651Z
2021-07-31T00:00:00.000
{ "year": 2021, "sha1": "b05dfbecedfc8ab95e43a02158fb1c6b341851d7", "oa_license": "CCBYNCSA", "oa_url": "https://natsci.upit.ro/media/2122/022yaman-et-al.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4c4bc5945037b9728dfd5cd2ebe01baf3c934a88", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
52182966
pes2o/s2orc
v3-fos-license
Triglyceride to high density lipoprotein cholesterol ratio among adolescents is associated with adult hypertension: the Kangwha study Background The triglyceride to high density lipoprotein cholesterol (TG/HDL-C) ratio is associated with hypertension in adults. However, whether the TG/HDL-C ratio in adolescents predicts future hypertension remains unclear. Here, we evaluated the prospective association between the TG/HDL-C ratio in adolescents and hypertension in early adulthood. Methods The Kangwha Study is an ongoing prospective cohort study that has tracked the blood pressure of first grade elementary school students since 1986. We followed up 272 participants who completed health examinations at the ages of 16 and 35 years. We excluded 27 participants with adolescent hypertension, defined as those whose blood pressures were above the age- and sex-specific 95th percentiles of the Korean population, and finally analysed 245 participants. We defined high and low TG/HDL-C ratio groups according to the age- and sex-specific 75th percentile of the TG/HDL-C ratio (1.04 for boys and 0.81 for girls) of the Korean population. Adult hypertension was defined by a systolic/diastolic blood pressure ≥ 140/90 mmHg or by taking antihypertensive medication at the age of 35 years. Logistic regression analysis was performed to evaluate the association between adolescent TG/HDL-C ratio and adult hypertension after adjusting for age at follow-up, sex, baseline systolic blood pressure, waist circumference, and total cholesterol and fasting glucose levels. Results During the 20-year follow-up, 11 (18.3%) individuals developed hypertension in the high TG/HDL-C ratio group and 10 (5.4%) individuals developed hypertension in the low TG/HDL-C ratio group. The adjusted odds ratio for incident hypertension in the high TG/HDL-C ratio group, compared with the low TG/HDL-C ratio group, was 3.40 (95% confidence interval 1.24–9.31). Conclusions A high TG/HDL-C ratio in adolescence is associated with hypertension in early adulthood. Electronic supplementary material The online version of this article (10.1186/s12944-018-0861-y) contains supplementary material, which is available to authorized users. Background Dyslipidaemia, including elevated triglyceride (TG), elevated low-density lipoprotein cholesterol, and low high-density lipoprotein cholesterol (HDL-C), is independently associated with hypertension and other cardiometabolic risk factors [1][2][3][4]. Combinations of these lipids are also useful for predicting cardiovascular risk. Some studies have reported that the TG to HDL-C (TG/HDL-C) ratio is a useful predictor of hypertension [5][6][7]. In one study of 947 individuals, significantly higher values of both systolic and diastolic blood pressure were observed in the high TG/HDL-C ratio group, categorised by a cut-off value of ≥1.1 for women and ≥1.5 for men, than in the low TG/HDL-C ratio group [5]. Another study of 1566 participants, which used the same cut-off points as above, showed similar results [6]. One prospective study followed Middle Eastern women for over 6.4 years and found that an increase of one standard deviation in the TG/HDL-C ratio increases the risk of incident hypertension by 18% [7]. The highest TG/HDL-C ratio quartile had a 70% higher risk of hypertension than the lowest quartile. Studies of adolescent populations have obtained similar findings [8].
A cross-sectional study of 893 male subjects, aged 10-26 years, reported that the highest TG/HDL-C ratio tertile had the highest systolic and diastolic blood pressure [8]. The study also reported that several measurements of arterial stiffness, such as augmentation index and pulse wave velocity, were impaired in the highest TG/HDL-C ratio group. However, among these studies, none have prospectively evaluated the predictive value of the TG/HDL-C ratio for hypertension incidence at adulthood using an adolescent population. Thus, this study evaluated whether the TG/HDL-C ratio in adolescents could predict hypertension incidence in young adults. Methods The Kangwha Study is an ongoing community-based prospective cohort study that started in 1986, with the purpose of tracking blood pressure and evaluating related determinants. The initial participants included 484 first-grade students from elementary schools in Kangwha island, most of whom were aged 6 years at enrolment. During annual follow-ups until high school graduation (year 1997), classmates of the initial participants were newly enrolled into the cohort. We followed up 742 individuals who participated in 1996 when blood tests were conducted until the latest adulthood follow-up health examination (2014-2017). Informed consent was obtained from all participants in our study. The study protocol was approved by the institutional review board of the Severance Hospital at Yonsei University Health System (4-2014-0914). At the baseline examination in 1996, we measured the height, weight, waist circumference, and blood pressure and conducted blood tests for all the participants. Systolic blood pressure (SBP) and diastolic blood pressure (DBP) were measured from the participant's right brachial artery with a standard mercury sphygmomanometer (Baumanometer, New York, USA) after a 5-min rest period in the sitting position. SBP was measured at the first Korotkoff sound, and DBP was measured at the fourth Korotkoff sound. We measured blood pressure at least two times per participant. Blood samples were collected after overnight fasting (at least ≥8 h). Total cholesterol, TG, and HDL-C levels were measured by enzymatic methods (Hitachi-747, Japan). At the adulthood follow-up health examination during 2014 and 2017, we conducted health surveys and blood pressure measurements. Information on demographic characteristics, diagnosis of hypertension, and antihypertensive medication status were obtained through the questionnaire. Right arm blood pressure was measured after a rest of at least 5 min, in the sitting position, and the participants were asked to remain still and relax during the measurements. A cuff tailored to their individual mid-arm circumference was used, and trained research personnel conducted blood pressure measurements using an automated oscillometric device (HEM-7080, Omron Health, Matsusaka, Japan), in accordance with a standardised protocol. SBP and DBP were measured three times each, at 2-min intervals. In addition to those who visited our centre and received health examinations, we used surveys via mail and online for those who could not visit our centre. We obtained self-reported health status, which included diagnoses of hypertension, and personally-measured blood pressure. As blood pressure was measured multiple times (twice during adolescence and thrice during adulthood follow-up), the average value was used for statistical analyses. 
Adolescent hypertension was defined as SBP/DBP higher than the sex- and height-specific 95th percentiles of the Korean population at the age of 16 years (Additional file 1: Table S1). Adult hypertension was defined as SBP/DBP ≥140/90 mmHg or as taking antihypertensive medication at the follow-up examination. Written informed consent was obtained from each participant. Body mass index (BMI) was calculated as the weight in kilograms divided by the square of the height in meters.

The most commonly used cut-off point for the TG/HDL-C ratio in previous studies was the 75th percentile of the study population. Therefore, we classified the participants into high and low TG/HDL-C ratio groups according to the age- and sex-specific 75th percentile of the TG/HDL-C ratio in the Korean adolescent population [9]. The cut-off point of the TG/HDL-C ratio was 1.04 for boys and 0.81 for girls.

Continuous variables were expressed as means and standard deviations, and categorical variables were expressed as numbers with proportions. Student's t-tests and chi-square tests were used to compare the baseline characteristics of participants according to follow-up participation and TG/HDL-C ratio group. Multiple logistic regression models were used to evaluate the association between the TG/HDL-C ratio and adult hypertension, and the results were expressed as odds ratios and 95% confidence intervals. We used several regression models, including a crude model and multi-adjusted models. Model 1 was the unadjusted regression analysis between the TG/HDL-C ratio groups and adult hypertension. Model 2 was adjusted for sex and age at follow-up. Model 3 was adjusted for sex, age at follow-up, baseline SBP, waist circumference, total cholesterol, and fasting glucose. We conducted a sensitivity analysis excluding participants who were followed up via mail or online. All analyses were performed with Statistical Analysis Software (SAS, version 9.4, SAS Inc., Cary, NC, USA), and p < 0.05 was considered to indicate statistical significance.

Results
Of the 742 participants who were enrolled at baseline in 1996, 256 underwent health examination follow-ups from 2014 to 2017, and 16 responded to the survey either online or by mail. We excluded 27 participants (25 who underwent the health examination and 2 who responded online or by mail) who had adolescent hypertension. Finally, 245 participants were analysed to evaluate the association between the TG/HDL-C ratio in adolescence and hypertension in adulthood. Table 1 shows the baseline characteristics of the participants. The mean TG/HDL-C ratio was 1.17 (standard deviation 0.74). Table 2 compares the baseline characteristics of participants in the high and low TG/HDL-C ratio groups. Among the 14 participants who had been evaluated by online or mail surveys, 7 were in the high TG/HDL-C ratio group and 7 were in the low TG/HDL-C ratio group. A higher mean waist circumference was observed in the high TG/HDL-C ratio group than in the low TG/HDL-C group (70.9 vs. 67.7 cm, p = 0.001). Differences in the other variables were not statistically significant. Table 3 shows the results of the multiple regression analyses. After about 20 years of follow-up, a total of 21 participants had developed hypertension in adulthood. A higher prevalence of hypertension was observed in the high TG/HDL-C group than in the low TG/HDL-C group (18.3 vs. 5.4%).
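As a check on the unadjusted estimate reported in the next paragraph, the 2x2 counts implied above (11/60 hypertensive in the high-ratio group, 10/185 in the low-ratio group) reproduce the crude odds ratio and its Wald 95% confidence interval. A minimal Python sketch (illustrative; the study used SAS):

```python
import math

# 2x2 table from the reported counts: 11/60 events in the high-ratio group,
# 10/185 in the low-ratio group (60 + 185 = 245 analysed participants).
a, b = 11, 60 - 11    # high TG/HDL-C group: hypertensive, not hypertensive
c, d = 10, 185 - 10   # low TG/HDL-C group: hypertensive, not hypertensive

or_hat = (a * d) / (b * c)                      # crude odds ratio
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR), Woolf's method
lo = math.exp(math.log(or_hat) - 1.96 * se)
hi = math.exp(math.log(or_hat) + 1.96 * se)
print(f"OR = {or_hat:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 3.93, 95% CI 1.58-9.79
```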
The unadjusted odds ratio for hypertension in the high TG/HDL-C ratio group, compared with the low TG/HDL-C ratio group, was 3.93 (95% confidence interval 1.58-9.79). After adjusting for sex and age at follow-up, the odds ratio was 3.62 (95% confidence interval 1.43-9.17). After additionally adjusting for baseline SBP, waist circumference, total cholesterol, and fasting glucose, the odds ratio was 3.40 (95% confidence interval 1.24-9.31).

Discussion
We analysed 245 adolescents from the age of 16 years until adulthood. After a follow-up of about 20 years, adolescents in the high TG/HDL-C ratio group showed a higher prevalence of hypertension in early adulthood than those in the low TG/HDL-C ratio group. The risk of hypertension was about 3-fold higher in the high TG/HDL-C group, and the association remained statistically significant after controlling for major confounders.

The TG/HDL-C ratio is a simple and useful index to identify apparently healthy individuals who are insulin resistant and at increased cardiometabolic risk [10-12]. A high TG/HDL-C ratio is associated with metabolic syndrome [13], increased arterial stiffness [14], diabetes, and coronary heart disease [11,15]. The ability of the TG/HDL-C ratio to predict mortality from coronary heart disease or cardiovascular diseases is equivalent to or better than that of metabolic syndrome [15]. Similar associations between the TG/HDL-C ratio and cardiovascular risks have been observed in children and adolescents [16,17]. Body mass index and waist circumference were higher in adolescents with higher TG/HDL-C ratios [8,18], and adolescents in the highest TG/HDL-C ratio tertile had the stiffest vessels, as measured by brachial distensibility, augmentation index, and carotid-femoral pulse-wave velocity [8].

While the TG/HDL-C ratio has been reported to be associated with insulin resistance and cardiometabolic risks, relatively few studies have reported an association between TG/HDL-C ratios and blood pressure or hypertension. One study followed up 5971 Middle Eastern women for about 6.4 years and reported that higher TG/HDL-C ratios were associated with significantly higher risks of hypertension [7]. In that study, the highest TG/HDL-C ratio quartile group had a 1.7-fold higher risk of hypertension than the lowest quartile. That study also compared the predictability of hypertension by several types of lipid measurements; TG, HDL-C, and the TG/HDL-C ratio were the strongest predictors of hypertension. Similar results have also been found in studies of adolescents [8,17,19]. In a cross-sectional study of adolescents, the highest TG/HDL-C ratio tertile group had higher SBP, DBP, and arterial stiffness levels [8]. TG/HDL-C ratios positively correlated with systolic, diastolic, and mean blood pressure in 67 obese children who were 6-12 years of age [17]. SBP and DBP significantly increased from the lowest to the highest TG/HDL-C ratio quantiles in 884 children between the ages of 6 and 16 years [17,19].

Previously, using the Kangwha Study data, we reported that serum lipid levels in adolescence can predict dyslipidaemia in adulthood [20]. Our data support that measurement of serum lipids early in an individual's lifetime could be useful for predicting dyslipidaemia and also hypertension. Other previous studies have reported the predictability of cardiovascular diseases or related risk factors, including metabolic syndrome and atherosclerosis, by using serum lipid levels [21-23].
We could thus assess future cardiovascular risk by measuring serum lipids early in life. However, this issue should be carefully considered from multiple perspectives, including the cost, benefit, and harm of taking blood samples from healthy children and adolescents. According to the United States Preventive Services Task Force, the current evidence is insufficient to assess the usefulness of screening for lipid disorders in children and adolescents [24,25].

The major strength of this study is that we followed up our study participants over a long duration, from adolescence to adulthood. Among the many studies that have evaluated the association between lipid profiles and cardiovascular diseases and risk factors, this study may be the only one to observe participants ranging from adolescence to early adulthood. (The cut-off point for the high and low TG/HDL-C ratio groups was 1.04 for boys and 0.81 for girls, according to the sex-specific 75th percentiles for the TG/HDL-C ratio in the Korean population at the age of 16 years.)

There are also several limitations to this study. First, the sample size was small, and many participants were lost to follow-up. Participants who completed the follow-up had a higher baseline height than those who did not, but there were no other significant differences in baseline characteristics between these groups (Additional file 1: Table S2). These findings support the idea that follow-up was not affected by baseline TG/HDL-C ratios. The participants were residents of a rural island in Korea; therefore, generalizability is limited. Further prospective studies that include urban populations or participants from other countries are needed. Second, the method of blood pressure measurement changed between the baseline and follow-up examinations. However, all examinations were conducted by investigators trained with a standardised measurement protocol to minimize measurement error. Third, we did not assess the exact age at baseline, because the date of examination was not recorded. There could be residual confounding factors; however, as the participants were in the same grade at baseline, the variation in baseline age would be small.

Conclusions
In conclusion, adolescents with high TG/HDL-C ratios are at risk for future hypertension in early adulthood. Further evidence is required to support this finding and to evaluate whether to introduce screening tests early in an individual's lifetime.

Additional file
Additional file 1: Table S1. Reference values (sex- and height-specific 95th percentiles of blood pressure) for adolescent hypertension. Table S2. Baseline characteristics of the participants according to follow-up.

Availability of data and materials
The datasets analysed during the current study are not publicly available, since the datasets were generated through the use of multiple government funds and specific procedures are needed to share our research data. However, the datasets may be available from the corresponding author upon reasonable request.

Authors' contributions
YH analysed and interpreted the patient data and contributed to drafting the manuscript. LJM and JY contributed to data collection and management and provided advice for the analysis. KHC and SI critically reviewed and revised the manuscript. All authors read and approved the final manuscript.

Ethics approval and consent to participate
Informed consent was obtained from all participants in our study.
The study protocol was approved by the institutional review board of the Severance Hospital at Yonsei University Health System (4-2014-0914).

Consent for publication
Not applicable.
FTY720 Regulates Mitochondria Biogenesis in Dendritic Cells to Prevent Kidney Ischemic Reperfusion Injury

Dendritic cells (DCs) are central in regulating immune responses of kidney ischemia-reperfusion injury (IRI), and strategies to alter DC function may provide new therapeutic opportunities. Sphingosine 1-phosphate (S1P) modulates immunity through binding to its receptors (S1P1-5), and protection from kidney IRI occurs in mice treated with the S1PR agonist FTY720 (FTY). We tested if ex vivo propagation of DCs with FTY could be used as cellular therapy to limit the off-target effects associated with systemic FTY administration in kidney IRI. DCs have the ability to regulate innate and adaptive responses, and we posited that treatment of DCs with FTY may underlie improvements in kidney IRI. Herein, it was observed that treatment of bone marrow derived dendritic cells (BMDCs) with FTY induced mitochondrial biogenesis; FTY-treated BMDCs (FTY-DCs) showed significantly higher oxygen consumption rates and ATP production compared to vehicle-treated BMDCs (Veh-DCs). Adoptive transfer of FTY-DCs to mice 24 h before or 4 h after IRI significantly protected the kidneys from injury compared to mice treated with Veh-DCs. Additionally, allogeneic adoptive transfer of C57BL/6J FTY-DCs into BALB/c mice equally protected the kidneys from IRI. Propagating FTY-DCs from S1pr1-deficient DCs derived from CD11cCreS1pr1fl/fl mice, as well as blunting mitochondrial oxidation in wildtype (WT) FTY-DCs prior to transfer, abrogated the protection observed with FTY-DCs. We queried if DC mitochondrial content alters kidney responses after IRI, a novel but little studied phenomenon shown to be integral to regulation of the immune response. Transfer of mitochondria-rich FTY-DCs protects kidneys from IRI, as transferred FTY-DCs donated their mitochondria to recipient splenocytes (i.e., macrophages), and prior splenectomy abrogated this protection. Adoptive transfer of FTY-DCs either prior to or after ischemic injury protects kidneys from IRI, demonstrating a potent role for donor DC mitochondria in FTY's efficacy. This is the first evidence, to our knowledge, that DCs have the potential to protect against kidney injury by donating mitochondria to splenic macrophages to alter their bioenergetics, thus making them anti-inflammatory. In conclusion, the results support that ex vivo FTY720 induction of the regulatory DC phenotype could have therapeutic relevance, as these cells can be preventively infused to reduce acute kidney injury.
INTRODUCTION
The pathogenesis of kidney injury following kidney ischemia reperfusion (IR) involves a complex interaction between altered microcirculatory hemodynamics, renal parenchymal cells (endothelial and epithelial), and infiltrating immune cells (1,2). Dendritic cells (DCs), the major leucocyte subset in the kidney (3-5), contribute to both the innate and adaptive immunity of kidney IR injury (IRI) (6) through aberrant activation of immune cells (7-9). Considerable data support that the immune system mediates acute kidney injury (AKI) (10), yet many of the underlying mechanisms remain unclear. In preclinical mouse models, anti-inflammatory pharmacologic treatments have been shown to significantly attenuate tissue injury and loss of function (11-14). However, the side effects of these common anti-inflammatory therapies, combined with the lack of clinical data supporting the involvement of the immune system in AKI pathogenesis, have hindered the development of clinically tenable anti-inflammatory options. Therefore, dialysis currently remains the only treatment option available to AKI patients, underscoring the need for novel approaches to ultimately improve patient quality of life.

Our previously published work using mouse models of AKI (13,14), and that of others (15,16), has demonstrated that modulation of sphingosine 1-phosphate receptors (S1PRs) significantly influences AKI development and thus progression to chronic kidney injury. These receptors belong to a family of five G-protein coupled receptors (S1pr1-5) that modulate diverse physiological responses including "cellular growth and proliferation, angiogenesis, apoptosis, and lymphocyte trafficking" (17-20). Similar levels of S1PRs are expressed on both human and mouse leukocytes (21-23). FTY720, a potent immunosuppressant and synthetic S1P agonist, is currently in clinical trials for treatment of autoimmune diseases (24) and is effective in reducing graft rejection in preclinical mouse models (25,26) because it mediates potent immunosuppression.
Phosphorylated FTY720 (FTY720-P), the active form of FTY720, is a non-selective S1P analog that binds and activates four (S1PR1, 3-5) of the five known receptors for S1P (24,27). FTY720-dependent protection or diminished disease severity has been demonstrated in varied acute and chronic disease models, such as diabetes (28-33), multiple sclerosis [MS; reviewed in (34)], ischemic injury (35-46), and even clearance of viral infection (47). FTY720 is currently an FDA-approved treatment (Gilenya) for MS patients (48). In our previously published work, we have shown that this pan-S1PR agonist, FTY720, attenuated kidney IRI by directly activating S1P1 on proximal tubule (PT) cells, independent of its previously known function of binding S1P1 on B and T cells to induce canonical lymphopenia (14). FTY720 also reduces cisplatin-induced AKI (49). Deletion of S1P1 renders cultured and kidney PT epithelial cells more susceptible to cisplatin-induced injury (49), whereas overexpression of S1P1 protected PT cells from injury and conferred resistance to cisplatin-induced cell death at lower doses (49). One potential mechanism that we previously reported to mediate S1P1 protection in IRI and cisplatin-induced AKI was induction of mitochondrial biogenesis, which resulted in higher mitochondria numbers and ultimately preserved kidney function (49). Thus, we previously concluded in these published studies that S1P1 has a central role in stabilizing mitochondrial function and that FTY720 administration could represent a novel strategy in the prevention of AKI (14,49). However, use of pharmacological agents such as FTY720 has limitations due to off-target effects (binding to other S1P receptors) and other associated adverse side effects. Cell-based therapeutic approaches, on the other hand, have advantages: transferred cells are capable of sensing diverse signals, navigating to specific sites in the body, making immunological decisions, and executing complex responses.

Dendritic cells (DCs) are heterogeneous, professional antigen-presenting cells (APCs) distributed throughout the lymphoid and non-lymphoid tissues (50). Our previous studies demonstrated that S1P3-deficient (S1pr3−/−) mice are protected from renal IRI through a mechanism that involved BMDCs and their ability to respond as immune modulators that regulate innate and adaptive immune responses (13). Additionally, we had tested the therapeutic advantage of using S1pr3−/− BMDCs in DC transfer studies in a mouse kidney IRI model. Compared to mice treated with wild-type (WT) DCs, which had a significant rise in plasma creatinine, mice that received S1pr3−/− DCs were significantly protected from kidney IRI (12,13). S1pr3−/− DCs did not attenuate IRI in splenectomized, Rag1−/−, or DC-depleted (CD11c-DTR) mice (12), demonstrating that both spleen-derived cells, likely macrophages (CD169+ or F4/80+) or DCs (CD11c+ or CD103+), and T cells (CD4+ and Tregs) mediated this protection.

The aim of this study was to determine the potential protective mechanism(s) of FTY720-stimulated BMDCs in a preclinical mouse model of kidney IRI. Treatment of BMDCs ex vivo with FTY720 avoids the adverse off-target effects associated with systemic drug injections. Herein, we demonstrate that FTY720-treated BMDCs (FTY-DCs) accumulate in the recipient spleen as early as 30 min after adoptive transfer of cells via intravenous injection.
FTY-DC mitochondrial content was elevated in vitro, and we posited that transfer of FTY-DC mitochondria to splenic macrophages occurs. Indeed, in the spleen, FTY-DC interaction with splenic macrophages (CD169+ and F4/80+) was evident. Transplant of mitochondria from FTY-DCs reprogrammed the macrophage phenotype; macrophages were less immunogenic upon inflammatory stimuli in vivo and in vitro. Depletion of DC-derived mitochondria through varied approaches demonstrated that the oxidative capacity of DCs was critical to protection from AKI in response to IRI. Splenectomy, or pharmacologic ablation of FTY-DC mitochondrial function by combined treatment with rotenone and antimycin A (Rot/AA), abrogated the protection observed with FTY-DCs. Likewise, inhibiting FTY720 agonism by using S1P1 receptor-deficient DCs (CD11cCreS1pr1fl/fl) also reversed FTY-DC therapeutic efficacy. Overall, the interactions between FTY-DCs and splenocytes (macrophages) demonstrated that induction of the anti-inflammatory or immunosuppressive phenotype led to reduced injury, an effect that required the recipient spleen. Of note, adoptive transfer of DCs worked equally well in an allogeneic IRI model (C57BL/6 BMDC → BALB/c mice), suggesting that this cell-based therapy could be efficacious in transplantation. Finally, we provide seminal findings that DCs are mitochondrial donors, which illustrates a novel mechanism by which DCs regulate innate immune responses in acute injury.

Mice
All animals were handled, and procedures were performed, in adherence to the National Institutes of Health Guide for the Care and Use of Laboratory Animals, and all protocols were approved by the University of Tennessee Health Science Center and University of Virginia Institutional Animal Care and Use Committees. CD11cCre mice (Jackson Laboratories, Bar Harbor, ME) were purchased, and S1pr1fl/fl mice were generously provided by Dr. Richard L. Proia, NIH. The lines were crossed and bred as fl/fl with Cre to generate CD11cCreS1pr1wt/wt (control) or CD11cCreS1pr1fl/fl (DC-specific S1pr1 knockout) littermates. Phamfl/fl mice (51) (Jackson Laboratories, Bar Harbor, ME) were bred with CD11cCre mice to obtain CD11cCrePhamfl/fl mice. For all transfer studies, C57BL/6J and BALB/c mice were purchased from the National Cancer Institute, NCI (Frederick, MD). Mice were maintained in standard vivarium housing with a 12 h light/dark cycle on a chow diet, and water was freely available.

Renal Ischemia-Reperfusion Injury and Splenectomy (SPLNX)
Mice were anesthetized with an intraperitoneal injection of a ketamine (120 mg/kg) and xylazine (12 mg/kg) mixture, buprenorphine (0.15 mg/kg, subcutaneous injection) was administered as an analgesic, and mice were placed on a warm pad to maintain body temperature at 34.5-36 °C. Mice were then randomized to sham or IRI operation. Bilateral flank incisions were performed, and the renal vessels (vein and artery) were cross-clamped either on both sides or only on the left side. Body temperature was checked and maintained throughout the ischemic period using the ATC-2000 system (World Precision Instruments, Sarasota, FL). Sham-operated mice underwent the same procedure except for vessel clamping, and surgical wounds were closed. Male mice (8-12 wk old, C57BL/6 and BALB/c) were subjected to bilateral IRI (26 min of ischemia for C57BL/6 and 28 min for BALB/c mice, followed by 20-24 h of reperfusion) as previously described (3,7,52). Mice in which one kidney showed no reperfusion 24 h after ischemia were excluded from all analyses.
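For orientation, the anesthetic doses above are weight-based; the per-mouse arithmetic is sketched below. The body weight and stock concentration are hypothetical values for illustration, not from the study:

```python
# Weight-based dosing for the ketamine/xylazine/buprenorphine regimen above.
# Body weight and stock concentration are hypothetical, for illustration only.
weight_g = 25.0                              # a typical adult mouse, assumed
ketamine_mg = 120 * weight_g / 1000          # 120 mg/kg  -> 3.0 mg
xylazine_mg = 12 * weight_g / 1000           # 12 mg/kg   -> 0.3 mg
buprenorphine_mg = 0.15 * weight_g / 1000    # 0.15 mg/kg -> 0.00375 mg

stock_mg_ml = 10.0                           # hypothetical ketamine concentration in the mix
inject_ml = ketamine_mg / stock_mg_ml
print(f"ketamine {ketamine_mg:.2f} mg + xylazine {xylazine_mg:.2f} mg; "
      f"~{inject_ml:.2f} ml i.p. of a {stock_mg_ml:g} mg/ml ketamine mix")
```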
For experiments that involved splenectomy (Splnx) prior to IRI, mice were anesthetized with an intraperitoneal injection of ketamine (120 mg/kg) and xylazine (12 mg/kg). The spleen was then removed through a small flank incision. Control, sham-operated mice underwent the same procedure except for splenic artery ligation and spleen removal. Sham and splenectomized mice recovered for 7 days prior to BMDC transfer for IRI studies.

Assessment of Kidney Function and Histology
Blood was collected under anesthesia from the retro-orbital sinus, and plasma creatinine (mg/dL) was determined using an enzymatic method with minor modifications of the manufacturer's protocol (twice the volume of sample; Diazyme Laboratories, Poway, CA), as previously reported (53). For histology, kidneys were fixed overnight in 0.2% sodium periodate-1.4% DL-lysine-4% paraformaldehyde in 0.1 M phosphate buffer, pH 7.4 (4% PLP), and embedded in paraffin. Kidneys were prepared for hematoxylin and eosin (H&E) staining as previously described (3) and viewed by light microscopy (Zeiss AxioSkop). Photographs were taken, and brightness/contrast adjustments were made, with a SPOT RT camera (software version 3.3; Diagnostic Instruments, Sterling Heights, MI). For quantification of the tubular injury score, sections were assessed by counting the percentage of tubules that displayed cell necrosis, loss of brush border, cast formation, and tubule dilation, as follows: 0 = normal; 1 = <10%; 2 = 10 to 25%; 3 = 26 to 50%; 4 = 51 to 75%; 5 = >75%. Five to 10 fields from each outer medulla were evaluated and scored in a blinded manner. The histological change was expressed as acute tubular necrosis (ATN), scored as previously described (13,54).

Mitochondria Isolation and Quantification
Mitochondria were isolated from mouse liver or BMDCs as previously described (60). Briefly, two pieces of ~6 mm mouse liver biopsy were homogenized in homogenization buffer (300 mmol/L sucrose, 10 mmol/L HEPES-KOH, 1 mmol/L EGTA-KOH, pH 7.4) in a C tube (Miltenyi Biotec, Cambridge, MA) with a gentleMACS dissociator using the "m-mito tissue" preset program. The homogenate was incubated on ice for 10 min with 1 mg of Subtilisin A protease from Bacillus licheniformis (Sigma-Aldrich, St. Louis, MO). The digested homogenate was serially filtered through 2 × 40 µm Falcon cell strainers (Thermo-Fisher, Waltham, MA) and 1 × 10 µm PluriSelect mesh (PluriSelect, San Diego, CA) saturated with ice-cold homogenization buffer. Mitochondria were collected by centrifuging the filtrate at 3,500 × g at 4 °C for 10 min and resuspended in cold 1x PBS for further use. The protein concentration of isolated mitochondria was determined using the Bradford assay according to the manufacturer's recommendations. Isolated mitochondria were kept on ice and used within 1 h after isolation. In some experiments, isolated mitochondria were sonicated and kept on ice before injection; all isolated mitochondria were injected within 1 h of isolation. ATP concentrations of isolated mitochondria were determined using luminescent CellTiter-Glo reagent (Promega) according to the manufacturer's instructions. Isolated mitochondria were injected (i.v.; 0-100 µg/mouse) 1 day before the spleen was harvested for single-cell preparation and in vitro stimulation with LPS (100 ng/ml) for 6 h.
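The semiquantitative tubular injury scale above maps each field's percentage of injured tubules onto a 0-5 score, averaged over 5-10 outer-medulla fields. A minimal sketch of that mapping (illustrative; the handling of non-integer boundary percentages is our assumption):

```python
def atn_score(pct_injured):
    """0-5 score from the % of tubules showing necrosis, brush-border loss,
    casts, or dilation: 0 = normal; 1 = <10%; 2 = 10-25%; 3 = 26-50%;
    4 = 51-75%; 5 = >75% (bin boundaries for non-integers are assumed)."""
    if pct_injured <= 0:
        return 0
    if pct_injured < 10:
        return 1
    if pct_injured <= 25:
        return 2
    if pct_injured <= 50:
        return 3
    if pct_injured <= 75:
        return 4
    return 5

# Score 5-10 outer-medulla fields per kidney and average, as described above.
fields = [30, 45, 20, 55, 35]                            # hypothetical % injured per field
print(sum(atn_score(p) for p in fields) / len(fields))   # -> 3.0
```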
RAW264.7 cells (TIB-71, ATCC, Manassas, VA) were treated with isolated mitochondria from DCs (with and without sonication, 10 µg/ml) for 24 h before stimulation with LPS (100 ng/ml) or analysis with a Seahorse Bioanalyzer (Agilent Technologies, Santa Clara, CA).

Seahorse Flux Bioanalyzer
Seven-day-old BMDCs were transferred to Seahorse 24-well tissue culture plates, and the oxygen consumption rate (OCR) was measured and parameters were calculated as previously described (49), with the following modification. Prior to the assay, the media was changed to unbuffered DMEM (Gibco #12800-017, pH 7.4, 37 °C), and cells were equilibrated for 30 min at 37 °C. After measuring the basal respiratory rate, oligomycin (Sigma; 2 µM; inhibits ATP synthase and thereby blocks ATP-coupled respiration), FCCP (carbonyl cyanide 4-(trifluoromethoxy)phenylhydrazone; Sigma; 1.5 µM; a mitochondrial uncoupling agent that uncouples respiration from ATP synthesis to determine the maximal respiratory rate), and the electron transport chain (complex I and III) inhibitors rotenone (Sigma; 0.5 µM) and antimycin A (Sigma; 0.5 µM; to eliminate all mitochondrial respiration) were injected sequentially during the assay. OCR was measured in 3-min intervals over a total period of 2 h. Basal mitochondrial respiration, ATP-linked respiration, proton leak (non-ATP-linked oxygen consumption), maximal respiration, non-mitochondrial respiration, reserve respiratory capacity, respiratory control ratio, and coupling efficiency were determined in whole cells according to Brand et al. (61). N = 4-5 wells were used for each experimental group, and experiments were repeated a minimum of 3 times.
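To make the parameter derivation concrete, the standard mitochondrial stress-test arithmetic of Brand et al. can be applied to the phase means of a single well's OCR trace. The sketch below uses hypothetical OCR values, not data from the study:

```python
# Mito stress-test arithmetic (after Brand et al.); OCR values are hypothetical
# phase means (pmol O2/min) for one well.
basal_ocr   = 120.0   # before any injection
post_oligo  = 45.0    # after oligomycin (ATP synthase inhibited)
post_fccp   = 150.0   # after FCCP (uncoupled)
post_rot_aa = 15.0    # after rotenone + antimycin A

non_mito       = post_rot_aa                  # non-mitochondrial respiration
basal_resp     = basal_ocr - non_mito         # basal mitochondrial respiration
atp_linked     = basal_ocr - post_oligo       # ATP-linked respiration
proton_leak    = post_oligo - non_mito        # non-ATP-linked oxygen consumption
maximal_resp   = post_fccp - non_mito         # maximal respiration
spare_capacity = maximal_resp - basal_resp    # reserve respiratory capacity
rcr            = maximal_resp / proton_leak   # respiratory control ratio
coupling_eff   = atp_linked / basal_resp      # coupling efficiency

print(f"basal {basal_resp}, ATP-linked {atp_linked}, leak {proton_leak}, "
      f"max {maximal_resp}, spare {spare_capacity}, "
      f"RCR {rcr:.1f}, coupling {coupling_eff:.2f}")
```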
Flow Cytometric Analysis, Western Blot, ELISA, and 23-Plex Luminex
Flow cytometry was used to analyze kidney leukocyte content. In brief, kidneys were extracted, minced, and digested (1 mg/ml collagenase) as described (7). After blocking nonspecific Fc binding with anti-mouse CD16/32 (2.4G2), fresh kidney suspensions were incubated with fluorophore-tagged anti-mouse CD45 (30-F11) to determine total leukocyte cell numbers. CD45-labeled samples were further labeled with different combinations of fluorophore-tagged anti-mouse F4/80 (BM8), GR-1 (Ly6G), CD11b (M1/70), and CD11c (integrin alpha X chain, HL3). 7-AAD (BD Biosciences) was added 15 min before analyzing the sample to separate live from dead cells. Appropriate fluorochrome-conjugated, isotype-matched, irrelevant mAbs were used as negative controls. Flow cytometry data acquisition was performed on a FACSCalibur (Becton Dickinson, San Jose, CA) with a Cytek 8-color flow cytometry upgrade (Cytek Development, Inc., Fremont, CA). Data were analyzed with FlowJo software 9.0 (Tree Star, Ashland, OR). All antibodies (except as noted) were from eBioscience and were used at a concentration of 5 µg/ml.

ELISA: Media were collected from BMDCs or splenocytes treated for 6 or 24 h with either 25 or 100 ng/ml LPS. TNFα levels were measured using mouse ELISA kits (Invitrogen, Carlsbad, CA) following the manufacturer's protocol. BMDCs treated with and without LPS for 24 h were used to isolate total protein using RIPA lysis buffer supplemented with protease and phosphatase inhibitor cocktail (Thermo Fisher Scientific, Vernon Hills, IL). Equal volumes of the lysate supernatants were either boiled for 10 min at 100 °C (for GAPDH) or left at room temperature for 10 min (for the rodent OXPHOS cocktail; Abcam, Cambridge, MA) with Laemmli buffer and β-mercaptoethanol. A total of 20 µg of protein was separated on a 10% SDS-PAGE gel and transferred to PVDF membranes. PVDF membranes were incubated overnight with primary antibodies against GAPDH (1:1000, Santa Cruz Biotechnology) and the rodent OXPHOS cocktail (1:1000). Blots were then washed and incubated for 1 h with horseradish peroxidase-conjugated anti-mouse secondary antibodies (1:4000; Santa Cruz Biotechnology). Bands were visualized by chemiluminescence according to the manufacturer's protocol with SuperSignal West Pico chemiluminescent substrate (Thermo Fisher Scientific) and quantified with ImageJ. The Bio-Plex Pro Mouse Cytokine 23-Plex Immunoassay (Bio-Rad, Hercules, CA) was used to check serum cytokine levels 24 h after bilateral kidney IRI.

Data and Statistical Analysis
GraphPad Prism 8 (GraphPad Inc.), SigmaPlot 11.0 (Systat Software Inc.), and Canvas X (ACD Systems of America Inc.) were used to analyze and present the data. Data were analyzed, after transformation if needed to generate a normal distribution, by 2-tailed t-test or 1-way ANOVA with post-hoc analysis as appropriate. A two-tailed unpaired t-test was used for the analysis of two groups. p < 0.05 was used to indicate significance.

FTY720 Induces Metabolic Reprogramming in WT BMDCs
WT DCs were isolated and propagated for 8 days in the presence of GM-CSF and either vehicle (1x PBS) or FTY720 (1 µM). Eight-day-old DCs were labeled with MitoTracker CMXRos Red (50 nM). Compared to vehicle treatment, FTY720 treatment increased mitochondrial content in BMDCs (Figure 1A). Similarly, there was significantly higher labeling with MitoSox (5 µM) and MitoTracker Green (100 nM) after overnight LPS stimulation in FTY-DCs compared to Veh-DCs (Figure 1B). FTY-DCs displayed significantly elevated mRNA levels of peroxisome proliferator-activated receptor gamma co-activator 1-alpha (Pgc1a) in response to LPS, but this LPS induction was absent in Veh controls (Figure 1C). To determine if the changes in mitochondrial content also altered mitochondrial function, bioenergetic analysis was undertaken. LPS blunted oxygen consumption in Veh-DCs, as expected (Figure 1D, blue to black line) (62). Interestingly, FTY-DCs had a higher basal OCR (Figure 1D, green to blue line at time zero). Upon treatment with the uncoupler FCCP, FTY-DCs failed to increase maximal respiratory capacity in the unstimulated state, and this was reduced even further with LPS stimulation, demonstrating that FTY ablates spare respiratory capacity (likely because the cells are already near maximal OCR in the basal state). When ATP production was quantified, LPS reduced ATP production as measured by OCR (blue to black). FTY-DCs demonstrated significantly greater ATP production compared to Veh-DCs in both unstimulated and LPS-stimulated DCs (Figure 1E). These data indicate that propagation of BMDCs in the presence of FTY720 increased mitochondrial content, basal OCR, and ATP production, suggesting the potential for an anti-inflammatory phenotype in DCs.

FTY720 Induces Immune Reprogramming in WT BMDCs
LPS stimulation of BMDCs increased the expression of enzymes (iNOS) and cytokines typical of pro-inflammatory DCs. FTY-DCs dramatically blunted LPS-induced iNOS expression and nitrate in the media compared to Veh-DCs (Figures 1F,G). Likewise, FTY significantly blunted LPS-induced expression of Il1b and Tnfa and protein concentrations of TNFα compared to Veh-DCs treated with LPS (Figures 1H-K). Il12p40 gene expression was not regulated by FTY (data not shown).
In contrast, Il10, a cytokine often associated with anti-inflammatory immune cells, was significantly increased by LPS, but only in FTY-treated cells (Figure 1I). Interestingly, after LPS treatment, FTY-DCs had significantly lower expression levels of co-stimulatory antigen presentation molecules (CD80, CD86, and CD40) and MHCII compared to Veh-DCs (Supplemental Figure 1). FTY-DCs also had a lower expression level of PDL1 compared to Veh-DCs and maintained the PDL1/CD86 ratio after LPS stimulation compared to LPS-treated Veh-DCs (Supplemental Figure 1; Figure 1J).

Transfer of FTY-DC Protects Kidneys From Ischemic Injury
All DCs were activated with 100 ng/ml LPS prior to transfer in all syngeneic studies (B6 BMDCs to B6 mice). Half a million DCs were injected 1 day before bilateral kidney IRI. As controls, mice were injected with 1x PBS (no cell, NC, controls). Compared to NC- and Veh-DC-treated mice, FTY-DC treatment significantly protected the kidneys from injury (Figure 2A). Morphological changes (Figure 2B) paralleled the functional studies. FTY-DC treatment resulted in less infiltration of immune cells (CD45-labeled) compared to Veh-DC- or NC-treated mice (Figure 2C). Quantitative analysis by flow cytometry further demonstrated that FTY-DC-treated mice had fewer infiltrating neutrophils than NC- or Veh-DC-treated mice (Figures 2D-F). To determine if kidney injury genes, along with S1pr1, were regulated, we measured their relative kidney mRNA levels by qRT-PCR in DC-treated mice. Mice treated with FTY-DCs had significantly lower kidney mRNA levels of S1pr1, Ngal, and Kim1, and lower levels of Il6 (Figure 2G). These data indicate that FTY-DC-treated mice have significantly less inflammation (cytokine levels), which results in less infiltration of innate immune cells (PMNs) after kidney IRI. The expression level of S1pr1 increases after IRI compared to sham-operated mice in a time-dependent manner (54), possibly indicating initiation of a compensatory mechanism due to ischemic injury. Plasma samples from Veh-DC- and FTY-DC-treated mice were checked 24 h after bilateral ischemia using the 23-plex Luminex assay.

Injected DCs Transfer Mitochondria to Splenic Macrophages
Next, to evaluate whether DCs transfer mitochondria to recipient cells, we harvested BMDCs from CD11cCrePhamfl/fl mice, whose mitochondria carry a fluorescent tag (Figure 3A). Half a million Veh-DCs or FTY-DCs propagated from CD11cCrePhamfl/fl mice were injected i.v., and the signal in the spleen was evaluated 30 min or 24 h after injection. The spleen was labeled with anti-CD169 to identify marginal zone (MZ) macrophages and anti-F4/80 for red pulp (RP) macrophages, and the area without antibody labeling is designated white pulp (WP) (Figures 3B,C). Some green fluorescence signal, indicative of mitochondrial exchange from DCs to CD169+ macrophages, was evident at the 30-min time point (data not shown). At 24 h, strong signal from the injected DCs was demonstrated in proximity to and inside the various splenic macrophages (Figures 3B,C). In addition to possibly more mitochondrial transfer from FTY-DCs, there appears to be disruption of the MZ macrophages (CD169) with FTY-DC treatment, along with more mitochondrial signal in the red pulp compared to Veh-DCs (Figure 3C). This disruption of MZ macrophages in mice treated with FTY-DCs could be due to a mechanism similar to the one we previously demonstrated using S1pr3−/− DCs, which ultimately results in higher CD4+FoxP3+ Treg numbers in the white pulp (12).
Splenectomy (Splnx) Abrogates FTY-DC Dependent Protection
Since abundant signal from injected DCs (CD11cCrePhamfl/fl) was found in the spleen, we determined whether the spleen was important for FTY-DC-dependent protection after kidney IRI: mice underwent either sham or Splnx surgeries and were allowed to recover for 7 days. On day 8, half a million LPS-treated Veh-DCs or FTY-DCs were injected 1 day before bilateral kidney IRI, as above. In the absence of the spleen, FTY-DC-dependent protection was completely abrogated (Figure 4A). Histological evaluation also showed dramatic FTY-DC-dependent preservation of kidney architecture in spleen-intact mice (Figures 4B,C). Quantitative analysis by flow cytometry further demonstrated that FTY-DC-treated sham mice had fewer infiltrating neutrophils than Veh-DC-treated mice (Figures 4D,E); no change in neutrophil percentage or number was observed in Splnx FTY-DC-treated mice. In addition to the involvement of innate immune cells (macrophages) as possible mitochondria recipients from injected DCs, it is plausible that DCs could donate mitochondria to other adaptive immune cells.

Dendritic Cell S1P1 Is Required for FTY-DC Dependent Protection
FTY720-dependent protection is mainly due to its binding to S1P1 at low doses, potentially followed by S1P3 at higher doses. It is unclear which receptor FTY720 may signal through to induce such protection from IRI. To determine the mechanisms mediating the downstream effects of FTY720, CD11cCreS1pr1fl/fl (S1pr1−/− DC) and S1pr3−/− mice were used to harvest BMDCs, which were propagated in the presence of FTY720. C57BL/6J mice were injected with half a million LPS-activated FTY-CD11cCre (WT) DCs, FTY-S1pr1−/− DCs, or FTY-S1pr3−/− DCs 1 day before bilateral kidney IRI. FTY-CD11cCre (WT) DCs and, as expected from our previous studies (12,13), S1pr3−/− DCs treated with FTY protected mouse kidneys from injury. The protection was abrogated in mice treated with FTY-S1pr1−/− DCs (Figure 5A).

[Figure 5 | FTY-DCs require S1pr1 on BMDCs to protect kidneys from IRI. (A) Protocol for the experimental setup. FTY720 is a ligand for four of the five S1P receptors. Plasma creatinine (PCr, mg/dL) was measured 24 h after IRI. We tested if S1pr1 or S1pr3 was required for the FTY720-dependent regulatory DC phenotype. BMDCs were propagated from C57BL/6 WT, CD11cCreS1pr1fl/fl (S1pr1−/− DC), or S1pr3−/− mice and treated with FTY720. FTY-S1pr1−/− DCs do not protect kidneys from IRI. As demonstrated in our earlier published studies and confirmed again here, transfer of S1pr3−/− DCs with or without FTY720 significantly protects kidneys from IRI. These studies suggest that only S1pr1 is necessary for the FTY720-dependent regulatory DC phenotype.]

[Figure 6, panels (F,G) | (F) Mice were injected with various amounts of isolated mitochondria (0-100 µg/mouse). Spleens were harvested 1 day after mitochondria injection, and single-cell suspensions were treated ex vivo with 100 ng/ml LPS for 6 h; supernatants were analyzed by ELISA for TNFα. (G) Spleens from control (0 µg mito) treated mice were harvested and incubated with various amounts of isolated mitochondria (0-15 µg/well) for 1 day before stimulation with 100 ng/ml LPS for an additional 6 h; supernatants were analyzed for TNFα. Data represent means ± SEM, *p ≤ 0.05, **p ≤ 0.01, and ***p ≤ 0.001, one-way ANOVA followed by Tukey's post-test.]

Next, we tested if the route of delivery was important for FTY-DC-induced protection from kidney IRI.
Interestingly, the protection by FTY-DCs is lost if the BMDCs are administered through intraperitoneal or subcutaneous injection, or if the DCs are only treated acutely (overnight) with FTY720 (Figure 5B). The therapeutic benefit of FTY-DCs is maintained even when they are given 4 h after kidney ischemia (Figure 5C) or when tested in allogeneic transfer experiments (C57BL/6J DCs into BALB/c mice) (Figure 5D). In the allogeneic transfer studies, the BMDCs were not activated with LPS prior to transfer and equally protected mouse kidneys from IRI. This could be due to the involvement of adaptive immunity with FTY-DCs.

Mitochondria Function Is Critical in FTY-DC Dependent Protection
Next, we tested whether transferred FTY-DCs require functional mitochondria for protection (Figures 6D,E). Splenocyte cultures from mice treated with FTY-DC (Rot/AA) (blue) or FTY-DC (CytoD) (yellow) had higher levels of TNFα compared to FTY-DC-treated mice, possibly due to less mitochondrial transfer from Rot/AA- or CytoD-treated FTY-DCs to splenocytes. We noted that splenocytes isolated from mice treated with Veh-DCs had significantly less production of TNFα with 100 ng/ml LPS, suggesting that Veh-DCs also donate mitochondria to splenocytes (as also shown in Figure 3), although to a lesser extent than FTY-DCs. To test the hypothesis that the increased mitochondria numbers expected with FTY-DCs are responsible for the inhibitory effect on TNFα production by LPS-treated splenocytes, we treated mice with different doses of healthy isolated mitochondria. To test if dose-dependent uptake of mitochondria was responsible for inducing an anti-inflammatory phenotype in splenocytes, we injected mice with various amounts of isolated labeled mitochondria and found that the injected mitochondrial signal was mainly found in splenic macrophages as early as 30 min after injection (data not shown). To check whether injected mitochondria are found in the recipient mouse spleen, we used either CD11cCrePhamfl/fl BMDCs or human HEK293 cells to isolate mitochondria in separate experiments. Mice injected with mitochondria isolated from CD11cCrePhamfl/fl BMDCs were imaged 24 h after injection (i.v.). As in the DC-transfer experiments shown in Figure 3, the systemically injected labeled mitochondria in the spleen were predominantly found in F4/80+ and CD169+ macrophages (data not shown). However, unlike the data shown in Figure 3, signal associated with systemically injected mitochondria was also found in various other tissues, including the kidneys (data not shown). Using 50 µg of mitochondria isolated from human HEK293 cells, we were able to follow, by RT-PCR with human and mouse mtDNA primers, mtDNA levels in various tissues over time. In the spleen, the relative h-mtDNA/m-mtDNA level increased as early as 30 min after injection and was higher at 24 h after injection compared with uninjected mice (data not shown). Mice were intravenously injected with various amounts (0-100 µg/mouse) of mitochondria isolated from mouse liver. Splenocytes from mitochondria-treated mice were cultured 24 h after injection and stimulated with LPS for 6 h; total splenic single-cell suspensions (~100,000/well) were treated with 100 ng/ml LPS. Mice treated with mitochondria showed a significant dose-dependent decrease in TNFα production compared to control (0 µg mitochondria) treated mice (Figure 6F).
Additionally, treatment of control splenocytes (0 µg mitochondria) with isolated mitochondria ex vivo (0-15 µg/well) 1 day before treatment with LPS also significantly reduced TNFα in 6-h cultures in a dose-dependent manner (Figure 6G).

FTY-DC Are More Efficient Mitochondria Donors Compared to Veh-DC
Using a coculture of BMDCs and RAW264.7 cells (a mouse macrophage cell line), we tested the efficiency of FTY- vs. Veh-DCs at donating mitochondria. Prior to setting up the co-culture, RAW264.7 cells were labeled blue using CellTrace Violet (CT-Violet) proliferation dye. All analyses (imaging and flow cytometry) were done after 24 h. Compared to Veh-DCs, co-culture of RAW264.7 cells with FTY-DCs showed more mitochondrial transfer by immunofluorescence (Figures 7A,B) and by quantification, gating on CT-Violet (RAW264.7) cells and evaluating the amount of donor (Pham) green mitochondrial signal (Figure 7C). Co-cultures of RAW264.7 (blue, CT-Violet) cells with FTY-DCs (green mitochondria) had significantly more mitochondria donation than Veh-DC co-cultures (Figure 7D).

Uptake of Healthy Mitochondria by Macrophages Induces a Less Immunogenic Phenotype
To determine if uptake of healthy mitochondria by RAW264.7 cells changes their responses to LPS, we repeated the above study with isolated mitochondria rather than DC-dependent donation. BMDCs were again propagated from CD11cCrePhamfl/fl mice, and mitochondria were isolated from 8-day-old Veh-DCs. RAW264.7 cells were treated with 10 µg/well of isolated mitochondria for 24 h. Some of the treated RAW264.7 cells were used for Seahorse analysis, and the rest were treated with 100 ng/ml LPS for an additional 24 h for gene analysis. As a control, equal amounts of sonicated mitochondria (Son-Mito) were added to separate wells for 24 h. Treatment of RAW264.7 cells with healthy mitochondria significantly increased the basal oxygen consumption rate (OCR) and ATP production compared to vehicle-treated cells, and use of Son-Mito abrogated these effects (Figures 8A-C). Mitochondria-treated RAW264.7 cells were also analyzed for uptake of labeled mitochondria 24 h after incubation; the added mitochondrial signal appears perinuclear in location (white arrows, Figure 8D). We next tested if the addition of mitochondria regulated gene expression in RAW264.7 cells stimulated with LPS. Compared to untreated RAW264.7 cells (Veh/LPS), cells treated with healthy mitochondria had lower mRNA expression levels of Nos2, Tnfa, Il1b, and Il16 after LPS stimulation (Mito/LPS). However, treatment of RAW264.7 cells with Son-Mito abrogated these inhibitory changes compared to cells treated with healthy, functional mitochondria (Figures 8E-H). Data were calculated as relative fold changes compared to Veh/LPS-treated cells (dashed line, Figures 8E-H).
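The fold changes in Figures 8E-H are expressed relative to Veh/LPS-treated cells. Assuming the common 2^(-ΔΔCt) normalization for qRT-PCR (the exact calculation is not spelled out here, and the Ct values below are hypothetical), the arithmetic is:

```python
def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Relative expression (treated vs. control) normalized to a reference gene."""
    d_trt = ct_target_trt - ct_ref_trt    # e.g., Tnfa vs. a housekeeping gene, Mito/LPS cells
    d_ctl = ct_target_ctl - ct_ref_ctl    # e.g., Tnfa vs. a housekeeping gene, Veh/LPS cells
    return 2 ** -(d_trt - d_ctl)

# A value < 1 means lower expression than the Veh/LPS reference (the dashed line):
print(round(fold_change(26.0, 18.0, 24.5, 18.0), 2))  # -> 0.35
```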
DISCUSSION
In the current study we demonstrated that the immunosuppression and protection from kidney IRI induced by adoptive transfer of FTY-DCs depend on the recipient spleen, on DC S1P1, and on the functional viability of the transferred DC mitochondria. Furthermore, the protective effects of FTY-DCs involve donation of mitochondria to splenic macrophages, making them less immunogenic. In addition, our study demonstrates, for the first time to our knowledge, that BMDCs, like mesenchymal stem cells (63), have the potential to donate mitochondria to induce an immunosuppressive phenotype in recipient cells.

Dendritic Cells in Acute Kidney Injury
AKI is a major health burden without major pharmacological advances in its prevention or treatment (64). Additionally, current therapies for allograft rejection, cancer, or autoimmune diseases use non-specific immunosuppressive drugs that are associated with adverse side effects and are limited by the lack of antigen-specific tolerance. DCs are a heterogeneous group of cells important in immunity or tolerance, and the idea of using tolerized DCs in cell-based therapy of cancer, autoimmune disease, and transplantation has been under investigation for the past two decades (65). However, most studies have focused on the induction of T cell-tolerogenic responses. Immune regulation of the innate immune response via tolerogenic DCs is critically important in bridging innate and adaptive immunity and provides the foundation for use in transplant tolerance of allograft injury (66). Cell-based therapy using regulatory immune cells [Tregs (67), myeloid cells (68), or DCs (69,70)] is a strategy with the potential to induce antigen-specific tolerance. Pharmacological or biological strategies induce regulatory or tolerogenic DCs (Tol-DCs) (71), which are immature, maturation-resistant, or alternatively activated cells that express low levels of MHC and co-stimulatory molecules. Compared with mature DCs, immature DCs interact actively with T cells and direct them into a regulatory response. Depletion of DCs significantly protects mouse kidneys from IRI (6,13), and a dose-dependent increase in BMDC numbers exacerbates kidney injury (13), suggesting that DCs play a major role in inducing AKI. As our current and previously published studies demonstrate, injected BMDCs accumulate in the spleen (Figure 3) after systemic infusion (72) and can persist for two weeks post-injection (73). In kidney IRI, DCs tolerized with an A2AR agonist (74) or DCs deficient in S1pr3 (13) attenuated AKI. Our current study, using DCs harvested from CD11cCrePhamfl/fl mice, further demonstrates that transferred DCs can donate their mitochondria to recipient cells, thus making them less immunogenic.

Role of the S1P Receptor Agonist FTY720 in Kidney Injury and Dendritic Cells
S1P1 activation is important for maintaining cell viability; global deletion is embryonically lethal (75). We have previously demonstrated that the protective effect of the S1P1 agonists FTY720 and SEW2871 in IRI (54) and cisplatin-induced nephrotoxicity (14,49) was mediated by activation of S1P1 expressed on PT cells, independent of lymphopenia (14). Others have also shown that FTY720 can act as an innate immune system immunomodulator, a role beyond its prominent effects on lymphocyte recirculation (76). In another study, using a mixed lymphocyte reaction (MLR), FTY720-treated human DCs exhibited reduced antigen presentation and altered cytokine production (77), and systemic injection of FTY720 was also found to block DC trafficking (78). Our current data (Supplemental Figure 1) and others have previously demonstrated that FTY720 alone does not affect the BMDC surface markers CD11c, MHCII, CD40, CD80, and CD86, indicating no change in viability. However, BMDCs propagated in the presence of FTY720 do have an immunosuppressive phenotype upon stimulation (LPS, CD40L, or mixed lymphocyte reactions), and transfer of these immunosuppressive BMDCs confers protection in various models (56,77,79,80). In many of these studies, the protective effects of FTY720-treated BMDCs were due to infusion of these immunosuppressive cells to block T cell responses.
FTY-DCs in our study are immature compared to Veh-DCs after LPS stimulation, and transfer of these FTY-DCs could potentially have regulatory effects on adaptive immunity. In our experiments we did observe a higher number of Tregs in the spleens of FTY-DC-treated mice, but the exact mechanism of this is yet to be determined.

Role of Mitochondria in Dendritic Cells and Macrophages
DC and macrophage functions are regulated by mitochondrial metabolism. Type 1 macrophages (81,82) and immunogenic DCs have high glycolytic rates (62). The activation of DCs or macrophages by several TLR agonists (LPS or CpG) leads to a rapid increase in glycolysis followed by a decrease in OXPHOS and mitochondrial membrane potential (62,83,84). These data indicate that uptake of naked/free mitochondria or DC-derived mitochondria induces, in a dose-dependent manner, an anti-inflammatory phenotype, although the exact mechanism is currently unknown.

Our current findings indicate that DCs have the potential to donate mitochondria to induce immunological changes in the recipient cells, a protective mechanism previously shown to be employed by mesenchymal stem cells (86) and bone-marrow-derived stromal cells (87). As demonstrated in our earlier studies (12), injected bone-marrow-derived DCs are predominantly found in the recipient spleen as early as 30 min after injection, and the signal persists up to 72 h. More importantly, using transgenic mice carrying fluorescently labeled (Pham) mitochondria to propagate DCs (CD11cCrePhamfl/fl), our study is the first to demonstrate that, in addition to homing to the spleen, injected DCs donate mitochondria to splenic macrophages. Compared to naive DCs, DCs propagated in the presence of FTY720 (FTY-DCs) are more efficient at donating mitochondria to recipient splenocytes, mainly to macrophages. The exact mechanism of how FTY-DCs donate mitochondria is currently unknown, but it does involve gap junctions and actin polymerization, as treatment with inhibitors (CytoD or CBX) abrogates the protection by FTY-DCs. Our current study analyzed the involvement of macrophage-dependent innate immunity in FTY-DC-dependent protection. However, we did note that in the spleens of FTY-DC-treated mice there was an increase in labeling of white pulp CD4+FoxP3+ cells, in addition to disrupted CD169+ labeling of the MZ, similar to what we previously demonstrated using S1pr3−/− DCs (12). Thus, in addition to donating mitochondria to splenic macrophages, FTY-DCs could also regulate adaptive immune responses, resulting in higher Treg numbers. A limitation of our current study is that the mouse kidney IRI model is acute (<2 days); thus, it is possible that if mice were followed for longer periods after infusion of FTY-DCs, especially in allogeneic transfers (C57BL/6J BMDCs → BALB/c mice or the inverse), we might observe a significant change in adaptive immune responses, including higher Treg numbers. Since FTY-DCs are immunogenically immature (low CD80, CD86, and MHCII, and higher IL10) after LPS stimulation, and injection of these FTY-DCs increases splenic Tregs, we are in the process of testing whether FTY-DCs can be used to delay rejection in an allogeneic mouse model of heterotopic heart transplantation. Lastly, if higher mitochondria numbers in FTY-DCs indeed induce the protection we observed, it would be interesting to test whether artificially increasing DC mitochondria numbers (mitochondria transplant) has a similar therapeutic advantage.
This is especially important since, as the current study demonstrates, DCs must be propagated in the presence of FTY720 from the start of the BMDC cultures; acute (overnight) treatment of DCs with FTY720 does not protect kidneys from IRI.

In summary, we have demonstrated that BMDCs can regulate the innate immune response by donating mitochondria. The anti-inflammatory responses induced by FTY-DCs are dependent on the spleen and on the presence of S1P1 receptors. In the spleen, FTY-DCs donate mitochondria to splenic macrophages (F4/80+ and CD169+) more efficiently than Veh-DCs. Dose-dependent uptake of mitochondria by splenic and RAW264.7 macrophages induces metabolic reprogramming that is a key driver of the anti-inflammatory phenotype. We conclude that regulatory FTY-DCs may be useful in kidney IRI as well as in other inflammatory states such as transplantation and autoimmune disorders.

DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.

ETHICS STATEMENT
The animal study was reviewed and approved by the University of Tennessee Health Science Center and University of Virginia Institutional Animal Care and Use Committees.

AUTHOR'S NOTE
The current abstract was previously presented as an oral presentation at the American Transplant Congress in 2019 and published online (https://doi.org/10.1111/ajt.15405).
2020-06-23T13:09:03.364Z
2020-06-23T00:00:00.000
{ "year": 2020, "sha1": "1ad4549f1cde850209d72d294e23f3b0182e05e7", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2020.01278/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "1ad4549f1cde850209d72d294e23f3b0182e05e7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
255816170
pes2o/s2orc
v3-fos-license
Dwarfism with joint laxity in Friesian horses is associated with a splice site mutation in B4GALT7 Inbreeding and population bottlenecks in the ancestry of Friesian horses have led to health issues such as dwarfism. The limbs of dwarfs are short and the ribs protrude inwards at the costochondral junction, while the head and back appear normal. A striking feature of the condition is the flexor tendon laxity that leads to hyperextension of the fetlock joints. The growth plates of dwarfs display disorganized and thickened chondrocyte columns. The aim of this study was to identify the gene defect that causes the recessively inherited trait in Friesian horses, in order to understand the disease process at the molecular level. We have localized the genetic cause of the dwarfism phenotype by a genome-wide approach to a 3 Mb region on the p-arm of equine chromosome 14. The DNA of two dwarfs and one control Friesian horse was sequenced completely, and we identified the missense mutation ECA14:g.4535550C>T that cosegregated with the phenotype in all Friesians analyzed. The mutation leads to the amino acid substitution p.(Arg17Lys) of xylosylprotein beta-1,4-galactosyltransferase 7, encoded by B4GALT7. The protein is one of the enzymes that synthesize the tetrasaccharide linker between protein and glycosaminoglycan moieties of proteoglycans of the extracellular matrix. The mutation not only affects a conserved arginine codon but also the last nucleotide of the first exon of the gene, and we show that it impedes splicing of the primary transcript in cultured fibroblasts from a heterozygous horse. As a result, the level of B4GALT7 mRNA in fibroblasts from a dwarf is only 2 % of normal levels. Mutations in B4GALT7 in humans are associated with Ehlers-Danlos syndrome progeroid type 1 and Larsen of Reunion Island syndrome. Growth retardation and ligamentous laxity are common manifestations of these syndromes. We suggest that the identified mutation of equine B4GALT7 leads to the typical dwarfism phenotype in Friesian horses due to deficient splicing of transcripts of the gene. The mutated gene implicates the extracellular matrix in the regular organization of chondrocyte columns of the growth plate. Conservation of individual amino acids may not be necessary at the protein level but instead may reflect underlying conservation of nucleotide sequences that are required for efficient splicing.

Background

A dwarfism trait has been segregating in the Friesian horse breed for decades [1] (OMIA 000299-9796 [2]). Characteristic of the trait is the physeal growth retardation of limbs and ribs, resulting in a disproportionate form of dwarfism. The affected horses exhibit hyperextension of the fetlock joints of all limbs with varying severity. Flexor tendon laxity, which is often seen in newborn foals of all breeds, fails to recover in dwarf foals and instead tends to increase further with aging. As a consequence, these dwarf Friesians develop an abnormal gait in which the limbs undergo extreme outward rotation at the level of the carpus and hocks. The ribcage is abnormal in most cases, with thickened and S-shaped costochondral junctions, leading to an inward protrusion of the chest at the level of Th10-16 (Fig. 1b, c). Mature dwarfs have a head of the same size as unaffected horses, a broader chest with narrowing at the costochondral junction, a disproportionally long back and abnormally short limbs. The abdomen has a weak and rounded appearance, and the musculature over the body is poorly developed.
Involvement of the hypothalamic-pituitary growth axis in the pathogenesis of the condition has been excluded [3]. A monogenic recessive mode of inheritance is most likely and, considering the breed structure with a small number of founders and narrow population bottlenecks, it is expected that the dwarfs are homozygous for the responsible gene mutation [4]. A genome-wide association study of 10 cases and 10 control Friesian horses has been reported earlier [4]. The dwarfism locus was assigned to the telomeric region of the p-arm of chromosome 14 (ECA14), although genome-wide significance was not reached. The aim of the present study was to confirm and further define the critical chromosome region of the locus and to identify the responsible gene mutation. The identification of the gene enables the comparison of the phenotype across species and enhances the understanding of the processes of growth and development.

Gene mapping

To substantiate the localization of the dwarfism gene on chromosome ECA14, we performed a genome-wide comparison of a group of dwarfs with a group of controls from the Friesian horse breed. The allelic association reached genome-wide significance in the telomeric region of the p-arm of ECA14, with a Bonferroni-corrected P_genome = 2.90 × 10⁻¹⁹ for BIEC2-239391 at position 3776009, the SNP most significantly associated with dwarfism (Fig. 2a). In total, 35 SNPs passed the Bonferroni-corrected significance level (1.68 × 10⁻⁶), and all were located on ECA14 between positions 1 and 9510581. Inspection of the genotypes of the individual horses in the region showed that only the dwarfs shared a 3 Mb haplotype homozygously, confirming that the phenotype originated from a single founder (Fig. 2b). The genotypes clearly pointed to recombination events that were evident in several cases and that placed the critical region between positions 3151847 and 6229282 on ECA14. According to annotation release 101 of the NCBI equine reference genome, the critical region contained 66 genes [5].

DNA sequence analysis

Full-genome DNA sequence data were generated for four dwarfs and three control Friesian horses by next-generation sequencing. The DNA sequence of the critical region of ECA14 of the dwarfs was compared with those of the controls, the reference genome, and the publicly available Quarter Horse [6]. As dwarfism has not been reported in the Quarter Horse breed, we assumed that the causative mutation is not present in this population and that the horse was homozygous for the reference allele. The variations of the dwarfs as compared with the reference genome were filtered by snpSift [7] for possible effects on amino acid sequence or splicing (Additional file 1). Using the Integrative Genomics Viewer [8], we then searched for the variations that were absent in the Quarter Horse and not homozygously present in the control Friesian horses. Only one nonsynonymous mutation fulfilled these criteria. The mutation was ECA14:g.4535550C>T in B4GALT7 and corresponds to XM_014730464.1:c.50G>A and XP_014585950.1:p.(Arg17Lys). Arginine and lysine are both basic amino acids that are interchangeably seen in many conserved protein domains. In this case, however, the arginine residue at position 17 of the equine B4GALT7-encoded protein xylosylprotein beta-1,4-galactosyltransferase, polypeptide 7 (galactosyltransferase I)
is strictly conserved in all vertebrates analyzed (Fig. 3a).

Fig. 1 Dwarfism in the Friesian horse breed. (a) A female dwarf next to two normal female Friesian horses; the dwarf has a height at the withers of 1.12 m, the horse in the middle a height of 1.54 m (close to the minimum of 1.53 m allowed by the breed standard), and the horse on the right a height of 1.66 m. (b) and (c) Photographic illustrations of the typical pectus excavatum phenotype in the Friesian dwarf from the right (b) and the left (c) side.

Nonetheless, the mutation was considered moderate by the snpSift analysis and benign by PolyPhen-2 [9]. The association of the mutation with the dwarfism phenotype was evaluated by Sanger DNA sequencing. All 29 dwarfs of which DNA was available were homozygous for the mutation (Fig. 4c, line 2). The 8 obligate carriers were heterozygous (line 3), and of a group of 177 Friesian horses, 22 were carriers of the mutation and 155 were homozygous for the reference allele (line 1).

RNA analysis

The mutated guanosine nucleotide is the last residue of exon 1 of B4GALT7, and the position of this first splice donor relative to the start codon of the gene is conserved in vertebrates (Fig. 3b). The nucleotide is second in the triplet coding for arginine, and since this amino acid is conserved, the guanosine is conserved with it. We wondered whether the selection pressure could have worked the other way around; that is, that the nucleotide itself was conserved not due to its amino acid coding properties but due to its splicing function. The mutation of guanosine to adenosine might affect splicing of the primary RNA transcript of the gene, and splicing requirements may block its propagation. According to the splice site predictor NNSPLICE 0.9 [10], the exon 1/intron 1 junction of the equine reference gene had a splice donor score of 0.96 on a scale of 0 to 1. The mutated nucleotide sequence of the dwarfs had a moderate score of 0.58, suggesting that the mutation could indeed interfere with splicing. To investigate the effect of the mutation on splicing of the primary transcript, we isolated RNA from cultured skin fibroblasts of a Friesian horse dwarf, of a heterozygous carrier of the B4GALT7 mutation, and of a Friesian horse that was homozygous for the reference allele. Synthesis of cDNA was followed by PCR with a forward primer derived from B4GALT7 exon 1 and a reverse primer from intron 1 or exon 2 (Fig. 4a). The RNA of the wild-type horse and the heterozygous horse yielded a splicing product of the expected length of 401 bp with the exonic primers (Fig. 4b, lanes 2 and 3). This product was less pronounced in the cDNA from the dwarf (lane 4). The primer set that included the intron 1 reverse primer produced a cDNA band of approximately 220 bp derived from unspliced RNA. This band was detected in the samples from the heterozygous horse and from the dwarf (lanes 6 and 7), but also, albeit weakly, in the sample from the wild-type horse (lane 5). Combination of the 3 primers in a semi-quantitative PCR showed spliced and unspliced products with similar intensities in the cDNA from the dwarf (lane 11). The wild-type horse and the heterozygous horse did not display the unspliced product of 220 bp with the 3-primer PCR (lanes 9 and 10). When we analyzed the cDNA sequence of the unspliced product from the heterozygous horse (lane 6), it was derived from the mutant allele only (Fig. 4c, line 4). The cDNA sequence of the properly spliced product from the same horse (lane 3) indicated that it was derived from the normal allele (Fig. 4c, line 5).
The B4GALT7 cDNA analysis confirms that the mutation r.50g>a leads to a splicing deficiency of the primary transcript. The exon 1/exon 2 primer set produced minor fragments that were larger than the expected length (Fig. 4b, lane 4). DNA sequence analysis of the fragments derived from the dwarf showed that the fragment of approximately … The aberrant splicing from the exon 1 donor due to the variant r.50g>a was associated with a severe reduction of expression of B4GALT7 at the mRNA level. Quantitative PCR measurement of cDNA fragments indicated that the concentration of transcripts from the gene in the fibroblasts from the dwarf was only 2 % of that in fibroblasts from a Friesian horse that did not carry the mutation (Additional file 2).

Discussion

Disproportionate dwarfism in Friesian horses is associated with a mutation in B4GALT7. The mutation changes a conserved arginine codon to a lysine codon. Both amino acid residues are basic, and the effect of the mutation is considered moderate by the snpSift prediction. The mutation also affects the last nucleotide of exon 1 of the gene. Unspliced cDNA fragments spanning the exon 1/intron 1 junction can be detected regardless of the genotype of the horses. However, the cDNA sequences from a heterozygous horse clearly show that RNA derived from the mutant allele is hardly spliced, in contrast to the RNA from the normal allele (Fig. 4c, lines 4 and 5). When an exonic and an intronic reverse primer are allowed to compete in a 3-primer PCR, only the cDNA from the dwarf displays the spliced and unspliced products in comparable amounts (Fig. 4b, lane 11). The normally spliced product is seen prominently in the wild-type horse and the horse heterozygous for the mutation, but the unspliced product cannot be discerned among the products from these horses (lanes 9 and 10). This semi-quantitative PCR and the cDNA sequence analysis of the products of the heterozygous horse confirm that the B4GALT7 mutation strongly reduces the splicing capacity of the exon 1/intron 1 junction. In the homozygous state, the mutation leads to low mRNA levels, and the expression of the gene is strongly reduced as measured by qPCR. The improperly spliced transcripts could be prone to nonsense-mediated decay. The nucleotide sequence AGgt of the exon 1 splice junction of B4GALT7 and its position with regard to the start codon are highly conserved (Fig. 3b). One could argue that the last nucleotide of the exon is expected to be conserved if the encoded arginine residue were essential for the function of the protein. This G residue is the second nucleotide of the codon, and all six triplets that code for arginine have a G residue at the second position. Thus, if the arginine is conserved, the guanosine is conserved with it. The first position of the codon under consideration is a conserved A residue, while 4 of the 6 possible arginine codons start with a C. A functional restriction on the encoded arginine residue would therefore not necessarily lead to conservation of the A residue of the AGgt splice junction. A hypothetical mutation of this A residue to a C would only lead to a moderate drop of the splice donor score, from 0.96 to 0.89. According to this prediction, a mutation of the second-to-last A residue to a C would be allowed, while in fact it is highly conserved. Recently, a mutation of the A residue of a splice donor site in IBA57 with the same AGgt junction sequence as exon 1 of B4GALT7 was shown to impede proper splicing, causing a severe leukoencephalopathy [11].
This mutation did not alter the encoded amino acid, and it stresses the importance of the exonic terminal nucleotide sequences for splicing at particular junctions. An in vitro splicing assay may resolve the importance of the second-to-last A residue of exon 1 of B4GALT7 for proper splicing. Consistent with our results, the NNSPLICE program assigns a much lower splice donor score of 0.58 to the mutation found in the Friesian dwarfs. Considering all our results, we conclude that the conservation of the exon 1 terminal sequence in vertebrates reflects a restriction imposed by a splicing requirement rather than by a functional requirement of the encoded amino acid. Characterization of naturally occurring mutations that are uncovered because of an association with disease can yield important insights into splicing requirements [12]. In humans, mutations in B4GALT7 cause the Ehlers-Danlos syndrome, progeroid type 1 (EDSP1, OMIM 130070) and Larsen of Reunion Island syndrome (LRS). Only 7 mutations have been described in relation to these recessively inherited syndromes [13-18]. Most patients were normal with respect to length and weight at birth but soon presented with growth retardation, osteopenia, facial dysmorphology, loose joints, bone dysplasias, loose skin and, in most cases, mild forms of mental retardation. Pectus carinatum was reported for a number of patients [16, 17]. The human phenotype is highly variable, even in patients sharing the same mutation homozygously [17]. A founder effect in a closed population on Reunion Island has led to at least 22 cases of LRS that were genetically confirmed. LRS was described as a subtype of Larsen syndrome [19]. The same mutation that causes LRS was observed homozygously in two siblings from another population who were diagnosed with EDSP1. The progeroid aspect was not observed in any of the genetically confirmed cases of EDSP1 or LRS, and it has been suggested to remove this term from the name of the EDSP1 syndrome [16, 18]. Clear similarities between the conditions in man and horse are growth retardation and hypermobile joints. Rib deformities have been observed in human as well as equine cases [1, 16, 17]. Pectus carinatum, reported in human cases, refers to a chest in which the rib cartilage has overdeveloped outward, leading to a 'chicken chest'. In the Friesian horse cases, on the other hand, the rib cartilage has overdeveloped inward, leading to what in humans is called pectus excavatum or 'shoemaker chest'. The dwarfism in the horse is described as a disproportionate growth disturbance because all limbs are short, while the head and back appear rather normal. In contrast, almost all confirmed human patients with LRS and EDSP1 display facial dysmorphism, and disproportional growth was not noted [16, 17]. Cognitive functions do not seem to be impaired in dwarf horses. Another clear difference between the phenotype in man and horse is that loose skin has never been observed in Friesian dwarfs. Atrophic scarring and/or delayed wound healing has been reported for a number of human patients but is never seen in Friesian horse dwarfs. The fibroblasts from one human patient displayed reduced proliferation rates [20], while the fibroblasts of the Friesian dwarf proliferated at least as fast as the fibroblasts from normal Friesians.
The differences in the clinical presentations between human patients and Friesian dwarfs may be due to the nature of the mutation in horses, which we think predominantly affects the expression level of a normally functioning protein. On the other hand, the protein may have rate-limiting key roles in processes that differ between the two species, such that loss of activity becomes manifest in different ways. The B4GALT7 gene is highly expressed in the proliferative zone of the growth plate in rat [16]. Deficiency of the encoded xylosylprotein beta-1,4-galactosyltransferase 7 apparently induces the irregularities of the chondrocyte columns seen in the growth plate of Friesian dwarfs [1]. The enzyme adds the second of four saccharides that form the linker between the protein core and the glycosaminoglycan moiety of proteoglycans. Proteoglycans are major components of molecular networks of the extracellular matrix. Mutations in any of the enzymes that build the saccharide linker cause a variety of rare syndromes with overlapping features, which are called linkeropathies (reviewed in [21]). Dwarfism in Friesian horses could therefore be considered a new presentation of a linkeropathy. Remarkably, this is the second gene with a role in protein glycosylation in which we have found a pathogenic mutation in Friesian horses. Earlier, we found a nonsense mutation in B3GALNT2 involved in muscular dystrophy with hydrocephalus in stillborn foals [22]. The encoded beta-1,3-N-acetylgalactosaminyltransferase is involved in glycosylation of alpha-dystroglycan, which is part of the complex that connects the cytoskeleton with the extracellular matrix.

Conclusions

We provide evidence indicating that dwarfism in Friesian horses could be caused by a splicing deficiency of B4GALT7 that severely reduces expression of the gene. The conservation of the affected nucleotide reflects a splicing requirement rather than a functional requirement of the encoded amino acid. The clinical picture of the Friesian horse dwarfs adds to the phenotypic variability observed in human patients with B4GALT7 mutations. Crosses between carriers can be prevented by screening breeding horses for the B4GALT7 mutation, and the dwarfism trait could thus be eliminated from the breed.

Phenotypes, genotypes and genome-wide association study

Friesian horses (n = 29) were diagnosed as dwarfs by local equine veterinarians in consultation with the Equine Clinic of Utrecht University, usually via a digital in vivo picture for confirmation of the phenotype. Thirteen of the horses were male, 11 were female, and the sex of 5 dwarfs was unknown. The group of unaffected controls (n = 65) consisted of Friesian horses without the characteristic appearance of dwarfism [1]. In addition, we obtained blood samples for DNA isolation from 8 parents of dwarfs and DNA samples from 177 Friesian horses that were part of a DNA bank maintained at the Dr. Van Haeringen Laboratory B.V. Blood samples were taken and DNA was isolated as described by Orr et al. [4]. Genotypes of 19 dwarfs and 65 controls were obtained using the Illumina® EquineSNP50 Genotyping BeadChip containing 54,602 SNPs. Quality control was performed using the check.marker function of the GenABEL package in R [23]. SNPs with MAF <5 % and call rate <90 % were discarded, leaving 29,840 SNPs (54.7 % of all SNPs) for the analysis. The ccfast function of the GenABEL package in R [23] was used to determine the significance of allelic differences between dwarfs and unaffected horses with a χ²-test (1 df). The Bonferroni-corrected significance level applied was 1.68 × 10⁻⁶; a minimal sketch of this threshold and the allelic test is given below.
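The multiple-testing threshold and the allelic test can be sketched in a few lines. This is an illustrative Python reimplementation, not the GenABEL code actually used, and the allele counts in the example are hypothetical:

```python
from scipy.stats import chi2

# 29,840 SNPs passed QC, so the Bonferroni-corrected threshold is
# 0.05 / 29840, which rounds to the 1.68e-6 level quoted in the text.
n_snps = 29840
bonferroni_alpha = 0.05 / n_snps
print(f"Bonferroni threshold: {bonferroni_alpha:.2e}")   # 1.68e-06

def allelic_chi2(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Chi-squared test (1 df) on a 2x2 allele-count table, as in ccfast."""
    n = case_alt + case_ref + ctrl_alt + ctrl_ref
    rows = (case_alt + case_ref, ctrl_alt + ctrl_ref)
    cols = (case_alt + ctrl_alt, case_ref + ctrl_ref)
    observed = (case_alt, case_ref, ctrl_alt, ctrl_ref)
    expected = (rows[0] * cols[0] / n, rows[0] * cols[1] / n,
                rows[1] * cols[0] / n, rows[1] * cols[1] / n)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)   # survival function gives the p-value

# Hypothetical counts: 19 dwarfs (38 alleles) all carrying the risk allele
# versus 65 controls (130 alleles) mostly carrying the reference allele.
print(allelic_chi2(38, 0, 20, 110))
```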
Homozygosity mapping in the significantly associated region was performed by eye to identify overlapping regions of homozygosity between dwarfs.

Genome sequencing

Four dwarf cases and three unrelated controls were paired-end sequenced with 150-nucleotide reads for the full genome on an Illumina NextSeq500 to an average coverage of 4-9x according to the manufacturer's protocols. To increase the power to detect causal candidate variants as fully homozygous variants, we merged the data for the four cases, yielding a mean coverage of 36x. Detection of recessive candidate variants was done with snpSift [28], fitting the model of reference or potential carrier status in the controls and a homozygous state in the cases. Moreover, coverage >10, a genotype quality of >30 and effect impact 'HIGH' or 'MODERATE' were required. Additional evaluation of the variant of interest was performed with PolyPhen-2 [9]. The observed possibly detrimental DNA variant of B4GALT7 was confirmed and evaluated in the complete cohort by Sanger sequencing of PCR fragments. The PCR primer sequences were 5'-AGTTTCTCGGAGTGTAGAG-3' (UP1F) and 5'-AGAGACATAGACCCTCAGAG-3' (IN1R). The PCR was performed with 50 ng genomic DNA, 3 U Platinum Taq DNA polymerase (Thermo Fisher Scientific, Waltham, MA), 2 mM MgCl2, 0.2 mM each dNTP, 0.5 μM each primer, 1 M betaine and 1× Platinum buffer. Temperature cycling conditions were 5 min at 95 °C; 35 cycles of 30 s at 95 °C, 30 s at 55 °C and 30 s at 72 °C; and a final elongation step at 72 °C for 10 min. All amplifications were performed on an ABI 9700 Thermal Cycler (Applied Biosystems, Foster City, CA). The PCR primers were degraded by addition of 1 U Exonuclease I (Thermo Fisher Scientific, Waltham, MA) and incubation for 15 min at 37 °C and 15 min at 85 °C. DNA cycle sequencing reactions were performed using BigDye v3.1 (Thermo Fisher Scientific, Waltham, MA) according to the manufacturer's protocol. The products were analysed on a 3130XL Genetic Analyzer (Applied Biosystems, Foster City, CA) and the data were analysed with Lasergene (version 11, DNASTAR). Homologous DNA and protein sequences from different species were retrieved from GenBank and aligned one by one by eye. Identities and differences were indicated using a word processor. The species were selected arbitrarily to represent close and distant members of the animal kingdom.

RNA analysis

Fibroblasts were grown from 6 mm punch biopsies from the skin of a dwarf, a carrier of the mutation of interest and a Friesian horse homozygous for the reference allele. The biopsies were washed in Euroflush (IMV Technologies, L'Aigle, France) containing 5000 IU/ml heparin, cut with scissors and incubated in petri dishes with DMEM/M199 (1:1) medium containing pen/strep (10,000 U/ml; all from Thermo Fisher Scientific, Waltham, MA), 2.5 ng/ml basic-FGF5 (Peprotech, Rocky Hill, NJ) and 20 % FCS at 38.5 °C with 5 % CO2 and 5 % O2. Proliferating fibroblasts were harvested and passaged in culture flasks using standard procedures. RNA was isolated from cultured fibroblasts with the RNeasy kit with an on-column DNase digestion according to the instructions of the manufacturer (Qiagen, Hilden, Germany). The RIN value of the RNA was measured with an Agilent 2100 Bioanalyzer (Santa Clara, CA, USA) and was found to be 9.5 or higher. cDNA was synthesized with the iScript kit (Bio-Rad Laboratories, Hercules, CA) using 500 ng of RNA in 20 μl reactions.
Splicing products were PCR-amplified from 0.5 μl cDNA product with B4GALT7 exonic primers 5'-CTGGGAGCTCGAGCTCCATG-3' (EX1F) and 5'-CTCAGGAAGCGGTGCATGTG-3' (EX2R) as described above. Unspliced products were amplified with primer EX1F and the intronic primer IN1R described above. In a semi-quantitative experiment, the 3 primers were combined in a single PCR using the same component concentrations and cycling conditions as above. The fragments were visualized by electrophoresis on a 1 % agarose gel in 0.5× TBE with 0.5 μg/ml ethidium bromide, followed by UV irradiation of the gel. For DNA sequence analysis and confirmation of the origin of the products, the bands were cut from the gel and the DNA was isolated with the QIAquick gel extraction kit (Qiagen, Hilden, Germany). The DNA sequencing procedure was as described above.
2023-01-15T14:13:43.512Z
2016-10-28T00:00:00.000
{ "year": 2016, "sha1": "e903a4abb541515fe5725242bdd4fad6f3878217", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12864-016-3186-0", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "e903a4abb541515fe5725242bdd4fad6f3878217", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
5035107
pes2o/s2orc
v3-fos-license
Image Inpainting for Irregular Holes Using Partial Convolutions Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, with convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but it is expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned only on valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.

Introduction

Image inpainting, the task of filling in holes in an image, can be used in many applications. For example, it can be used in image editing to remove unwanted image content, while filling in the resulting space with plausible imagery.

Fig. 2: (a) image with hole; (b) PatchMatch; (c) Iizuka et al. [10]; (d) Yu et al. [38]; (e) Hole = 127.5; (f) Hole = ImageNet mean; (g) Partial Conv; (h) ground truth. 2(e) and 2(f) use the same network architecture as Section 3.2 but with a typical convolutional network; 2(e) uses the pixel value 127.5 to initialize the holes, while 2(f) uses the mean ImageNet pixel value. 2(g) shows our partial convolution based results, which are agnostic to hole values.

Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing. The goal of this work is to propose a model for image inpainting that operates robustly on irregular hole patterns (see Fig. 1) and produces semantically meaningful predictions that incorporate smoothly with the rest of the image, without the need for any additional post-processing or blending operation. Recent image inpainting approaches that do not use deep learning use image statistics of the remaining image to fill in the hole. PatchMatch [2], one of the state-of-the-art methods, iteratively searches for the best-fitting patches to fill in the holes. While this approach generally produces smooth results, it is limited by the available image statistics and has no concept of visual semantics. For example, in Figure 2(b), PatchMatch was able to smoothly fill in the missing components of the painting using image patches from the surrounding shadow and wall, but a semantically aware approach would make use of patches from the painting instead. Deep neural networks learn semantic priors and meaningful hidden representations in an end-to-end fashion, which have been used in recent image inpainting efforts. These networks employ convolutional filters on images, replacing the removed content with a fixed value. As a result, these approaches suffer from dependence on the initial hole values, which often manifests itself as a lack of texture in the hole regions, obvious color contrasts, or artificial edge responses surrounding the hole. Examples using a U-Net architecture with typical convolutional layers and various hole value initializations can be seen in Figure 2(e) and 2(f). (For both, training and testing share the same initialization scheme.)
Conditioning the output on the hole values ultimately results in various types of visual artifacts that necessitate expensive post-processing. For example, Iizuka et al. [10] use fast marching [32] and Poisson image blending [23], while Yu et al. [38] employ a follow-up refinement network to refine their raw network predictions. However, these refinements cannot resolve all the artifacts, as shown in Fig. 2(c) and 2(d). Our work aims to achieve well-incorporated hole predictions, independent of the hole initialization values and without any additional post-processing. Another limitation of many recent approaches is the focus on rectangular-shaped holes, often assumed to be centered in the image. We find these limitations may lead to overfitting to rectangular holes, and ultimately limit the utility of these models in applications. Pathak et al. [22] and Yang et al. [36] assume 64×64 square holes at the center of a 128×128 image. Iizuka et al. [10] and Yu et al. [38] remove the centered-hole assumption and can handle irregularly shaped holes, but do not perform an extensive quantitative analysis on a large number of images with irregular masks (51 test images in [8]). In order to focus on the more practical irregular-hole use case, we collect a large benchmark of images with irregular masks of varying sizes. In our analysis, we look at the effects of not just the size of the hole, but also whether the holes are in contact with the image border. To properly handle irregular masks, we propose the use of a Partial Convolutional Layer, comprising a masked and re-normalized convolution operation followed by a mask-update step. The concept of a masked and re-normalized convolution is also referred to as segmentation-aware convolution in [7] for the image segmentation task; however, they did not make modifications to the input mask. Our use of partial convolutions is such that, given a binary mask, our convolutional results depend only on the non-hole regions at every layer. Our main extension is the automatic mask update step, which removes any masking where the partial convolution was able to operate on an unmasked value. Given sufficient layers of successive updates, even the largest masked holes will eventually shrink away, leaving only valid responses in the feature map. The partial convolutional layer ultimately makes our model agnostic to placeholder hole values. In summary, we make the following contributions:
- we propose the use of partial convolutions with an automatic mask update step for achieving state-of-the-art results on image inpainting;
- while previous works fail to achieve good inpainting results with skip links in a U-Net [34] with typical convolutions, we demonstrate that substituting convolutional layers with partial convolutions and mask updates can achieve state-of-the-art inpainting results;
- to the best of our knowledge, we are the first to demonstrate the efficacy of training image-inpainting models on irregularly shaped holes;
- we propose a large irregular mask dataset, which will be released to the public to facilitate future efforts in training and evaluating inpainting models.

Related Work

Non-learning approaches to image inpainting rely on propagating appearance information from neighboring pixels to the target region using mechanisms such as distance fields [3, 1, 32]. However, these methods can only handle narrow holes, where the color and texture variance is small. Big holes may result in over-smoothing or artifacts resembling Voronoi regions, such as in [32].
Patch-based methods such as [5, 15] operate by searching for relevant patches from the image's non-hole regions or other source images in an iterative fashion. However, these steps often come at a large computation cost, such as in [28]. PatchMatch [2] speeds this up by proposing a faster algorithm for finding similar patches. However, these approaches are still not fast enough for real-time applications and cannot make semantically aware patch selections. Deep learning based methods typically initialize the holes with some constant placeholder value, e.g. the mean pixel value of ImageNet [26], which is then passed through a convolutional network. Due to the resulting artifacts, post-processing is often used to ameliorate the effects of conditioning on the placeholder values. Context Encoders [22] first embed the 128×128 image with a 64×64 center hole into a low-dimensional feature space and then decode the features to a 64×64 image. Yang et al. [36] take the result from Context Encoders as input and then propagate the texture information from non-hole regions to fill the hole regions as post-processing. Song et al. [30] use a refinement network in which a blurry initial hole-filling result is used as the input and then iteratively replaced with patches from the closest non-hole regions in the feature space. Li et al. [17] and Iizuka et al. [10] extended Context Encoders by defining both global and local discriminators; Iizuka et al. [10] then apply Poisson blending as a post-process. Following [10], Yu et al. [38] replaced the post-processing with a refinement network powered by contextual attention layers. Amongst the deep learning approaches, several other efforts also ignore the mask placeholder values. Yeh et al. [37] search for the closest encoding to the corrupted image in a latent space, which is then used to condition the output of a hole-filling generator. Ulyanov et al. [34] further found that the network needs no external dataset training and can rely on the structure of the generative network itself to complete the corrupted image. However, this approach can require a different set of hyperparameters for every image, and applies several iterations to achieve good results. Moreover, their design [34] is not able to use skip links, which are known to produce detailed output. With standard convolutional layers, the raw features of noise or wrong hole initialization values in the encoder stage will propagate to the decoder stage. Our work also does not depend on placeholder values in the hole regions, but we also aim to achieve good results in a single feedforward pass and enable the use of skip links to create detailed predictions. Our work makes extensive use of a masked or reweighted convolution operation, which allows us to condition the output only on valid inputs. Harley et al. [7] recently made use of this approach with a soft attention mask for semantic segmentation. It has also been used for full-image generation in PixelCNN [20], to condition the next pixel only on previously synthesized pixels. Uhrig et al. [33] proposed sparsity-invariant CNNs with reweighted convolution and a max-pooling based mask updating mechanism for depth completion. For image inpainting, Ren et al. [24] proposed the Shepard convolution layer, where the same kernel is applied for both feature and mask convolutions. The mask convolution result acts as both the reweighting denominator and the updated mask, which does not guarantee that the holes shrink during updating, due to possible negative entries in the kernel.
It cannot handle big holes properly either. Discussion of other CNN variants like [4] is beyond the scope of this work.

Approach

Our proposed model uses stacked partial convolution operations and mask updating steps to perform image inpainting. We first define our convolution and mask update mechanism, then discuss the model architecture and loss functions.

Partial Convolutional Layer

We refer to our partial convolution operation and mask update function jointly as the Partial Convolutional Layer. Let W be the weights of the convolution filter and b the corresponding bias. X are the feature values (pixel values) for the current convolution (sliding) window and M is the corresponding binary mask. The partial convolution at every location, similarly defined in [7], is expressed as:

x' = W^T (X ⊙ M) · sum(1)/sum(M) + b, if sum(M) > 0; x' = 0, otherwise,

where ⊙ denotes element-wise multiplication and 1 has the same shape as M but with all elements being 1. As can be seen, output values depend only on the unmasked inputs. The scaling factor sum(1)/sum(M) applies appropriate scaling to adjust for the varying number of valid (unmasked) inputs. After each partial convolution operation, we then update our mask as follows: if the convolution was able to condition its output on at least one valid input value, then we mark that location as valid. This is expressed as:

m' = 1, if sum(M) > 0; m' = 0, otherwise,

and can easily be implemented in any deep learning framework as part of the forward pass. With sufficient successive applications of the partial convolution layer, any mask will eventually be all ones, if the input contained any valid pixels.

Network Architecture and Implementation

Implementation. The partial convolution layer is implemented by extending standard PyTorch [21], although it could be improved in both time and space using custom layers. The straightforward implementation is to define binary masks of size C×H×W, the same size as their associated images/features; mask updating is then implemented using a fixed convolution layer with the same kernel size as the partial convolution operation, but with weights identically set to 1 and no bias. The entire network inference on a 512×512 image takes 0.029 s on a single NVIDIA V100 GPU, regardless of the hole size.

Network Design. We design a UNet-like architecture [25] similar to the one used in [11], replacing all convolutional layers with partial convolutional layers and using nearest-neighbor up-sampling in the decoding stage. The skip links concatenate two feature maps and two masks respectively, acting as the feature and mask inputs for the next partial convolution layer. The input to the last partial convolution layer contains the concatenation of the original input image with holes and the original mask, making it possible for the model to copy non-hole pixels. Network details are found in the supplementary file.

Partial Convolution as Padding. We use partial convolution with appropriate masking at image boundaries in lieu of typical padding. This ensures that the inpainted content at the image border will not be affected by invalid values outside of the image, which can be interpreted as another hole.
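A minimal PyTorch sketch of the partial convolution and mask update described above follows. It assumes a single-channel binary mask broadcast across feature channels (the C×H×W-mask variant described in the text differs in this respect), and the class and argument names are ours, not the authors' released API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Sketch of a partial convolution layer with automatic mask update."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel used to count valid (unmasked) inputs per window;
        # for a single-channel mask, its numel() plays the role of sum(1).
        self.register_buffer("weight_mask",
                             torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: (N, 1, H, W) float tensor, 1 = valid pixel, 0 = hole.
        with torch.no_grad():
            # sum(M) for every sliding window.
            valid = F.conv2d(mask, self.weight_mask,
                             stride=self.stride, padding=self.padding)
        # Convolve only the unmasked features: W^T (X ⊙ M) + b.
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1)
        # Re-normalize W^T (X ⊙ M) by sum(1)/sum(M), then re-add the bias;
        # windows with no valid input output 0.
        ratio = self.weight_mask.numel() / valid.clamp(min=1.0)
        out = torch.where(valid > 0, (out - bias) * ratio + bias,
                          torch.zeros_like(out))
        # Mask update: a location becomes valid if any input in its window was.
        new_mask = (valid > 0).float()
        return out, new_mask
```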
Loss Functions

Our loss functions target both per-pixel reconstruction accuracy and composition, i.e. how smoothly the predicted hole values transition into their surrounding context. Given the input image with holes I_in, the initial binary mask M (0 for holes), the network prediction I_out, and the ground truth image I_gt, we first define our per-pixel losses

L_hole = (1/N_{I_gt}) ||(1 − M) ⊙ (I_out − I_gt)||_1 and L_valid = (1/N_{I_gt}) ||M ⊙ (I_out − I_gt)||_1,

where N_{I_gt} denotes the number of elements in I_gt (N_{I_gt} = C*H*W, with C, H and W the channel size, height and width of image I_gt). These are the L1 losses on the network output for the hole and the non-hole pixels, respectively. Next, we define the perceptual loss, introduced by Gatys et al. [6]:

L_perceptual = Σ_p ||Ψ_p^{I_out} − Ψ_p^{I_gt}||_1 / N_{Ψ_p^{I_gt}} + Σ_p ||Ψ_p^{I_comp} − Ψ_p^{I_gt}||_1 / N_{Ψ_p^{I_gt}}.

Here, I_comp is the raw output image I_out, but with the non-hole pixels directly set to the ground truth, and N_{Ψ_p^{I_gt}} is the number of elements in Ψ_p^{I_gt}. The perceptual loss computes the L1 distances between both I_out and I_comp and the ground truth, but after projecting these images into higher-level feature spaces using an ImageNet-pretrained VGG-16 [29]. Ψ_p^{I*} is the activation map of the p-th selected layer given original input I*. We use layers pool1, pool2 and pool3 for our loss. We further include the style-loss term, which is similar to the perceptual loss [6], but we first perform an autocorrelation (Gram matrix) on each feature map before applying the L1 distance:

L_style_out = Σ_p (1/(C_p C_p)) || K_p ((Ψ_p^{I_out})^T Ψ_p^{I_out} − (Ψ_p^{I_gt})^T Ψ_p^{I_gt}) ||_1,

and analogously L_style_comp with I_comp in place of I_out. Here, we note that the matrix operations assume that the high-level feature map Ψ_p^{(x)} is of shape (H_p W_p) × C_p, resulting in a C_p × C_p Gram matrix, and K_p is the normalization factor 1/(C_p H_p W_p) for the p-th selected layer. Again, we include loss terms for both the raw output and the composited output. Our final loss term is the total variation (TV) loss L_tv, the smoothing penalty [12] on R, where R is the region of 1-pixel dilation of the hole region:

L_tv = Σ_{(i,j)∈R, (i,j+1)∈R} ||I_comp^{i,j+1} − I_comp^{i,j}||_1 / N_{I_comp} + Σ_{(i,j)∈R, (i+1,j)∈R} ||I_comp^{i+1,j} − I_comp^{i,j}||_1 / N_{I_comp},

where N_{I_comp} is the number of elements in I_comp. The total loss L_total is a weighted combination of all the above loss functions; a compact sketch of the individual terms is given at the end of this subsection.

Ablation Study of Different Loss Terms. The perceptual loss [12] is known to generate checkerboard artifacts. Johnson et al. [12] suggest ameliorating the problem by using the total variation (TV) loss. We found this not to be the case for our model. Figure 3(b) shows the result of the model trained by removing L_style_out and L_style_comp from L_total. For our model, the additional style loss term is necessary. However, not all loss weighting schemes for the style loss generate plausible results. Figure 3(f) shows the result of a model trained with a small style loss weight. Compared to the result of the model trained with the full L_total in Figure 3(g), it has many fish-scale artifacts. However, the perceptual loss is also important; grid-shaped artifacts are less prominent in the results with the full L_total (Figure 3(k)) than in the results without the perceptual loss (Figure 3(j)). We hope this discussion will be useful to readers interested in employing VGG-based high-level losses. Fig. 3. Top row, from left to right: input image with hole, result without style loss, result using the full L_total, and ground truth. Middle row, from left to right: input image with hole, result using a small style loss weight, result using the full L_total, and ground truth. Bottom row, from left to right: input image with hole, result without perceptual loss, result using the full L_total, and ground truth.
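A compact sketch of these loss terms might look as follows. VGG feature extraction and the relative weights between the terms are omitted, the function and variable names are ours, and the TV term here is a simplified whole-image variant rather than the R-restricted penalty defined above:

```python
import torch

def gram(feat):
    # feat: (N, C, H, W). Returns (N, C, C) Gram matrices with the K_p
    # normalization 1/(C*H*W) folded in.
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def pconv_losses(i_out, i_gt, mask, feats_out, feats_comp, feats_gt):
    """feats_* are lists of VGG-16 pool1-pool3 activations for I_out, I_comp
    and I_gt, computed elsewhere."""
    i_comp = mask * i_gt + (1.0 - mask) * i_out   # non-hole pixels set to GT
    n_gt = i_gt.numel()
    l_hole = torch.abs((1.0 - mask) * (i_out - i_gt)).sum() / n_gt
    l_valid = torch.abs(mask * (i_out - i_gt)).sum() / n_gt
    l_perc = sum((torch.abs(fo - fg).sum() + torch.abs(fc - fg).sum()) / fg.numel()
                 for fo, fc, fg in zip(feats_out, feats_comp, feats_gt))
    # .mean() over the C_p x C_p Gram entries supplies the 1/(C_p C_p) factor
    # (it also averages over the batch in this sketch).
    l_style = sum(torch.abs(gram(fo) - gram(fg)).mean() +
                  torch.abs(gram(fc) - gram(fg)).mean()
                  for fo, fc, fg in zip(feats_out, feats_comp, feats_gt))
    # Simplified TV penalty over the whole composited image.
    l_tv = (torch.abs(i_comp[:, :, :, 1:] - i_comp[:, :, :, :-1]).sum()
            + torch.abs(i_comp[:, :, 1:, :] - i_comp[:, :, :-1, :]).sum()) / n_gt
    return l_hole, l_valid, l_perc, l_style, l_tv
```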
Irregular Mask Dataset

Previous works generate holes in their datasets by randomly removing rectangular regions within the image. We consider this insufficient for creating the diverse hole shapes and sizes that we need. As such, we begin by collecting masks of random streaks and holes of arbitrary shapes. We found the results of the occlusion/dis-occlusion mask estimation method between two consecutive video frames described in [31] to be a good source of such patterns. We generate 55,116 masks for training and 24,866 masks for testing. During training, we augment the mask dataset by randomly sampling a mask from the 55,116 masks and then performing random dilation, rotation and cropping. All the masks and images for training and testing are of size 512×512. We create a test set by starting with the 24,866 raw masks and adding random dilation, rotation and cropping. Many previous methods such as [10] show degraded performance for holes near the image borders. As such, we divide the test set into two: masks with and without holes close to the border. The split that has holes distant from the border ensures a distance of at least 50 pixels from the border. We also further categorize our masks by hole size; specifically, we generate 6 categories of masks with increasing hole-to-image area ratios.

Training Process

Training Data. We use 3 separate image datasets for training and testing: the ImageNet dataset [26], the Places2 dataset [39] and CelebA-HQ [19, 13]. We use the original train, test, and val splits for ImageNet and Places2. For CelebA-HQ, we randomly partition it into 27K images for training and 3K images for testing.

Training Procedure. We initialize the weights using the initialization method described in [9] and use Adam [14] for optimization. We train on a single NVIDIA V100 GPU (16GB) with a batch size of 6.

Initial Training and Fine-Tuning. Holes present a problem for Batch Normalization because the mean and variance will be computed over hole pixels, so it would make sense to disregard them at masked locations. However, holes are gradually filled with each application and are usually completely gone by the decoder stage. In order to use Batch Normalization in the presence of holes, we first turn on Batch Normalization for the initial training using a learning rate of 0.0002. Then, we fine-tune using a learning rate of 0.00005, freezing the Batch Normalization parameters in the encoder part of the network while keeping Batch Normalization enabled in the decoder. This not only avoids the incorrect mean and variance issues, but also helps us to achieve faster convergence. The ImageNet and Places2 models train for 10 days, whereas CelebA-HQ trains in 3 days. All fine-tuning is performed in one day.

Comparisons

We compare with 4 methods:
- PM: PatchMatch [2], the state-of-the-art non-learning based approach;
- GL: the method of Iizuka et al. [10];
- GntIpt: the method of Yu et al. [38];
- Conv: the same network structure as our method but using typical convolutional layers, with loss weights re-determined via hyperparameter search.

Our method is denoted as PConv. A fair comparison with GL and GntIpt would require retraining their models on our data. However, the training of both approaches uses local discriminators that assume the availability of local bounding boxes for the holes, which would not make sense for the shape of our masks. As such, we directly use their released pre-trained models. For PatchMatch, we used a third-party implementation. As we do not know their train-test splits, our own splits will likely differ from theirs. We evaluate on 12,000 images, randomly assigning our masks to images without replacement.

Qualitative Comparisons. Figure 5 and Figure 6 show the comparisons on ImageNet and Places2 respectively. GT represents the ground truth. We compare with GntIpt [38] on CelebA-HQ in Figure 8. GntIpt tested CelebA-HQ at 256×256, so we downsample the images to 256×256 before feeding them into their model.
It can be seen that PM may copy semantically incorrect patches to fill holes, while GL and GntIpt sometimes fail to achieve plausible results through post-processing or the refinement network. Figure 7 shows the results of Conv, which exhibit the distinct artifacts caused by conditioning on hole placeholder values.

Quantitative comparisons. As mentioned in [38], there is no good numerical metric for evaluating image inpainting results due to the existence of many possible solutions. Nevertheless, we follow the previous image inpainting works [36, 38] by reporting the ℓ1 error, PSNR, SSIM [35], and the inception score [27]. The ℓ1 error, PSNR and SSIM are reported on Places2, whereas the inception score (IScore) is reported on ImageNet. Note that the released model for [10] was trained only on Places2, which we use for all evaluations. It can be seen that our method outperforms all the other methods on these measurements on irregular masks.
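For reference, the ℓ1 error and PSNR used above can be computed in a few lines. This is a generic sketch assuming float images in [0, 1], not the authors' evaluation code; SSIM and the inception score require the fuller implementations cited in the text:

```python
import torch

def l1_error(pred: torch.Tensor, gt: torch.Tensor) -> float:
    # Mean absolute per-pixel error over the whole image.
    return torch.abs(pred - gt).mean().item()

def psnr(pred: torch.Tensor, gt: torch.Tensor, max_val: float = 1.0) -> float:
    # Peak signal-to-noise ratio in dB; higher is better.
    mse = torch.mean((pred - gt) ** 2)
    return (10 * torch.log10(max_val ** 2 / mse)).item()

pred, gt = torch.rand(3, 512, 512), torch.rand(3, 512, 512)
print(l1_error(pred, gt), psnr(pred, gt))
```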
User Study

In addition to the quantitative comparisons, we also evaluate our algorithm via a human subjective study. We perform pairwise A/B tests, without showing hole positions or the original input image with holes, deployed on the Amazon Mechanical Turk (MTurk) platform. We perform two different kinds of experiments: unlimited time and limited time. We also report the cases with and without holes close to the image boundaries separately. For each situation, we randomly select 300 images for each method, where each image is compared 10 times. For the unlimited-time setting, the workers are given two images at once, each generated by a different method. The workers are then given unlimited time to select which image looks more realistic. We also shuffle the image order to ensure unbiased comparisons. The results across all different hole-to-image area ratios are summarized in Fig. 9(a). The first row shows the results where the holes are at least 50 pixels away from the image border, while the second row shows the case where the holes may be close to or touch the image border. As can be seen, our method performs significantly better than all the other methods (50% means two methods perform equally well) in both cases. For the limited-time setting, we compare all methods (including ours) to the ground truth. In each comparison, the result of one method is chosen and shown to the workers along with the ground truth for a limited amount of time. The workers are then asked to select which image looks more natural. This evaluates how quickly the difference between the images can be perceived. The comparison results for different time intervals are shown in Fig. 9(b). Again, the first row shows the case where the holes do not touch the image boundary, while the second row allows that. Our method outperforms the other methods in most cases across different time periods and hole-to-image area ratios.

Discussion

We propose the use of a partial convolution layer with an automatic mask updating mechanism and achieve state-of-the-art image inpainting results. Our model can robustly handle holes of any shape, size, location, or distance from the image borders. Further, our performance does not deteriorate catastrophically as holes increase in size, as seen in Figure 10. However, one limitation of our method is that it fails for some sparsely structured images, such as the bars on the door in Figure 11, and, like most methods, it struggles on the largest of holes.

Extension to Image Super Resolution

We also extend our framework to image super-resolution tasks by offsetting pixels and inserting holes. Specifically, given a low-resolution image I with height H and width W and an up-scaling factor K, we construct the network input I' with height K*H and width K*W as follows: for each pixel (x, y) in I, we place it at (K*x + ⌊K/2⌋, K*y + ⌊K/2⌋) in I' and mark this position as having mask value 1; a minimal sketch of this construction is given at the end of this section. One example input setting and the corresponding output with K=4 can be found in Figure 12. We compare with two well-known image super-resolution approaches, SRGAN [16] and MDSR+ [18], with K=4 in Figure 13.

More Comparisons on Irregular Masks

Fig. 14. Comparisons on irregular masks; abbreviations as in Figure 5 and Figure 6 of the paper.

More Comparisons on Regular Masks

Fig. 15. Comparisons on regular masks; abbreviations as in Figure 5 and Figure 6 of the paper.

More Results of Our Approach
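Returning to the super-resolution input construction described above, a minimal sketch (the function name and tensor layout are ours):

```python
import torch

def sr_input_from_lr(lr: torch.Tensor, k: int):
    # lr: (N, C, H, W) low-resolution image in [0, 1].
    # Returns the sparse (N, C, kH, kW) input and its (N, 1, kH, kW) mask.
    n, c, h, w = lr.shape
    x = torch.zeros(n, c, k * h, k * w)
    m = torch.zeros(n, 1, k * h, k * w)
    # Place each LR pixel (i, j) at (k*i + k//2, k*j + k//2); everything else
    # is treated as a hole for the inpainting network to fill.
    x[:, :, k // 2::k, k // 2::k] = lr
    m[:, :, k // 2::k, k // 2::k] = 1.0
    return x, m

x, mask = sr_input_from_lr(torch.rand(1, 3, 128, 128), k=4)
```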
2018-04-27T06:54:59.346Z
2018-04-20T00:00:00.000
{ "year": 2018, "sha1": "2a417a16473e2bcb1c98cd7814bc106760925e60", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1804.07723", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2a417a16473e2bcb1c98cd7814bc106760925e60", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
252211705
pes2o/s2orc
v3-fos-license
APTx: better activation function than MISH, SWISH, and ReLU's variants used in deep learning Activation functions introduce non-linearity in deep neural networks. This non-linearity helps the neural networks learn faster and more efficiently from the dataset. In deep learning, many activation functions have been developed and are used based on the type of problem statement. ReLU's variants, SWISH, and MISH are go-to activation functions. The MISH function is considered to have similar or even better performance than SWISH, and much better performance than ReLU. In this paper, we propose an activation function named APTx which behaves similarly to MISH but requires fewer mathematical operations to compute. The lower computational requirements of APTx speed up model training and thus also reduce the hardware requirements for the deep learning model.

Introduction

The ability of deep learning models to learn features directly from the data has made them a default approach for solving many complex problems. A simple artificial neuron is linear in nature, as expressed in Equation 1:

y = ∑ w_i x_i + b (1)

Here, y is the output of the neuron, x_i are the inputs to the neuron, w_i are the associated weights, and b is the associated bias. When the output of this neuron is passed through an activation function, non-linearity gets introduced into the network. When considering an activation function, one important requirement is that its derivative should not be constant over its whole domain (a constant derivative would make the function linear). Generally, an activation function f is applied to the output of the neurons in the hidden layers to make the neural network learn complex features, as expressed in Equation 2:

y = f(∑ w_i x_i + b) (2)

The SWISH activation function is considered better than the ReLU function and its variants. However, the recently developed activation function MISH is considered equivalent to, or in some cases even better than, the SWISH activation function. In this paper, we propose an activation function APTx which behaves similarly to the MISH activation function but requires fewer mathematical operations. This means less computation is required in APTx to calculate the output in forward propagation, significantly reducing the hardware requirements for the training and inference phases. The derivative of APTx also has fewer operations than that of MISH, hence making neural networks train faster compared to the MISH activation function. A minimal sketch of the neuron of Equations 1 and 2 is given at the end of this section.

Related Works

Vinod Nair et al. [1] studied the effect of rectified linear units (ReLU) on Restricted Boltzmann Machines. Abien Fred M. Agarap [2] made use of ReLU with convolutional neural networks on the MNIST dataset, which outperformed the CNN with softmax on the classification task. Glorot et al. [3] and Sun et al. [4] discussed the sparsity of ReLU as a reason for its better performance. Szandała, Tomasz et al. [5] performed a comparative analysis showing that the tanh and sigmoid functions both have vanishing gradient problems, which are overcome by ReLU, and demonstrating the dying-ReLU problem for negative values. Maas et al. [6] presented an improved version of ReLU called Leaky-ReLU where, instead of outputting zero for negative input, the function outputs some small negative number. Clevert et al. [7] proposed the ELU function, which was faster and better than both ReLU and Leaky-ReLU. Ramachandran P. et al. [8] presented the SWISH activation function, which has superior performance to ReLU and its variants. Misra D. et al. [9] proposed the activation function MISH, which has similar, and in some cases even better, performance than the SWISH activation function.
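As a minimal illustration of Equations 1 and 2, here is a hypothetical two-input neuron with a ReLU nonlinearity (the names are ours, for illustration only):

```python
import numpy as np

def neuron(x, w, b, f):
    # Equation 1 (linear combination) followed by Equation 2 (nonlinearity f).
    return f(np.dot(w, x) + b)

relu = lambda z: np.maximum(0.0, z)
print(neuron(np.array([1.0, -2.0]), np.array([0.5, 0.25]), 0.1, relu))  # -> 0.1
```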
Proposed APTx activation function

We propose an activation function named "Alpha Plus Tanh Times", or APTx in short. Our APTx function is presented in Equation 3, and its derivative is shown in Equation 4:

APTx(x) = (α + tanh(βx)) · γx (3)

d/dx APTx(x) = γ(α + tanh(βx)) + γβx · sech²(βx) (4)

By updating the values of the parameters α, β, and γ we can make the function behave like the MISH activation function. The updated function and its derivative, with α = 1, β = 1 and γ = ½, are shown in Equations 5 and 6:

APTx(x) = (1 + tanh(x)) · x/2 (5)

d/dx APTx(x) = (1 + tanh(x))/2 + (x/2) · sech²(x) (6)

For a detailed visual analysis of the behavior of our APTx function, its graph is shown in Figure 1, and the graph of its derivative is shown in Figure 2. Although one chooses activation functions based on the type of problem statement, there are some popular activation functions whose comparisons have already been made in existing research works. First, we discuss how the MISH activation function is better than the SWISH, ELU, Leaky-ReLU, ReLU, tanh and sigmoid activation functions in general scenarios. Afterwards, we compare the MISH activation function with our proposed APTx function.

Comparative analysis of existing activation functions

The sigmoid activation function is mathematically expressed in Equation 7, and a comparison of its derivative with the derivative of tanh is shown in Figure 3:

σ(x) = 1/(1 + e^(−x)) (7)

One can easily notice in Figure 3 that the range of the tanh derivative is larger than that of the sigmoid derivative, but for inputs away from zero both tanh and sigmoid produce very small derivative values; this introduces the vanishing gradient problem [5] in larger neural networks. The ReLU activation function provided a solution to the vanishing gradient problem, at least for positive inputs [3, 4], but for negative inputs it suffers from the dying-ReLU problem [5], as its derivative for negative values is zero. Leaky-ReLU [6] was able to solve the dying-ReLU problem to some extent. ELU [7] showed better performance than Leaky-ReLU in most tasks, as it tends to converge the cost to zero faster and produce more accurate results. For positive inputs, ReLU, Leaky-ReLU, and ELU all behave in the same manner, but the difference lies in the non-positive values, as shown in Figure 4 and as presented in Equations 8, 9, and 10 respectively:

ReLU(x) = max(0, x) (8)

Leaky-ReLU(x) = x for x > 0, ax otherwise (with a a small constant such as 0.01) (9)

ELU(x) = x for x > 0, α(e^x − 1) otherwise (10)

The SWISH activation function [8] performs better than the ReLU activation function and its variants because none of these variants have managed to avoid inconsistent gains (i.e., in the calculation of derivatives). SWISH can be considered a type of self-gated function, expressed in Equation 11:

SWISH(x) = x · σ(x) (11)

Although the introduction of SWISH solved the vanishing gradient problem and provided consistent gains, the later-developed MISH activation function [9] turned out to provide equivalent, and in many tasks even better, performance than the SWISH activation function. Its mathematical form is presented in Equation 12:

MISH(x) = x · tanh(ln(1 + e^x)) (12)

Graphs of the derivatives of the SWISH and MISH functions are plotted in Figure 5. Interestingly, the derivative of the APTx function requires fewer operations than the derivatives of both MISH and SWISH. The derivative graphs of APTx and MISH are presented in Figure 6, showing similar behavior for the positive part of the domain, which is useful for backpropagation. Interestingly, the APTx function with parameters α = 1, β = ½ and γ = ½ behaves like the SWISH(x, 1) activation function, and APTx with α = 1, β = 1 and γ = ½ behaves like SWISH(x, 2); this follows from the identity σ(x) = (1 + tanh(x/2))/2. Our APTx activation function requires fewer computations in forward propagation, and its derivative also needs fewer computations during backward propagation, when compared with the MISH activation function.
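To make the definition concrete, here is a small PyTorch sketch of Equations 3 and 4, together with a check of the SWISH identity noted above. The function names are ours, not the paper's code:

```python
import torch

def aptx(x, alpha=1.0, beta=1.0, gamma=0.5):
    # APTx(x) = (alpha + tanh(beta * x)) * gamma * x        (Eq. 3)
    # Defaults reproduce the MISH-like configuration of Eq. 5.
    return (alpha + torch.tanh(beta * x)) * gamma * x

def aptx_grad(x, alpha=1.0, beta=1.0, gamma=0.5):
    # d/dx APTx = gamma*(alpha + tanh(beta*x))
    #             + gamma*beta*x*sech^2(beta*x)             (Eq. 4)
    t = torch.tanh(beta * x)
    return gamma * (alpha + t) + gamma * beta * x * (1 - t ** 2)  # sech^2 = 1 - tanh^2

# Since sigmoid(x) = (1 + tanh(x/2)) / 2, APTx with (alpha, beta, gamma) =
# (1, 1/2, 1/2) coincides with SWISH(x, 1) = x * sigmoid(x).
x = torch.linspace(-5, 5, 11)
assert torch.allclose(aptx(x, 1.0, 0.5, 0.5), x * torch.sigmoid(x), atol=1e-6)
```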
Conclusion MISH has similar or even better performance than SWISH, which in turn is better than the rest of the activation functions. Our proposed activation function APTx behaves similarly to MISH but requires fewer mathematical operations when calculating values in forward propagation and derivatives in backward propagation. This allows APTx to train neural networks faster and makes it possible to run inference on low-end computing hardware, such as neural networks deployed on low-end edge devices in the Internet of Things. Interestingly, using APTx one can also generate the SWISH(x, ρ) activation function with parameters α = 1, β = ρ/2 and γ = ½.
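The stated identity can be checked numerically; the sketch below verifies that APTx with α = 1, β = ρ/2, γ = ½ matches SWISH(x, ρ) to machine precision, which follows from σ(z) = (1 + tanh(z/2))/2. The code and names are ours, provided only as a verification sketch.

import numpy as np

def aptx(x, alpha, beta, gamma):
    return (alpha + np.tanh(beta * x)) * gamma * x

def swish(x, rho):
    return x / (1.0 + np.exp(-rho * x))  # x * sigmoid(rho * x)

x = np.linspace(-6.0, 6.0, 101)
for rho in (1.0, 2.0):
    err = np.max(np.abs(aptx(x, 1.0, rho / 2.0, 0.5) - swish(x, rho)))
    print(rho, err)  # on the order of 1e-16: the two parameterizations coincide exactly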
2022-09-14T06:43:15.525Z
2022-07-05T00:00:00.000
{ "year": 2022, "sha1": "802723574bfa15e39dd2d50cbe26e0a09b2f2ac4", "oa_license": null, "oa_url": "https://doi.org/10.51483/ijaiml.2.2.2022.56-61", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "eab54bb874181fc458494ce81585c526a83ab6dd", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
39009283
pes2o/s2orc
v3-fos-license
Utility of Bone Scan in Pre-Liver Transplant Work-Up of Patients with Hepatocellular Carcinoma Ahmed Alqarni1, Monther Kabbani1, Faisal Abaalkhail1,2, Alicia Chorley1, Hany Elbeshbeshy1, Waleed Al-Hamoudi1,3, Saleh Alabbad1, Mark Stuarduvant1, Mohammad Alsofayan1, Wael AlKattan2, Baderaldeen Ahmed1, Mohamed Al Sebayel1, Hussien Elsiesy1,2* 1Liver Transplantation, King Faisal Specialist Hospital & Research Center 2Alfaisal University 3Gastroenterology, King Saud University, Riyadh, Saudi Arabia Introduction Hepatocellular carcinoma (HCC) is the most common primary liver tumor, accounting for over 80% of cases [1,2], and the burden of this devastating disease is expected to increase. HCC is the second leading cause of cancer-related death worldwide [3]. Incidence rates are not uniform across different geographical regions. Results During the study period, 275 LTs were performed, including 183 LDLTs and 92 DDLTs. Fifty-two patients had HCC, of whom 34 underwent a bone scan. The average age was 58 years (range 22-72), and 20 patients were male. The etiology of liver disease was hepatitis C (21 patients), cryptogenic cirrhosis (7 patients), and HBV (6 patients). The median follow-up was 38 months. Sixteen patients had HCC within Milan criteria, 12 patients were within University of California at San Francisco (UCSF) criteria, and 6 patients were beyond UCSF criteria. Twenty-four patients underwent LDLT, whereas 10 patients underwent DDLT (Figure 1). Of the 34 patients, 33 had a negative bone scan. The one patient with a suspicious positive bone scan had a negative PET scan and no clinical evidence of bone metastases on follow-up, and she is still alive. She was 61 years old with NASH; her MRI report showed liver cirrhosis with two HCCs, both located in segment 4A, one measuring 3.4 cm x 2.7 cm and the second 1.4 cm x 1.6 cm, within UCSF criteria. She underwent a living-related liver transplant (right lobe), and histopathology showed moderately differentiated hepatocellular carcinoma, with the largest focus measuring 3 cm in maximum dimension, no vascular invasion, macro- and micro-cirrhosis in the uninvolved liver, and vascular resection margins free of tumor (Table 1). The explant showed well differentiated HCC in 16 patients, moderately differentiated HCC in 10 patients, poorly differentiated HCC in 1 patient, combined HCC and cholangiocarcinoma in 1 patient, and complete necrosis after loco-regional therapy in 6 patients. Three patients died, one within UCSF criteria and two beyond UCSF criteria: two due to hepatic artery thrombosis and one due to sepsis and multi-organ failure. Discussion Technetium-99m methylene diphosphonate (Tc-99m MDP) bone scintigraphy (BS) has been widely used in practice to detect skeletal metastasis. The advantage of BS lies in its ability to effectively survey the entire skeletal system in a single scan, which takes only a short period of time [11].
Once a diagnosis of HCC has been confirmed, disease staging is essential for treatment selection [1]. Curative treatments, such as resection or liver transplantation (LT), are offered to patients with early stage tumors according to the Barcelona Clinic Liver Cancer Group (BCLC) classification [1], that is, patients with a single lesion <5 cm or up to three lesions ≤3 cm without macrovascular invasion. These criteria, known as the Milan criteria [12], identify a group of patients with HCC who may experience good disease-free survival after LT and outcomes comparable to those of patients receiving LT for other indications. Therefore, the majority of transplantation programs use these criteria for selection of patients with HCC. The Milan criteria are used as part of our policy for selection of patients with HCC as candidates for LT. To identify possible subclinical metastases, our program requires that all patients with HCC undergo computed tomography (CT) of the chest and a bone scan to be included on the LT waiting list. However, the recommendation to screen for metastasis in patients with early-stage HCC as a requirement for inclusion on the LT waiting list is not evidence-based. Recently, some research groups investigated the frequency of bone metastases (BM) and the utility of bone scans (BSs) in the pre-LT assessment of this patient population [13,14]. These studies showed that, in view of the low frequency of metastases, BSs should only be performed in this setting in patients with clinical signs or symptoms indicative of bone metastasis. Rodríguez S, et al. [15], Koneru et al. [16] and Witjes et al. [14] also concluded that BSs are not cost-effective and do not improve selection of LT candidates. We report our experience in evaluating the role of bone scan in detecting HCC bone metastases in pre-liver transplant patients, and our results are in agreement with other reports evaluating the role of bone scan in patients with HCC. We have shown that bone scan has no utility in patients with HCC within Milan criteria and should not be routinely used as part of the liver transplant work-up. Conclusion We have shown that bone scan has low utility in patients with HCC within Milan criteria and should not be routinely used as part of the liver transplant work-up. Table 1: Bone scan results in relation to tumor size.
2017-10-11T06:46:28.750Z
2017-01-14T00:00:00.000
{ "year": 2017, "sha1": "38fef5022af3d03cf96bc524adf9a61bd106f767", "oa_license": "CCBY", "oa_url": "https://symbiosisonlinepublishing.com/gastroenterology-pancreatology-liverdisorders/gastroenterology-pancreatology-liverdisorders79.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "31c34f3c649e39d17839404a3c1a1f25e374ad9b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259737497
pes2o/s2orc
v3-fos-license
Integration of digital imagery for topology optimization. To manufacture high-quality products with low manufacturing costs and optimal performance, better design concepts are required. The initial design concept can lead to inefficient structural design and higher manufacturing costs if the topology is not optimal. Topology optimization enables designers to reach their design goals faster, more accurately, and cost-effectively. However, the geometry obtained through topology optimization is not manufacturing-ready due to non-smooth boundaries and gray level images, which require post-processing design implementation by engineers. Various researchers have used different image processing techniques to convert the gray image into a binary map to address this issue. This paper focuses on using image processing to evaluate the differences in optimal designs induced by meshing. This study aims to aid in the parametric understanding of different designs targeting the same application by introducing two new parameters: the similarity ratio and the conformity ratio. The results compare an optimal geometry obtained using structured and unstructured meshes. Topological optimization algorithms applied to mechanical problems allow for reducing a structure's mass while ensuring its rigidity. However, the final structures may differ for the same problem depending on whether they were meshed regularly or irregularly. This article characterizes the differences between the two final structures using an image processing approach. Introduction Due to the complexity of its calculations, topological optimization and meshes are closely linked. Most of the studies reported in the literature have been carried out using a regular mesh on simple mechanical problems such as embedded beams and double-supported beams [1,2]. Although the number of elements strongly influences the final design of the optimized structure, and the complexity of the structure evolves with it, very few studies deal with the influence of this parameter. Sigmund et al. [2] discuss this in their paper but leave the question open: "should the refinement of the mesh model the same optimal structure with a better description of the boundary conditions and not give a different structure with more detail and quality?" To our knowledge, there is very little research dealing with topological optimization from an irregular mesh, even though this type of mesh is better suited to complex geometries. Thore et al. [3] conducted a study to compare the results of topological optimization of a regularly and irregularly meshed double-supported beam. The primary goal of their paper was to evaluate the influence of the penalty factor on additive fabrication stresses; however, the authors also conducted this study without any penalty filter and thus obtained topological optimization results from the two meshes discussed previously. Despite some local dissimilarities, the two structures presented fairly similar designs. Therefore, the present article aims to explore whether there is a fixed topological optimization profile obtained after convergence of compliance. Positioning the problem The study was conducted on a 20 × 10 mm beam, fixed at one end and subjected to a 100 N force at the other end. The dimensions were chosen to satisfy the small perturbations hypothesis. The material parameters considered are: E = 200 GPa and ν = 0.3.
The topological optimization was carried out using the in-house developed ELiOT software [4], which performs calculations from a model created with the FreeCAD finite element code and meshed with GMSH [5,6]. The constraint applied in the ELiOT code consisted of halving the mass of the initial object (volume fraction f_v = 0.5), and the optimization algorithm used was the OC (optimality criteria) method. The beam was meshed in two different ways: regularly (Fig. 1) and irregularly (Fig. 2), with quadrangular elements. As the number of elements was not the same for both meshes, the inter-node distance was considered as the parameter to be varied (Tab. 1). The coordinate table contains the coordinates of each node in the mesh. It is represented by a matrix that has as many columns as the number of dimensions in the problem and as many rows as the number of nodes in the mesh. The connectivity table is used to define the connections between the nodes in the mesh. The number of rows in the table corresponds to the number of elements in the mesh, and the number of columns corresponds to the number of edges of the element being considered. A mesh is considered regular (Fig. 1) when all its edges have the same connectivity. It is created by replicating an elementary mesh shape multiple times, and each element can be defined from another one. A mesh is said to be irregular when its elements are arranged in a disorderly manner (Fig. 2). As the numbering of nodes and elements is often random, only the coordinate and connectivity tables can be used to navigate this type of mesh [7]. Given these considerations, it is challenging to make a direct comparison between these two meshes. To address this issue, a comparison by image processing has been proposed. Mechanics and image processing In the literature, image processing is predominantly utilized in mechanical problems with experimental settings, to measure the displacement of a point during a test. For this, a speckle pattern is applied to the studied piece and photographs are taken at different instants during the test. Image correlation software is then utilized to track the path followed by the point of interest, to determine its displacement and, subsequently, its deformation field (Chu et al. [8]). A less significant, yet equally interesting objective for industry is to determine the spatial distribution of the material parameters characterizing the studied sample (Beliis et al. [9]). Image correlation also allows for tracing the history of a crack and identifying its origin [10,11]. To establish a connection between image processing and digital simulation, it is essential to define a mathematical model before conducting this study. As the optimized beam has intermediate densities specific to the OC method, it is considered heterogeneous. Thus, it is possible to construct a Representative Elementary Volume (REV) to homogenize the heterogeneities. This requires defining different scales, each with a characteristic size, including one for the structure, one for the pixel, and one for the element. The pixel serves as the REV and its size influences the accuracy of the final results. In brief, this method involves drawing a regular grid, with a tile size that adheres to certain conditions, on the optimized structures obtained. This step allows for the regularization of any mesh, and each tile becomes a pixel in the final image with a single value.
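To illustrate how the coordinate and connectivity tables describe a quadrangular mesh, here is a minimal NumPy sketch of a two-element mesh; the node numbering and values are ours and purely illustrative.

import numpy as np

# Coordinate table: one row per node, one column per spatial dimension (2D here).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0],
                   [2.0, 0.0], [2.0, 1.0]])

# Connectivity table: one row per quadrangular element, one column per node of the element.
connectivity = np.array([[0, 1, 2, 3],
                         [1, 4, 5, 2]])

# These two tables suffice to navigate a regular or irregular mesh,
# e.g. to compute the center of gravity of each element (used later for sorting):
centroids = coords[connectivity].mean(axis=1)
print(centroids)  # [[0.5 0.5], [1.5 0.5]]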
In the case of the OC method, the topological optimisation algorithm follows the objective function: min_r : c(r) = U_e^T · K_e · U_e, subject to: V(r)/V_0 = f_v, K · U = F, 10^-3 < r < 1 (1) Where: c is the compliance; U_e is the displacement of the considered element; V is the volume of the optimized structure; V_0 is the initial volume of the structure; r is the intermediate density. For the following, two notations will be used: N_p is the number of pixels of the image; N_e is the number of elements of the structure. The volume of the structure after optimisation is, by definition: V = Σ_{e=1}^{N_e} r_e (2) with r_e the intermediate density of the considered element. To link the three scales presented above, the homogenisation method will be used here. It consists in averaging the effects of heterogeneities within a REV in order to determine the macroscopic effects of the studied structure. To be able to apply this method, the following two conditions must be respected: the pixel size must be very small compared to that of the structure in order for the structure to be considered as a continuous medium, thus N_p ≫ 1; and the pixel size must be very large compared to the element size in order to neglect the fluctuation in behaviour between a point and its neighbour. The material is then considered to be macroscopically homogeneous. Thus, it is necessary that N_e ≫ N_p. The intermediate density of a pixel is then equal to the average of the intermediate densities of the elements contained in that pixel: r_p = (1/n_p) Σ_{e ∈ p} r_e (3) where n_p is the number of elements contained in pixel p. Depending on the intermediate pixel density, the volume of the structure after optimisation is therefore: V = Σ_{p=1}^{N_p} n_p r_p (4) Finally, with the same number of elements in each pixel of the regular grid: V = (N_e/N_p) Σ_{p=1}^{N_p} r_p (5) Remark: In practice, it is the contrast C_p of the pixel (between 0 and 255) that will be measured, so the intermediate density of a pixel is: r_p = C_p/255 (6) Image creation from the optimized structure The files generated by the topological optimization code contain the element number and the assigned intermediate density. Using equation (6), these intermediate densities can be converted into contrasts. In both mesh cases, the files are sorted in ascending order based on the elements, and the images are constructed in rows. However, for the irregular mesh, the first element of the mesh may not correspond to the first pixel of the image, and therefore the elements need to be ordered to obtain an accurate image. An algorithm is used to calculate the center of gravity of each element and sort them in ascending order of rows and columns. One of the main assumptions of this study is that the number of elements in the structure is much greater than the number of pixels in the image. Therefore, it is necessary to average the intermediate densities of the elements within a pixel using equation (3). This step is accomplished by a pooling operation, which is commonly used in convolutional neural network algorithms (Sosnovik et al. [12]). This operation groups elements together and creates a subsampled image. Calculation of the image dissimilarity Each pixel of a greyscale image has a value between 0 and 255 and corresponds to the average of the intermediate densities of the elements contained in that pixel. Let us consider two images resulting from topological optimisation, one obtained from a regular mesh, I_reg, and the other from an irregular mesh, I_irreg. The dissimilarity between two images is defined by Ardeshir et al. [13]: D(I_reg, I_irreg) = Σ_{p=1}^{N_p} |I_reg(p) − I_irreg(p)| (7) It has to be noted that equation (7) is a variation of the Manhattan distance. Remark 1: For further calculations, equation (7) will be used.
However, for visual results, equation (7) is applied pixel by pixel and becomes: 255 − |I_reg(p) − I_irreg(p)| (8) The value 255 in this formula inverts the contrast so that the result is more visual (black on white instead of white on black). When subtracting, the algorithm removes the identical material between the two images, symbolised by white in the final image, and finally displays only the excess material. Remark 2: In order for the calculation to be carried out, the images must have the same dimensions. Considering (6), equation (7) becomes: D(I_reg, I_irreg) = 255 Σ_{p=1}^{N_p} |r_p^reg − r_p^irreg| (9) However, if only one image is taken into account, the equation becomes: D(I) = Σ_{p=1}^{N_p} I(p) (10) For an image, the dissimilarity thus corresponds to the final volume of the structure after optimisation. The measurement of dissimilarities between two images then corresponds to the difference in final volume between the two optimised structures (11). Given the definition of the volume fraction in equation (1) and equation (11), it is possible to retrieve the volume fraction, f_v = D(I)/(255 N_p) (12), and the difference in volume fraction, Δf_v = D(I_reg, I_irreg)/(255 N_p) (13), between two structures via an image processing method. Two coefficients will be defined for the remainder of the study: the compliance rate, which ideally should be 1, can be computed on each image to be studied and determines whether the image of the optimised structure complies with the desired final volume; it is defined as the ratio between the volume fraction measured on the image and the theoretical one (14). The dissimilarity rate, which ideally should be 0, can only be computed to compare two images and determines whether the two images are similar; it is defined by normalising the dissimilarity (7) by the theoretical final volume (15). In these two expressions (Eqs. (14) and (15)), V_f,th corresponds to the theoretical final volume of the optimised structure, i.e. the one imposed by the user at the beginning of the topological optimisation code. It should be noted that the results of equations (10) and (11) will be a number of pixels and not a volume as conventionally defined. An important remark should also be taken into consideration for the compliance rate and the dissimilarity rate: these are coefficients defined in a global way. This means that they perform calculations on the entire image without determining where the differences are most marked. Study of mechanical properties The parameter that takes mechanical properties into account in the available topological optimization code is compliance. This parameter will be examined by varying the characteristic length of the different meshes (Fig. 3). Other parameters, such as the volume fraction, could have been considered; however, the quality of the result is influenced more by the numerical precision of the machine used for the calculations than by the mesh size. Figure 3 clearly shows that the smaller the size of the element, i.e. the more elements in the mesh, the lower the compliance. A convergence plateau can be observed from an element size of 0.25 mm onwards. Note that the relative deviation between the compliance values obtained from a regular mesh and an irregular mesh is less than 2% for an element size of 0.5 mm, and finally decreases to 0.3% for an element size of 0.08 mm. Thus, the mechanical properties after topological optimization of a regularly or irregularly meshed structure are similar. The two meshes are therefore equivalent. Influence of the REV size on the volume fraction In the mechanics and image processing section, it was observed that the Representative Elementary Volume (REV) is equivalent to a pixel and represents the average of the intermediate densities of the elements composing that pixel.
A study was conducted on the number and shape of the elements that constitute the REV. The regular structure with 80,000 elements was used for comparison. Various sizes and shapes of REV were applied to this structure to determine their potential effects on the structure's volume fraction. The results are summarized in Table 2, which includes the facies of the images obtained after applying the REV to the initial image (measuring 200 × 400), the size of the REV, the size of the final image, and the number of elements per REV. Study of dissimilarity between regular and irregular meshes For this study, the code allowing the elements to be arranged in a certain order is only functional for the regular mesh. For the irregular mesh, since the number of elements per column is not fixed according to the number of rows, it is difficult to construct an image from this data. The images required for this study were therefore cut directly from ParaView using the capture tool. The results are summarised in Table 3. It contains the facies of the regular and irregular structures, the volume fraction calculated with the image processing code, the compliance rate and the dissimilarity rate. Figure 4 shows the influence of the size of the REV on the volume fraction. First of all, it is useful to point out that, despite a fairly scattered distribution of points, the results remain close to the imposed volume fraction, i.e. 0.5. Thus, for Figure 4, the results, although close to reality, seem to depend on the number of elements that make up the REV: between 2 and 32 elements per REV, the volume fraction does not vary greatly. However, from 64 elements per REV, the volume fraction increases; this means that the differences in intermediate densities between the elements become too great not to influence the quality of the REV. Influence of REV shape on the volume fraction Some REVs have the same number of elements but not the same volume fraction. Figures 5 and 6 are intended to determine the influence of the REV dimensions on the quality of the volume fraction. Concerning Figure 5, the number of rows in the REV does not appear to have a significant impact, because a convergence of the volume fraction can be observed for 1 and 2 rows. Figure 6 displays constant volume fractions for 1, 2, 4, and 8 columns. The REV composed of 4 rows and 8 columns seems to be the most suitable for this structure, as it is closest to the value imposed in the topological optimization code. The final image then corresponds to the dimensions (50 × 50). Study of the dissimilarity between regular and irregular meshes A first visual observation can be made regarding the facies of the regular and irregular meshes (Tab. 3): the smaller the element size, the greater the number of internal branches in the structure. However, it can be noted that for an element size of 0.08 mm, the facies of the regular structure appears identical to that obtained for 0.1 mm, and the number of internal branches decreases for the irregular one. The volume fractions are, however, significantly degraded compared to the values entered by the user in the topological optimization code. This degradation is due to the manual cutting of the images. The compliance rate has also been affected, not exceeding 80%. Nevertheless, given the direct correlation between the compliance rate and the volume fraction (Eqs. (12) and (14)), the volume fractions obtained by image processing on the final regular and irregular structures appear consistent with reality.
For the image subtraction, white areas correspond to common material between the regular and irregular meshes. The shaded areas correspond to excess material in one structure over the other. As the subtraction is absolute, it is not possible to determine which structure the excess comes from. Note that, for visual purposes, each pixel value has been subtracted from 255 to obtain a lighter image (black on white instead of white on black). However, this subtraction is not performed for the calculation of the dissimilarity rate. The dissimilarity rate tends to decrease with the element size, except for 0.2 mm, where it is highest. This can be observed in the facies, with numerous almost black branches. The dissimilarity rate is minimal for 0.08 mm, with a 24% difference between the regular and irregular images. Further research could be conducted to reduce this dissimilarity. Conclusion Although the current results do not allow us to determine whether there is a link between the facies of the structure after topological optimization and its mechanical properties, they do highlight the potential applications that image processing could have for mechanical topological optimization problems. It would be useful to define a way of finding the compliance using the image processing method, which would allow us to determine if a link exists between the structure's facies and its mechanical properties. Since the application of the Representative Elementary Volume (REV) and image reduction did not have a significant influence on the volume fraction, if this result could be repeated for compliance, it would show that for a given mechanical problem the result of the topology optimization would be independent of the structure's dimensions. Coupling this method with artificial intelligence would allow for a drastic reduction in the calculation times of the algorithm. To improve the accuracy of the dissimilarity rate, an image should be created directly from the unstructured mesh to avoid manually cutting the images. Increasing the compliance rate could also have an impact on the dissimilarity rate. Based on the two facies resulting from the topology optimization, a manufacturability index could be characterized to determine which of the two structures is the most manufacturable. Perspectives It would be useful to define a way of calculating or finding compliance using image processing to determine whether there is a link between the mechanical properties of a structure and its facies. Since the application of a REV, and thus the reduction of an image, does not have a significant influence on the volume fraction of the structure, if this result were repeated for compliance, it would show that, for a given mechanical problem, the result of the topological optimization would be independent of the structure's dimensions. Coupling this method with artificial intelligence, which is currently being developed for the topological optimization code, would drastically reduce the calculation times of the algorithm. To improve the accuracy of the dissimilarity rate, it would be wise to create an image directly from the irregularly meshed structure to eliminate the need to cut out the images manually. Increasing the compliance rate could have a direct impact on the dissimilarity rate. In view of the facies resulting from the topological optimization of both regularly and irregularly meshed structures, a manufacturability index could be characterized to determine which structure is more manufacturable.
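To summarise the image-processing pipeline used in this study, here is a minimal NumPy sketch of the pooling step (Eq. (3)), the contrast conversion (Eq. (6)) and the dissimilarity measure (Eq. (7)); the uniform square REV and the random stand-in densities are our simplifying assumptions, not the study's data.

import numpy as np

def pool_to_pixels(rho_elements, pool):
    # REV averaging (Eq. (3)): each pixel is the mean density of a pool x pool block of elements
    h, w = rho_elements.shape
    return rho_elements.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

def to_contrast(rho_pixels):
    # Eq. (6) inverted: pixel contrast C_p = 255 * r_p
    return 255.0 * rho_pixels

def dissimilarity(img_a, img_b):
    # Eq. (7): Manhattan-type distance between two greyscale images of equal size
    return np.sum(np.abs(img_a - img_b))

rng = np.random.default_rng(0)
rho_reg = rng.uniform(0.0, 1.0, (8, 16))   # stand-ins for regular-mesh element densities
rho_irr = rng.uniform(0.0, 1.0, (8, 16))   # stand-ins for irregular-mesh element densities
img_reg = to_contrast(pool_to_pixels(rho_reg, 4))
img_irr = to_contrast(pool_to_pixels(rho_irr, 4))
print(dissimilarity(img_reg, img_irr))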
2023-07-12T07:57:02.778Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "6ee76787ce86fb26e6f1c6813a6a3ae83f2cf50a", "oa_license": "CCBY", "oa_url": "https://www.ijsmdo.org/articles/smdo/pdf/2023/01/smdo220045.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "67c3b782c11fc0bd7ece41673bab71ccc2a9f16f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
9709343
pes2o/s2orc
v3-fos-license
Circuit synthesizable guaranteed passive modeling for multiport structures In this paper we present a highly efficient algorithm to automatically generate circuit synthesizable dynamical models for passive multiport structures. The algorithm is based on a natural convex relaxation of the original nonconvex problem of modeling multiport devices from frequency response data, subject to global passivity constraints. The algorithm identifies a collection of first and second order passive networks interconnected in either series or parallel fashion. Passive models for several multiport structures, including Wilkinson type combiners, power and ground distribution grids and coupled on-chip inductors, are provided to corroborate the theoretical development and show the efficacy of the implemented algorithm. To demonstrate the practical usage of our algorithm, the identified models are also interfaced with commercial simulators and used to perform time domain simulations while being connected to highly nonlinear power amplifiers. INTRODUCTION Automatic generation of accurate, compact and passive dynamical models for multiport passive interconnect structures from frequency response data is a crucial part of the design flow for complex analog systems. Typically, passive structures are simulated by a field solver which computes frequency response data samples in the desired frequency band. Based on the frequency response samples extracted by the solver or collected from measurements, a reduced model is developed which can be incorporated into a circuit simulator for time domain simulations of a larger system also containing nonlinear devices. A model violating any basic physical property, such as passivity, can cause convergence issues for the simulator and huge errors in the response of the overall system, and the results may become completely nonphysical. There exist different approaches to generate dynamical models from frequency response data. The problem of finding a passive multiport model from complex frequency response data is highly nonlinear and non-convex. Given a set of frequency response samples {H_i, ω_i}, where H_i = H(jω_i) are the transfer matrix samples of some unknown multiport linear system, the compact modeling task is to construct a low-order rational transfer matrix Ĥ(s) such that Ĥ(jω_i) ≈ H_i. Formulated as an L2 minimization problem of the sum of squared errors, it can be written as: min_Ĥ Σ_i ‖Ĥ(jω_i) − H_i‖², subject to Ĥ(jω) passive (1) Even after ignoring the passivity constraint in (1), the unconstrained minimization problem is non-convex and is therefore very difficult to solve. Direct solutions using nonlinear least squares have been proposed, such as Levenberg-Marquardt [1]. However, there is no guarantee that such an approach will converge to the global minimum, and quite often the algorithm will yield only a locally optimal result. Over the past years considerable effort has been put into finding a convex relaxation of the original problem including the passivity constraint, such as [2][3][4]. Although these techniques provide an analytical formulation, they are often criticized as being still computationally quite expensive. Most of these techniques rely on enforcing some formulation of the positive real lemma by constraining the real part of the impedance matrix to be positive definite over all frequencies.
Although such a constraint can be certifiably enforced by using a Sum-Of-Squares (SOS) relaxation, it is normally a costly operation, especially when the constraints are defined on frequency dependent matrices such as in [3,4]. Some iterative techniques also exist, such as [5,6]. In these techniques a stable but non-passive model is first identified. This non-passive model is then checked for passivity violations by examining whether there exist purely imaginary eigenvalues of the corresponding Hamiltonian matrix. Finally, some parameters of the initially identified non-passive model are perturbed to correct for the passivity violations. These techniques are computationally efficient; however, since perturbing the system is an ill-posed problem, there is no guarantee that the final passivated model is optimal for accuracy. In this paper we present a theoretical and analytical formulation, and a highly efficient implementation, of a procedure for identifying passive dynamical models from frequency response data for multiport structures. We solve the problem in two steps: first a set of common poles is identified using already established techniques [3,7,8]; next, we identify residue matrices while simultaneously enforcing passivity using frequency-independent linear matrix inequalities. Although similar conditions for passivity were derived in [9,10], there the conditions were used only to 'check' for passivity violations, as opposed to our proposed algorithm, where these conditions are built into the model identification procedure to 'enforce' passivity. Also, no efficient algorithm was proposed in [9,10] to rectify passivity violations. For example, in [9] it was proposed that the pole-residue pairs violating the passivity conditions should be discarded; this is highly restrictive and can significantly deteriorate the accuracy. We instead propose that the identified residue matrices should conform to the passivity conditions during the identification procedure, such that there are no passivity violations in the final model. The formulation presented in this paper, being convex, is guaranteed to converge to the global minimum and can be easily implemented using publicly available convex optimization solvers such as SeDuMi [11]. Also, since the constraints presented in this paper are frequency-independent, for the same model accuracy we get orders of magnitude improvement in terms of speed compared to other convex optimization based techniques such as [3,4], where the constraints are frequency dependent and are expensive to enforce. The scheme presented in this paper can potentially be extended to generate parameterized models with an a priori global passivity certificate. Finally, the models generated by our algorithm can readily be synthesized into an equivalent passive network and can be interfaced with commercial circuit simulators by generating either a spice-like netlist or a Verilog-A model. The remainder of the paper is organized as follows: Section 2 describes the rational fitting of transfer matrices and the notion of passivity. Section 3 formulates the problem of passive fitting for multiport LTI systems. Section 4 details the full algorithm for our modeling approach. Finally, Section 5 demonstrates the effectiveness of the proposed approach in modeling various multiport structures.
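As a preview of the pole-residue models defined in the next section, the following NumPy sketch evaluates a transfer matrix of the form H(jω) = Σ_k R_k/(jω − a_k) + D + jωF and checks the positivity condition numerically on a frequency sweep; the numerical values are illustrative, not taken from the paper, and sampling finitely many frequencies is only a spot check, not a passivity proof.

import numpy as np

def H_eval(w, poles, residues, D, F):
    # Pole-residue transfer matrix: H(jw) = sum_k R_k/(jw - a_k) + D + jw*F
    s = 1j * w
    H = D + s * F
    for a_k, R_k in zip(poles, residues):
        H = H + R_k / (s - a_k)
    return H

def min_hermitian_eig(H):
    # Smallest eigenvalue of the Hermitian part; it must be >= 0 at every
    # frequency for the positive-realness (passivity) condition to hold
    return np.linalg.eigvalsh(0.5 * (H + H.conj().T)).min()

poles = [-1.0, -3.0]                                   # stable real poles
residues = [np.array([[2.0, 0.5], [0.5, 1.0]]),        # symmetric PSD residues
            np.array([[1.0, 0.2], [0.2, 0.5]])]
D, F = 0.1 * np.eye(2), np.zeros((2, 2))
print(min(min_hermitian_eig(H_eval(w, poles, residues, D, F))
          for w in np.logspace(-2, 2, 200)))           # >= 0 on the sweep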
Rational Transfer Matrix Fitting The problem of constructing a rational approximation of multiport systems consists of finding residue matrices R_k, poles a_k and the matrices D and F such that the identified model, defined by the transfer function Ĥ(s) in (2), minimizes the mismatch between the reduced model and the original system as described in (1): Ĥ(s) = Σ_{k=1}^{m} R_k/(s − a_k) + D + sF (2) where R_k, D and F are T×T residue matrices (assuming the system has T ports) and a_k are poles. Since most passive structures have a symmetric response, R_k, D and F are symmetric matrices. Passivity of Immittance Transfer Matrix Passivity is the inability of a system (or model) to generate energy. Since arbitrary connections of passive systems are guaranteed to be passive, passivity becomes an essential requirement if the model is to be used for time domain simulations while being interconnected with other subsystems. While it may be possible for a non-passive model to provide high accuracy in the frequency domain, the same model when used in a time domain simulation could produce extremely inaccurate results resulting from passivity violations. Passivity for an impedance or admittance system corresponds to 'positive realness' of the transfer matrix. To be positive real, the transfer matrix Ĥ(s) must satisfy the following constraints: Ĥ(s*) = Ĥ*(s) (3a) Ĥ(s) is analytic in ℜ{s} > 0 (3b) Ĥ(jω) + Ĥ†(jω) ⪰ 0 for all ω (3c) where ℜ{ } denotes the real part, * the complex conjugate, and † indicates the Hermitian transpose. The first condition (3a), commonly known as conjugate symmetry, ensures that the impulse response corresponding to Ĥ(s) is real. The second condition (3b) implies stability of the transfer function. A causal linear system in transfer matrix form is stable if all of its poles are in the left half of the complex plane, i.e. all the poles have negative real part. The third and final condition (3c), the positivity condition, implies positive realness of the symmetric part of the transfer matrix on the jω axis. Problem Formulation We expand the summation for Ĥ(s) in (2). Also, since we are mainly interested in the properties of H(s) on the imaginary axis, we replace s with jω: Ĥ(jω) = Σ_{k=1}^{κ_r} R_k^r/(jω − a_k^r) + Σ_{k=1}^{κ_c} R_k^c/(jω − a_k^c) + D + jωF (4) Here κ_r and κ_c denote the number of purely real and the number of complex poles, respectively. Also, R_k^r ∈ R^{T×T}, R_k^c ∈ C^{T×T}, a_k^r ∈ R, a_k^c ∈ C for all k, and D, F ∈ R^{T×T}, where T is the number of ports. In the following subsections, we consider one by one the implications of each passivity condition in (3) on the structure of (4). Conjugate Symmetry The terms in (4) corresponding to the matrices D and F, and to the summation over purely real poles, automatically satisfy the first passivity condition (3a). On the other hand, this condition requires that the complex poles a_k^c and complex residue matrices R_k^c always come in complex-conjugate pairs: Ĥ(jω) = Σ_{k=1}^{κ_r} R_k^r/(jω − a_k^r) + Σ_{k=1}^{κ_c/2} [ (ℜR_k^c + jℑR_k^c)/(jω − a_k^c) + (ℜR_k^c − jℑR_k^c)/(jω − (a_k^c)*) ] + D + jωF (5) The proof is given in Appendix A. In (5), ℜ and ℑ indicate the real and imaginary parts respectively. Note that the summation for complex poles now extends only up to κ_c/2. Rewriting (5) compactly: Ĥ(jω) = Σ_{k=1}^{κ_r} Ĥ_k^r(jω) + Σ_{k=1}^{κ_c/2} Ĥ_k^c(jω) + D + jωF (6) Stability The second condition (3b), which requires analyticity of Ĥ(s) in ℜ{s} > 0, implies stability. For a linear causal system in pole-residue form (2), the system is strictly stable if all of its poles a_k are in the left half of the complex plane, i.e. they have negative real part (ℜ{a_k} < 0). Positivity The positivity condition for passivity (3c) is the most difficult condition to enforce analytically. We present here an extremely efficient condition which implies (3c).
We consider the case when all the building blocks in the summation (6), namely the purely real pole/residue terms Ĥ_k^r(jω), the complex-conjugate pairs of poles/residues Ĥ_k^c(jω), and the direct term matrix D, are individually positive real. Please note that the jωF term in (6) has a purely imaginary response and therefore does not affect the positivity condition. Lemma 3.1: The sum of positive-real, complex matrices is positive real. Lemma 3.1 describes a sufficient, but not necessary, condition for (3c). In the following subsections we derive the equivalent conditions of positive realness on each term separately. Purely Real Pole-Residues In this section we derive the condition for the purely real pole/residue term Ĥ_k^r(jω) = R_k^r/(jω − a_k^r) (7) in the summation (6) to be positive real. Such a condition can be obtained by rationalizing Ĥ_k^r(jω) in (7), which results in: ℜ Ĥ_k^r(jω) = −a_k^r R_k^r/((a_k^r)² + ω²) ⪰ 0 (11) Complex Conjugate Pole-Residues In this section we derive the positive realness condition for the complex pole/residue term Ĥ_k^c(jω) = R_k^c/(jω − a_k^c) + (R_k^c)*/(jω − (a_k^c)*) (8) in the summation (6). Since complex terms always appear in conjugate pairs, we first add the two terms for Ĥ_k^c(jω) in (8), resulting in: Ĥ_k^c(jω) = [2jω ℜR_k^c − 2(ℜR_k^c ℜa_k^c + ℑR_k^c ℑa_k^c)] / [(|a_k^c|² − ω²) − 2jω ℜa_k^c] (12) In order to obtain the positive realness condition on Ĥ_k^c(jω) we rationalize (12) to form (13). The resulting condition for ℜ Ĥ_k^c(jω) ⪰ 0 is given in (14): −2(ℜR_k^c ℜa_k^c + ℑR_k^c ℑa_k^c)(|a_k^c|² − ω²) − 4ω² ℜa_k^c ℜR_k^c ⪰ 0, the denominator (|a_k^c|² − ω²)² + 4ω²(ℜa_k^c)² being positive (14) Direct Term Matrix Since D is a constant real symmetric matrix, we require D to be a positive semidefinite matrix, i.e. D ⪰ 0. The Constrained Minimization Problem We combine all the constraints derived earlier and formulate a constrained minimization problem as follows: min Σ_i ‖Ĥ(jω_i) − H_i‖² subject to ℜ Ĥ_k^r(jω) ⪰ 0, ℜ Ĥ_k^c(jω) ⪰ 0, D ⪰ 0, ℜ{a_k} < 0 (15) Here H_i are the given frequency response samples at frequencies ω_i; Ĥ_k^r and Ĥ_k^c are defined in (7) and (8) respectively; a_k^r and a_k^c denote the real and complex poles respectively. The detailed expressions for ℜ Ĥ_k^r(jω) ⪰ 0 and ℜ Ĥ_k^c(jω) ⪰ 0 are described in (11) and (14) respectively. IMPLEMENTATION In this section we describe in detail the implementation of our passive multiport model identification procedure, based on solving the constrained minimization framework developed in Section 3. The optimization problem in (15) is non-convex because both the objective function and the constraints are non-convex. The non-convexity in (15) arises mainly because of the terms containing products and ratios between decision variables, such as the ratio of the residue matrices R_k and the poles a_k in the objective function, and product terms and ratios of R_k and a_k in the constraints. Since the main cause of non-convexity in (15) is the coupling between R_k and a_k, it is natural to uncouple the identification steps of the unknowns, namely R_k and a_k, in order to convexify (15). We propose to solve the optimization problem in (15) in two steps. The first step consists of finding a set of stable poles a_k for the system. The second step is to find a passive multiport dynamical model for the system, given the stable poles from step 1. In the following sections we describe how to solve the two steps. Step 1: Identification of stable poles Several efficient algorithms already exist for the identification of stable poles for multiport systems. Some of the stable pole identification approaches use optimization based techniques such as [3]. Some schemes such as [7,8] find the location of stable poles iteratively. Any one of these algorithms can be used as the first step of our algorithm, where we identify a common set of stable poles for all the transfer functions in the transfer matrix.
As mentioned before, to enforce conjugate symmetry, the stable poles can either be real or come in complex-conjugate pairs. Step 2: Identification of Residue Matrices In this section we formulate the convex optimization problem for the identification of the residue matrices using the stable poles from step 1. We first revisit the conditions for passivity (11) and (14), and later we develop the convex objective function. Purely Real Pole-Residues Let us consider the positive realness condition on the purely real pole-residue term Ĥ_k^r(jω) as in (11). The constraint (11) requires frequency-dependent matrices to be positive semidefinite for all frequencies. This is in general very expensive to enforce. However, a careful observation of (11) reveals that the denominator, which is the only frequency-dependent part of (11), is a positive real number for all frequencies. Hence we can ignore the positive denominator, which leaves us enforcing −a_k^r R_k^r ⪰ 0. Since we are already given stable poles (i.e. a_k^r < 0), the constraint in (11) reduces to enforcing positive semidefiniteness on R_k^r, hence: R_k^r ⪰ 0 (16) Such a constraint is convex and can be enforced extremely efficiently using SDP solvers [11]. Complex Conjugate Pole-Residues In this section we reconsider the positive realness condition on the complex-conjugate pole-residue pair term Ĥ_k^c(jω) as in (14). As before, a closer examination of the frequency-dependent denominator in (14) reveals that it is positive for all frequencies. Given that we have a fixed set of stable poles, and the denominator is always positive, we rewrite the constraint (14) only in terms of the variables, i.e. ω and R_k^c. Also, we replace the constant expressions of ℜa_k^c and ℑa_k^c in (14) with generic constants c_i. We finally obtain the following equivalent condition: (c_1 ℜR_k^c + c_2 ℑR_k^c) + ω² (c_3 ℜR_k^c + c_4 ℑR_k^c) ⪰ 0 (17) The problem is however still not solved, since the condition in (17) is frequency dependent. Lemma: X_1 + ω² X_2 ⪰ 0 for all ω if and only if X_1 ⪰ 0 and X_2 ⪰ 0. PROOF. Direction ⇒ Given X_1 + ω² X_2 ⪰ 0 we consider the following limits: for ω → 0 the condition yields X_1 ⪰ 0, while for ω → ∞ the ω² term dominates and yields X_2 ⪰ 0. Direction ⇐ follows from the fact that a non-negative weighted sum of positive semidefinite matrices is positive semidefinite. We define X_1 = c_1 ℜR_k^c + c_2 ℑR_k^c and X_2 = c_3 ℜR_k^c + c_4 ℑR_k^c, so that enforcing the two frequency-independent linear matrix inequalities X_1 ⪰ 0 and X_2 ⪰ 0 guarantees (17) for all frequencies. Convex Optimization to Find Residue Matrices In this section we summarize the final convex optimization identifying the residue matrices which correspond to a passive H(jω), given stable poles a_k: minimize over {R_k, D, F} the cost Σ_i ‖Ĥ(jω_i) − H_i‖², subject to R_k^r ⪰ 0 for all real poles, X_1 ⪰ 0 and X_2 ⪰ 0 for all complex-conjugate pole pairs, and D ⪰ 0 (22) This final problem (22) is convex, since the objective function is a summation of L2 norms. All the constraints in (22) are linear matrix inequalities. This convex optimization problem is a special case of semidefinite programming, requiring only a few frequency-independent matrices to be positive semidefinite. This problem formulation is extremely fast to solve, compared to other convex formulations [3,4] where the unknown matrices are frequency dependent. Equivalent Circuit Synthesis From the circuits perspective, the algorithm identifies a collection of low-pass, band-pass, high-pass and all-pass passive filter networks. These passive blocks can be readily synthesized into equivalent passive circuit networks, and can be interfaced with commercial circuit simulators either by generating a spice-like netlist, or by using Verilog-A. Alternatively, we can develop equivalent state space realizations for our passive multiport models; for example, a Jordan-canonical form can be obtained as described in [8] and then diagonalized.
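A minimal CVXPY sketch of Step 2 is given below, restricted to purely real poles for brevity (complex-conjugate pairs would add the X_1, X_2 LMIs of (17)); the function and variable names are ours, and this illustrates the formulation rather than reproducing the authors' implementation.

import numpy as np
import cvxpy as cp

def fit_residues(freqs, H_data, real_poles):
    # Fit H(jw) ~ sum_k R_k/(jw - a_k) + D to samples H_data[i] (T x T complex),
    # with the frequency-independent LMIs R_k >= 0 (Eq. (16)) and D >= 0
    T = H_data[0].shape[0]
    R = [cp.Variable((T, T), symmetric=True) for _ in real_poles]
    D = cp.Variable((T, T), symmetric=True)
    obj = 0
    for w, H in zip(freqs, H_data):
        # 1/(jw - a) = (-a - jw)/(a^2 + w^2): split the fit into real/imaginary parts
        re = D + sum((-a / (a**2 + w**2)) * Rk for a, Rk in zip(real_poles, R))
        im = sum((-w / (a**2 + w**2)) * Rk for a, Rk in zip(real_poles, R))
        obj += cp.sum_squares(re - H.real) + cp.sum_squares(im - H.imag)
    cons = [Rk >> 0 for Rk in R] + [D >> 0]
    cp.Problem(cp.Minimize(obj), cons).solve()
    return [Rk.value for Rk in R], D.value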
The Complete Algorithm In this section we present the description of the complete framework in Algorithm 1. Algorithm 1: Complete Passive Multiport Model Identification. Input: the set of frequency response samples {H_i, ω_i}, the number of poles N. Output: passive model Ĥ(jω). 1: Find the stable system with N poles a_k. 2: Solve the optimization problem (22) for R_k. 3: Construct the model in pole/residue form as in (4). 4: Synthesize the equivalent passive circuit and generate the corresponding netlist or Verilog-A model file. This algorithm minimizes a cost function based on the L2 norm subject to linear matrix inequalities. Such a formulation can be solved very efficiently and is guaranteed to converge to the global minimum. Moreover, the fact that this algorithm provides analytical expressions to enforce passivity in a highly efficient manner has enormous potential, such as in future extensions to parameterized passive multiport models, or to include designer-specific constraints such as ensuring a good match for quality factors in RF inductor dynamical models. Wilkinson Combiner in a LINC Amplifier In this section we present an example illustrating the usefulness of our proposed methodology for modeling and simulating a LINC (LInear amplification with Nonlinear Components) power amplifier. The architecture, as described in Figure 1, consists of a signal splitter, two power amplifiers, and a Wilkinson type power combiner. This architecture is designed to operate at 40 GHz. PA1 and PA2 are class B amplifiers designed in a 130 nm SiGe process using BJTs. The Wilkinson combiner is designed on an alumina substrate with a characteristic impedance of 50 Ω and an operating frequency of 40 GHz. The input v_in to this architecture is a 64-QAM signal. The signal splitter decomposes the input QAM signal into two phase-modulated fixed-amplitude signals. Let v_in = V_in∠φ be the input signal, and v_1 = V_0∠φ_1 and v_2 = V_0∠φ_2 be the two signals generated by the splitter. The split signals are amplified by individual nonlinear power amplifiers. The outputs of these two power amplifiers are added using a Wilkinson type power combiner. This 3-port Wilkinson combiner is simulated inside a full wave public domain field solver [13] available at [14]. Using the frequency response samples generated by the field solver, a closed form state space model of order m = 30 is identified using our passive modeling algorithm. To demonstrate the accuracy of this model in the frequency domain, Figure 2 compares the impedance parameters from the field solver (dots) and from the identified model. The algorithm took only 2 seconds to generate the entire model, whereas for the same order and similar accuracy the algorithm described in [3] took 83 seconds, giving us a speed-up of 40×. A model is passive if there are no purely imaginary eigenvalues of the associated Hamiltonian matrix. Figure 3 is a zoomed-in plot of the eigenvalues of the associated Hamiltonian matrix for the identified model. It is clear that the model passes the passivity test, since there are no purely imaginary eigenvalues. Finally, the overall amplifier architecture is simulated inside a commercial circuit simulator after connecting the linear model for the combiner with the rest of the circuit components, including the nonlinear amplifiers, as shown in Figure 1. Practically speaking, as verified in Figures 4 and 5, the passive nature of the identified model for the Wilkinson combiner guarantees that transient simulations for the overall architecture converge, and the final output signal v_out is also a 64-QAM signal similar to the input v_in.
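The Hamiltonian eigenvalue test mentioned above can be sketched as follows for a state-space realization (A, B, C, D) of an immittance model; the Hamiltonian below is the standard one used for immittance passivity checking (it assumes D + Dᵀ is nonsingular), and the numerical values are illustrative only, not taken from the paper.

import numpy as np

def hamiltonian_passivity_check(A, B, C, D, tol=1e-8):
    # The model is passive iff this Hamiltonian has no purely imaginary eigenvalues
    R_inv = np.linalg.inv(D + D.T)
    Am = A - B @ R_inv @ C
    M = np.block([[Am, -B @ R_inv @ B.T],
                  [C.T @ R_inv @ C, -Am.T]])
    eig = np.linalg.eigvals(M)
    on_axis = eig[(np.abs(eig.real) < tol) & (np.abs(eig.imag) > tol)]
    return on_axis.size == 0

# State-space realization of H(s) = sum_k R_k/(s - a_k) + D with PSD residues R_k = L_k L_k^T
poles = [-1.0, -4.0]
L = [np.linalg.cholesky(np.array([[2.0, 0.5], [0.5, 1.0]])),
     np.linalg.cholesky(np.array([[1.0, 0.2], [0.2, 0.5]]))]
A = np.kron(np.diag(poles), np.eye(2))
B = np.vstack([Lk.T for Lk in L])
C = np.hstack(L)
D = 0.05 * np.eye(2)
print(hamiltonian_passivity_check(A, B, C, D))  # True: passive by construction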
Power Distribution Grid The second example we present is a power and ground distribution grid used in systems on chip or on package. The 3D layout for this power grid is shown in Figure 6, and is composed of Vdd (red or dark grey) and Gnd (green or light grey) segments placed along both the x and y axes. External connections, given by solder balls in a flip chip technology, are modeled with bond wires running vertically. This structure was simulated using 52390 unknowns in the full wave mixed potential integral equation (MPIE) solver FastMaxwell [13] to obtain frequency response samples up to 12 GHz. The multiport simulation was arranged by placing eight ports: four at the grid corners and four inside the grid. The ports are illustrated in Figure 6 as black strips. For this example our proposed algorithm identified an 8 × 8 passive transfer matrix of order m = 160 in 74 seconds, whereas the algorithm in [3] ran out of memory and did not generate the model. To demonstrate the accuracy, Figure 7 compares the real and imaginary impedances of our reduced model with the field solver data. Although the models are passive by construction, the passive nature was verified by the absence of purely imaginary eigenvalues of the associated Hamiltonian matrix. On-chip RF Inductors The third example is a collection of 4 RF inductors on the same chip or package that are used in the design of multichannel receivers. The layout is shown in Figure 8. The structure has four ports in total, configured at the input of each inductor. This structure was simulated using 10356 unknowns in the full wave field solver FastMaxwell [13], which captures the substrate using a Green's function complex image method. For this example a 4 × 4 passive transfer matrix of order m = 92 was identified. The algorithm took 72 seconds to identify the passive model, compared to the algorithm in [3], which ran out of memory and did not generate the model. Figure 9 shows the impedance parameters both from the field solver and from our identified model. The passive nature of this model was verified by the absence of purely imaginary eigenvalues of the associated Hamiltonian matrix. CONCLUSION In this paper we have proposed a new semidefinite programming based algorithm to solve the original nonlinear and non-convex identification problem for passive multiport models. The identified models, because of the passive nature of our construction, can be readily synthesized into equivalent circuits and hence can be interfaced with commercial simulators easily. The theory is supported by the modeling and simulation of various multiport structures. Using our approach we were able to get a speed-up of 40× over [3], while for moderately large problems we were able to converge within a reasonable amount of time, whereas approaches such as [3] ran out of resources and did not generate the model.
2015-07-15T00:15:54.000Z
2010-09-01T00:00:00.000
{ "year": 2010, "sha1": "c07bec4f427f09e39a63aa16dcbea27e416b6c59", "oa_license": "CCBYNC", "oa_url": "https://dspace.mit.edu/bitstream/1721.1/72205/1/Daniel-Circuit%20synthesizable.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "7ced920359b1cc1ffac7fb1ab5a3a522ff610d0c", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
10134774
pes2o/s2orc
v3-fos-license
Stochastic population growth in spatially heterogeneous environments: the density-dependent case This work is devoted to studying the dynamics of a structured population that is subject to the combined effects of environmental stochasticity, competition for resources, spatio-temporal heterogeneity and dispersal. The population is spread throughout n patches whose population abundances are modeled as the solutions of a system of nonlinear stochastic differential equations living on [0,∞)^n. We prove that r, the stochastic growth rate of the total population in the absence of competition, determines the long-term behaviour of the population. The parameter r can be expressed as the Lyapunov exponent of an associated linearized system of stochastic differential equations. Detailed analysis shows that if r > 0, the population abundances converge polynomially fast to a unique invariant probability measure on (0,∞)^n, while when r < 0, the population abundances of the patches converge almost surely to 0 exponentially fast. This generalizes and extends the results of Evans et al. (J Math Biol 66(3):423-476, 2013) and proves one of their conjectures. Compared to recent developments, our model incorporates very general density-dependent growth rates and competition terms. Furthermore, we prove that persistence is robust to small, possibly density dependent, perturbations of the growth rates, dispersal matrix and covariance matrix of the environmental noise. We also show that the stochastic growth rate depends continuously on the coefficients. Our work allows the environmental noise driving our system to be degenerate. This is relevant from a biological point of view since, for example, the environments of the different patches can be perfectly correlated. We show how one can adapt the nondegenerate results to the degenerate setting. As an example we fully analyze the two-patch case, n = 2, and show that the stochastic growth rate is a decreasing function of the dispersion rate. In particular, coupling two sink patches can never yield persistence, in contrast to the results from the non-degenerate setting treated by Evans et al. which show that sometimes coupling by dispersal can make the system persistent.
Introduction The survival of an organism is influenced by both biotic (competition for resources, predator-prey interactions) and abiotic (light, precipitation, availability of resources) factors. Since these factors are space-time dependent, all types of organisms have to choose their dispersal strategies: if they disperse they can arrive in locations with different environmental conditions, while if they do not disperse they face the temporal fluctuations of the local environmental conditions. The dispersal strategy impacts key attributes of a population including its spatial distribution and temporal fluctuations in its abundance. Individuals selecting more favorable habitats are more likely to survive or reproduce. When population densities increase in these habitats, organisms may prosper by selecting habitats that were previously unused. There have been numerous studies of the interplay between dispersal and environmental heterogeneity and how this influences population growth; see Hastings (1983), Gonzalez and Holt (2002), Schmidt (2004), Roy et al. (2005), Schreiber (2010), Cantrell et al. (2012), Durrett and Remenik (2012), Evans et al. (2013) and references therein. The mathematical analysis of stochastic models with density-dependent feedbacks is less explored. In the setting of discrete-space discrete-time models there have been thorough studies by Benaïm and Schreiber (2009) and Schreiber (2010). Continuous-space discrete-time population models that disperse and experience uncorrelated environmental stochasticity have been studied by Hardin et al. (1988a, b, 1990). They show that the leading Lyapunov exponent r of the linearization of the system around the extinction state almost determines the persistence and extinction of the population. For continuous-space continuous-time population models, Mierczyński and Shen (2004) study the dynamics of random Kolmogorov type PDE models in bounded domains. Once again, it is shown that the leading Lyapunov exponent r of the linearization around the trivial equilibrium 0 almost determines when the population goes extinct and when it persists. In the current paper we explore the question of persistence and extinction when the population dynamics is given by a system of stochastic differential equations. In our setting, even though our methods and techniques are very different from those used by Hardin et al. (1988a) and Mierczyński and Shen (2004), we still make use of the system linearized around the extinction state. The Lyapunov exponent of this linearized system plays a key role throughout our arguments. Evans et al. (2013) studied a linear stochastic model that describes the dynamics of populations that continuously experience uncertainty in time and space. Their work has shed some light on key issues from population biology. Their results provide fundamental insights into "ideal free" movement in the face of uncertainty, the evolution of dispersal rates, the single large or several small (SLOSS) debate in conservation biology, and the persistence of coupled sink populations. In this paper, we propose a density-dependent model of stochastic population growth that captures the interactions between dispersal and environmental heterogeneity and complements the work of Evans et al. (2013). We then present a rigorous and comprehensive study of the proposed model based on stochastic analysis. The dynamics of a population in nature is stochastic.
This is due to environmental stochasticity: the fluctuations of the environment make the growth rates random. One of the simplest models for a population living in a single patch is

dU(t) = U(t)(a − bU(t)) dt + σU(t) dW(t), (1.1)

where U(t) is the population abundance at time t, a is the mean per-capita growth rate, b > 0 is the strength of intraspecific competition, σ² is the infinitesimal variance of fluctuations in the per-capita growth rate and (W(t))_{t≥0} is a standard Brownian motion. The long-term behavior of (1.1) is determined by the stochastic growth rate a − σ²/2 in the following way (see Evans et al. 2015; Dennis and Patil 1984):

• If a − σ²/2 > 0 and U(0) = u > 0, then (U(t))_{t≥0} converges weakly to its unique invariant probability measure ρ on (0, ∞).
• If a − σ²/2 < 0 and U(0) = u > 0, then U(t) converges to 0 almost surely, exponentially fast.

Organisms are always affected by temporal heterogeneities, but they are subject to spatial heterogeneities only when they disperse. Population growth is influenced by spatial heterogeneity through the way organisms respond to environmental signals (see Hastings 1983; Cantrell and Cosner 1991; Chesson 2000; Schreiber and Lloyd-Smith 2009). There have been several analytic studies that contributed to a better understanding of the separate effects of spatial and temporal heterogeneities on population dynamics. However, few theoretical studies have considered the combined effects of spatio-temporal heterogeneities, dispersal, and density-dependence for discretely structured populations with continuous-time dynamics. As seen in both the continuous (Evans et al. 2013) and the discrete (Palmqvist and Lundberg 1998) settings, the extinction risk of a population is greatly affected by the spatio-temporal correlation between the environments of the different patches. For example, if spatial correlations are weak, one can show that populations coupled via dispersal can survive even though every patch, on its own, would go extinct (see Evans et al. 2013; Jansen and Yoshimura 1998; Harrison and Quinn 1989). Various species usually exhibit spatial synchrony. Ecologists are interested in this pattern as it can lead to the extinction of rare species. Possible causes for synchrony are dispersal and spatial correlations in the environment (see Legendre 1993; Kendall et al. 2000; Liebhold et al. 2004). Consequently, it makes sense to look at stochastic patch models coupled by dispersal for which the environmental noise of the different patches can be strongly correlated. We do this by extending the setting of Evans et al. (2013) by allowing the environmental noise driving the system to be degenerate. The rest of the paper is organized as follows. In Sect. 2, we introduce our model for a population living in a patchy environment. It takes into account the dispersal between different patches and density-dependent feedback. The temporal fluctuations of the environmental conditions of the various patches are modeled by Brownian motions that are correlated. We start by considering the relative abundances of the different patches in a low-density approximation. We show that these relative abundances converge in distribution to their unique invariant probability measure asymptotically as time goes to infinity. Using this invariant probability measure we derive an expression for r, the stochastic growth rate (Lyapunov exponent) in the absence of competition. We show that this r is key in analyzing the long-term behavior of the populations. In Appendix A we show that if r > 0 then the abundances converge weakly, polynomially fast, to their unique invariant probability measure on (0, ∞)^n.
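The dichotomy above is easy to observe numerically. The following minimal sketch (ours, not the authors'; all parameter values are arbitrary) integrates (1.1) with an Euler–Maruyama scheme and contrasts a persistent regime (a − σ²/2 > 0) with an extinction regime (a − σ²/2 < 0).

```python
import numpy as np

def simulate_logistic_sde(a, b, sigma, u0, T=100.0, dt=1e-3, seed=0):
    """Euler-Maruyama integration of dU = U(a - bU) dt + sigma U dW."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    u = np.empty(steps + 1)
    u[0] = u0
    for k in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        # Clip at 0: the exact solution stays positive, but Euler steps may not.
        u[k + 1] = max(u[k] + u[k] * (a - b * u[k]) * dt + sigma * u[k] * dW, 0.0)
    return u

# a - sigma^2/2 = 0.875 > 0: fluctuates around a positive level (persistence).
print(simulate_logistic_sde(a=1.0, b=1.0, sigma=0.5, u0=0.1)[-1])
# a - sigma^2/2 = -0.4 < 0: decays towards 0 (extinction).
print(simulate_logistic_sde(a=0.1, b=1.0, sigma=1.0, u0=0.1)[-1])
```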
In Appendix B, we show that if r < 0 then all the population abundances go extinct asymptotically, at an exponential rate (with exponential constant r). Appendix C is dedicated to the case when the noise driving our system is degenerate (that is, the dimension of the noise is lower than the number of patches). In Appendix D, we show that r depends continuously on the coefficients of our model and that persistence is robust, that is, small perturbations of the model do not make a persistent system become extinct. We provide some numerical examples and possible generalizations in Sect. 4.

Model and results

We study a population with overlapping generations, which lives in a spatio-temporally heterogeneous environment consisting of n distinct patches. The growth rate of each patch is determined by both deterministic and stochastic environmental inputs. We denote by X_i(t) the population abundance at time t ≥ 0 of the ith patch and write X(t) = (X_1(t), …, X_n(t)) for the vector of population abundances. Following Evans et al. (2013), it is appropriate to model X(t) as a Markov process with the following properties when 0 ≤ Δt ≪ 1:

• the conditional mean is E[X_i(t + Δt) − X_i(t) | X(t) = x] ≈ [x_i(a_i − b_i(x_i)) + Σ_j D_{ji}x_j] Δt, where a_i ∈ R is the per-capita growth rate in the ith patch, b_i(x_i) is the per-capita strength of intraspecific competition in patch i when the abundance of the patch is x_i, and D_{ij} ≥ 0 is the dispersal rate from patch i to patch j;
• the conditional covariance is Cov[X_i(t + Δt) − X_i(t), X_j(t + Δt) − X_j(t) | X(t) = x] ≈ σ_{ij}x_ix_j Δt.

The difference between our model and the one from Evans et al. (2013) is that we added density-dependent feedback through the b_i terms. We work on a complete probability space (Ω, F, {F_t}_{t≥0}, P) with filtration {F_t}_{t≥0} satisfying the usual conditions. We consider the system

dX_i(t) = (X_i(t)[a_i − b_i(X_i(t))] + Σ_{j=1}^n D_{ji}X_j(t)) dt + X_i(t) dE_i(t), i = 1, …, n, (2.1)

where D_{ij} ≥ 0 for j ≠ i is the per-capita rate at which the population in patch i disperses to patch j, D_{ii} = −Σ_{j≠i} D_{ij} is the total per-capita emigration rate out of patch i, E(t) = (E_1(t), …, E_n(t))^⊤ = Γ^⊤B(t), Γ is an n × n matrix such that Γ^⊤Γ = Σ = (σ_{ij})_{n×n}, and B(t) = (B_1(t), …, B_n(t)) is a vector of independent standard Brownian motions adapted to the filtration {F_t}_{t≥0}. Throughout the paper, we work with the following assumption regarding the growth of the intraspecific competition rates.

Assumption 2.1 For each i = 1, …, n the function b_i : R_+ → R is locally Lipschitz and vanishing at 0. Furthermore, the b_i satisfy a growth condition, labeled (2.2), which roughly requires the competition terms to be sufficiently strong at large abundances.

Remark 2.2 Note that condition (2.2) is biologically reasonable because it holds if the b_i's are sufficiently large for large x_i's. We provide some simple scenarios when Assumption 2.1 is satisfied: (a) a general class of competition functions for which it is easy to show that Assumption 2.1 holds; (b) particular cases of (a) are, for example, any b_i : R_+ → R that are locally Lipschitz, vanishing at 0 and such that lim_{x→∞} b_i(x) = ∞; (c) one natural choice for the competition functions, which is widely used throughout the literature, is b_i(x) = κ_ix, x ∈ (0, ∞), for some κ_i > 0, in which case the competition terms become −κ_iX_i(t)². More generally, drift terms of the form X_i f_i(X_i), where the f_i are locally Lipschitz, can always be rewritten in the form (2.1). Therefore, our setting is in fact very general and incorporates both nonlinear growth rates and nonlinear competition terms.

A distinctive property of cooperative systems is that comparison arguments are generally satisfied. We refer to Chueshov (2002) for more details.

Remark 2.4 If the dispersal matrix (D_{ij}) has a normalized dominant left eigenvector α = (α_1, …, α_n), then one can show that the system with dispersal matrix δD converges as δ → ∞ to a limiting system
(X̄_1(t), …, X̄_n(t)) for which X̄_i(t) = α_iX̄(t), where X̄(t) = X̄_1(t) + ··· + X̄_n(t) and X̄ is an autonomous Markov process that satisfies a one-dimensional SDE. As such, our system is a general version of the system treated in Evans et al. (2015). One can recover the system from Evans et al. (2015) as an infinite dispersion limit of ours. We denote by X^x(t) the solution of (2.1) started at X(0) = x ∈ R^n_+. Following Evans et al. (2013), we call matrices D with zero row sums and non-negative off-diagonal entries dispersal matrices. If D is a dispersal matrix, then it is a generator of a continuous-time Markov chain. Define P_t := exp(tD), t ≥ 0. Then P_t, t ≥ 0, is a family of matrices with non-negative entries that gives the transition probabilities of a Markov chain: the (i, j)th entry of P_t gives the proportion of the population that was initially in patch i at time 0 but has dispersed to patch j at time t, and D is the generator of this Markov chain. If one wants to include mortality induced by dispersal, one can add cemetery patches in which dispersing individuals enter and experience a killing rate before moving to their final destination. Our model is a density-dependent generalization of the one by Evans et al. (2013). We are able to prove that the linearization of the density-dependent model fully determines the non-linear density-dependent behavior, a fact which was conjectured by Evans et al. (2013). Furthermore, we prove stronger convergence results and thus extend the work of Evans et al. (2013). Analogous results for discrete-time versions of the model have been studied by Benaïm and Schreiber (2009) for discrete space and by Hardin et al. (1988a, b) for continuous space. We will work under the following assumptions.

Assumption 2.2 The dispersal matrix D is irreducible.

Assumption 2.3 The covariance matrix Σ is non-singular.

Assumption 2.2 is equivalent to forcing the entries of the matrix P_t = exp(tD) to be strictly positive for all t > 0. This means that it is possible for the population to disperse between any two patches. We can always reduce our problem to this setting by working with the maximal irreducible subsets of patches. Assumption 2.3 says that our randomness is non-degenerate, and thus truly n-dimensional. We show in Appendix C how to get the desired results when Assumption 2.3 does not hold. Throughout the paper we set R^n_+ := [0, ∞)^n and R^{n,◦}_+ := (0, ∞)^n. We define the total abundance of our population at time t ≥ 0 by S(t) := X_1(t) + ··· + X_n(t) and the vector of patch proportions by Y_i(t) := X_i(t)/S(t). An application of Itô's lemma to (2.1) yields (2.4), which we can rewrite as a compact system, (2.5), for (Y(t), S(t)), where Y(t) lies in the simplex Δ := {(y_1, …, y_n) ∈ R^n_+ : y_1 + ··· + y_n = 1}. Let Δ◦ := {(y_1, …, y_n) ∈ R^{n,◦}_+ : y_1 + ··· + y_n = 1} be the interior of Δ. Consider Equation (2.5) on the boundary {(y, s) : y ∈ Δ, s = 0} (that is, we set S(t) ≡ 0 in the equation for Y(t)). We obtain in this way a system, (2.6), on the simplex Δ. We also introduce the linearized version of (2.1), denoted (2.7), where the competition terms b_i(x_i) are all set to 0, and let S̃(t) = Σ_{i=1}^n X̃_i(t) be the total population abundance in the absence of competition. The processes (X̃_1(t), …, X̃_n(t)), Ỹ(t) and S̃(t) have been studied by Evans et al. (2013). Evans et al. (2013, Proposition 3.1) proved that the process (Ỹ(t))_{t≥0} is an irreducible Markov process, which has the strong Feller property and admits a unique invariant probability measure ν* on Δ. Let Ỹ(∞) be a random variable on Δ with distribution ν*.
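To make these objects concrete, here is a small numerical sketch (ours; the matrices and rates below are invented for illustration). It first checks that P_t = exp(tD) is a stochastic matrix with strictly positive entries for an irreducible dispersal matrix D, and then integrates the density-dependent system (2.1) with an Euler–Maruyama scheme, drawing the correlated noise E through a matrix Γ with Γ⊤Γ = Σ.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-patch dispersal matrix: zero row sums, non-negative off-diagonal.
D = np.array([[-1.0,  1.0],
              [ 0.5, -0.5]])
for t in (0.5, 2.0):
    P = expm(t * D)                         # transition matrix of the dispersal chain
    assert np.allclose(P.sum(axis=1), 1.0)  # rows sum to one (stochastic matrix)
    assert P.min() > 0                      # positive entries <=> D irreducible

Sigma = np.array([[0.09, 0.03],
                  [0.03, 0.04]])            # environmental covariance (made up)
Gamma = np.linalg.cholesky(Sigma).T         # then Gamma^T Gamma = Sigma

def simulate_model(a, b_funcs, D, Gamma, x0, T=50.0, dt=1e-3, seed=1):
    """Euler-Maruyama for dX_i = (X_i(a_i - b_i(X_i)) + sum_j D_ji X_j) dt + X_i dE_i,
    with E(t) = Gamma^T B(t) for independent standard Brownian motions B."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        dB = rng.normal(0.0, np.sqrt(dt), size=len(x))
        b_vals = np.array([b(xi) for b, xi in zip(b_funcs, x)])
        drift = x * (a - b_vals) + D.T @ x  # D[i, j] = per-capita rate from i to j
        x = np.maximum(x + drift * dt + x * (Gamma.T @ dB), 0.0)
    return x

print(simulate_model(a=np.array([0.4, -0.2]),
                     b_funcs=[lambda u: u, lambda u: 2.0 * u],  # b_i(x) = kappa_i x
                     D=D, Gamma=Gamma, x0=[0.1, 0.1]))
```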
We define

r := ∫_Δ (Σ_i a_iy_i − ½ Σ_{i,j} σ_{ij}y_iy_j) ν*(dy). (2.8)

Remark 2.5 We note that r is the stochastic growth rate (or Lyapunov exponent) of the total population S̃(t) in the absence of competition. That is, r = lim_{t→∞} (1/t) log S̃(t) almost surely. The expression (2.8) for r coincides with the one derived by Evans et al. (2013). We use superscripts to denote the starting points of our processes. For example, (Y^{y,s}(t), S^{y,s}(t)) denotes the solution of (2.4) with (Y(0), S(0)) = (y, s) ∈ Δ × (0, ∞). Fix x ∈ R^n_+ and define the normalized occupation measures

Π_t^x(B) := (1/t) ∫_0^t 1{X^x(s) ∈ B} ds, B ∈ B(R^n_+).

These random measures describe the distribution of the observed population dynamics up to time t. If we define the sets S_η := {x = (x_1, …, x_n) ∈ R^{n,◦}_+ : |x_i| ≤ η for some i = 1, …, n}, then Π_t^x(S_η) is the fraction of the time in the interval [0, t] during which the abundance of some patch is less than η, given that our population starts at X(0) = x.

Definition 2.1 One can define a distance on the space of probability measures living on the Borel measurable subsets of R^n_+, that is, on the space (R^n_+, B(R^n_+)). This is done by defining ‖·‖_TV, the total variation norm, via ‖μ − ν‖_TV := sup_{A∈B(R^n_+)} |μ(A) − ν(A)|.

Theorem 2.1 Suppose that Assumptions 2.2 and 2.3 hold and that r > 0. The process X(t) = (X_1(t), …, X_n(t))_{t≥0} has a unique invariant probability measure π on R^{n,◦}_+ that is absolutely continuous with respect to the Lebesgue measure and, for any q* > 0,

lim_{t→∞} t^{q*}‖P_X(t, x, ·) − π(·)‖_TV = 0, x ∈ R^{n,◦}_+,

where P_X(t, x, ·) is the transition probability of (X(t))_{t≥0}. Moreover, for any initial value x ∈ R^n_+\{0} and any π-integrable function f we have

lim_{T→∞} (1/T) ∫_0^T f(X^x(t)) dt = ∫ f dπ almost surely.

Remark 2.6 Theorem 2.1 is a direct consequence of Theorem A.2, which will be proved in Appendix A. As a corollary we get the following result.

Definition 2.2 Following Roth and Schreiber (2014), we say that the model (2.1) is stochastically persistent if for all ε > 0 there exists η > 0 such that, with probability one, Π_t^x(S_η) ≤ ε for t sufficiently large and x ∈ R^n_+\{0}.

Corollary 2.1 If Assumptions 2.2 and 2.3 hold, and r > 0, then the process X(t) is stochastically persistent.

Proof By Theorem 2.1, we have that for all x ∈ R^{n,◦}_+, Π_t^x(S_η) converges almost surely to π(S_η). Since π is supported on R^{n,◦}_+, we get the desired result.

Biological interpretation of Theorem 2.1 The quantity r is the Lyapunov exponent or stochastic growth rate of the total population process (S̃(t))_{t≥0} in the absence of competition. This number describes the long-term growth rate of the population in the presence of a stochastic environment. According to (2.8), r can be written as the difference μ − σ²/2 where

• μ is the average of the per-capita growth rates with respect to the asymptotic distribution Ỹ(∞) of the population in the absence of competition;
• σ² is the infinitesimal variance of the environmental stochasticity averaged according to the asymptotic distribution of the population in the absence of competition.

We note by (2.8) that r depends on the dispersal matrix, the growth rates at 0 and the covariance matrix of the environmental noise. As such, the stochastic growth rate can change due to the dispersal strategy or environmental fluctuations. When the stochastic growth rate of the population in the absence of competition is strictly positive (i.e. r > 0) our population is persistent in a strong sense: for any starting point (X_1(0), …, X_n(0)) = (x_1, …, x_n) ∈ R^{n,◦}_+ the distribution of the population densities at time t in the n patches (X_1(t), …, X_n(t)) converges as t → ∞ to the unique probability measure π that is supported on R^{n,◦}_+.
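Since r is the Lyapunov exponent of the linearized (competition-free) system, it can be estimated by direct simulation: evolve the linearization (2.7), renormalize the total abundance at every step to avoid overflow, and average the accumulated log-growth. A rough Monte Carlo sketch (ours; the discretization and parameter values are arbitrary, and a small dt with a large T is needed for a decent estimate):

```python
import numpy as np

def estimate_r(a, D, Gamma, T=1000.0, dt=1e-3, seed=2):
    """Monte Carlo estimate of r = lim (1/t) log S~(t) for the linearized system.
    Works with the proportions Y~ by renormalizing the state at every step."""
    rng = np.random.default_rng(seed)
    n = len(a)
    x = np.full(n, 1.0 / n)       # start on the simplex
    log_s = 0.0
    for _ in range(int(T / dt)):
        dB = rng.normal(0.0, np.sqrt(dt), size=n)
        x = x + (a * x + D.T @ x) * dt + x * (Gamma.T @ dB)
        x = np.maximum(x, 1e-12)  # guard against rare negative Euler steps
        s = x.sum()
        log_s += np.log(s)        # one-step log-growth of the total abundance
        x = x / s                 # renormalize: keeps the recursion stable
    return log_s / T

a = np.array([0.3, -0.1])
D = np.array([[-1.0, 1.0], [0.8, -0.8]])
Gamma = np.linalg.cholesky(np.array([[0.09, 0.02], [0.02, 0.04]])).T
print(estimate_r(a, D, Gamma))   # sign suggests persistence (> 0) or extinction (< 0)
```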
Definition 2.3 We say the population of patch i goes extinct if for all x ∈ R^n_+\{0}, lim_{t→∞} X_i^x(t) = 0 almost surely. We say the population goes extinct if the populations of all the patches go extinct, that is, if for all x ∈ R^n_+\{0}, lim_{t→∞} X^x(t) = 0 almost surely.

Theorem 2.2 Suppose that Assumptions 2.2 and 2.3 hold and that r < 0. Then for any i = 1, …, n and any x = (x_1, …, x_n) ∈ R^n_+,

lim_{t→∞} (1/t) log X_i^x(t) = r almost surely.

Biological interpretation of Theorem 2.2 If the stochastic growth rate of the population in the absence of competition is negative (i.e. r < 0), the population densities of the n patches (X_1(t), …, X_n(t)) go extinct exponentially fast, at rate r < 0, with probability 1 for any starting point (X_1(0), …, X_n(0)) = (x_1, …, x_n) ∈ R^n_+. In Appendix A we prove Theorem 2.1, while Theorem 2.2 is proven in Appendix B.

Degenerate noise We consider the evolution of the process (X(t))_{t≥0} given by (2.1) when Assumption 2.3 does not hold. If the covariance matrix Σ = Γ^⊤Γ coming from the Brownian motions E(t) = (E_1(t), …, E_n(t))^⊤ = Γ^⊤B(t) is singular, the environmental noise driving our SDEs has a lower dimension than the dimension n of the underlying state space. It becomes much more complex to prove that our process is Feller and irreducible. In order to verify the Feller property, we have to verify the so-called Hörmander condition, and to verify the irreducibility, we have to investigate the controllability of a related control system. We are able to prove the following extinction and persistence results.

Theorem 2.3 Assume that Ỹ(t) has a unique invariant probability measure ν*. Define r by (2.8). Suppose that r < 0. Then for any i = 1, …, n and any x ∈ R^n_+, lim sup_{t→∞} (1/t) log X_i^x(t) ≤ r almost surely. In particular, for any i = 1, …, n and any x ∈ R^n_+, lim_{t→∞} X_i^x(t) = 0 almost surely.

Remark 2.7 The extra assumption in this setting is that the Markov process describing the proportions of the populations of the patches evolving without competition, Ỹ(t), has a unique invariant probability measure. In fact, we conjecture that Ỹ(t) always has a unique invariant probability measure. We were able to prove this conjecture when n = 2; see Remark 3.1 for details.

Theorem 2.4 Assume that Ỹ(t) has a unique invariant probability measure ν*, that r > 0, and that the conditions described in Remark 2.8 hold. Then the process X(t) = (X_1(t), …, X_n(t))_{t≥0} has a unique invariant probability measure π on R^{n,◦}_+ that is absolutely continuous with respect to the Lebesgue measure and, for any q* > 0,

lim_{t→∞} t^{q*}‖P_X(t, x, ·) − π(·)‖_TV = 0, x ∈ R^{n,◦}_+, (2.14)

where ‖·‖_TV is the total variation norm and P_X(t, x, ·) is the transition probability of (X(t))_{t≥0}. Moreover, for any initial value x ∈ R^n_+\{0} and any π-integrable function f, the ergodic convergence (2.15) holds.

Remark 2.8 We require as before that Ỹ(t) has a unique invariant probability measure. Furthermore, we require that there exists some time T > 0 such that, if we observe the process (Y(t), S(t)) at the fixed times T, 2T, 3T, …, kT, …, it is irreducible (loosely speaking, this means that the process can visit any state) and aperiodic (returns to a given state occur at irregular times).

Case study: n = 2 Note that the two theorems above have some extra assumptions. We exhibit how one can obtain these conditions explicitly as functions of the various parameters of the model. For the sake of a clean exposition we chose to fully treat the case when n = 2 and b_i(x) = b_ix, x ≥ 0, i = 1, 2 for some b_1, b_2 > 0 (each specific case would have to be studied separately as the computations change in each setting). As a result, (2.1) becomes

dX_1(t) = [X_1(t)(a_1 − b_1X_1(t)) − αX_1(t) + βX_2(t)] dt + σ_1X_1(t) dB(t),
dX_2(t) = [X_2(t)(a_2 − b_2X_2(t)) + αX_1(t) − βX_2(t)] dt + σ_2X_2(t) dB(t),

where α is the dispersal rate from patch 1 to patch 2, β is the dispersal rate from patch 2 to patch 1, σ_1, σ_2 are non-zero constants and (B(t))_{t≥0} is a one-dimensional Brownian motion.
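Before computing r explicitly, note that the extinction statement of Theorem 2.3 can be checked empirically on this two-patch system by tracking log X_i(t)/t. A minimal sketch (ours; we read α as the rate from patch 1 to patch 2 and β as the rate from patch 2 to patch 1, and the parameter values are arbitrary):

```python
import numpy as np

def simulate_degenerate(a, b, alpha, beta, sigma, x0, T=200.0, dt=1e-3, seed=3):
    """Two patches driven by ONE shared Brownian motion (degenerate noise):
    dX1 = (X1(a1 - b1 X1) - alpha X1 + beta X2) dt + sigma1 X1 dB
    dX2 = (X2(a2 - b2 X2) + alpha X1 - beta X2) dt + sigma2 X2 dB"""
    rng = np.random.default_rng(seed)
    x1, x2 = x0
    for _ in range(int(T / dt)):
        dB = rng.normal(0.0, np.sqrt(dt))
        d1 = (x1 * (a[0] - b[0] * x1) - alpha * x1 + beta * x2) * dt + sigma[0] * x1 * dB
        d2 = (x2 * (a[1] - b[1] * x2) + alpha * x1 - beta * x2) * dt + sigma[1] * x2 * dB
        x1, x2 = max(x1 + d1, 1e-300), max(x2 + d2, 1e-300)  # floor avoids log(0)
    return x1, x2

# In the setting of Theorem 2.7 (sigma1 = sigma2, b1 = b2, 2(beta - alpha) = a2 - a1),
# r = a1 - alpha + beta - sigma^2/2 = -0.38 < 0: expect decay at roughly this rate.
x1, x2 = simulate_degenerate(a=(-0.3, -0.3), b=(1.0, 1.0), alpha=0.5, beta=0.5,
                             sigma=(0.4, 0.4), x0=(0.2, 0.2))
print(np.log(x1) / 200.0, np.log(x2) / 200.0)
```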
The Lyapunov exponent can now be expressed (see Remark 3.1) as an integral, (2.16), against the invariant density ρ*_1 given in (3.5) below. If σ_1 = σ_2 =: σ, one has a closed-form expression, (2.17) (see Remark 3.1).

Theorem 2.5 Define r by (2.16) if σ_1 ≠ σ_2 and by (2.17) if σ_1 = σ_2 = σ. If r < 0 then for any i = 1, 2 and any x = (x_1, x_2) ∈ R²_+, lim_{t→∞} X_i^x(t) = 0 almost surely.

Theorem 2.6 Define r as in Theorem 2.5. If r > 0 then the conclusion of Theorem 2.4 holds.

Remark 2.9 Once again the parameter r tells us when the population goes extinct and when it persists. To obtain the conclusion of Theorem 2.4 when r > 0, we need a non-degeneracy condition. The condition σ_1 ≠ σ_2 tells us that the noise must at least differ through its variance. If σ_1 = σ_2 then we require a condition on the quantity that measures the dispersal rate of individuals from patch 2 to patch 1 averaged by the inverse relative competition strength of patch 2. In particular, if b_1 = b_2 we require 2(β − α) ≠ a_2 − a_1, that is, twice the difference of the dispersal rates cannot equal the difference of the growth rates. The dynamics of the system is very different if these conditions do not hold (see Sect. 3.2 and Theorem 2.7).

Theorem 2.7 Suppose that σ_1 = σ_2 = σ, b_1 = b_2 and 2(β − α) = a_2 − a_1. In this setting one can show that the stochastic growth rate is given by r = a_1 − α + β − σ²/2, and one obtains the results described in Sect. 3.2: the two patch abundances synchronize and their long-term behavior is governed by a one-dimensional logistic diffusion. The proof of Theorem 2.7 is presented in Sect. 3.2.

Robust persistence and extinction The model we work with is an approximation of the real biological models. As a result, it is relevant to see if 'close' models behave similarly to ours. This reduces to studying the robustness of our system. Consider a process (X̂(t))_{t≥0} obtained from (2.1) by perturbing the growth rates, competition functions, dispersal matrix and covariance matrix by at most θ, in the sense made precise in (2.20); we then call X̂ a θ-perturbation of X.

Theorem 2.8 Suppose that the dynamics of (X(t))_{t≥0} satisfy the assumptions of Theorem 2.1. Then there exists θ > 0 such that any θ-perturbation (X̂(t))_{t≥0} of (X(t))_{t≥0} is persistent. Moreover, the process (X̂(t))_{t≥0} has a unique invariant probability measure π̂ on R^{n,◦}_+ that is absolutely continuous with respect to the Lebesgue measure and, for any q* > 0, lim_{t→∞} t^{q*}‖P_{X̂}(t, x, ·) − π̂(·)‖_TV = 0, where P_{X̂}(t, x, ·) is the transition probability of (X̂(t))_{t≥0}.

Biological interpretation of Theorem 2.8 As long as the perturbation of our model is small, persistence does not change to extinction. Our model, even though it is only an approximation of reality, can provide relevant information regarding biological systems. Small enough changes in the growth rates, the competition rates, the dispersal matrix and the covariance matrix leave a persistent system persistent.

Theoretical and numerical examples

This subsection is devoted to some theoretical and numerical examples. We choose the dimension to be n = 2, so that we can compute the stochastic growth rate explicitly.

Remark 3.1 If an explicit expression for r is desirable, one needs to determine the first and second moments of the invariant probability measure ν*. One can show that ρ*, the density of ν* with respect to Lebesgue measure, satisfies the stationary equation (3.1), where μ_i(y) and v_{i,j}(y) are the drift and diffusion entries of the generator of Ỹ(t), and ρ* is constrained by ∫ ρ*(y) dy = 1 together with appropriate boundary conditions. The boundary conditions are usually found by characterizing the domain of the infinitesimal generator of the Feller diffusion process Ỹ(t), which is usually a very difficult problem. However, following Evans et al. (2013), in the case of two patches (n = 2) and non-degenerate noise the problem is significantly easier. Let Σ = diag(σ²_1, σ²_2). The system becomes (3.2). Since Ỹ_1(t) + Ỹ_2(t) = 1, to find the invariant probability measure of Ỹ(t) we only need to find the invariant probability measure of Ỹ_1(t). It is easy to find the density ρ*_1 of Ỹ_1(∞) explicitly, by solving (3.1) and noting that 0 and 1 are both entrance boundaries for the diffusion Ỹ_1(t).
Then ρ*_1 is given, up to a normalization constant C > 0, by the explicit formula (3.5). One can then obtain the explicit expression (2.16) for the Lyapunov exponent by integrating against ρ*_1.

The degenerate case Suppose that a_1 = a_2 or that b_1 = b_2. This system is degenerate since both equations are driven by a single Brownian motion. In this case, the unique equilibrium y* of (3.7) can easily be proved to be asymptotically stable, and lim_{t→∞} Ỹ_1(t) = y*. Thus, one can compute the stochastic growth rate directly from y*, and as a result the assumptions of Theorem 2.6 hold. If r < 0, by Theorem 2.5 the population goes extinct, while if r > 0, the population persists by Theorem 2.6.

The degenerate case when the conditions of Theorem 2.6 are violated We analyse the system under the assumptions of Theorem 2.7. If r < 0 then lim_{t→∞} X_1(t) = lim_{t→∞} X_2(t) = 0 almost surely, as a consequence of Theorem 2.5. We focus on the case r > 0 and show that some of the results violate the conclusions of Theorem 2.6. Set Z(t) := X_1(t)/X_2(t). Assume Z(0) ≠ 1 and, without loss of generality, suppose Z(0) > 1; then Z(t) > 1 for all t ≥ 0. One can further see from (3.11) that Z(t) − 1 tends to 0 exponentially fast. If Z(0) = 1, let X_1(0) = X_2(0) = x > 0; similar arguments to the above show that X_1(t) = X_2(t) for all t ≥ 0. To gain more insight into the asymptotic properties of (X_1(t), X_2(t)), we study the process U(t). Applying Itô's formula and then the variation-of-constants formula (see Mao 1997, Section 3.4) gives a representation, (3.13), of X_2(t) in terms of U(t) and Z(t). It is well known that U(t) is the solution to the stochastic logistic equation

dU(t) = U(t)[(a_1 − α + β) − b_1U(t)] dt + σU(t) dB(t).

By the law of the iterated logarithm, lim_{t→∞} e^{rt+σB(t)}/e^{(r−ε)t} = ∞ and lim_{t→∞} e^{rt+σB(t)}/e^{(r+ε)t} = 0 for any ε > 0. In view of (3.12), we can use L'Hôpital's rule, and applying these limits and (3.11) to (3.13), it is easy to show that with probability 1, lim_{t→∞} X_2(t)/U(t) = 1. Since lim_{t→∞} Z(t) = 1 almost surely, we also have lim_{t→∞} X_1(t)/U(t) = 1 almost surely. Thus, the long-term behavior of X_1(t) and X_2(t) is governed by the one-dimensional diffusion U(t). In particular, both X_1(t) and X_2(t) converge to a unique invariant probability measure ρ on (0, ∞), which is the invariant probability measure of U(t). In this case, the invariant probability measure of X(t) = (X_1(t), X_2(t))_{t≥0} is not absolutely continuous with respect to the Lebesgue measure on R^{2,◦}_+. Instead, the invariant probability measure is concentrated on the one-dimensional manifold {x = (x_1, x_2) ∈ R^{2,◦}_+ : x_1 = x_2}.

Biological interpretation The stochastic growth rate in this degenerate setting is given by r = a_1 − α + β − σ²/2. We note that this term is equal to the stochastic growth rate of patch 1, a_1 − σ²/2, to which we add β, the rate of dispersal from patch 2 to patch 1, and subtract α, the rate of dispersal from patch 1 to patch 2. When r < 0 one has extinction. In particular, if the patches on their own are sink patches, so that a_1 − σ²/2 < 0 and a_2 − σ²/2 < 0, dispersal cannot lead to persistence: under the constraint 2(β − α) = a_2 − a_1 one has r = (a_1 + a_2)/2 − σ²/2, so r > 0 and both sink conditions cannot hold simultaneously. The behavior of the system when r > 0 is different from the behavior in the non-degenerate setting of Theorem 2.1 and from the degenerate setting of Theorem 2.6. Namely, if the patches start with equal populations then the patch abundances remain equal for all times and evolve according to the one-dimensional logistic diffusion U(t). If the patches start with different population abundances then X_1(t) and X_2(t) are never equal but tend to each other asymptotically as t → ∞. Furthermore, the long-term behavior of X_1(t) and X_2(t) is once again determined by the logistic diffusion U(t), as almost surely X_i(t)/U(t) → 1 as t → ∞. As such, if r > 0 we have persistence, but the invariant measure the system converges to no longer has R^{2,◦}_+ as its support. Instead, the invariant measure has the line {x = (x_1, x_2) ∈ R^{2,◦}_+ : x_1 = x_2} as its support.
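The synchronization just described is easy to see with the simulate_degenerate sketch above: with equal growth, competition and noise parameters (and α = β, so that the constraint of Theorem 2.7 holds), the ratio Z(t) = X_1(t)/X_2(t) should approach 1 even from unequal initial abundances. The values below are arbitrary.

```python
# Assumes simulate_degenerate from the earlier sketch is in scope.
# Here r = a1 - alpha + beta - sigma^2/2 = 0.6 - 0.045 = 0.555 > 0: persistence,
# with the two abundances synchronizing onto the diagonal x1 = x2.
x1, x2 = simulate_degenerate(a=(0.6, 0.6), b=(1.0, 1.0), alpha=0.3, beta=0.3,
                             sigma=(0.3, 0.3), x0=(0.05, 0.4), T=200.0)
print(x1 / x2)   # expect a value close to 1
```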
Example 3.1 We discuss the case when a_1 = a_2 and σ_1 = σ_2. The stochastic growth rate can be written, by the analysis in the sections above, as (3.14).

Biological interpretation In the case when a_1 = a_2, σ_1 = σ_2 and b_1 ≠ b_2 (so that the two patches only differ in their competition rates), the stochastic growth rate r does not depend on the dispersal rate α. The system behaves just as a single-patch system with stochastic growth rate a_1 − σ²/2. In contrast to Evans et al. (2013, Example 1), coupling two sink patches by dispersal cannot yield persistence. However, if the growth rates of the patches are different, a_1 ≠ a_2, then the expression for r given in (3.14) can be expanded in the dispersal rate α. In particular, r is a decreasing function of the dispersal rate α for large values of α (see also Fig. 1, and the numerical sketch below). This is different from the result of Evans et al. (2013, Example 1), where r was shown to be an increasing function of α. In contrast to the non-degenerate case, coupling patches by dispersal decreases the stochastic growth rate and as such makes persistence less likely. This highlights the negative effect of spatial correlations on population persistence and explains why one may no longer get the rescue effect. This is one of our main biological conclusions. Furthermore, we also recover that dispersal has a negative impact on the stochastic growth rate when there is spatial heterogeneity (i.e. a_1 ≠ a_2). This fact has a long history, going back to the work by Karlin (1982).
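The monotonicity in Example 3.1 can be probed with the estimate_r sketch from Sect. 2 by choosing a rank-one Γ, so that both patches are driven by the same Brownian motion, and sweeping the dispersal rate. This is a rough numerical check with invented parameters, not a proof.

```python
import numpy as np
# Assumes estimate_r from the earlier sketch is in scope.
sigma = 0.4
Gamma_deg = np.array([[sigma, sigma],
                      [0.0,   0.0]])   # Gamma^T Gamma has rank one: shared noise
a = np.array([0.5, 0.1])               # spatial heterogeneity: a1 != a2
for alpha in (0.1, 0.5, 2.0, 8.0):
    D = np.array([[-alpha, alpha], [alpha, -alpha]])  # symmetric dispersal, rate alpha
    print(alpha, estimate_r(a, D, Gamma_deg))         # expect r to decrease in alpha
```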
Discussion and generalizations

For numerous models of population dynamics it is natural to assume that time is continuous. One reason for this is that environmental conditions often change continuously with time and can therefore naturally be described by continuous-time models. There have been a few papers dedicated to the study of stochastic differential equation models of interacting, unstructured populations in stochastic environments (see Benaïm et al. 2008; Evans et al. 2015). These models, however, do not account for population structure or correlated environmental fluctuations. Examples of structured populations can be found by looking at a population in which individuals can live in one of n patches (e.g. fish swimming between basins of a lake or butterflies dispersing between meadows). Dispersal is viewed by many population biologists as an important mechanism for survival. Not only does dispersal allow individuals to escape unfavorable landscapes (due to environmental changes or lack of resources), it also enables populations to smooth out local spatio-temporal environmental changes. Patch models of dispersal have been studied extensively in the deterministic setting (see for example Hastings 1983; Cantrell et al. 2012). In the stochastic setting, there have been results for discrete time and space by Benaïm and Schreiber (2009), for continuous time and discrete space by Evans et al. (2013), and for structured populations that evolve continuously both in time and space. We analyze the dynamics of a population that is spread throughout n patches, evolves in a stochastic environment (that can be spatially correlated), disperses among the patches and whose members compete with each other for resources. We characterize the long-term behavior of our system as a function of r, the growth rate in the absence of competition. The quantity r is also the Lyapunov exponent of a suitable linearization of the system around 0. Our analysis shows that r < 0 implies extinction and r > 0 persistence. The limit case r = 0 cannot be analyzed in our framework. We expect that new methods have to be developed in order to tackle the r = 0 scenario. Since mathematical models are always approximations of nature, it is necessary to study how the persistence and extinction results change under small perturbations of the parameters of the models. The concept of robust persistence (or permanence) was introduced by Hutson and Schmitt (1992). They showed that for certain systems persistence holds even when one has small perturbations of the growth functions. There have been results on robust persistence in the deterministic setting for Kolmogorov systems by Schreiber (2000) and Garay and Hofbauer (2003). Recently, robust permanence for deterministic Kolmogorov equations with respect to perturbations in both the growth functions and the feedback dynamics has been analyzed by Patel and Schreiber (2016). In the stochastic differential equations setting, results on robust persistence and extinction have been proven by Benaïm et al. (2008), among others. We prove analogous results in our framework, where the populations are coupled by dispersal. For robust persistence we show in Appendix D that even with density-dependent perturbations of the growth rates, dispersal matrix and environmental covariance matrix, if these perturbations are sufficiently small and if the unperturbed system is persistent, then the perturbed system is also persistent. In the case of extinction we can prove robustness when there are small constant perturbations of the growth rates, dispersal matrices and covariance matrices.
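The continuity of r in the coefficients, and hence the robustness of persistence (Theorem 2.8), can also be probed numerically: re-estimating r after small perturbations of the growth rates should move the estimate only slightly. A sketch (reusing the estimate_r helper above; the perturbation sizes θ are arbitrary):

```python
import numpy as np
# Assumes estimate_r from the earlier sketch is in scope.
a = np.array([0.3, -0.1])
D = np.array([[-1.0, 1.0], [0.8, -0.8]])
Gamma = np.linalg.cholesky(np.array([[0.09, 0.02], [0.02, 0.04]])).T

rng = np.random.default_rng(4)
r0 = estimate_r(a, D, Gamma)
for theta in (0.01, 0.05):
    a_pert = a + rng.uniform(-theta, theta, size=2)   # theta-perturbed growth rates
    print(theta, r0, estimate_r(a_pert, D, Gamma))    # change should be of order theta
```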
In ecology there has been an increased interest in the spatial synchrony present in population dynamics. This refers to synchronous changes in the time-dependent characteristics (e.g. abundances) of spatially structured populations. One of the mechanisms that creates synchrony is the dependence of the population dynamics on a synchronous random environmental factor such as temperature or rainfall. The synchronizing effect of environmental stochasticity, the so-called Moran effect, has been observed in multiple population models. Usually this effect is the result of random but correlated weather effects acting on spatially structured populations. Following Legendre (1993), one could argue that our world is a spatially correlated one. For many biotic and abiotic factors, like population density, temperature or growth rate, values at close locations are usually similar. For an in-depth analysis of spatial synchrony see Kendall et al. (2000) and Liebhold et al. (2004). Most stochastic differential models appearing in population dynamics treat only the case when the noise is non-degenerate (although see Rudnicki 2003; Dieu et al. 2016). This simplifies the technical proofs significantly. However, from a biological point of view it is not clear that the noise should never be degenerate. For example, if one models a system with multiple populations, then all populations can be influenced by the same factors (a disease, changes in temperature and sunlight, etc.). Environmental factors can intrinsically create spatial correlations, and as such it makes sense to study how these degenerate systems compare to the non-degenerate ones. In our setting the n different patches could be strongly spatially correlated. Actually, in some cases it could be more realistic to have the same one-dimensional Brownian motion (B(t))_{t≥0} driving the dynamics of all patches. We were able to find conditions under which the proofs from the non-degenerate case can be generalized to the degenerate setting. This is a first step towards a model that tries to explain the complex relationship between dispersal, stochastic environments and spatial correlations. We fully analyze what happens if there are only two patches, n = 2, and the noise is degenerate. Our results show, unexpectedly and in contrast to the non-degenerate results by Evans et al. (2013), that coupling two sink patches cannot yield persistence. More generally, we show that the stochastic growth rate is a decreasing function of the dispersal rate. In specific instances of the degenerate setting, even when there is persistence, the invariant probability measure the system converges to does not have R^{2,◦}_+ as its support. Instead, the abundances of the two patches converge to an invariant probability measure supported on the line {x = (x_1, x_2) ∈ R^{2,◦}_+ : x_1 = x_2}. These examples show that degenerate noise is not just an added technicality: the results can be completely different from those in the non-degenerate setting. The negative effect of spatial correlations (including the fully degenerate case) has been studied in several papers for discrete-time models (see Schreiber 2010; Harrison and Quinn 1989; Palmqvist and Lundberg 1998; Bascompte et al. 2002; Roy et al. 2005). The negative impact of dispersal on the stochastic growth rate r when there is spatial heterogeneity (i.e. a_1 ≠ a_2) has a long history going back to the work of Karlin (1982) on the Reduction Principle. Following Altenberg (2012), the reduction principle can be stated as the widely exhibited phenomenon that mixing reduces growth, and differential growth selects for reduced mixing. The first use of this principle in the study of the evolution of dispersal can be found in Hastings (1983). The work of Kirkland et al. (2006) provides an independent proof of the Reduction Principle and applications to nonlinear competing species in discrete-time, discrete-space models. In the case of continuous-time, discrete-space models (given by branching processes) a version of the Reduction Principle is analysed by Schreiber and Lloyd-Smith (2009).

k species competing and dispersing in n patches

Real populations do not evolve in isolation, and as a result much of ecology is concerned with understanding the characteristics that allow two species to coexist, or one species to take over the habitat of another. It is of fundamental importance to understand what will happen to an invading species. Will it invade successfully or die out in the attempt? If it does invade, will it coexist with the native population? Mathematical models for invasibility have contributed significantly to the understanding of the epidemiology of infectious disease outbreaks (Cross et al. 2005) and ecological processes (Law and Morton 1996; Caswell 2001).
There is widespread empirical evidence that heterogeneity, arising from abiotic (precipitation, temperature, sunlight) or biotic (competition, predation) factors, is important in determining invasibility (Davies et al. 2005; Pyšek and Hulme 2005). However, few theoretical studies have investigated this; see, e.g., Schreiber and Lloyd-Smith (2009), Schreiber and Ryan (2011) and Schreiber (2012). In this paper we have considered the dynamics of one population that disperses through n patches. One possible generalization would be to look at k populations (X^1, …, X^k) that compete with each other for resources, have different dispersal strategies and possibly experience the environmental noise differently. Looking at such a model could shed light upon fundamental problems regarding invasions in spatio-temporally heterogeneous environments. The extension of our results to competition models could lead to the development of a stochastic version of the treatment of the evolution of dispersal developed for patch models in the deterministic setting by Hastings (1983) and Cantrell et al. (2012). In the current paper we have focused on how spatio-temporal variation influences the persistence and extinction of structured populations. In a follow-up paper we intend to look at dispersal strategies in terms of evolutionarily stable strategies (ESS), which can be characterized by showing that a population having a dispersal strategy (D_{ij}) cannot be invaded by any other population having a different dispersal strategy (D′_{ij}). The first thing to check would be whether this model has an ESS and, if ESS exist, whether they are unique. One might even find that there are no ESS in our setting. For example, Schreiber and Li (2011) show that there exist no ESS for periodic non-linear models and that instead one gets a coalition of strategies that act as an ESS. We expect to be able to generalize the results of Cantrell et al. (2012) to a stochastic setting using the methods from this paper.

Appendix A: The case r > 0

The next sequence of lemmas and propositions is used to prove Theorem 2.1. We start by showing that our processes are well-defined Markov processes.

Proposition A.1 The SDE (stochastic differential equation) defined by (2.1) has unique strong solutions X(t) = (X_1(t), …, X_n(t)), t ≥ 0, for any x = (x_1, …, x_n) ∈ R^n_+. Furthermore, X(t) is a strong Markov process with the Feller property and is irreducible.

Proof Since the coefficients of (2.1) are locally Lipschitz, there exists a unique local solution to (2.1) with a given initial value. In other words, for any initial value, there is a stopping time τ_e > 0 and a process (X(t))_{t≥0} satisfying (2.1) up to τ_e with lim_{t→τ_e} |X(t)| = ∞ (see e.g. Khasminskii 2012, Section 3.4). Clearly, if X(0) = 0 then X(t) = 0, t ∈ [0, τ_e), which implies that τ_e = ∞. By a comparison theorem for SDEs (see Geiß and Manthey (1994, Theorem 1.2) and Remark A.2 below), X_i(t) ≤ X̃_i(t), where (X̃_i(t))_{t≥0} is given by (2.7). Since (2.7) has a global solution due to the Lipschitz property of its coefficients, we have from (A.1) that τ_e = ∞ almost surely. Since the b_i's are continuous and vanish at 0, the competition terms are uniformly small near the origin. Moreover, since
P{0 ≤ X_i(t) < X̃_i(t) for all t ≥ 0, i = 1, …, n} = 1, we can use standard arguments (e.g., Mao 1997, Theorem 2.9.3) to obtain the Feller property of the solution to (2.1).

Remark A.1 There are different possible definitions of "Feller" in the literature. What we mean by Feller is that the semigroup (T_t)_{t≥0} of the process maps the set of bounded continuous functions C_b(R^n_+) into itself.

Definition A.1 A drift coefficient a(t, x) is quasi-monotonously increasing with respect to x if a_j(t, x) ≤ a_j(t, y) whenever x_j = y_j and x_l ≤ y_l for l ≠ j.

Remark A.2 One often wants to apply the well-known comparison theorem for one-dimensional SDEs (see Ikeda and Watanabe 1989) to a multidimensional setting. Below we explain why we can make use of comparison theorems for stochastic differential equations in our setting. Consider two systems, (A.5) and (A.6), with respective solutions R and S, where W = (W_1(t), …, W_r(t))_{t≥0} is an r-dimensional standard Brownian motion, and the coefficients a_i, b_i, σ_{jk} are continuous mappings on R_+ × R^d. Suppose (A.5) and (A.6) have explosion times θ_R, θ_S. Let (C0), (C1), and (C2) be the following conditions.

(C0) The solution to (A.5) is pathwise unique and the drift coefficient a(t, x) is quasi-monotonously (see Definition A.1) increasing with respect to x.
(C1) For every t ≥ 0, j = 1, …, d and x ∈ R^d, the drift coefficients are ordered: a_j(t, x) ≤ b_j(t, x).
(C2) There exists a strictly increasing function ρ : R_+ → R_+ with ρ(0) = 0 controlling the moduli of continuity of the diffusion coefficients.

Sometimes it is assumed incorrectly that conditions (C1) and (C2) suffice to conclude that P{R(t) ≤ S(t), t ∈ [0, θ_R ∧ θ_S)} = 1. Some illuminating counterexamples regarding this issue can be found in Assing and Manthey (1995, Section 3). However, if in addition to conditions (C1) and (C2) one also has condition (C0), then Geiß and Manthey (1994, Theorem 1.2) yields this comparison. Note that, in the setting of our paper, the drift coefficient of (2.7) is quasi-monotonously increasing and we can pick ρ(x) = x, x ∈ R_+. Therefore, conditions (C0), (C1), and (C2) hold, which allows us to use the comparison results. In special cases one can prove comparison theorems even when quasi-monotonicity fails; see Evans et al.

To proceed, let us recall some technical concepts and results needed to prove the main theorem. Let Φ = (Φ_0, Φ_1, …) be a discrete-time Markov chain on a general state space (E, E), where E is a countably generated σ-algebra. Denote by P the Markov transition kernel for Φ. If there is a non-trivial σ-finite positive measure ϕ on (E, E) such that for any A ∈ E satisfying ϕ(A) > 0 we have P^n(x, A) > 0 for some n, where P^n is the n-step transition kernel of Φ, then the Markov chain is called ϕ-irreducible. It can be shown (see Nummelin 1984) that if Φ is ϕ-irreducible, then there exist a positive integer d and disjoint subsets E_0, …, E_{d−1} such that for all i = 0, …, d − 1 and all x ∈ E_i we have P(x, E_{(i+1) mod d}) = 1. The smallest positive integer d satisfying the above is called the period of Φ. An aperiodic Markov chain is a chain with period d = 1. A set C ∈ E is called petite if there exist a non-negative sequence (a_n)_{n∈N} with Σ_{n=1}^∞ a_n = 1 and a nontrivial positive measure ν on (E, E) such that Σ_{n=1}^∞ a_nP^n(x, ·) ≥ ν(·) for all x ∈ C. The following theorem is extracted from Jarner and Roberts (2002, Theorem 3.6).

Theorem A.1 Suppose that Φ is irreducible and aperiodic, and fix 0 < γ < 1. Assume that there exist a petite set C ⊂ E, positive constants κ_1, κ_2 and a function V : E → [1, ∞) such that

PV(x) ≤ V(x) − κ_1V^γ(x) + κ_2 1_C(x), x ∈ E.

Then there exists a probability measure π on (E, E) such that P^n(x, ·) converges to π in total variation at a polynomial rate.

The next series of lemmas and propositions is used to show that we can construct a function V satisfying the assumptions of Theorem A.1.
Proof To prove this lemma, it is more convenient to work with the process X(t) that lives on R^n_+\{0}. Since (X(t))_{t≥0} is a nondegenerate diffusion with smooth coefficients in R^{n,◦}_+, by Rey-Bellet (2006, Corollary 7.2) the transition semigroup P_X(t, x, ·) of (X(t))_{t≥0} has a smooth, positive density on (0, ∞) × R^{n,◦}_+ × R^{n,◦}_+. Slightly modifying the proof of Evans et al. (2013, Proposition 3.1) (the part proving the irreducibility of the solution process), we have that p̄_x := P_X(T/2, x, N_0) > 0 for all x ∈ R^n_+\{0}. Since (X(t))_{t≥0} has the Feller property, there is a neighborhood N_x ∋ x on which this bound holds uniformly. For any compact set K ⊂ R^n_+\{0}, there are finitely many points x_2, …, x_k such that K ⊂ ∪_{i=2}^k N_{x_i}. As a result, inf_{x∈K} P_X(T/2, x, N_0) > 0. In view of (A.8), (A.9), and (A.10), an application of the Chapman–Kolmogorov equations yields that, for any x ∈ K and any measurable set A ⊂ R^{n,◦}_+, P_X(T, x, A) is bounded below by a constant multiple of m(A ∩ N_1), where m(·) is Lebesgue measure on R^{n,◦}_+. Since the measure ν(·) = m(· ∩ N_1) is nontrivial, we easily obtain that K is a petite set of the Markov chain {X(kT), k ∈ N}. Moreover, K can be chosen arbitrarily. Hence, for any x ∈ R^n_+\{0} there is p_x > 0 bounding P_X(T, x, N_0) from below. Suppose now, to the contrary, that the chain has period d > 1 with cyclic sets A_0, …, A_{d−1}. Since P_X(T, x, ·) has a density, m(A_i) > 0 for i = 0, …, d − 1. In view of (A.11), we must have m(N_0 ∩ A_i) = 0 for any i = 0, …, d − 1. This is a contradiction, which implies that {X(kT), k ∈ N} is aperiodic. In the same manner, we can prove that Ỹ(t) is irreducible and aperiodic, and that its state space Δ is petite.

Proof Since Δ is a petite set of {Ỹ(t) : t ≥ 0}, in view of Meyn and Tweedie (1993, Theorem 6.1) there are constants γ_1, γ_2 > 0 controlling the convergence of Ỹ(t) to ν*. In view of (2.8) and (A.33), letting M^{y,s}(T) be defined as in (A.26), we have from Itô's isometry a bound on its second moment, (A.35). With standard estimation techniques, it follows from (A.34) and (A.35) that for any ε > 0 there is a T* = T*(ε) such that the desired estimate holds for any (y, s) ∈ Δ × (0, ∞).

Proof We look at three cases of the initial data (y, s). By Kallenberg (2002, Theorem 20.17), our process (Y(t), S(t))_{t≥0} is either Harris recurrent or uniformly transient on Δ◦ × (0, ∞). Using Kallenberg (2002, Theorem 20.21), our process cannot be uniformly transient and also have an invariant probability measure. Therefore, our process is Harris recurrent. Kallenberg (2002, Theorem 20.17) further indicates that any Harris recurrent Feller process on Δ◦ × (0, ∞) with strictly positive transition densities has a locally finite invariant measure that is equivalent to Lebesgue measure and is unique up to normalization. Since we already know that (Y(t), S(t))_{t≥0} has a unique invariant probability measure, this probability measure has an almost everywhere strictly positive density with respect to the Lebesgue measure.

Appendix C: Degenerate diffusion in R^n

If the covariance matrix Σ is degenerate, the diffusion Ỹ(t) from (2.6) still has an invariant probability measure ν*, since it is a Feller–Markov process on a compact set. Moreover, ν*(Δ◦) = 1, because the property P{Ỹ(t) ∈ Δ◦, t > 0} = 1 is satisfied as long as Assumption 2.2 holds, that is, as long as the dispersal matrix (D_{ij}) is irreducible. It is readily seen that the following is true.

Theorem C.1 Assume that Ỹ(t) has a unique invariant probability measure ν*. Define r by (2.8). Suppose that r < 0. Then for any i = 1, …, n and any x ∈ R^n_+, lim sup_{t→∞} (1/t) log X_i^x(t) ≤ r almost surely. In particular, for any i = 1, …, n and any x ∈ R^n_+, lim_{t→∞} X_i^x(t) = 0 almost surely.

Remark C.1 The Markov process {Ỹ(t), t ≥ 0} has a unique invariant probability measure if it is irreducible.
Moreover, since P{Ỹ^y(t) > 0 for all t > 0} = 1 for any y ∈ Δ, we need only check its irreducibility in Δ◦. To prove that the diffusion {Ỹ(t), t ≥ 0} is irreducible in Δ◦, we pursue the following approach:

• First, we show that the process {Ỹ(t), t ≥ 0} verifies Hörmander's condition. As a result, the process {Ỹ(t), t ≥ 0} has a smooth density function for any t > 0; see e.g. Rey-Bellet (2006).
• Next, we show that there is an open set N ⊂ Δ◦ such that for any open set N_0 ⊂ N and y ∈ Δ◦, there is a t_0 > 0 such that P{Ỹ^y(t_0) ∈ N_0} > 0. This claim is usually proved by analyzing the control systems corresponding to the diffusion and using the support theorem. We refer to Kliemann (1987) and Rey-Bellet (2006) for more details.

This then shows that the process {Ỹ(t), t ≥ 0} is irreducible in Δ◦. Now we consider the case r > 0. We still assume that {Ỹ(t) : t ≥ 0} has a unique invariant probability measure. In order to obtain Theorem 2.1 for our degenerate process, we have to show that there is a sufficiently large T > 0 such that the Markov chain (Y(kT), S(kT))_{k∈N} is irreducible and aperiodic and every compact subset of Δ◦ × (0, ∞) is petite for this Markov chain. Note that if every compact subset of Δ◦ × (0, ∞) is petite with respect to (Y(kT), S(kT))_{k∈N}, then any compact subset of Δ × (0, ∞) is petite with respect to (Y(kT), S(kT))_{k∈N} by the arguments in the proof of Lemma A.1. Sufficient conditions for the above properties can be obtained by verifying the well-known Hörmander condition as well as investigating the control systems associated with the diffusion (2.4). Once we have that the Markov chain (Y(kT), S(kT))_{k∈N} is irreducible and aperiodic, and that every compact subset of Δ◦ × (0, ∞) is petite for sufficiently large T, we can follow the steps from Appendix A to obtain the following result.

Theorem C.2 Assume that Ỹ(t) has a unique invariant probability measure ν*. Define r by (2.8). Suppose that Assumption 2.2 holds and that r > 0. Assume further that there is a sufficiently large T > 0 such that the Markov chain (Y(kT), S(kT))_{k∈N} is irreducible and aperiodic, and that every compact set in Δ◦ × (0, ∞) is petite for this Markov chain. Then the process X(t) = (X_1(t), …, X_n(t))_{t≥0} has a unique invariant probability measure π on R^{n,◦}_+ that is absolutely continuous with respect to the Lebesgue measure and, for any q* > 0, lim_{t→∞} t^{q*}‖P_X(t, x, ·) − π(·)‖_TV = 0, where ‖·‖_TV is the total variation norm and P_X(t, x, ·) is the transition probability of (X(t))_{t≥0}. Moreover, for any initial value x ∈ R^n_+\{0} and any π-integrable function f, the ergodic convergence (2.15) holds.

C.1: Case study: n = 2

In what follows, we show that if r > 0, there is a sufficiently large T > 0 such that the Markov chain (Y(kT), S(kT))_{k∈N} is irreducible and aperiodic, and that every compact set in Δ◦ × (0, ∞) is petite for the Markov chain. For simplicity of presentation, we restrict ourselves to the n = 2 case, and assume that b_i(x) = b_ix, x ≥ 0, i = 1, 2 for some b_1, b_2 > 0. As a result, (2.1) becomes the system (C.7), where σ_1, σ_2 are non-zero constants and (B(t))_{t≥0} is a one-dimensional Brownian motion. To proceed, we consider the control system (C.8) associated with (C.7). Let (z_φ(t, z, y), y_φ(t, z, y)) be the solution to equation (C.8) with control φ and initial value (z, y). Denote by O^+_1(z, y) the reachable set from (z, y), that is, the set of (z′, y′) ∈ R^{2,◦}_+ such that there exist a t ≥ 0 and a control φ(·) satisfying z_φ(t, z, y) = z′ and y_φ(t, z, y) = y′. We first recall some concepts introduced in Kliemann (1987).
Let U be a subset of R^{2,◦}_+ satisfying u_2 ∈ O^+_1(u_1) for any u_1, u_2 ∈ U. Then there is a unique maximal set V ⊃ U such that this property still holds for V. Such a V is called a control set. A control set C is said to be invariant if O^+_1(w) ⊂ C for all w ∈ C. Finding invariant control sets for (C.8) is facilitated by using a change of variables. Put w_φ(t) = z_φ(t)y_φ^{ϱ+1}(t) with ϱ = −σ_1/σ_2 (we write ϱ here to avoid a clash with the stochastic growth rate r). This yields a controlled equation driven by the function

h(w, y) = w(a_1 − σ_1²/2 + ϱ(a_2 − σ_2²/2) + ϱβ − α) − b_1wy^ϱ − b_2ϱy + βy^{1−ϱ}w^{−1} + αϱwy^{ϱ−1}.

Proof First, we need to show that c* is well-defined (although it can be +∞). We have lim_{w→0} h(w, y) = ∞, which implies that {w : sup_{y>0} h(w′, y) ≥ 0 for all w′ ≤ w} is a nonempty set. Hence c* is well-defined. The claim that O^+_2(w, y) ⊃ C̄ for any (w, y) ∈ R^{2,◦}_+ can be proved by standard arguments. Let us explain the main ideas here. On the phase space (w, y) ∈ R^{2,◦}_+, since the control φ(t) only appears in the equation for y_φ, we can easily control vertically; that is, for any initial points y_0 and w_0, there is a control so that y_φ can reach any given point y_1 while w_φ stays in a given neighborhood of w_0. If h(w_0, y_0) < 0, we can choose a feedback control such that (w_φ(t), y_φ(t)) reaches a point to the 'left' (w_1, y_0) with w_1 < w_0, as long as h(w, y_0) < 0 for w ∈ [w_1, w_0]. Likewise, for h(w_0, y_0) > 0, we can choose a feedback control such that (w_φ(t), y_φ(t)) reaches a point to the 'right' (w_1, y_0) with w_1 > w_0, as long as h(w, y_0) > 0 for w ∈ [w_0, w_1]. We also have that inf_{y>0} h(w, y) = −∞ for any w > 0. Using these facts, we can follow the steps from Du et al. (2016, Section 3) to obtain the desired results. Since (Z^{z,y}(t), Y^{z,y}(t)) is a Markov–Feller process, there exists an open set V_{z,y} ∋ (z, y) such that P(n_{z,y}T, z′, y′, N*) ≥ ρ_{z,y} for all (z′, y′) ∈ V_{z,y}. Since K is a compact set, there is a finite number of sets V_{z_i,y_i}, i = 1, …, k_0, satisfying K ⊂ ∪_{i=1}^{k_0} V_{z_i,y_i}. Let ρ_K = min{ρ_{z_i,y_i} : i = 1, …, k_0}. For each (z, y) ∈ K, there exists n_{z_i,y_i} such that P(n_{z_i,y_i}T, z, y, N*) ≥ ρ_K. We have shown at the beginning of Sect. 2.2 that Ỹ(t) has a unique invariant probability measure ν*. Having Proposition C.2, we note that the assumptions, and therefore the conclusions, of Theorems C.1 and C.2 hold for model (C.4). This argument proves Theorems 2.5 and 2.6.

Consider now the perturbed proportion process (Ŷ(t))_{t≥0} on the simplex Δ. Suppose that the perturbed covariance matrix Σ̂ is positive definite. In this case, (Ŷ(t))_{t≥0} has a unique invariant probability measure ν̂*. Define r̂ by the analogue of (2.8). By standard arguments, there is a θ_2 ∈ (0, θ_1) such that if max{|â − a|, |D̂ − D|, |Σ̂ − Σ|} < δ_2, then Ỹ^y(T) and Ŷ^y(T) are close with high probability, uniformly for y ∈ Δ (estimate (D.5)). Let y* be a Δ-valued, F_0-measurable random variable whose distribution is ν̂*. Clearly,

∫_Δ (⟨â, y⟩ − ½ y^⊤Σ̂y) ν̂*(dy) = E[⟨â, Ŷ^{y*}(T)⟩ − ½ (Ŷ^{y*}(T))^⊤Σ̂Ŷ^{y*}(T)].

It follows from (D.5) that this expectation differs from the corresponding unperturbed one by an arbitrarily small amount. Since Evans et al. (2013, Proposition 3) focuses only on the continuity in a specific parameter rather than in all parameters, we provided an alternative proof for the sake of completeness.

Remark D.2 If r < 0, X(t) converges to 0 with probability 1. By virtue of Proposition D.1, if D̂ and Σ̂ are constant matrices and max{|â − a|, |D̂ − D|, |Σ̂ − Σ|} is sufficiently small, then X̂(t) converges to 0 at an exponential rate almost surely. We conjecture that this result holds for any θ-perturbation of X(t) defined by (2.20).
However, when D̂ := D̂(x) and Σ̂ := Σ̂(x) depend on the state, comparison arguments may not be applicable. Moreover, it is also difficult to analyze the asymptotic behavior of the equation without competition terms, namely

dX̂(t) = [diag(X̂(t))â + D̂(X̂(t))^⊤X̂(t)] dt + diag(X̂(t))Γ̂(X̂(t)) dB(t). (D.14)
Relationship between the prevalence and severity of non-alcoholic fatty liver disease and coronary artery disease: Findings from a cross-sectional study of a referral center in northeast Iran

Abstract

Background and Aim Non-alcoholic fatty liver disease (NAFLD) is becoming increasingly prevalent worldwide, and cardiovascular diseases are the most common cause of death in NAFLD patients. The present study aimed to evaluate the possible relationship between the presence and severity of NAFLD and coronary artery disease (CAD). Methods A cross-sectional study was conducted on 296 patients (122 men and 174 women, with mean age 54.10 ± 9.33 years) referred to the catheterization laboratory of Imam Reza Hospital, affiliated with Mashhad University of Medical Sciences, Mashhad, Iran, for elective coronary angiography to investigate the presence and severity of CAD. Additionally, all patients underwent abdominal ultrasonography (USG) to detect NAFLD and its severity. Results Among the 296 patients, 187 (63.2%) had CAD and 160 (50.1%) had NAFLD. NAFLD patients had a significantly higher prevalence of obesity (odds ratio [OR] = 1.047, 95% confidence interval [CI] = 1.002–1.094), hypertension (OR = 1.909, 95% CI = 1.027–3.55), hyperlipidemia (OR = 3.474, 95% CI = 1.862–6.482), and CAD (OR = 2.009, 95% CI = 1.100–3.669). The percentage of patients with normal vessels was highest in the non-NAFLD group, followed by the groups with mild and severe NAFLD (P < 0.001). However, the incidences of single- and multi-vessel disease in the non-NAFLD, mild, and severe NAFLD groups were 36.1, 43.1, and 63.7%, respectively. Interestingly, the percentage of patients with two-vessel stenosis was significantly higher in severe NAFLD patients than in mild NAFLD and non-NAFLD patients (P < 0.001). Conclusion The prevalence and severity of NAFLD were independently associated with CAD. Mild NAFLD was primarily observed among patients with normal and non-obstructive coronary arteries, while severe NAFLD was more frequent in extensive CAD patients with multi-vessel disease.

Introduction

Non-alcoholic fatty liver disease (NAFLD) is considered the most common and emerging cause of chronic liver disease worldwide. 1 Currently, it is estimated that the prevalence of NAFLD is approximately 25% globally and 27.4% in Asia. 2 The prevalence of NAFLD is still increasing as a result of the ongoing global epidemic of obesity, insulin resistance, and type 2 diabetes mellitus (DM). It has been reported that NAFLD affects over 80 million patients in the United States and may affect over 100 million by 2030. This will have a crucial impact on public healthcare costs and the need for liver transplantation. 3,4 NAFLD patients have a higher risk of cardiovascular diseases (CVD), diabetes, and carcinoma, and a higher mortality rate, than non-NAFLD patients. 5 Additionally, CVD is the most common cause of death among NAFLD patients. It has been emphasized that NAFLD patients are twice as likely to die of CVD as of liver disease. 4 A meta-analysis of 34 000 individuals highlighted that the risk of developing both fatal and nonfatal cardiovascular events is about 65% higher in NAFLD patients. 6 Therefore, recognizing and managing CVD in patients with NAFLD is of great importance. 7 The pathogenesis responsible for developing CVD among NAFLD patients may be related to vascular endothelial dysfunction, pro-atherogenic dyslipidemia, myocardial remodeling, and heart failure. 8
Therefore, in the present study, we aimed to investigate the association between the prevalence and severity of NAFLD and the prevalence and extent of coronary artery disease (CAD).

Methods

Ethical statement. This study was approved by the ethics committee of Mashhad University of Medical Sciences (IR.MUMS.900181), and all participants gave written informed consent.

Study population. This study was conducted on all 306 patients referred to the catheterization laboratory of Imam Reza Hospital, affiliated with Mashhad University of Medical Sciences, Mashhad, Khorasan Razavi province, Iran, for elective coronary angiography from February 2012 to January 2013. After applying the exclusion criteria, 296 patients were found eligible and enrolled in the study (Fig. 1).

Inclusion and exclusion criteria. Patients referred to the catheterization laboratory of Imam Reza Hospital for elective coronary angiography were included in the study. The indication for coronary angiography was based on the discretion of the referring cardiologist. All cases had suspected symptoms of CVD, such as chest pain with a high pre-test probability of CAD, recent acute coronary syndrome (ACS), positive exercise stress testing, or positive myocardial perfusion imaging. 9 The exclusion criteria were chronic kidney disease, known history of viral hepatitis, chronic liver disease, positive serum hepatitis B antigen or anti-hepatitis C viral antibody, history of sudden weight loss or weight loss surgery in the past year, and the use of drugs that may induce steatosis, such as corticosteroids, androgens, methotrexate, amiodarone, tamoxifen, and sodium valproate, within the previous 3 months or for more than 6 months in the last 2 years.

Evaluation of outcome. Demographic information, including age, sex, body mass index (BMI), history of hypertension (HTN), DM, dyslipidemia, use of medications, and personal history of other diseases, was documented after enrolment using a questionnaire. In addition, fasting blood glucose and blood pressure were measured after admission to the hospital. All patients underwent coronary angiography by an interventional cardiologist, followed by hepatic ultrasonography (USG) by a radiologist. HTN was diagnosed as systolic blood pressure ≥140 mmHg and diastolic blood pressure ≥90 mmHg, or treatment with any anti-hypertensive drugs. DM was defined as fasting blood glucose ≥126 mg/dL, or random blood glucose greater than 200 mg/dL, or the current use of antidiabetic drugs. Dyslipidemia was defined as a plasma triglyceride (TG) level ≥150 mg/dL, or a plasma low-density lipoprotein cholesterol (LDL-c) level ≥130 mg/dL, or a plasma high-density lipoprotein cholesterol (HDL-c) level ≤50 mg/dL for women and ≤40 mg/dL for men, or the use of lipid-lowering medications. Furthermore, BMI was categorized as normal for BMI <25 kg/m², overweight for 25 ≤ BMI <30 kg/m², and obese for BMI ≥30 kg/m².
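For illustration only, the operational definitions above translate directly into code. The sketch below is not the authors' pipeline: the field names (sbp, fbg, tg, …) and the example record are hypothetical, and only the thresholds are taken from the text.

```python
import pandas as pd

def classify(row):
    """Apply the study's stated definitions to one patient record."""
    htn = (row.sbp >= 140 and row.dbp >= 90) or row.on_antihypertensives
    dm = (row.fbg >= 126) or (row.rbg > 200) or row.on_antidiabetics
    hdl_cut = 50 if row.sex == "F" else 40
    dyslip = (row.tg >= 150) or (row.ldl >= 130) or (row.hdl <= hdl_cut) \
             or row.on_lipid_lowering
    bmi_cat = "normal" if row.bmi < 25 else ("overweight" if row.bmi < 30 else "obese")
    return pd.Series({"HTN": htn, "DM": dm, "dyslipidemia": dyslip, "BMI": bmi_cat})

# Hypothetical record, just to show the interface.
patient = pd.Series(dict(sbp=148, dbp=92, fbg=110, rbg=150, tg=180, ldl=125, hdl=38,
                         sex="M", bmi=31.2, on_antihypertensives=False,
                         on_antidiabetics=False, on_lipid_lowering=False))
print(classify(patient))
```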
CAD was then defined as the presence of stenosis ≥50% in diameter compared with an adjacent normal segment of a main branch of the coronary artery. The extent of CAD was assessed by the number of vessels involved (vessel score) as follows: single-vessel disease (SVD), two-vessel disease (2VD), and three-vessel disease (3VD). Patients with stenosis severity of less than 50% on angiograms were defined as the non-obstructive coronary disease (NOB) group. 10 In this study, patients were divided into two main groups: those without significant CAD, comprising patients with normal coronary arteries (NCA) and NOB patients, and those with CAD, comprising SVD, 2VD, and 3VD patients. Hepatic ultrasonography. Abdominal USG (Siemens G40 with a 5-MHz transducer) was performed a day before or after coronary angiography, after an eight-hour fasting period, by a radiologist who was blinded to the medical history, laboratory findings, and coronary angiography results of the patients. USG was performed in the supine position. Various ultrasonographic features of focal liver lesions were observed by subcostal and intercostal approaches. Three ultrasonographic criteria for diagnosing NAFLD were studied: hepato-renal echo contrast and hyperechoic appearance of the liver, posterior beam attenuation, and blurring of the vessels. In this study, patients were divided into two main groups: non-NAFLD and NAFLD, the latter including its mild and severe forms. Mild NAFLD was defined as a minimal diffuse increase in hepatic echogenicity with normal visualization of the diaphragm and intrahepatic vessel contours. Severe NAFLD was characterized by a marked increase in hepatic parenchymal echotexture with poor or no visualization of the intrahepatic vessel borders, diaphragm, and posterior right lobe of the liver. 11,12 Statistical analysis. Data were analyzed using SPSS version 22 statistical software (SPSS Inc., Chicago, IL, USA) and GraphPad Prism 8.01 software (GraphPad Software Inc., San Diego, CA, USA) and were expressed as means ± SD for continuous variables or as numbers with percentages for categorical variables. The normality of the data was checked with the Kolmogorov-Smirnov test. Comparisons between categorical variables were made using the Chi-square test. When appropriate, comparisons between continuous variables were performed using one-way ANOVA for parametric data or the Mann-Whitney U test for nonparametric data. Logistic regression was applied to identify predictors of NAFLD and CAD. P-values ≤0.05 were considered statistically significant. Results Demographic characteristics. Of the 296 patients enrolled in the study, 174 (58.8%) were female and 122 (41.2%) were male, with a mean age of 54.1 ± 9.33 years (Table 1). Non-alcoholic fatty liver disease. Mean age did not differ significantly between the NAFLD and non-NAFLD groups (P = 0.381, Table 2). The mean BMI was 32.16 ± 6.14 kg/m² in the NAFLD group and 27.61 ± 6.71 kg/m² in the non-NAFLD group (P < 0.001). Among the non-NAFLD patients, 75 (55.1%) had normal weight, 12 (8.8%) were overweight, and 49 (36%) were obese. In addition, NAFLD patients had a significantly higher BMI than non-NAFLD patients (P < 0.001, Table 2). Furthermore, our results showed that NAFLD was observed in 54.9% of male and 53.4% of female patients (P = 0.814, Table 2). Interestingly, mild NAFLD was more frequent in men, while severe NAFLD was more frequent in women.
However, there was no significant relationship between NAFLD severity and gender (P = 0.101, Table 2). The mean age of CAD patients (51.04 ± 9.26 years) was significantly lower than that of non-CAD patients (55.84 ± 8.94 years; P < 0.001, Table 2). In addition, the percentage of patients with normal BMI was significantly higher in the non-CAD group (57.9%) than in the CAD group (20.1%; P < 0.001, Table 2). Moreover, CAD patients had higher BMI levels than non-CAD patients (P < 0.001, Table 2). Our results also showed that CAD was present in 57.4% of male and 40.2% of female patients (P = 0.004, Table 2); conversely, 59.8% of women were free of CAD, compared with 42.6% of men. Frequency and severity of NAFLD according to the history of different disorders. The frequency and severity of NAFLD according to the history of HTN, DM, and dyslipidemia are shown in Table 3. The results show that the frequency and severity of NAFLD are strongly related to the incidence of HTN, DM, and dyslipidemia (P < 0.001, P = 0.025, and P < 0.001, respectively, Table 3). In addition, our results reveal that the incidence of HTN, dyslipidemia, and NAFLD is significantly higher in the CAD group than in the non-CAD group (P < 0.001 for all, Table 3). Surprisingly, mild NAFLD was the most frequent NAFLD category among the CAD groups (P < 0.001, Table 3). Our results also reveal that the percentage of non-NAFLD patients is higher in the non-CAD group (65%) than in the CAD groups (P < 0.001, Fig. 2b). Moreover, the proportion of patients with mild NAFLD was 27.5% in the NCA, 49.1% in the NOB, 40.5% in the SVD, 29.2% in the 2VD, and 37.1% in the 3VD group (Fig. 2b). Additionally, severe NAFLD was most frequently observed in the 2VD group (40.3%) among all groups (P < 0.001, Fig. 2b). Discussion NAFLD is strongly associated with metabolic syndrome, and its prevalence is increasing worldwide. 13 The present cross-sectional study included 296 patients (41.2% men and 58.8% women, with a mean age of 54.1 ± 9.33 years) who underwent coronary angiography followed by USG. Our results show that the prevalence and severity of NAFLD are independently associated with CAD. Interestingly, mild NAFLD was observed mainly among NCA and NOB patients, while severe NAFLD was more frequent in patients with extensive, multi-vessel CAD. We found that the percentage of patients with NAFLD was 50.1% among patients who underwent elective coronary angiography. In accordance with our results, Perera et al. reported that NAFLD was observed in 46.67% of patients with nonfatal ACS in Sri Lanka. 14 Similarly, NAFLD was seen in 55.2% of Brazilian patients, 15 53.06% of Turkish patients, 16 and 53.78% of Finnish patients 17 who underwent diagnostic coronary angiography for ACS. We also found that the mean BMI was higher in NAFLD patients (32.16 ± 6.14 kg/m²) than in non-NAFLD patients (27.61 ± 6.71 kg/m²). Interestingly, 76% of obese patients had NAFLD. In line with our results, Dunn et al. noticed that the mean BMI was 30.8 ± 7.5 kg/m² in non-NAFLD patients and 36.7 ± 8.5 kg/m² in NAFLD patients with type 2 diabetes. 18
Additionally, the mean BMI was higher in NAFLD patients (32 ± 2.3 kg/m²) than in non-NAFLD patients (27 ± 1.4 kg/m²) with metabolic syndrome. 19 Olubamwo et al. also found that the mean BMI was 24.3 ± 1.9 kg/m² in non-NAFLD, 27.3 ± 1.9 kg/m² in mild NAFLD, and 30.9 ± 3.3 kg/m² in severe NAFLD patients with ACS. 17 The prevalence of NAFLD in male patients (55%) was slightly higher than in female patients (53.5%) in our study; however, mild NAFLD was more frequent in males, whereas severe NAFLD was more frequent in females. In line with our results, Agarwal et al. reported that the prevalence of NAFLD was 58.1% in men and 56% in women with type 2 diabetes. 20 Perera et al. also found that the prevalence of NAFLD was higher in male (53.6%) than in female (46.4%) patients with ACS. 14 In a Korean population with a history of CVD, the percentage of male patients with NAFLD was found to increase with NAFLD stage. 21 Additionally, the prevalence of CAD was higher in male patients than in female patients in our study. In accordance with this, several previous studies have emphasized the higher rate of CAD in male patients. 22 The present study shows that the prevalence and severity of NAFLD are strongly related to the incidence of HTN, DM, and dyslipidemia, with the latter showing the most robust relationship. These results are in line with those of multiple previous studies regarding the risk factors of NAFLD in CAD patients. 14,21 Similarly, a study in Nagasaki, Japan, suggested that NAFLD was significantly associated with hypercholesterolemia and hypertriglyceridemia in elderly men and with HTN, hypercholesterolemia, low HDL cholesterol, hypertriglyceridemia, and DM in elderly women. 27 Agarwal et al. also reported that the prevalence of HTN, DM, and dyslipidemia was 71.4%, 69%, and 55.8%, respectively, in NAFLD patients with type-2 diabetes. 20 Our results also reveal that the prevalence of HTN and dyslipidemia is higher in CAD patients than in non-CAD patients. Going beyond our results, Açikel et al. noticed that the prevalence of DM, as well as that of dyslipidemia and HTN, was significantly higher in CAD patients than in non-CAD patients. 24 Similarly, CAD patients showed a higher incidence of hypertension, dyslipidemia, DM, and metabolic syndrome than non-CAD patients with type-2 diabetes. 28,29 Interestingly, we found that the presence and severity of NAFLD were strongly associated with the presence and extent of CAD (OR = 2.009, 95% CI = 1.1–3.669). Non-NAFLD patients were more likely to have normal angiography findings, while 2VD was observed most often in patients with severe NAFLD. Additionally, NOB, which is related to the early stages of atherosclerosis, was mostly seen in NAFLD patients. In agreement with our results, Wong et al. found that CAD is more prevalent in NAFLD patients (84.6%) than in non-NAFLD patients (64.1%). They reported that NAFLD was associated with CAD (OR = 2.31, 95% CI = 1.46–3.64). 30 Recently, Montemezzo et al. examined the results of 136 patients with ACS in Brazil. They found that CAD was present in 93.42% of NAFLD and 56.45% of non-NAFLD patients; the severity of NAFLD was also correlated with the presence of CAD. 15 Choi et al. studied 134 patients who underwent elective coronary angiography in Kangwon, South Korea. They found that 80.4% of patients with CAD had NAFLD and that coronary artery stenosis was strongly associated with NAFLD in a grade-dependent manner.
They also pointed out that NAFLD was a significant and independent predictor of CAD (OR = 1.685, 95% CI = 1.051–2.702). 31 Another similar study also supported the association between NAFLD and significant CAD in type 2 diabetic patients (OR = 2.128, 95% CI = 1.035–4.337). 32 The results of this study may provide useful background for future studies seeking to refine the guidelines for the management of CAD patients. In addition, NAFLD patients would benefit from advice on lifestyle and risk-factor modification. In summary, NAFLD patients had a significantly higher prevalence of obesity, HTN, and hyperlipidemia. In addition, the prevalence and severity of NAFLD were independently associated with the prevalence and extent of CAD. Mild NAFLD was mostly observed among patients with normal vessels or NOB, while severe NAFLD was more frequent in extensive CAD patients with multi-vessel disease.
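As an aside for readers reproducing such results, the unadjusted odds ratios quoted throughout can be derived from a 2×2 cross-tabulation. A minimal sketch, using invented counts rather than the study's actual data, is shown below with a Woolf (log-based) 95% confidence interval; note that the adjusted estimates in the paper come from logistic regression instead.

```python
# Minimal sketch: unadjusted odds ratio with a Woolf 95% CI from a 2x2 table.
# The counts below are invented for illustration; they are not the study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical cross-tabulation of CAD by NAFLD status
or_, lower, upper = odds_ratio_ci(a=120, b=40, c=67, d=69)
print(f"OR = {or_:.3f}, 95% CI = {lower:.3f}-{upper:.3f}")
```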
Transcriptomic Studies of Malaria: a Paradigm for Investigation of Systemic Host-Pathogen Interactions SUMMARY Transcriptomics, the analysis of genome-wide RNA expression, is a common approach to investigate host and pathogen processes in infectious diseases. Technical and bioinformatic advances have permitted increasingly thorough analyses of the association of RNA expression with fundamental biology, immunity, pathogenesis, diagnosis, and prognosis. Transcriptomic approaches can now be used to realize a previously unattainable goal, the simultaneous study of RNA expression in host and pathogen, in order to better understand their interactions. This exciting prospect is not without challenges, especially as focus moves from interactions in vitro under tightly controlled conditions to tissue- and systems-level interactions in animal models and natural and experimental infections in humans. Here we review the contribution of transcriptomic studies to the understanding of malaria, a parasitic disease which has exerted a major influence on human evolution and continues to cause a huge global burden of disease. We consider malaria a paradigm for the transcriptomic assessment of systemic host-pathogen interactions in humans, because much of the direct host-pathogen interaction occurs within the blood, a readily sampled compartment of the body. We illustrate lessons learned from transcriptomic studies of malaria and how these lessons may guide studies of host-pathogen interactions in other infectious diseases. We propose that the potential of transcriptomic studies to improve the understanding of malaria as a disease remains partly untapped because of limitations in study design rather than as a consequence of technological constraints. Further advances will require the integration of transcriptomic data with analytical approaches from other scientific disciplines, including epidemiology and mathematical modeling. INTRODUCTION Transcriptomics is the quantitative or qualitative study of RNAs on a genome-wide scale (1). It is just one of several powerful approaches to undertake comprehensive or global analyses of large sets of related features, such as genes (genomics), proteins (proteomics), DNA modifications (epigenomics), or microbial communities (microbiomics). These approaches are often used for discovery rather than hypothesis-based investigation, since they can provide an unbiased description of similarities or differences between conditions of interest. The development of technologies for these high-dimensional analyses has been accompanied by novel computational and analytic approaches to deal with the vast amounts of data and has driven the emergence of the scientific discipline of bioinformatics (2). Initially, transcriptomic studies sought to quantify the expression levels of protein-encoding genes, often with the implicit assumption that this would broadly indicate changes in protein expression levels (3). However, as technologies and the understanding of noncoding RNAs have evolved, transcriptomic approaches have allowed a much deeper understanding of the complexities of the regulation of gene expression, alternate splicing events, and functions of noncoding RNAs as well as proving invaluable for the accurate construction and annotation of complex genomes (4)(5)(6)(7).
Combinations of genomic, epigenomic, transcriptomic, and proteomic approaches are now increasingly being applied to provide a deeper understanding of the multiple layers of control that result in variations between cells, tissues, individuals, and populations in either health or disease (1,(7)(8)(9)(10). Malaria Malaria is a parasitic disease caused by apicomplexan parasites of the genus Plasmodium, which can infect a diverse range of vertebrate hosts. A comprehensive description of malaria epidemiology, biology, immunology, pathogenesis, and treatment is beyond the scope of this text and has been covered in recent review articles (11)(12)(13)(14)(15)(16). Here we give a brief overview, and additional background accompanies relevant sections below in this article. Five main species of Plasmodium cause most disease in humans: P. falciparum, P. vivax, P. knowlesi, P. malariae, and P. ovale. P. falciparum is the major cause of severe malaria, which can result in death, and is the focus of most of the human studies discussed in this article. The term malaria refers to the disease caused by infection with these parasites, and individuals with asexual-stage parasites in their blood without symptoms are described as having asymptomatic parasitemia (11). Plasmodium species are transmitted to humans by the bite of female Anopheles mosquitoes, and motile forms of the parasite (sporozoites) quickly make their way from the skin to blood vessels to hepatocytes, where they undergo massive intracellular asexual replication during the incubation phase of infection (Fig. 1). FIG 1 (1) Infection is initiated by a mosquito bite. Motile sporozoites rapidly find their way past the structural and immune cells in the skin into blood vessels and onwards to the liver. The short transit time limits opportunities for cellular interactions. (2) Sporozoites reach the liver, exit the vasculature through Kupffer cells, and then undergo massive replication in hepatocytes. Immune cells such as CD8 T cells patrol the liver and may detect and kill infected hepatocytes. (3) Parasites burst out of hepatocytes and enter the bloodstream, rapidly infecting erythrocytes. They undergo repeated cycles of asexual replication, interacting with blood leukocytes. Parasite products are carried throughout the systemic circulation, triggering inflammatory responses. (4) Parasitized red cells may be cleared by the spleen, which is a major location for the host immune response to Plasmodium. (5) Parasites may exit the asexual erythrocytic cycle to produce gametocytes, which may be taken up by another mosquito bite to allow onward transmission. Gametocytogenesis may be influenced by the host response, and most gametocyte development occurs in the bone marrow. Mature gametocytes reenter and circulate in the blood, potentially interacting with host leukocytes and the vascular endothelium. (6) Parasites can cause severe disease if they accumulate (sequester) and obstruct the microvasculature of vital organs, such as the brain. There may be both direct and indirect interactions with the vascular endothelium, leukocytes, and parenchymal cells. After one or more weeks, the brood of parasites escapes from the hepatocyte and reenters the bloodstream, but this time, the parasites rapidly invade red blood cells (RBCs), where they undergo repeated asexual reproductive cycles, with new parasites being produced every 24 to 72 h, depending on parasite species (11). For P. falciparum, around 16 to 32 daughter parasites are released approximately every 48 h.
Parasite numbers increase exponentially until they are sufficient to trigger a host response, which starts to constrain this growth and also results in symptoms such as fever, muscle aches, headache, and violent shivering (rigors) (11)(12)(13). Often, symptoms occur in paroxysms, coinciding with the rupture of RBCs and the release of new parasites and their pathogen-associated molecular patterns into the circulation, triggering responses through host pattern recognition receptors (16). If parasite numbers continue to increase, because of insufficient constraint by the host response or a failure to receive antimalaria treatment, this predisposes an individual to an increasing risk of severe disease manifestations, which may include coma, lung injury, renal failure, acidosis, and severe anemia (11,13,16). These manifestations are thought to be consequences of not only high parasite loads but also high levels of inflammation; dysfunction of the vascular endothelium, which impairs its ability to regulate blood flow and prevent coagulation; and obstruction of small blood vessels by adherent parasitized erythrocytes (sequestration) (12,13,15,16). Indeed, extensive sequestration is a unique feature of P. falciparum and is largely explained by specific adhesive interactions between vascular endothelial surface molecules and parasite molecules expressed on the surface of infected RBCs (iRBCs) (17)(18)(19). However, the parasite will not benefit if inexorable growth kills its host too quickly, and at some stage in the intraerythrocytic asexual replication cycle, a proportion of parasites begins to differentiate into sexual stages (gametocytes), which can be taken up in the blood meal of another mosquito (20). The sexual phase of the life cycle then occurs in the mosquito gut, with a brief diploid stage before new haploid parasites eventually make their way to the salivary glands as sporozoites, ready to infect a new human host during a future blood meal (15). Individuals in some areas where malaria is highly endemic will be exposed to multiple infectious mosquito bites every day, and cumulative infections result in the acquisition of clinical immunity. First, individuals cease to be vulnerable to severe disease, and they then become less likely to develop clinical symptoms (12). This clinical immunity is thought to be largely antibody mediated and may reflect the ability to reduce, but not completely prevent, parasite replication through the acquisition of an increasing breadth of antibodies against polymorphic parasite antigens (14,15,21). Naturally acquired sterile immunity is thought to be very rare (perhaps never occurring), and the need to considerably improve on this natural immune response to highly antigenically diverse parasites has also produced challenges for vaccine development (14,15). Malaria as a Paradigm for Transcriptomic Studies of Systemic Host-Pathogen Interactions Clinical manifestations of malaria are due to the asexual blood stage of the parasite, and the host-parasite interactions that cause disease occur within the vasculature and its contiguous organs, such as the spleen (13,16,22). This means that many aspects of the host-pathogen interaction can be assessed through analyses of circulating blood. Whole blood (containing leukocytes and RBCs) can be used as a source of both host and parasite cells, and transcriptomic analyses can be applied to examine either cell type (23,24) or both cell types in the same sample (25). 
The high numbers of parasites that can be found in human blood, particularly in children with some forms of severe malaria (26,27), can yield abundant parasite RNA, making this a feasible approach. Furthermore, the pathogen load can be estimated from an examination of blood for parasites or their products (18,(26)(27)(28)(29). In our experience, parasitemia levels as low as 2% can yield sufficient parasite RNA reads for meaningful analysis with standard-depth RNA sequencing (RNA-seq) (30 million to 40 million reads), but lower parasitemia levels may require greater sequencing depth. The pathogen load is likely to be an important factor in determining the pathogenesis of many infectious diseases but is much harder to quantify as a stimulus for the systemic host response when pathogens are differentially distributed throughout multiple tissues (30). On a reductionist scale, transcriptomic studies of bacterial infections in cell culture models have identified reciprocal interactions between host and pathogen that contribute to pathogen growth (31,32), but understanding how this relates to disease requires evaluation at a much larger scale. Evaluation of host-parasite interactions in blood in malaria through transcriptomic analyses provides a paradigm for understanding the role of systemic host-pathogen interactions in general. Aims and Scope of This Review Here we aim to describe the contribution that transcriptomic studies have already made to understanding malaria and highlight how new approaches might permit greater insights. We provide an introduction to existing and new technologies and analytical approaches and outline how these technologies are transforming the depth and richness of transcriptomic data. We then consider malaria as a paradigm for transcriptional analyses of systemic host-pathogen interactions, lessons learned from malaria studies, and strategies for application to other infectious diseases. We illustrate that the traditional approach of considering variation in the host response to an invariant pathogen is too simplistic and that accumulating evidence suggests that dynamic variation in pathogen behavior also needs to be considered. TRANSCRIPTOMIC APPROACHES TO HOST-PATHOGEN INTERACTIONS There are numerous excellent reviews of transcriptomic technologies, including comparisons of their relative merits and detailed methodological considerations (4,(33)(34)(35)(36)(37), so we highlight only selected characteristics here. The development of these technologies over the last 2 decades (Fig. 2) has driven many transcriptomic studies of malaria and other infectious diseases, sometimes with more emphasis on applying new technology than on addressing important biological and clinical questions. Microarray technologies were initially the only commercial transcriptomic tools, but they have never been ideally suited to studies of host-pathogen interactions because of the limited range of species covered by commercial arrays and the major restriction that probes must be designed for known or predicted transcripts. Thus, initial microarray studies focused largely on host gene expression, and as pathogen genomes were assembled, pathogen gene expression analyses then became possible. Eventually, attempts were made to study both host and pathogen together by using custom-made arrays, but these methods never achieved great popularity because of design challenges. 
The inherent limitations of microarray technology (Table 1) mean that it is being superseded by RNA-seq as the preferred approach for many transcriptomic applications, including studies of infectious diseases. In theory, RNA-seq is much better suited for studies of host-pathogen interactions, although this is not always straightforward (31,56). A major challenge is that the RNA from the pathogen may comprise only a tiny proportion of the total RNA isolated from a specimen, particularly in the case of bacterial infection. One solution is to use model systems, for example, genetically modified fluorescent pathogens, to allow cell sorting and selection of infected host cells (31,32). An alternative is the specific enrichment of pathogen transcripts at the time of RNA extraction (32). Additional steps to maximize the capture of pathogen RNA, for example, enhanced lysis to release both bacterial and host RNAs from cells, and rRNA depletion to maximize the sequencing of mRNA, may be needed (57). These approaches have led to the identification of host-pathogen interactions at a cellular scale, such as the regulation of invasion-associated effectors and virulence genes by bacterial noncoding RNAs (31), but there is an increasing desire to apply dual RNA-seq to infections in vivo, and deep RNA sequencing is already achieving some success (58,59). This has led to evidence that pathogen gene expression can vary within the host (59) and can be driven by the host response (58), providing a further impetus for this approach. Dual RNA-seq has also been applied to viral infections, allowing virus detection to be coupled to host transcriptome analysis (60), quantification of viral loads (61), and detection of variation in viral gene expression levels (62). FIG 2 Timeline of transcriptomic approaches for infectious diseases. Transcriptomic analysis requires the extraction of RNA from parts of the body such as peripheral blood. Methods of analysis have evolved over time. Serial analysis of gene expression (SAGE) utilizes the Sanger sequencing approach to generate and sequence short (~11-nucleotide) tags and quantify transcript abundance. It is expensive and low throughput. Massively parallel signature sequencing (MPSS) generates slightly longer tags (~17 to 20 nucleotides) and provides a larger library size. Cap analysis of gene expression (CAGE) is similar in principle to SAGE but targets transcription start sites. Microarray analysis is a hybridization approach that uses fluorescence-tagged probes to target transcripts of interest. RNA sequencing (RNA-seq) is a high-throughput sequencing approach capable of novel transcript discovery, noncoding RNA analysis, and alternative splicing analysis. RNA-seq has been developed to allow transcriptomic analysis at a single-cell resolution, simultaneous analysis of host and pathogen transcriptomes (dual RNA-seq), and sequencing of full-length transcripts to allow detailed analysis of transcript isoforms and direct analysis of RNA. In the future, techniques such as laser capture microdissection (LCM) may be coupled with RNA-seq to allow host cells and their interacting pathogens (such as parasites adhering to vascular endothelial cells) to be isolated and studied as defined cell groups or dual single-cell analyses. Massively parallel single-cell analyses and direct RNA-seq are also on the horizon. Eukaryotic pathogens generally contain more RNA and may have a greater capacity to change their gene expression in response to their host.
Feasibility has been demonstrated, for example, in the gut of mice infected with whipworm (63) and the blood of P. falciparum-infected malaria patients (where about 10% of whole-blood reads mapped to the parasite) (25). In contrast, dual RNA-seq of brain tissue from mice infected with a different apicomplexan parasite, Toxoplasma gondii, showed a much lower proportion of reads (around 0.1%) mapping to the parasite genome (64), which likely reflects the very low abundance of parasite relative to host cells. Similarly, systemic infection with Candida albicans yielded so little fungal RNA from mouse kidneys that specific enrichment was necessary for pathogen RNA-seq (65). Therefore, obtaining appropriate data for simultaneous host and pathogen transcriptomic studies is highly dependent on the pathogen, the RNA content per pathogen, the sample type, and the pathogen load. TRANSCRIPTOMIC STUDIES OF MALARIA What Transcriptomic Studies Have Been Done on Malaria? There are many transcriptomic studies of Plasmodium species in vitro and in vivo. Here we focus primarily on those using samples from humans with P. falciparum malaria and animal models of human infection, and we draw upon pivotal in vitro studies, where necessary, to give context. Recent reviews have addressed the application of transcriptomics to specific aspects of malaria immunology, vaccinology, and host-parasite interactions (66)(67)(68). We aim to provide a broader overview, integrating findings across disciplines and across host and parasite species and highlighting the potential of the simultaneous analysis of host and parasite. Over the 15 years since the earliest of these studies was reported, the available technologies have evolved considerably (Fig. 2). Most studies considered in this review used microarray technology, although there has been a recent proliferation of RNA-seq studies. Only two reported studies conducted dual-transcriptome analyses (25,69). Here we synthesize the most important findings from these studies and consider their implications under five broad categories: technological advancement, basic biology, immune response, pathogenesis, and biomarkers. This synthesis is limited by the heterogeneity of experimental designs, technical and analytical approaches, and organs and species studied (Fig. 3), but despite this, some consistent findings and some clear holes in current knowledge emerge. What Have Transcriptomic Studies Taught Us about Malaria? Technical challenges and solutions. The first complete genome sequence of P. falciparum was reported in 2002 (70) and was soon followed by transcriptomic studies of parasite gene expression in vitro (71,72) and in vivo (23). Draft genomes of rodent malaria parasites began to be reported at the same time (73,74), and together, these resources opened the way for exciting analyses of host-parasite interactions in humans and popular experimental models. Since most transcriptomic studies have the implicit assumption that transcription and translation are tightly linked, ribosomal sequencing has only recently confirmed that this is indeed largely true for P. falciparum albeit with some evidence of additional translational regulation (75). 
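Quantifying the proportion of reads deriving from host and parasite, as in the whole-blood example above, reduces to assigning each aligned read to a species. The sketch below assumes reads have been aligned to a concatenated human-plus-parasite reference whose contig names carry species prefixes; the prefixes, file name, and overall approach are our own illustrative assumptions rather than a published pipeline, and reads mapping ambiguously to both genomes would need additional filtering in practice.

```python
# Minimal sketch of the read-partitioning step in a dual RNA-seq analysis:
# count primary alignments per species from a BAM aligned to a combined
# host + parasite reference. Contig prefixes ("hs_", "pf_") are assumptions.
import pysam
from collections import Counter

def partition_reads(bam_path):
    counts = Counter()
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            # skip unmapped reads and non-primary alignments
            if read.is_unmapped or read.is_secondary or read.is_supplementary:
                continue
            species = "host" if read.reference_name.startswith("hs_") else "parasite"
            counts[species] += 1
    return counts

counts = partition_reads("patient_wholeblood.bam")  # hypothetical input file
total = sum(counts.values())
for species, n in counts.items():
    print(f"{species}: {n} reads ({100 * n / total:.1f}% of mapped reads)")
```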
There are specific technical challenges associated with transcriptomic analyses of Plasmodium species, including intrinsic properties of the parasite genomes; the mixing of parasite and host RNAs, which accompanies parasitism; the variation in gene expression with progression through the developmental cycle; and the inaccessibility of parasites at certain stages of the life cycle. The genome of P. falciparum has one of the most AT-rich compositions among all eukaryotes (70), which has made accurate sequencing, annotation, and assembly of the genome and transcriptome more difficult. High AT content creates PCR amplification bias and homopolymer tracts, which can bias quantitative analyses and make read mapping more difficult because of the larger number of repeats. Plasmodium genomes also contain multigene families, which, particularly in P. falciparum, display extreme levels of genetic variation and present challenges for probe design and sequence mapping or assembly (19,76). The classification of genes within one of these multigene families using domains that can be defined by PCR has enabled specific groups of the var genes of P. falciparum to be associated with clinical phenotypes (77). Other multigene families, such as rif and stevor (19), have not yet been characterized in such detail, and there is also genetic diversity in many other loci (78). Workaround solutions have been developed, such as custom arrays with probes designed to capture transcripts from multiple different parasite strains (79), but it is difficult to know how much diversity they really capture. Highly polymorphic genes present a particular challenge for reference-based RNA-seq analyses, and even though de novo assembly of transcripts is possible, the best way to accurately assemble, quantify, and compare expression levels between specimens is still uncertain. Much of the Plasmodium life cycle is spent within host cells, and so another technical challenge is separating the mixed host and parasite RNAs that occur in biological samples. One of the first solutions was the simultaneous capture of host and parasite transcriptomes by using custom microarray designs with probes specific for each species (69). Next-generation sequencing (NGS) technology allowed alternative approaches to be applied, with the potential to separate signals from host and parasite RNAs (either physically or computationally) and then analyze them individually or in comparison with each other. The application of custom-designed nonrandom primers enabled the specific analysis of parasite transcripts by excluding human RNA, rRNA, and globin mRNA at the library preparation stage (80). However, unambiguous mapping of even relatively short reads to one species or the other has since been demonstrated to be possible when whole-blood host and parasite mRNAs were captured by using poly(A) selection and simultaneous sequencing (25). In vitro parasite development can be synchronized through various treatments (81), but in vivo, parasites may coexist at different developmental stages. Plasmodium exhibits stage-dependent gene expression (discussed in more detail below), which can confound the interpretation of the transcriptome obtained from analysis of mixed stages. Several analytical approaches have been developed to try to estimate the developmental stage of parasites, including a maximum likelihood approach using global gene expression (82) and a simpler approach using single, stage-specific marker genes (83).
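To make the stage-estimation idea concrete, the sketch below deconvolves a bulk parasite expression profile against reference stage transcriptomes. The maximum likelihood method cited above (82) is more sophisticated; non-negative least squares, applied here to synthetic data, is a simpler stand-in that illustrates the same principle of modeling a sample as a mixture of developmental stages.

```python
# Minimal sketch: estimate the developmental-stage mixture of a bulk parasite
# sample by non-negative least squares against reference stage profiles.
# All data here are synthetic; this is an illustration, not the cited method.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes = 500
stages = ["ring", "trophozoite", "schizont", "gametocyte"]

# Reference expression profiles (genes x stages), e.g. from synchronized cultures
ref = rng.gamma(shape=2.0, scale=10.0, size=(n_genes, len(stages)))

# Simulate a bulk sample that is 70% ring, 20% trophozoite, 10% schizont
true_mix = np.array([0.7, 0.2, 0.1, 0.0])
bulk = ref @ true_mix + rng.normal(scale=1.0, size=n_genes)

coef, _ = nnls(ref, bulk)          # proportions constrained to be non-negative
proportions = coef / coef.sum()    # normalize so the mixture sums to 1
for stage, p in zip(stages, proportions):
    print(f"{stage}: {p:.2f}")
```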
Despite the clear rationale for these methods to be applied, it is notable that few transcriptomic studies of malaria have used them. Parasite sequestration in the microvasculature is a pathognomonic feature of P. falciparum malaria (17,18), and the extent of sequestration is one factor that contributes to differences in developmental-stage mixtures (84). Sequestration is strongly associated with pathogenesis, and so transcriptomic analyses of sequestered parasites may reveal mechanisms of severe disease. However, sequestered parasites are absent from the circulating blood and therefore are difficult to access. One approach has been to use formalin-fixed paraffin-embedded postmortem tissue blocks from patients who died of malaria, and these specimens enable sequestered parasite gene expression profiles to be obtained from brain and other tissues (85). This potentially opens the way for exciting dual RNA-seq studies on similar specimens (Fig. 2), which may reveal much more about the interaction of sequestered parasites with the host vascular endothelium and the tissues in which they are located. Basic biology: parasite biology. One of the most fundamental applications of transcriptomics to parasite biology has been the use of RNA-seq to improve the annotation of parasite reference genomes by identifying novel transcripts, verifying or correcting gene models, and identifying splicing sites (86,87). This has proven particularly useful for the precise annotation of the P. falciparum genome and also produced major improvements in previously fragmented genomes of the common model rodent malaria parasites (88). Such a comprehensive annotation is a prerequisite for any attempt to relate quantitative gene expression to parasite biology. Microarray and RNA-seq studies have also been fundamental for understanding the variation in gene expression that accompanies the complex life cycle of Plasmodium parasites and for identifying transcription factors that control its progression (71,86,89,90). Throughout the intraerythrocytic cycle, there is a striking phasic variation in the expression of the majority of parasite genes (71,86,91), which rations protein production to occur only when the proteins are required. For example, genes involved in erythrocyte invasion are expressed only in mature schizonts so that when daughter merozoites are released into the blood, they are fully equipped to invade new red cells (71). Despite a conserved overall pattern of phasic variation, the expression of individual homologue genes is not so highly conserved among different Plasmodium species and shows the greatest variation at the stage of the greatest interaction with the host cell, during early-ring-stage development (91). The development of the gametocyte forms, required for the transmission of Plasmodium from vertebrates to mosquitoes, is accompanied by another unique gene expression profile and controlled by a master regulator transcription factor, AP2G (89,90). Since gene expression is so tightly linked to the developmental stage, it is not surprising that comparisons of gene expression levels between samples with asynchronous parasite populations (as often seen in vivo) can be misleading if no consideration is given to the composition of the mixture of parasite stages (82). However, assessment of parasite gene expression in vivo is very important, as there may be transcriptomic variation that is not seen under the standardized conditions used for parasite propagation in vitro. 
One of the first studies to attempt global gene expression analysis in vivo identified several different patterns in parasites drawn from malaria patients (92), and by analogy to the better-understood biology of Saccharomyces cerevisiae, these patterns were related to distinct physiological states (92). These states were also related to host factors, including cytokine profiles, which raised the intriguing possibility that they may represent a parasite response to the host environment. However, a subsequent reanalysis of these data suggested that much of the variation was due to differences in the mixtures of developmental stages between subjects rather than large changes in the gene expression of parasites at the same developmental stages (82). Recently developed single-cell RNA-seq approaches offer the potential to overcome some of the problems with bulk analyses of mixed parasite populations (90), although the practical and technical challenges for achieving unbiased transcriptome analysis are significant. Most current methodologies are restricted to polyadenylated transcripts, losing information on small noncoding RNAs and some long noncoding RNAs (93). Transcript recovery rates can be low, even for deep sequencing (94), and the low RNA content in small microbes can further diminish recovery (95). The decision of whether to examine large numbers of cells at a low sequence depth or small numbers of cells at an increased depth depends very much on the motivation for the experiment (93,96), yet carefully designed experiments with these limitations in mind are still revealing. For example, preliminary results suggest that even in synchronized parasite cultures, subtle variation in the developmental stage may create an illusion of sinusoidal patterns of gene expression during the erythrocytic developmental cycle of P. falciparum, when in reality, the pattern is much more discontinuous (97). Descriptive analyses of the distribution of parasite developmental stages in vivo have provided useful information in their own right, allowing the timing of parasite sequestration to be pinpointed to around 22 h after erythrocyte invasion and providing insights into gametocyte development (98). Very-early-ring-stage gametocytes and mature gametocytes were detectable in blood, while the intermediate stages of gametocyte development were not detectable, consistent with the concept that this occurs in sites of sequestration (20). Comparisons between Plasmodium species can give valuable insight into parasite biology. All Plasmodium genomes sequenced to date have multigene families in the subtelomeric regions of most of their chromosomes (70,88,(99)(100)(101)(102)(103)(104), which encode proteins expressed close to or on the surface of iRBCs. Depending on the species, up to 30% of the parasite genome is dedicated to these multigene families (105), suggesting that they have important roles (99). One of the best-known multigene families is the var family of P. falciparum, which encodes around 60 different copies per parasite of P. falciparum erythrocyte membrane protein 1 (PfEMP1) variants, antigenically diverse proteins expressed on the surface of iRBCs (19). The transcriptional control of var expression results in antigenic variation and immune evasion (19,(106)(107)(108)(109). 
var genes are also involved in the interaction with and adhesion to host cell surfaces, such as adhesion to the vascular endothelium (resulting in sequestration) or binding to uninfected RBCs (a phenomenon known as rosetting), both of which are correlated with virulence (19,108). While the var gene family is unique to P. falciparum, the PIR (Plasmodium interspersed repeat) multigene family can be found in every Plasmodium genome that has been sequenced so far. This vast gene family includes large numbers of gene loci in each species, including stevor (~40 loci) and rif (~180 loci) in P. falciparum, ~180 loci in P. berghei, ~800 loci in P. yoelii, ~200 loci in P. chabaudi, ~68 loci in P. knowlesi, ~1,200 loci in P. cynomolgi and P. vivax, ~250 loci in P. malariae, and nearly 2,000 loci in P. ovale (99,100,105,110,111). Transcriptomic studies have revealed fascinating insights into the role of this gene family in P. chabaudi infection. The repertoire of expression of the P. chabaudi PIR (Pc-PIR) genes is strongly influenced by whether mice are infected by a mosquito bite or serial blood passage, and these modes of transmission also determine parasite virulence (112). Serial blood passage leads to the expression of a single dominant Pc-PIR gene in blood-stage parasites and increased virulence, while mosquito transmission seemingly reverses the constraints imposed by serial blood passage, allowing a greater repertoire of Pc-PIR genes to be expressed and yielding less virulent parasites. The mechanism by which this profile is reset is unknown but is hypothesized to have an epigenetic basis (112,113). An interesting parallel is that the diversity of the array of var genes expressed in an infected human host may also decrease as virulence increases following serial blood passage (112,113). The Pc-PIR genes also play a role in the establishment of chronic or persistent infection, which is important for parasite transmission. Comparisons of parasite transcriptomes in acute and chronic P. chabaudi infections revealed the differential expression of about half of the Pc-PIR genes. Most of these genes were upregulated during the acute phase of infection (113). Interestingly, this differential expression was not a consequence of immune selection but rather reflected the expression of specific clusters of Pc-PIR genes that were consistently associated with either acute or chronic infection, suggesting programmed rather than selected expression and another unknown mechanism, which will be important to delineate (113,114). These findings once again raise the question of whether the control of parasite gene expression in vivo is well represented in laboratory-adapted parasites grown for many generations in vitro, which are the basis for much of our knowledge about parasite biology. Existing evidence (albeit from small studies) suggests that parasite gene expression is generally well conserved between laboratory-adapted P. falciparum strains and parasites directly sampled from naturally infected humans (115). However, the expression of genes encoding molecules exported to the red cell surface, including those of the rif and stevor families, appears upregulated in vivo compared to that in laboratory strains. Among subjects with severe malaria, greater departures from the in vitro transcriptome have been described (116), perhaps suggesting that the more perturbed the host environment, the more the parasite must adapt its gene expression.
Adaptation of parasite gene expression to the host environment was recently demonstrated when infected mice were fed either low-energy or normal diets, with changes in the parasite transcriptome leading to the identification of the putative serine/threonine kinase KIN as a parasite sensor of host nutritional status and a regulator of parasite growth (117). Controlled approaches comparing synchronous parasites from recently culture-adapted field isolates and long-term laboratory-adapted strains have also confirmed variation in the expression of genes encoding parasite proteins exported to the red cell and to its surface as well as higher expression levels of genes coding for sexual-stage proteins in field isolates. In a study of Kenyan parasite isolates, this differential expression appeared to be partly attributable to genetic changes, particularly copy number variation (118). Interestingly, studies of inbred mice infected with the nonlethal parasite P. yoelii 17X showed a considerable conservation of parasite gene expression over time and between hosts, even among hosts with different immune statuses. The greatest variation occurred at peak parasitemia, when there was maximal reticulocytosis, and the parasite may require different proteins for the most efficient growth in these young RBCs (119). Taken together, these findings suggest that many variations in parasite gene expression in vivo may be determined by the necessity for an optimal interaction with host erythrocytes. The question of whether additional large-scale variation in gene expression plays a causal role in severe disease in humans remains to be fully resolved. Immune response. (i) Naturally occurring immune responses. The complex life cycle of Plasmodium means that the immune response to malaria needs to be understood as a series of responses to spatially, temporally, and antigenically distinct life cycle stages (14) (Fig. 1). Superimposed on this, the intensity, duration, and timing of previous infection can modulate acquired immunity (15,21). Enormous antigenic variation creates considerable challenges for the immune system. Although a huge amount has been learned about immune responses to malaria in both humans and animal models (120), most of our understanding comes from reductionist studies, and the integrative understanding of the immune responses at a systems level remains rudimentary. Transcriptomic studies therefore play an important role in and are potential building blocks for an integrated description of immune responses to malaria. It would be ideal to obtain serial samples from all relevant tissues, starting before inoculation by a mosquito bite, through the presymptomatic and symptomatic phases of infection, and onwards until the time of death, resolution, or persistent infection. This may be possible in animal models (although it has not yet been done), but human studies are often constrained by the limited availability of any sample type other than blood and the practical and ethical challenges of longitudinal sampling (121). Thus, we must piece together a likely sequence of events from limited samples from humans and data from experimental malaria infections in other species. Insights into some of the earliest immune responses to blood-stage parasites in humans come from controlled-infection studies, whereby malaria-naive individuals are infected by a mosquito bite or the inoculation of blood-stage parasites and then intensively monitored for the detection of parasites in their blood (122,123).
An early microarray study showed that large changes in peripheral blood mononuclear cell (PBMC) gene expression were already apparent at the time of the first detection of parasites on a blood film, which for most subjects preceded the onset of symptoms (124). These changes included the upregulation of cell surface and intracellular pattern recognition receptors, proinflammatory cytokines, phagocytic and scavenger receptors, and NADPH oxidase components, together indicating a coordinated activation of multiple components of the innate immune response. Interferon gamma (IFN-γ) signaling pathway, interleukin-1β (IL-1β) signaling, and glycolysis pathway genes were prominently activated early in infection, as were genes involved in antigen processing and presentation for major histocompatibility complex class I (MHC-I) and MHC-II; however, the upregulation of the expression of the IL-1β receptor and heat shock protein genes was limited to subjects who developed fever (124). Looking slightly later in infection, when previously malaria-naive subjects first developed fever, broadly similar findings were observed by using RNA-seq, although the downregulation of T- and B-lymphocyte genes was noted (125). Gene expression profiles from naturally infected individuals at the time of clinical presentation with acute uncomplicated malaria (UM) show many similarities with those of presymptomatic experimentally induced infections (24,25,(124)(125)(126). However, the additional induction of genes related to interleukin-10, mitogen-activated protein kinase activation, and Fas ligand-induced apoptosis was detectable in PBMCs of naturally infected, symptomatic Cameroonian adults (124). Type I interferon-related genes were highly induced by infection in comparison to healthy controls and in comparisons of paired acute- and convalescent-phase samples from Brazilian patients (127,128). Comparison of whole-blood gene expression from symptomatic previously naive individuals with that from symptomatic Malian adults showed less upregulation of interferon responses but greater upregulation of B-cell receptor signaling in malaria-experienced individuals (125). Prominent neutrophil-associated signatures were additionally found in whole blood from symptomatic children (24,126). The most perplexing differences observed between studies of symptomatic individuals relate to opposing changes in MHC-, T-cell-, and B-cell-associated genes (24,(124)(125)(126). These conflicting findings may represent genetic or environmental differences between the different comparison groups, differences in parasite loads between subjects (126), or changes in the proportions of leukocyte subpopulations in infection, the effect of which is dependent on whether RNA is extracted from PBMCs or whole blood. Differences between the transcriptional profiles seen in cases of uncomplicated and severe malaria have been less studied. A paired comparison of 5 individuals who first presented with severe malaria and later returned with an episode of uncomplicated malaria found that IFN pathway and T-cell response genes were more highly expressed in the uncomplicated episodes than in the severe episodes (129). Although that study did not specifically consider the association of gene expression with parasite load, other observations (126) suggest that this difference may be largely a consequence of the lower parasite load at presentation with uncomplicated malaria.
Since natural exposure often involves repeated Plasmodium infections, transcriptomic approaches have been used to understand the consequences that one or more episodes of malaria may have on subsequent responses to Plasmodium or other pathogens. Comparison of PBMC transcriptomes of Malian children 7 days after treatment for the first acute episode of the malaria season with transcriptomes just before the onset of the malaria season revealed the downregulation of inflammatory genes but the upregulation of genes expected to facilitate microbial killing and the activation of adaptive immunity (130). Stimulation of these PBMCs with infected RBCs also resulted in lower expression levels of inflammatory genes but higher expression levels of microbial killing and adaptive immunity genes in the samples from 7 days after infection. Those findings suggest that one episode of infection might be able to program a more advantageous response to subsequent infection. However, there is also abundant evidence that acquisition of immunity is inefficient, and repeated malaria exposure impairs heterologous immune responses and increases susceptibility to other infections (131,132). Dysfunctional atypical memory B-cell populations have been described for other infections associated with poor antibody production (133), and transcriptome analysis has been central to defining atypical memory B-cell populations related to chronic malaria infections, which are defective in immunoglobulin production and denoted by the surface expression of FCRL5 (134,135). There are few data on the evolution of changes in the blood transcriptome with the progression of infection in animal malaria models, although data available for P. chabaudi infections suggest that there are considerable overlaps with the human blood transcriptome in uncomplicated pediatric malaria (126,136,137) (Fig. 3). Pathway-level similarities include the upregulation of IFN response, antigen presentation, and proteasome-related genes; the downregulation of B-cell genes; and gene-level overlap of the upregulation of Fc receptors. However, T-cell signaling was upregulated in mouse whole blood, in contrast to the lower expression levels in human whole blood reported by those same authors (126). In a different rodent infection, P. berghei ANKA, the immune responses in liver, spleen, lungs, and brain were monitored at sequential time points (69). Although that study primarily aimed to investigate the pathogenesis of experimental cerebral malaria (ECM), the sequential immune response over time is noteworthy because it varied by organ and by mouse species. Overall, there were many similarities to the human immune response genes induced by malaria, particularly in spleen, liver, and lung, where Toll-like receptor (TLR), proinflammatory cytokine, interferon-inducible, and complement-related genes were induced. Unfortunately, that study did not include blood transcriptome data, which might have allowed a better comparison with human data and inference of changes in blood gene expression that may arise from the migration of cells to and from different organs. The accessibility of fresh organs from mice permits more-refined transcriptomic analyses on specific cell populations isolated from these organs, avoiding confounding due to changes in cell populations within a whole organ or blood.
Splenic cells have been the primary focus of such analyses, since the spleen is a major site of interaction between parasites and the innate and adaptive immune cells which control human and rodent malaria infections (138). For example, purified CD11c+ splenic dendritic cells (DCs) were examined at different time points during P. yoelii 17XNL infection of BALB/c mice to resolve a controversy surrounding their function in malaria (139). At the time of that study, opposing roles had been proposed, with DCs on the one hand enhancing innate immune responses and initiating adaptive immune responses (140) and on the other hand mediating the immunosuppressive effects of the parasite product hemozoin (141). Transcriptomic analyses revealed several distinct patterns of gene expression over time, with many immune-related genes showing sustained levels of transcription at early and late time points during infection but a notable smaller group showing differential regulation between time points (139). Perhaps important to understanding the controversy around DC function, expression levels of il-10 were higher and expression levels of il-6, il-12, and ifng were lower later during infection. Genes involved in the cell cycle, glycolysis, and purine metabolism were also extensively modulated by P. yoelii infection, in contrast to previously described "common" DC maturation signatures, suggesting that DC behavior in malaria may not easily be inferred from that observed in other situations (139). Analyses of purified splenic CD4 T cells have also contributed to understanding their role in malaria, this time in the lethal P. berghei ANKA model (142). In this model, CD4 T cells enhance pathogenicity and are ineffective at providing T-cell help to constrain the parasite load. Their early transcriptional response to infection was dominated by interferon gamma and, unexpectedly, type I interferon response genes, which led to functional studies demonstrating that type I interferon was responsible for suppressed CD4 T-cell function (142). An even deeper understanding of splenic CD4 T-cell biology followed, using single-cell isolation and RNA-seq combined with fate mapping to determine how CD4 T-cell clones differentiate into either T helper 1 or T follicular helper cells (143). Cell cycle and glycolysis genes, a recurring feature in analyses of splenic tissue (139,144,145), were upregulated in these cells around the time of fate determination at day 4 of P. chabaudi infection, and subsequent fate was determined by interactions with B cells or monocytes, which favored T follicular helper or T helper 1 development, respectively (143). Undoubtedly, cell fate mapping and single-cell sequencing will add further essential detail and complexity to our understanding of malaria immunology in the future. (ii) Vaccine-induced immune responses. Transcriptomic approaches have yielded particular success in identifying vaccine-induced protective immune responses for viral infections such as yellow fever and influenza (146,147), catalyzing the development of the new discipline of systems vaccinology (148,149). However, the development of a protective malaria vaccine has been an arduous process because a vaccine has to perform considerably better than naturally induced immunity, and until a partially protective vaccine became available, it was not possible to begin identifying correlates of vaccine protection (14).
Various strategies, ranging from intravenous whole attenuated sporozoites to recombinant proteins and DNA-based vaccines, have been evaluated in attempts to target single or multiple parasite life cycle stages (reviewed in references 14, 150, and 151). To date, the only vaccine to have achieved licensure and enter into pilot implementation in countries where malaria is endemic is RTS,S/AS01 (150,152). This vaccine has demonstrated only modest efficacy and durability in clinical trials in African children, but it has the potential to make a substantial public health impact (153-156). RTS,S/AS01 is a preerythrocytic vaccine, so when effective, it either prevents or delays the development of blood-stage parasites and clinical disease (14,150). The imperfect protection afforded by RTS,S has allowed transcriptomic studies of unprotected and protected individuals to be undertaken to characterize vaccine responses, which predict subsequent protection against experimental challenge. In peripheral blood mononuclear cells, plasmablast-associated transcriptional signatures, cell cycle genes, and type I interferon genes correlated positively with immunogenicity (antibody titers to circumsporozoite protein [CSP]) and vaccine-induced protection, while natural killer (NK) cell genes were negatively correlated with both outcomes (157,158). Looking at the role of the genes by functional enrichment, immunoproteasome, cell cycle, and apoptosis functions were associated with protection (158), and more-focused analyses have highlighted that vaccine-induced interferon signaling may also predict protection (159). Decreases in this response signature shortly after the final vaccination were associated with a lack of protection (160). While RTS,S is the most advanced vaccine, it is not the only preerythrocytic vaccine to be developed, and transcriptomic correlates of protection for RTS,S might be compared with those induced by other vaccines in order to find universal correlates of protection. In a schedule of priming with an adenovirus-vectored vaccine and boosting with two doses of RTS,S/AS01, a strategy designed to elicit better CD8 T-cell responses, transcriptional correlates of protection were not identical to those of RTS,S/AS01 alone (157). In the prime-boost approach, innate responses to the vaccine appeared more important, with TLR signaling, dendritic cell, and antigen presentation gene expression all correlating with protection. The most consistent common feature between this regimen and RTS,S/AS01 alone was a negative association between protection and NK cell gene expression (157). However, these vaccine regimens also produced different immunological correlates of protection, with polyfunctional CD4 T cells rather than anti-CSP antibodies emerging as being the most important for the prime-boost regimen despite similar protective efficacies. Ex vivo restimulation of samples from two other studies with the vaccine antigen, using either two doses of RTS,S followed by modified vaccinia virus Ankara (MVA)-vectored CSP or two doses of DNA multiple-epitope thrombospondin-related adhesive protein (ME-TRAP) followed by MVA-vectored ME-TRAP, showed that the small number of protected subjects were characterized by the upregulation of interferon-induced and antigen presentation genes and the downregulation of hematopoietic stem cell and myeloid cell genes (161).
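For illustration, the correlate-of-protection analyses summarized above reduce, at their simplest, to testing whether a gene-module score differs by protection status and tracks immunogenicity. Below is a minimal Python sketch with simulated data; the NK module score, titers, sample size, and effect direction are hypothetical stand-ins for the reported associations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 46

# Hypothetical post-vaccination data: an NK cell module score per subject
# (e.g., mean z-scored expression of an NK gene set), anti-CSP titer, and
# protection after challenge (1 = protected). All values are simulated.
nk_score = rng.normal(size=n)
titer = -0.5 * nk_score + rng.normal(size=n)       # negative association, as reported
protected = (titer + rng.normal(size=n) > 0).astype(int)

# Association of the module score with protection (rank-based test) and
# with immunogenicity (rank correlation), mirroring the reported analyses.
u, p_prot = stats.mannwhitneyu(nk_score[protected == 1], nk_score[protected == 0])
rho, p_titer = stats.spearmanr(nk_score, titer)
print(f"protection: U = {u:.0f}, p = {p_prot:.3g}; titer: rho = {rho:.2f}, p = {p_titer:.3g}")
```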
Despite some common themes in protection-associated gene signatures from those studies, the ideal response to provide sterile protection against preerythrocytic parasite stages in humans remains to be characterized. (iii) Gene expression profiles associated with asymptomatic infection. In countries where malaria is endemic, it is common for individuals to have asymptomatic parasitemia (11,12). The likelihood of an infection being asymptomatic increases with age (12) and decreases with parasite load (27), but many believe that this must also involve the active regulation of the immune response, since parasite loads tolerated under high-transmission intensity are much higher than those causing fever under low-transmission intensity (162). It is therefore intriguing that no significant transcriptomic response to asymptomatic infection was detected by RNA-seq despite comparing paired samples prior to infection with those during infection in Malian adults, albeit with only 5 subjects per group (125). It is tempting to speculate that this may indicate that none of these individuals had reached sufficient parasitemia to trigger a response and that the set points for such a response may differ between malaria-experienced and previously naive individuals. This concept may be supported by data from a recent study comparing the transcriptional responses of asymptomatic adolescent men from two sympatric ethnic groups in Burkina Faso with similar peripheral blood parasite densities (163). Comparison of purified monocyte transcriptomes between 7 uninfected and 2 infected individuals of the Fulani tribe showed dramatic differences in gene expression, whereas the same comparison for individuals of the Mossi tribe showed negligible differences. Fulani are well known to have relative protection from malaria (164), and the authors of that study speculated that these results might indicate an immune response that is more poised for activity upon infection in the Fulani than in the Mossi. Further work in larger studies restricted to individuals with persistent asymptomatic infection will be important to investigate this issue further. Taking a different approach, a longitudinal study identified a Vδ2+ subset of γδ T cells as being reduced following chronic malaria exposure, and their gene expression was investigated (165). Interestingly, their basal expression levels of numerous immunoregulatory genes were found to be increased, and their transcriptional inflammatory response to infected RBC stimulation was diminished, concordant with their association with a diminished likelihood of symptoms upon infection. This suggests that the infection-induced attenuation of this cell population may help to explain why repeatedly exposed individuals become less likely to develop symptoms. (iv) Effect of host immunity on parasite gene expression. To date, relatively little is known about how preexisting host immunity (naturally acquired or vaccine induced) alters parasite behavior. Transcriptomic analyses, particularly dual RNA-seq of host and parasite, could provide insights into this question. However, some surprising findings have arisen from studying the more fundamental question of whether the presence or absence of specific components of host immunity alters parasite gene expression. A common assumption has been that the expression of different members of parasite variable gene families (such as PIR genes) enables immune evasion and is at least partly determined by immune selection. For the Pc-PIR genes of P. chabaudi, this appears not to be the case.
Transcriptomic comparisons of parasite gene expression levels in mice with acute and chronic infections demonstrated that the establishment of chronic infection was indeed associated with a clear shift in Pc-PIR gene expression, but this was not influenced by the removal of immune selection in mice without T cells (TCRα-/- mice) or without B cells and antibodies (µMT mice) (113). This finding conflicts somewhat with human data that suggest that var gene expression is affected by preexisting humoral immunity (166). Pathogenesis. Malaria typically causes fevers, headache, myalgia, rigors, cough, and abdominal pain, features which are similar to those of many other systemic infections (11,12). Laboratory tests often show anemia, thrombocytopenia, and increased levels of acute-phase response proteins (12,18). Features associated with an increased risk of death include coma, renal failure, metabolic acidosis, hypoglycemia, respiratory distress, and severe anemia (167,168). The pathogenesis of the clinical and laboratory features of malaria is incompletely understood, and this is especially so for the progression from uncomplicated to severe malaria (13). Although transcriptomic approaches have the ability to broaden our understanding of the changes that occur in the host and the parasite during infection, relatively few studies have sought to associate gene expression with specific features of human disease (24,25,124,169-171). In contrast, transcriptomic studies with animal models have frequently been used to try to provide an understanding of the pathogenesis of severe malaria. However, the interpretation of the results generated in these models is dependent on understanding both the relevance of the model to human disease and the experimental design, most importantly the severe and nonsevere comparison groups. The most common models use inbred strains of mice of specified ages and sexes. Unlike natural malaria infections in humans, the outcome of these models tends to be extremely consistent for any given combination of parasite and mouse strain. For example, the P. berghei ANKA strain causes a neurological syndrome described as ECM (172) in C57BL/6 and CBA mice but does not cause ECM in BALB/c mice (173). The closely related strains P. berghei K173 and P. berghei NK65 do not usually cause ECM in any of these mouse strains (172). Investigators have generally taken one or more approaches to identify gene expression associated with severe disease: (i) comparison of susceptible and resistant mouse strains infected with the same parasite strain (69,173-177); (ii) comparison of the same mouse strain infected with different parasite strains (137,178); (iii) time course analyses to identify differences in gene expression occurring before and after the onset of severe disease (69,126,136,144,175,177,179); and (iv) comparison of rare, less severely affected mice with their more severely affected counterparts of the same strain infected with the same parasite for the same duration (136,176). All of these approaches potentially have limitations because it is difficult to disentangle expression differences associated with the genetic background from those causing severe malaria. Furthermore, features that are critical for valid comparisons in models with different mouse or parasite strains, such as temporal changes in parasitemia or total body parasite load in individual mice, have been inconsistently reported.
Despite these limitations, common features emerge from transcriptomic studies using rodent models regardless of the experimental approach (Fig. 3). Unfortunately, studies of pathogenesis in humans and animal models have often been conducted in relative isolation, and the relevance of animal models, such as ECM, to human disease is often debated because of differences in key histopathological features, such as parasite sequestration (172,180). Transcriptomic studies have the potential to allow global comparisons of host and parasite gene expression between these models and human specimens, but this has not yet been done in any formal way. There are no studies to date comparing human gene expression from organs such as brain or lung in severe and uncomplicated malaria cases (Fig. 3). Conversely, relatively few animal studies have investigated gene expression in blood (126,136,137,173); the majority have focused on brain and spleen gene expression. Thus, synthesis of findings in rodent models with those in humans is challenging (Fig. 3). (i) Cerebral malaria and experimental cerebral malaria. The pathogenesis of human cerebral malaria (CM) has been extensively debated because it is currently impossible to prove the dependency of the syndrome on any specific pathogenic mechanism. In contrast to the widely used P. berghei ANKA C57BL/6 ECM model, human CM may not even be a single entity but may have several pathological subtypes (181-184). The first of these subtypes is not CM at all but "false CM," with coma being due to another cause (infectious or noninfectious) in the presence of incidental parasitemia (181). In resource-poor settings, it is difficult to exclude all other possible causes of coma, and so they may be misclassified as CM. A relatively common and specific feature of CM that is not present in false CM is malarial retinopathy, which colocalizes with the sequestration of parasites in the retinal blood vessels (181,185). This has been used to define children with true CM, but it has become apparent that coma in some children without retinopathy is also at least partly caused by malaria (182,183,186). (a) Host response. Few studies have compared the human or mouse blood transcriptomes between CM and uncomplicated malaria (129,170,171) or between severe malaria phenotypes (187). In Malawian children initially treated for CM and subsequently reattending with an uncomplicated malaria episode, paired analyses of whole blood showed a striking upregulation of type I interferon-associated gene expression at presentation with uncomplicated malaria compared to the episode of severe malaria (129). Differential type I interferon responses were also a feature detected in a larger study comparing Malawian children with retinopathy-positive and -negative CM, along with a significant enrichment of cell adhesion and extracellular matrix pathways and numerous neutrophil-related transcripts being upregulated in the retinopathy-positive subjects (187). There was also an enrichment of pathways related to coagulation, platelet activation, and cytokine signaling (187), consistent with the well-described coagulopathy and inflammation that accompany CM (188,189).
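As an aside on method, the pathway "enrichment" statements in these studies typically rest on an over-representation test such as the hypergeometric test. A minimal sketch follows; all counts are invented for illustration and are not taken from the cited studies.

```python
from scipy.stats import hypergeom

# Over-representation test of the kind behind "pathway enrichment"
# statements: are pathway genes overrepresented among the genes
# upregulated in a comparison? All counts are hypothetical.
N = 15000   # total genes tested
K = 300     # genes annotated to the pathway of interest
n = 800     # genes upregulated in the comparison
k = 45      # upregulated genes that fall in the pathway

# P(X >= k) under the hypergeometric null of no enrichment;
# expected overlap by chance is n * K / N = 16 genes.
p = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p = {p:.2e}")
```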
Supporting the relevance of the transcriptomic findings, concentrations of neutrophil primary granule proteins (elastase and myeloperoxidase), tumor necrosis factor, monocyte chemotactic protein 1, and interleukin-10 were higher in plasma of retinopathy-positive subjects, whereas the concentration of IFN-α2 (a type I interferon) in plasma was higher in individuals with retinopathy-negative CM (187). Using more-relaxed definitions of cerebral symptoms, a study in Mali that included 5 children with prostration or coma found higher expression levels of complement, Toll-like receptor, and cytotoxic-T-cell genes than those in 5 children with uncomplicated malaria (171). Many of these findings parallel those for the blood transcriptome in ECM, obtained from comparisons of susceptible and resistant mouse strains infected with P. berghei ANKA (173) (Fig. 3). Commonly differentially expressed genes include the downregulation in ECM of genes related to erythropoiesis, cell surface glycosylation, ubiquitination, MHC-II, platelets, clotting, and plasmacytoid dendritic cells (many involved in type I interferon signaling) (173). In contrast, there was an upregulation of natural killer cell- and cytotoxic-T-cell-related genes, the latter of which is consistent with the known dependency of ECM on CD8 T cells (173). However, changes in neutrophil gene expression signatures were not seen in ECM (173). A different perspective comes from a recent analysis of human PBMCs rather than whole blood, where comparison of genes associated with other neurodegenerative diseases between 7 children with CM and 8 with uncomplicated malaria suggested that protein aggregation pathways may be activated and important in CM (170). In contrast to the limited studies on blood, multiple transcriptomic analyses of mouse brain tissue have been conducted and have yielded a fairly consistent picture of gene expression associated with ECM (69,144,174,175,177-179) despite the variety of experimental approaches discussed above. Although the brain parenchyma makes up the majority of specimens for gene expression analysis, it is composed of multiple cell populations, and additional cell types may actively or passively become enriched in the blood vessels and parenchyma during infection. Few studies specified whether brains were perfused prior to RNA extraction, but this method of flushing out nonadherent cells from the vasculature could result in substantial differences in gene expression due to the removal of intravascular leukocytes and immature RBCs. As evidence of this, several studies found transcriptional signatures of suppressed erythropoiesis (144,176,179), which likely reflects analyses of cells within the brain vasculature and mirrors findings in peripheral blood (173). The clearest consistent finding was the association of immune response and defense pathways with ECM (69,144,174-178). Specifically, genes associated with both type I and type II interferon signaling were enriched and upregulated in ECM versus comparators (69,144,174,175,178,179). Genes associated with T-cell activation and granzyme were also enriched in several studies, consistent with their known role in ECM (144,174-178,190).
In some comparisons, an upregulation of type I interferon responses was found to precede the onset of cerebral pathology (69,144), which is possibly related to sequential activation in different cell types, because isolated microglial cells showed prominent type I interferon gene expression profiles only after the onset of ECM (191). The prominence of type I interferon responses in those studies, associated with decreases in the levels of type I interferon response genes in peripheral blood (173), leads us to suggest that there is likely a redistribution of cells producing type I interferon and/or a sequential pattern of upregulation followed by downregulation of type I interferon signaling, which varies in its timing by organ, progressing from peripheral blood to the brain vasculature to the brain parenchyma. Beyond implicating immunopathological mechanisms initiating ECM, brain transcriptomes have also revealed possible explanations for neurological dysfunction. The increased expression and activation of apoptosis pathways were observed in several studies, along with a variety of cellular stress response pathways (175-178). However, pathological studies of ECM suggest that apoptosis is unlikely to be a major cause of ECM because it is an infrequent event, rare in parenchymal cells, and when occurring in vascular endothelial cells, it is not associated with adjacent edema or hemorrhage (180). One time course comparison of ECM-susceptible and -resistant mice reported evidence for an early downregulation of metabolic processes, such as glycolysis, in the brains of susceptible mice, which may plausibly contribute to reversible neurological dysfunction (177). However, many of the changes in gene expression associated with ECM may represent upstream events or noncausal associations, and it is possible that transcriptional changes actually play little role in the final neurological manifestations. A comparison of brains of wild-type C57BL/6 mice with ECM to those of resistant CD8 T-cell-deficient and perforin-deficient mice revealed smaller sets of differentially expressed genes than in a comparison with resistant BALB/c mice (176). In fact, it was striking that only 9 genes differed between perforin-deficient mice and wild-type mice with ECM, yet the perforin-deficient mice did not develop ECM, suggesting that the expression levels of very few genes need to change to produce the final neurological syndrome (176). The spleen plays important roles in the innate clearance of parasites, the adaptive immune response, and erythropoiesis (in mice) (22,138). Splenic gene expression has been investigated in both ECM and non-ECM severe rodent malaria models. Consistent with findings for other tissues, erythropoiesis genes were suppressed as ECM progressed (144) and were also suppressed in other lethal infections (discussed below). Metabolic pathway changes in the spleen accompanied the progression of infection, such as increased expression levels of glycolytic enzymes detected in whole spleen (144) and purified splenic CD11c+ dendritic cells (139), which may well represent metabolic switches necessary for immune cell proliferation and function. The induction of interferon-responsive genes in the spleen was found over the course of P. berghei ANKA infection in ECM-susceptible mice (144) but was greater at both baseline and late in infection in ECM-resistant mice, adding further complexity to understanding the roles of interferons in promoting or preventing ECM (69).
In an attempt to achieve a more integrated understanding, sequential changes in gene expression in spleen, brain, lung, and liver were examined in comparisons between ECM-susceptible and -resistant mice (69). Large groups of immune response genes showed consistent differences between mouse strains at all time points, but there were also clusters of immune response genes that became differentially expressed in different organs at different times, starting with the liver and later with the spleen and lungs. These temporally differing clusters may reflect the ability of the ECM-resistant BALB/c mice to mount earlier organ-specific responses to the parasites, but it remains unclear how this might prevent subsequent damaging responses in the brain (69). (b) Parasite factors. Attempts to link parasite gene expression to the pathogenesis of CM have supported a link between the expression of specific var genes and severity but beyond this have been rather inconclusive. Targeted analysis of the var transcriptome revealed that severe malaria (both CM and severe anemia) was associated with high expression levels of var genes encoding PfEMP1 variants with cysteine-rich interdomain regions predicted to bind to the endothelial protein C receptor (192), consistent with previous functional analyses highlighting the importance of this interaction in severe malaria (193,194). Analysis of P. falciparum isolates from 58 Malawian CM patients revealed considerable variation in parasite transcriptomes between subjects (116), with some showing marked departures from profiles observed in vitro (92). The strongest determinant of differences in these profiles was peripheral blood parasitemia (116). Subsequent combination of those data with additional gene expression data from subjects with uncomplicated malaria suggested that CM-associated parasites might show increased expression levels of genes that modify cytoadhesion and the rigidity of infected erythrocytes, exported proteins, and erythrocyte invasion proteins (98). Although highly plausible, that same study also highlighted the confounding effect of variation in the parasite developmental stage (98), and a smaller study that directly addressed this issue did not find any significant residual differences between groups with severe (including CM) and uncomplicated malaria (82). Parasite gene expression in ECM has also been examined by using custom microarrays, but at that time, functional annotation of the P. berghei ANKA genome was rather limited (69). Nevertheless, organ-specific differences in parasite gene expression were detected, with the lung having the greatest detectable parasite gene expression, enriched in heat shock, ribosomal protein, and proteasome genes (69). Interpretation of such findings is challenging because differences may simply reflect differences in the distributions of parasite developmental stages associated with different organs, and a more refined analysis would be required to identify true differences in parasite gene expression in different tissues and to relate these differences to the already complex patterns of organ-specific host gene expression. (ii) Other malaria phenotypes. CM is the most studied severe malaria phenotype, but other life-threatening manifestations include severe anemia and respiratory distress (167). Respiratory distress is usually due to acidosis in children and reflects compensatory hyperventilation to raise the blood pH by the exhalation of more carbon dioxide (13). 
In adults, malaria-associated respiratory distress often represents true lung pathology with a picture similar to those of acute lung injury and acute respiratory distress syndrome (12). Severe anemia is probably the most common severe manifestation of malaria in very-high-transmission settings (13). If prompt blood transfusion and antimalarial treatment are available, the mortality rate can be low (195). Malaria in pregnancy is a special case that can result in severe disease manifestations in the mother but also placental dysfunction and adverse outcomes for the fetus ranging from abortion or stillbirth to growth retardation and premature birth (196). Non-ECM lethal animal models mostly lead to death through a combination of severe anemia and other organ dysfunction, which may include lung and liver pathology (197). (a) Host response. Fever is one of the key clinical features of malaria, but there is great variation in the temperatures at the time of clinical presentation among individuals infected with the same parasite species, likely reflecting the parasite load, synchronicity, and how recently iRBCs have ruptured and released parasite material (198). It is curious that body temperature has been the most common clinical variable analyzed for an association with global gene expression in humans, because it is likely to be confounded by many factors and is not useful for the prediction of clinical outcome. In whole blood, the expression of neutrophil-related (24) and lysosome-related (25) genes has been significantly associated with body temperature in acute malaria, while in PBMCs, heat shock proteins, interleukin-8 (a chemokine that promotes neutrophil chemotaxis), and interleukin-1β were significantly associated (124,169). Unfortunately, no statistically significant associations of gene expression with the severity of anemia or with platelet counts have been identified in the few small studies examining these more important laboratory markers of pathogenesis (24,169). The potential for the correlation of gene expression with clinical and laboratory features of malaria pathogenesis has not yet been fully exploited. Pathogenicity-associated whole-blood host transcriptional profiles in mice have predominantly been assessed during P. chabaudi infections (136,137). Comparisons between virulent CB strain and less virulent AS strain infections showed clusters of differentially expressed genes that had functional associations with platelet aggregation in dying mice and a more pronounced anemia signature and a neutrophil-dominated lung inflammation signature in CB-infected mice (137). Infection with the P. chabaudi AJ strain alone was analyzed in an innovative time course study (136) using gene expression to describe trajectories from health to illness and thence to either recovery or death. Intriguingly, in this model, the nadir of health in mice was preceded by NK cell gene expression but also showed transcriptional evidence of depressed erythropoiesis. A similar sequence of gene expression dynamics was inferred in humans by mapping sequential mouse data onto patterns of gene expression from a cross-sectional study of subjects with uncomplicated malaria (126). This novel approach may represent an important advance because it is almost impossible to examine sequential gene expression in humans with untreated symptomatic malaria, but consequently, robust validation is also challenging.
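Returning to the clinical-variable associations noted above, a gene-wise scan against a covariate such as temperature, with false discovery rate control, is the usual analysis pattern. The sketch below uses simulated data; the matrix dimensions and effect sizes are hypothetical, and scipy and statsmodels are assumed to be available.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)

# Hypothetical whole-blood data: 2,000 genes x 50 patients, plus each
# patient's temperature at presentation. All values are simulated.
expr = rng.normal(size=(2000, 50))
temp = rng.normal(38.5, 1.0, 50)
expr[:20] += 0.5 * (temp - temp.mean())  # 20 genes made to track temperature

# Gene-wise Spearman correlation with temperature, then FDR control.
pvals = np.array([stats.spearmanr(expr[g], temp).pvalue for g in range(expr.shape[0])])
rej, qvals, _, _ = multipletests(pvals, method="fdr_bh")
print("genes associated with temperature at FDR < 0.05:", int(rej.sum()))
```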
In other mouse models, analysis of the spleen transcriptome revealed reduced expression levels of genes associated with erythropoiesis at equivalent parasitemias in lethal P. berghei N67C infection compared with nonlethal P. berghei N67 infection (199) and the relative suppression of erythropoiesis for the severity of parasitemia in lethal P. yoelii 17XL infection versus nonlethal P. yoelii 17X infection (145). Type I interferons were particularly upregulated in spleen and shown to contribute to an enhanced control of parasitemia in nonlethal P. berghei N67 versus P. berghei N67C infection (199). However, this is not necessarily a universal mechanism for protection from severe disease. In a rat model using P. berghei ANKA (which does not cause ECM in this host), young rats failed to control parasitemia and died, while older rats controlled parasitemia, dependent on the differential expression of a small set of genes in the spleen, mostly related to T-cell function (200). Individuals living in areas where malaria transmission is common develop immunity through repeated infection, and many become clinically immune by childbearing age (21,196). However, women undergoing pregnancy for the first time are susceptible to infection by P. falciparum parasites expressing distinct variant surface antigens that enable them to be sequestered in the placenta (196). This is thought to be because chondroitin sulfate A (CSA) is expressed in the (placental) vasculature only during pregnancy, and so parasites that are able to bind to CSA do not have a selective advantage at any other time in life and have never been the target of the host immune response (19,196). In pregnancy, parasites expressing a CSA-binding variant of PfEMP1 can be sequestered in the placenta, avoid splenic clearance, and establish infection in an otherwise partially immune individual, and several transcriptomic studies support the concept that expression levels of var2csa are specifically increased in parasites isolated from the placenta (80,201,202). Human gene expression in the placentas of malaria-infected women showed considerable perturbation, with distinct patterns related to both infection status and the presence or absence of histological inflammation (203). B-cell- and macrophage-related genes were particularly enriched, including CXCL13, a macrophage-derived B-cell chemokine. Transcriptomic findings, combined with additional reverse transcription-PCR (RT-PCR) and immunohistochemical analyses, led the authors of that study to speculate that macrophage CXCL13 drives B-cell recruitment to the placenta, antibody production, and further antibody-mediated activation of inflammation, in a pattern suggestive of lymphoid neogenesis (203). A subset of these differentially expressed immune response genes was negatively correlated with birth weight (203). This is a particularly important finding since low infant birth weight is a major risk factor for infant mortality (204). (b) Parasite factors. There are few studies examining parasite transcriptomes in association with the pathogenesis of noncerebral severe malaria. Dual RNA-seq revealed 126 parasite genes significantly associated with high fever, with the strongest association occurring with PF3D7_0500900 (serine/threonine protein kinase, FIKK family [FIKK5]) (25).
However, those results should be interpreted with caution since fever spikes are thought to be related to parasite egress from RBCs, which is dependent on the distribution of the developmental stages of the parasites, which in turn is the greatest determinant of parasite gene expression. FIKK5, for example, shows substantial variation in expression across the developmental cycle in vitro (86). That same study identified 234 parasite genes associated with the proportion of whole-blood reads mapped to the parasite (a proxy for circulating parasite density), but these genes were poorly annotated at the functional level, making it difficult to understand how they may be related to the parasite load (25). Comparison of parasite gene expression levels has been performed with a small number of adults with noncerebral severe malaria (predominantly subjects with renal impairment) versus those with uncomplicated malaria by using a custom microarray designed to identify a broader repertoire of genes from variant gene families of parasites (205). Given the small size of that study, it was surprising that 380 genes were identified as being differentially expressed, with a notable downregulation of genes associated with host cell entry in severe malaria as well as enrichment for metabolic processes and RNA splicing and the differential expression of a range of variant surface antigens (205). The generalizability of these findings remains to be determined. In pregnancy-associated malaria, transcriptomic analyses of parasites from placental malaria have found modest numbers of differentially expressed genes other than var2csa, many of which are thought to be exported into erythrocytes but do not have well-defined functions (80,201,202). This suggests that beyond the importance of var2csa, additional mechanisms of the host-parasite interaction may also play a role in susceptibility to malaria in pregnancy. (c) Host-pathogen interaction. The association between host gene expression and parasite load is of interest because the parasite load is a determinant of severity and because restriction of the parasite load would indicate protection. Several studies of different patient groups have related host gene expression to parasitemia and found fairly consistent associations with the upregulation of neutrophil-, interferon-, phagocytosis-, complement-, and heme degradation-related genes (24,25,126). Assessment of the association of host gene expression with PfHRP2 concentrations, as a proxy for the total body parasite load, has not yet been done. The whole-body parasite load (which includes both circulating and sequestered parasites) is generally a much better predictor of outcome than circulating parasitemia (18,26-30), and so this analysis will be important in future studies. A direct correlation of host and parasite gene expressions was possible in only one study to date, which used dual RNA-seq to analyze whole blood from uncomplicated P. falciparum malaria cases in Indonesia (25). Host innate immune response genes were negatively correlated with parasite metabolic process genes, perhaps suggesting a direct impact of the host response on the restraint of parasite metabolism. However, most of the correlations between human and parasite gene expressions were positive, and among these genes, the most notable were human transcription factor genes positively correlated with parasite translational regulation (25).
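A minimal sketch of the kind of host-parasite cross-correlation analysis enabled by dual RNA-seq is shown below, using simulated module scores; in real data, the parasite developmental stage would need to be handled as a covariate, as the next paragraph cautions. Module identities, sample sizes, and effect sizes are all hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical dual RNA-seq summary: scores for 5 host gene modules and
# 5 parasite gene modules across 30 patients (rows = modules).
host = rng.normal(size=(5, 30))
parasite = rng.normal(size=(5, 30))
parasite[0] -= 0.8 * host[0]  # e.g., host innate immunity vs parasite metabolism

# Spearman correlation of every host module against every parasite module;
# report only the strongest associations.
for i in range(5):
    for j in range(5):
        rho, p = stats.spearmanr(host[i], parasite[j])
        if p < 0.01:
            print(f"host module {i} vs parasite module {j}: rho = {rho:.2f}, p = {p:.3g}")
```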
Despite caveats about a confounding effect of the parasite developmental stage in that study, it is intriguing to speculate that those observations may indicate a reciprocal regulation of fundamental biological processes between the host and parasite. Biomarkers. Identifying reliable biomarkers can improve diagnosis and the prediction of outcomes of infectious diseases. This is particularly important for malaria because while some individuals living in high-transmission settings will have parasitemia, which is not the cause of their illness, others will definitely have malaria but despite antimalarial treatment may develop severe disease, and others will have coinfections with malaria and bacterial pathogens, both of which require treatment (11,27,28,132). Identifying these groups could stratify clinical care as well as studies of immunology, pathogenesis, and treatment. Biomarkers that can predict outcomes in animal models have the potential to improve the power of these models and to benefit the welfare of the animals by allowing accurate sample size calculations and the use of less-severe endpoints. Given the complexity of malaria pathogenesis, biomarker discovery may provide more than just a disease indicator but also greater insight into the cellular responses and physiology of the host and the pathogen. Increasingly rigorous methodology is now being applied to biomarker discovery, such as standards for reporting of diagnostic accuracy (STARD) (206), and transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) (207). Validation is required with multiple independent data sets along with standardized reporting of how well the biomarker predicts the outcome of interest. Most transcriptomic biomarker discovery studies of malaria have not met this standard of evidence; nonetheless, they suggest that robust biomarkers might be found in the future. (i) Diagnosis of malaria. Since we already have good tools to detect Plasmodium using microscopy, point-of-care antigen tests, and even PCR-based parasite detection (11), the utility of transcriptomics for the diagnosis of malaria may not be obvious. Gene expression signatures can be derived to distinguish diseases that have similar clinical presentations, such as distinguishing bacterial from viral infection (208) or tuberculosis from nontuberculous infection (209), demonstrating a proof of principle that each disease has a unique transcriptional signature that might form the basis of a rapid test to guide initial clinical management (210,211). In a country where malaria is endemic, any such test would need to be able to distinguish malaria from other febrile illnesses, and so a gene expression signature of malaria would be essential. One of the first transcriptomic studies of malaria suggested that this would be feasible because unsupervised clustering was able to largely separate febrile malaria from other acute febrile illness (24). Clusters were particularly separated based on neutrophil-related genes and erythroid-related genes; however, modern variable selection methods and testing with an external validation data set would be needed to develop a sufficiently small, sensitive, and specific gene expression signature for diagnostic use. (ii) Predictors of severity or death. The majority of transcriptomic studies attempting to identify biomarkers of severe outcomes have been conducted with animal models rather than humans. 
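As a sketch of the variable-selection workflow implied here (and recommended later in this review; see Table 2), an L1-penalized classifier can reduce hundreds of transcripts to a small candidate signature, which must then be confirmed on held-out data and validated externally. The data below are simulated, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical cohort: 200 febrile patients x 500 transcripts;
# label 1 = malaria, 0 = other febrile illness. Five transcripts
# are made informative; everything else is noise.
X = rng.normal(size=(200, 500))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1 penalty shrinks most coefficients to exactly zero, leaving a small
# candidate signature; an external validation set would still be required.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
signature = np.flatnonzero(clf.coef_[0])
print(f"selected {signature.size} transcripts; held-out accuracy = {clf.score(X_te, y_te):.2f}")
```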
In a comparison of gene expression patterns in mouse brains from four different strains, 31 genes were found to distinguish ECM susceptibility from resistance regardless of the mouse strain (174). The utility of gene expression biomarkers in brain tissue is limited, since this tissue cannot easily be sampled without killing mice, but it may provide a more refined experimental endpoint that does not require mice to develop end-stage ECM. Gene expression biomarkers in peripheral blood are arguably more useful, and putative examples have been identified from a comparison of the whole-blood transcriptomes of ECM-resistant and ECM-susceptible mice (173). Of a large number of differentially expressed genes detected by microarray analysis, a smaller number was confirmed by RT-PCR, and several of these genes (c1qb, DnaJC15, and tk1) were different enough between groups to be considered candidate biomarkers. C1Q was also identified as a putative biomarker of severe human malaria in a pilot study of 10 Malian children (171). Seven other candidate markers were all related to the immune response, including TLR genes. Biomarker discovery for diagnosis or prognosis is not restricted to host gene expression only. Potential parasite biomarkers in humans have been investigated by transcriptomic profiling of parasites taken from children with CM, uncomplicated malaria, and asymptomatic malaria (212). Of these comparisons, only that between CM and asymptomatic malaria yielded differentially expressed genes: 99 upregulated and 135 downregulated in CM. Further characterization of these genes revealed a large proportion of upregulated genes encoding Plasmodium exported proteins and variant surface antigen proteins, such as PfEMP1 and RIFIN, and downregulated genes included those encoding rhoptry-associated proteins, merozoite surface proteins, and Maurer's cleft two-transmembrane domain protein (212). (iii) Markers of parasite drug resistance. Resistance to antimicrobial agents, including antimalarial drugs, is a growing problem (213,214). Widespread resistance could set back many of the gains made by malaria control initiatives over the last decade (11,214). Artemisinin and its derivatives are the mainstay of antimalarial treatment globally, and emerging resistance is starting to spread from foci in Southeast Asia (11,214,215). Markers of artemisinin resistance were sought in a microarray-based transcriptomic study of 1,043 P. falciparum clinical isolates from 13 regions of Southeast Asia and Africa where malaria is endemic (216). The identification of genes significantly associated with the parasite clearance half-life strongly implicated the parasite unfolded protein response in mediating artemisinin resistance and also showed an association with reduced expression levels of DNA replication genes, presumptively linked to relative resistance in slower-developing early-blood-stage parasites.
LEARNING FROM TRANSCRIPTOMIC STUDIES OF MALARIA
Key Insights
The ups and downs of type I interferons. The high frequency of AT-rich motifs in Plasmodium genomes, such as the ATTTTTAC motif in P. falciparum, is implicated in the induction of type I interferons (127). The role of type I interferons in malaria has been enigmatic because of conflicting data from animal models indicating roles in both protection and pathology (127,142,199,217,218).
Emerging findings suggest that the timing, source, and regulation of type I interferons are crucial (219), and it is interesting to note that this might have been inferred earlier from data in the transcriptomic literature. In humans and mice, type I interferon-related genes are consistently induced in blood early in infection (in uncomplicated disease) (126,127,129,173), but relative downregulation is seen in later, more severe infection or with a higher parasite load (129,173,187) (Fig. 3). Interestingly, this type I interferon signature is found in PBMCs (127), whole blood (129,173), and isolated neutrophils (220). In contrast, increases in type I interferon-related gene expression levels in brain are temporally associated with the onset of ECM (69,144,174,175,178,179), whereas higher early expression levels in spleen are associated with a better control of parasitemia (199), and sustained high expression levels in spleen are associated with ECM (217). This fits very well with the evolving paradigm, developed through functional studies, that an early burst of type I interferons may be necessary to enhance innate and adaptive responses, while sustained production may suppress pathological immune responses but risks compromising the control of the parasite load (218,219). Thus, the timing of the up-and downregulation of type I interferon signaling determines a fine balance between immunopathology and parasite survival, and influencing this balance through the kinetics of AT-rich DNA release may have evolved as a parasite strategy to manipulate the host response (142,218). Pathological effects of neutrophils. Neutrophils are one of the most numerous leukocyte populations in blood, yet there has been disproportionately little investigation of their role in malaria. Higher neutrophil-associated gene expression levels are a consistent feature of analyses of whole-blood transcriptomes from uncomplicated and severe human disease (24,126,187). In contrast, the sparse data from rodent models do not show an induction of a neutrophil-associated gene expression signature in whole blood in ECM (173) but show an association with lung pathology (137). The simplest explanation for these differences might result from changing proportions of neutrophils in blood. Relative neutrophilia is frequently described in human cases of malaria (221,222), particularly in association with severe disease and high-level parasitemia, whereas this is not always the case in rodent malaria models. However, differences in neutrophil counts are not the only explanation. In cases of retinopathy-positive CM, neutrophil gene expression signatures were greatly increased in comparison to those in cases of retinopathy-negative CM despite very similar neutrophil counts (187). Another explanation may be genuine transcriptional differences between whole-blood neutrophil populations, possibly arising from the mobilization of immature neutrophils from the bone marrow. This may explain the particular enrichment of genes encoding neutrophil granule proteins, being regulated during neutrophil development and particularly expressed in immature neutrophils (223). Consistent with this, immature neutrophils and elevated circulating levels of their granule proteins are detected in cases of severe and uncomplicated malaria (221,224). Many neutrophil granule proteins can be damaging to host tissues, and it is tempting to speculate that this indicates a pathogenic mechanism in humans. 
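The distinction drawn above, between signature changes explained by cell counts and genuine transcriptional differences within neutrophils, can be probed by adjusting the signature for measured counts. A minimal simulated sketch follows; the counts, scores, groups, and effect sizes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60

# Hypothetical data: blood neutrophil counts and a neutrophil gene
# signature score per patient, plus retinopathy status (1 = positive).
counts = rng.normal(5.0, 1.5, n)
retinopathy = (rng.random(n) < 0.5).astype(float)
score = 0.8 * counts + 1.0 * retinopathy + rng.normal(0, 1, n)

# Regress the signature on counts; if the residual score still differs by
# retinopathy status, the signature is not explained by cell numbers alone.
slope, intercept = np.polyfit(counts, score, 1)
resid = score - (intercept + slope * counts)
diff = resid[retinopathy == 1].mean() - resid[retinopathy == 0].mean()
print(f"count-adjusted signature difference (retinopathy+ vs -): {diff:.2f}")
```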
Female C57BL/6 mice, which are commonly used in malaria research, have among the lowest circulating neutrophil counts of all mouse strains (225). Although neutrophil mobilization from bone marrow has been described for rodent malaria (226,227), neutrophil counts in peripheral blood can go up or down (226-228), possibly reflecting the rate of egress from bone marrow, trafficking to other organs, and cell death. Proving a functional role for neutrophils in rodent model pathogenesis, and extrapolating the data to human disease, is also challenging. Antibody depletion of neutrophils has been used to assess their role, but common antibodies used for this purpose can also deplete monocyte subpopulations (229,230). Despite this, additional evidence supports the roles of neutrophils in malaria-induced lung and liver injury (220,228) and of a neutrophil subpopulation in ECM (227). The prominence of neutrophil-related gene expression in human cases of malaria clearly indicates that more work is needed to understand exactly what these cells contribute to both defense and disease. Variations in parasite gene expression in vivo. The broad similarity of in vivo and in vitro parasite transcriptomes, particularly after accounting for the parasite developmental stage, is discussed above. However, the amount of variation from the in vitro "standard" that occurs in vivo remains to be fully quantified, as do the implications of any in vivo variation. Differences in parasite gene expression detected between different in vivo situations (117,216), including those under antimalarial drug pressure, suggest that the in vivo variation can be both substantial and responsive to the within-host environment, even if, on average, it is similar to the in vitro situation. The implications of such in vivo variation may be very important. The use of laboratory-adapted parasites for high-throughput drug and antibody screens may either overestimate or miss in vivo effectiveness. Unfortunately, the factors that drive in vivo variations are currently poorly defined, but their characterization might allow the adaptation of culture conditions to recreate elements of in vivo variability.
Closing Gaps in Knowledge: General Principles
How similar are animal models and human disease? Although it is clear that there are differences between severe malaria in animal models and human disease (172,180,197), there is no agreed way to quantify the importance of the differences or similarities for understanding pathogenesis. This is not unique to malaria research; similar controversies arise for other infectious diseases, such as tuberculosis (231), meningococcal disease (232), and typhoid (233). One way to objectively identify similarities would be through systematic transcriptomic comparisons between species, across organs, at different time points, and over the spectrum of severity (234). The easiest comparisons in practice will be comparisons of blood transcriptomes between humans and rodent models, as has been tried for tuberculosis (231), but comparisons with other tissues will also be needed for a complete understanding (234). In this way, the components of a model that have relevance to human disease may be identified and studied further, even if other aspects of the model are different.
Studies with outbred, wild-derived, or true-wild mice may also be highly informative because the greater genetic variation between these mice (235) may increase variations in gene expression associated with variations in the parasite load, severity of anemia, or organ dysfunction at the same time point during infection, recapitulating the variability found in humans. Variation in pathogen genomes. The involvement of the highly polymorphic var, rif, and stevor multigene families in the pathogenesis of severe malaria is attracting increasing interest (19,236), but accurately determining the expression levels of these and other highly polymorphic genes in global gene expression analyses remains challenging because of current reliance on a reference genome for the quantification of transcript levels. For bacterial pathogens, a similar challenge can arise through different mechanisms, with the core genome potentially being supplemented by a flexible genome of additional genes present at various frequencies (237). The flexible genes may be acquired or lost by mutation and selection during the course of population growth and by the transfer of genetic material between bacterial species. One strategy for improving quantification is a targeted approach sequencing just the most abundant variant genes, as was recently applied to the P. falciparum var transcriptome (192). A more comprehensive strategy would be complete genome and transcriptome sequencing on the same pathogen isolates. Assembly would be facilitated by longer sequence reads using, for example, MinION and PacBio sequencing technologies (238,239). Cost remains an issue for this approach, particularly because it would be important to collect as many clinical isolates as possible to obtain a complete picture of their complexity and rigorously define associations with severity. Are we sampling the right tissues? The most useful transcriptomic information about the host response or host-pathogen interactions will likely be derived from RNA extracted from specific tissues rather than bulk sampling of tissues. For example, analysis of whole brain or whole spleen, composed of many cell types with differing functions, may misrepresent specific interactions involved in protection or pathogenesis. Ideally, we would like to investigate much more specific in vivo interactions in malaria (Fig. 1), for example, mosquito saliva, sporozoites, and skin; Kupffer cells, hepatocytes, and sporozoites; hepatocytes, tissue schizonts, and CD8 lymphocytes; and brain endothelial cells and sequestered parasites. Techniques are evolving for the microscopic dissection and capture of specific cells from tissues and for the purification of single cells from blood and organs (36,240) (Fig. 2). The combination of these techniques with RNA-seq protocols suitable for histological specimens and low-quantity RNA now opens the door to interrogating these interactions in great detail (240). Which clinical phenotypes should be studied? Future studies need to give careful consideration to the information that might be gained from comparisons of different clinical phenotypes. To better understand the pathogenesis of severe malaria, there is a clear need for a dual RNA-seq analysis comparing cases of severe and uncomplicated malaria. Deeper insights may be gained from comparisons of subjects with discrete phenotypes of severe malaria, such as severe anemia, acidosis/hyperlactatemia, and CM.
Comparisons between subjects with malaria and subjects with asymptomatic infection, matched for parasite loads, may be particularly helpful to understand the nature of antidisease immunity. Are we looking at the right stage of infection? Looking at events preceding the onset of severe disease due to infection is necessary to understand the underlying pathogenic mechanisms, but targeting these mechanisms when a patient presents with severe disease may be ineffective because the damage has already been done. There are notably no studies of malaria examining gene expression profiles associated with recovery from severe disease, during the first few days after the initiation of antimalarial treatment, and such studies may be much more informative for the identification of targets for adjunctive therapies to improve outcomes of malaria and other infectious diseases. Association and causation. Within transcriptomic data sets, it is inevitable that the expression of many genes will be highly correlated with other genes and with outcomes of interest, and this makes it particularly difficult to separate association from causation in observational studies. We believe that there are approaches that might enhance the identification of causal relationships. One relatively simple approach is to look at differences between different biological phenotypes of the outcome of interest. For example, comparisons of uncomplicated malaria with each of the different phenotypes of severe malaria (hyperlactatemia, CM, and severe anemia) may reveal gene expression common to all severe phenotypes and gene expression specific to each one. Since the severe phenotypes share many common risk factors, such as parasite load (13), we would expect the genes specific for each phenotype to be more likely to have causal relationships. Similarly, quantitative or dose-response relationships can be used to identify those genes that are highly correlated with specific features, such as the platelet count or lactate concentration. Even stronger evidence may come from one of the most powerful epidemiological methods for demonstrating causation, Mendelian randomization (241). This epidemiological technique uses natural population variation in a gene with a known function to examine the causal effect of an exposure, usually requiring a large set of genotyped samples. A seminal example is the use of the protective effect of sickle-cell trait against developing malaria to demonstrate that malaria causes susceptibility to bacterial infections in humans (242). A similar approach might be employed to estimate the proportion of gene expression that is responsible for limiting the parasite load rather than occurring solely as a consequence of the parasite load. Sickle-cell trait or other red cell polymorphisms that limit parasite growth independent of the host immune response might be used as the instrumental variable; a simulated sketch of this two-stage logic follows below. Regardless of the approach, new hypotheses may be generated based on gene expression data, which can be tested by using reductionist approaches in vitro or in animal models for experimental validation. In humans, causality can be assessed directly in interventional studies, with transcriptomics providing a means to elucidate the mechanism of action of new treatments in clinical trials.
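To make the instrumental-variable logic concrete, the following is a minimal simulated sketch of two-stage least squares, the standard estimator behind Mendelian randomization. The effect sizes, and the framing of sickle-cell trait as an instrument for parasite load, are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000

# Simulated illustration of the Mendelian randomization logic: sickle-cell
# trait (the instrument) reduces parasite load, and an unmeasured
# confounder drives both parasite load and a host expression readout.
trait = (rng.random(n) < 0.15).astype(float)
confounder = rng.normal(size=n)
load = 2.0 - 1.0 * trait + confounder + rng.normal(size=n)
expr = 0.5 * load + confounder + rng.normal(size=n)  # true causal effect = 0.5

# Naive regression of expression on load is biased by the confounder.
naive_slope = np.polyfit(load, expr, 1)[0]

# Two-stage least squares: predict load from the instrument, then regress
# the outcome on the predicted (confounder-free) component of load.
s1, i1 = np.polyfit(trait, load, 1)
load_hat = i1 + s1 * trait
iv_slope = np.polyfit(load_hat, expr, 1)[0]
print(f"naive = {naive_slope:.2f}, 2SLS = {iv_slope:.2f} (truth 0.5)")
```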
Lessons for Study Design and Analysis
It is clear that many studies on malaria had suboptimal elements in their design. Principles for improving the design of malaria studies will also apply to many other infectious diseases (Table 2). The overarching principle is that if a study is not designed to answer the research question, no novel statistical approach and no amount of data mining are likely to be able to answer the question. Community guidelines for both microarray and RNA-seq reporting already exist and will likely evolve to keep track with increasingly complex analytical approaches (243,244).
TRANSCRIPTOMICS IN FUTURE HOST-PATHOGEN RESEARCH
Fifteen years of transcriptomic studies of malaria have brought many insights but also highlighted the complexity and challenges of studying host-pathogen interactions in this way. For other infectious diseases, some of these challenges may be even greater. In the next decade, advances in transcriptomics will likely transform our understanding of host-pathogen interactions in many infections. Specific examples might include unraveling the triggers and consequences of bacterial toxin and protease production in virulent staphylococcal and streptococcal infections; establishing the mechanisms underlying clearance, latency, or progression in tuberculosis; and defining protective and harmful host-pathogen interactions in emerging infections, such as Ebola and Zika viruses. This will be facilitated by technical advances, such as long-read sequencing technologies and single-cell sequencing, that will allow unprecedented levels of detail to be described. The greatest challenge may become the synthesis of the huge amounts of data and detail into a model that is comprehensible to the human mind. We believe that important insights will come from the integration of multiple layers of data, such as transcriptomic, genomic, proteomic, and metabolomic data (1), combined with comprehensive descriptions of clinical and other pathophysiological features of infections. This will need to be done in increasingly large (and therefore costly) studies, because the more layers of data that are integrated, the greater the dimensionality. Methods for effectively reducing this dimensionality with minimal compromise in discrimination will no doubt evolve in parallel. Usefulness and generalizability will need to be maximized by the collection of consistent types of data between studies so that their combination and meta-analyses can be performed. To ensure the translation of discovery to clinical application, researchers will need to better characterize the clinically relevant components of their model systems and achieve optimal sampling of tissues with the most-relevant host-pathogen interactions. This will require collaboration across disciplines, bringing transcriptomics together with advances in imaging, minimally invasive sampling, microscopy, and viable-cell separation. Studies with humans will be greatly enhanced by embedding within formal epidemiological studies with rigorous design principles. Clinical trials of antimicrobials and adjunctive therapies represent rare opportunities where humans are subjected to experimental manipulation, and collecting samples for transcriptomic analyses will be helpful for mechanistic interpretations of their results. Mathematical developments will likely facilitate new approaches to studying trajectories of infectious diseases, overcoming the inherent limitation that naturally infected human subjects present only a snapshot view of the dynamic process of infection (136).
Usefulness and generalizability will need to be maximized by the collection of consistent types of data between studies so that their combination and meta-analyses can be performed. To ensure the translation of discovery to clinical application, researchers will need to better characterize the clinically relevant components of their model systems and achieve optimal sampling of tissues with the most-relevant host-pathogen interactions. This will require collaboration across disciplines, bringing transcriptomics together with advances in imaging, minimally invasive sampling, microscopy, and viable-cell separation. Studies with humans will be greatly enhanced by embedding within formal epidemiological studies with rigorous design principles. Clinical trials of antimicrobials and adjunctive therapies represent rare opportunities where humans are subjected to experimental manipulation, and collecting samples for transcriptomic analyses will be helpful for mechanistic interpretations of their results. Mathematical developments will likely facilitate new approaches to studying trajectories of infectious diseases, overcoming the inherent limitation that naturally infected human subjects present only a snapshot view of the dynamic process of infection (136). Approaches have already been developed for fate mapping of single cells based on transcriptional similarities over a developmental continuum (245), and a similar approach may be taken to the course of human disease, where the transcriptome of an individual at a single point in time might be mapped onto a disease trajectory from asymptomatic to mild to severe by reference to the transcriptomes of each disease state. Transcriptomic studies fairly consistently show that different gene expression signatures characterize infections with different pathogens and different stages of infection with the same pathogens (210, 211). This means that there is a high likelihood that various selection methods applied to transcriptome analyses in any infectious disease will reveal small sets of transcripts as diagnostic or prognostic biomarkers. This has the potential to revolutionize many aspects of clinical infectious disease practice if these could be turned into rapid tests administered at the point of care. Currently, commercial methods to allow these RNA signatures to be used as near-patient tests do not exist (211), but as the potential impact of and demand for such methodologies are realized, it seems inevitable that the technological problems will be solved. In conclusion, we are optimistic about the advances in the understanding of host-pathogen interactions that will come from future transcriptomic studies and the benefits that may come from these studies if they are well conducted. We predict that the results of these studies will force researchers to consider much more dynamic models of infection, where changes in the host response may affect pathogen behavior and vice versa, and that the simultaneous study of host and pathogen will increasingly be viewed as being essential.
PROFESSIONAL, ACADEMIC AND INDUSTRIAL DEVELOPMENT NEEDS: A COMPETENCY MAPPING AND EXPERT OPINION REVIEW

There is a tripartite pull from academics, industry and professional bodies on the development needs of the Quantity Surveyor (QS). At best, there is scope for misunderstandings between the stakeholders as to what is being required and what is being achieved. At worst there may be actual gaps in the education and/or training being offered and some discrepancies between the levels of attainment. This research sought to review the Royal Institution of Chartered Surveyors (RICS) QS competencies and their application in the delivery of QS degree programmes. The changing development needs of QSs who satisfy the aspirations of industrial, professional and academic stakeholders were investigated through content analysis of the views of an expert forum consisting of relevant stakeholders and a series of competency mapping case studies. The study revealed that there are considerably different standards right across the RICS accredited QS programmes with respect to coverage of competencies. It is concluded that there is no standard benchmark in achieving competencies and it is open to individual interpretation. Further research in the development of a Graduate Competency Threshold Benchmark is suggested to align the disparate views of the stakeholders to accommodate changing development needs.

INTRODUCTION

Quantity Surveying is the profession that is well established in the British Commonwealth as being responsible for the management of cost and contracts in the construction industry (RICS, 1971, 1983; Male, 1990; Pheng and Ming, 1997; Bowen et al., 2008; Ling and Chan, 2008). The profession is also known as Construction Economics in Europe and Cost Engineering in the Americas and parts of Asia (Rashid, 2002; Pathirage and Amaratunga, 2006; Smith, 2009). The academic, professional and training needs of Quantity Surveyors are pulled by three different stakeholders in three different directions (Figure 1). Academics are interested in producing a rounded graduate with the basic foundation of knowledge for further development, whereas professional bodies are interested in graduates who can be progressed to full professional status through the achievement of the required core competencies (RICS, 2009a, 2009b; Perera and Pearson, 2011). The industry is looking for a graduate who can straight away contribute both to the daily functions of business activity and to its growth. Hence, there is a tripartite pull on the development needs of the Quantity Surveying graduate. The present education system of the Quantity Surveyor does not recognise these multi-directional needs and hence often produces a graduate whom the industry sees as not fulfilling their requirements (Wong et al., 2007; Lee and Hogg, 2009; Perera and Pearson, 2011). This leads to many problems, with greater levels of employer and graduate dissatisfaction and obstacles to early career development of the QS graduate. These conflicting concerns have long fuelled the "education versus training" debate and some conflict between Educators and Employers, through which the RICS steers a sometimes difficult path.
On the one hand it sends messages to the universities that it wishes to see programmes which lean more towards the "academic" rather than the "technical", whilst on the other hand it sends messages to employers that they should accept graduates issuing from its accredited degree programmes as being appropriately qualified to take positions at higher than technician grade (for which the RICS itself has a specific training route via the HND/Foundation Degree). This can create ambiguities and wrong impressions in the industry, creating conflicts in expectations. For its own part, the RICS has created a set of Core Competencies which, if they are to be fully achieved by candidates for membership, require active cooperation between the academic sector (providers of basic subject knowledge and certain academic skills) and the industrial sector (providers of practical skills training) through the operation of their business.

Current needs of quantity surveying graduates

Significant growth in undergraduate-level education of Quantity Surveyors stems from the late 1960s and early 1970s, with the switch from Diplomas in Quantity Surveying, firstly to Ordinary degrees and, within a few years, to Honours degrees. Following the 1971 RICS report "The Future Role of the Quantity Surveyor" (RICS, 1971), which identified specific competencies at the time, the profession began to evolve rapidly, and in 1983 a further report was produced, "The Future of the Chartered Quantity Surveyor" (RICS, 1983), as if to further consolidate the professional status of the QS. Just over twenty years ago, with the publication of the document "QS2000" (Davis Langdon and Everest, 1991), there was recognition of a number of forces acting on the QS profession, highlighting both the changes to the client body and to the construction industry (Fan et al., 2001a, 2001b; John, 2002; Fellows et al., 2003; Rick, 2005; Cartlidge, 2006; Ling and Chan, 2008; Senaratne and Sabesan, 2008; Maidin and Sulaiman, 2011). Both the RICS and the educational sector show similarities in their lack of appreciation of the specific requirements industry may have of its newly graduated student members. At the same time the industry does not seem to appreciate that a graduate is a person with the higher intellectual capacity to rapidly develop their professional skills and technical knowledge further once in employment (Perera, 2006; Lee and Hogg, 2009; Simpson, 2010). This conflict and lack of alignment of industry, academic and professional perspectives create a barrier to the development of the profession as well as to the career development of the graduate Quantity Surveyor. Added to this is a more fundamental failure on the part of all parties to appreciate the dynamics of the market sector. The majority of new graduates appear to be entering more non-traditional quantity surveying routes (Perera, 2006; Perera and Pearson, 2011). It has been shown both through research (Perera, 2006) and through records of first destination surveys (UNN Returns, 2001) that a large majority of new graduates find employment not in Private Consultancy Practice (PQS) or the Public Sector, as was the case until the mid-1980s, but with Main Contracting and specialised subcontracting organisations. Perera (2006) shows that in the University of Ulster more than 80% of graduates either seek employment or prefer to be employed in the non-PQS sectors of the industry. The situation is very similar in many other universities in the UK.
Feedback from Assessment of Professional Competence (APC) workshops has noted a certain Private Practice bias within the presentation of advice, and indeed there is feedback at university level suggesting this. Much of the academic content and the structure of the RICS itself would both seem directed at those employed in the PQS and Government sectors, paying less attention to the skills inherent in the role of the Contractor's Surveyor (Simpson, 2010). For their part, those engaged in developing Quantity Surveying within the construction sector may see this as another barrier to cooperating with the RICS when required. This is evident from the fact that RICS membership does not grow in the same proportion as the growth in Quantity Surveying student numbers (Perera, 2006). The emergence of Commercial Management (Walker and Wilkie, 2002; Lowe and Leiringer, 2006) as a distinct discipline encompassing the role of the contractor Quantity Surveyor is a fact that the RICS should consider in detail in its future development of career paths for the Quantity Surveyor. Leading Quantity Surveying professional bodies the world over have already begun to recognise these developments and trends. For example, the Australian Institute of Quantity Surveyors (AIQS) recently established a separate pathway for contractors' Quantity Surveyors to complete their professional qualification.

RICS assessment of professional competence

Competence-based education started in nursing education in the 1970s (Trivett, 1975; Ewens, 1979; Cowan et al., 2007) and has gained popularity in many other disciplines in formal and informal education and training all around the world over the last forty years (Mole et al., 1993; Meyer and Semark, 1996). Professional accreditation bodies in the built environment have also been advocates of a competency-based approach (Newton, 2009). The entry of graduates and others into any professional group of the Royal Institution of Chartered Surveyors (RICS) as fully qualified Chartered Surveyors comes only after they have successfully passed the Assessment of Professional Competence (APC). This is true of the Quantity Surveyor, the specific subject of this study, as much as of any other. Key to this is the demonstration, by the candidate, of their having attained certain competencies determined by the Education and Membership Board of the RICS. In the case of the graduate, these competencies will have been acquired both through their formal university education and the workplace training which they have received, whether as part-time students in employment or during a work placement. In either case, the applicant will have undertaken a period of full-time employment beyond graduating, further adding to the in-service training element of their overall skills profile. It will be appreciated that there is a balance to be struck between the level and type of competence which should be expected, and can be achieved, in the universities and that which arises out of exposure to experience only available within the workplace. To some extent the two must be complementary, and it has emerged over the years that both Academia and Industry have certain expectations of one another, rightly or wrongly, as to what the other can and will achieve as a vehicle for graduate learning. These expectations are encapsulated, for some, in the arguments within the "education versus training" debate that has dogged the relationship for as many years as formal Quantity Surveying education has existed.
From the above it will be seen that, at best, there is scope for misunderstandings between the stakeholders as to what is being required and what is being achieved. At worst there may be actual gaps in the education and/or training being offered and received or, at least, some discrepancies between the levels of attainment. In summary, it is suggested that the present education system of the Quantity Surveyor does not recognise the multi-directional needs of the Quantity Surveyor and hence often produces a graduate whom the industry sees as not fulfilling their requirements. A further factor in the willingness on the part of the industry to accept and train new graduates must be resource constraints born of the financial insecurity of the current economic recession, experienced severely by existing Members who might otherwise be more willing to accept the risks and responsibilities of employing and training new recruits. This paper is aimed at investigating the changing developmental needs of Quantity Surveyors who satisfy the aspirations of industrial, professional and academic stakeholders through the analysis of the views of an expert forum consisting of academics, industry and professional body representatives. The research also sought to review competencies and their application in the delivery of QS programmes by mapping all 24 RICS QS competencies against the curricula of four RICS accredited QS Honours degree programmes, reported as four case studies, to provide a full picture of the extent of coverage of competencies in the programmes accredited by the RICS.

RESEARCH METHODOLOGY

The research was carried out in three distinct data gathering phases culminating in data analysis and reporting. The key stages and process are detailed below.

Review. A detailed literature review was carried out to identify the RICS QS competencies and their interpretation.

Competency mapping case studies. A detailed competency mapping exercise was carried out based upon four RICS accredited quantity surveying programmes offered by four leading universities. This involved mapping RICS QS competencies to the individual module specifications of the respective QS programmes. These are referred to as mapping case studies.

Expert forum. This was the catalyst for the identification of key issues related to academia, industry and the RICS. An expert forum consisting of ten specialists was established. A series of interviews was carried out, firstly to identify key issues; subsequently these were used to verify the findings of the competency mapping case studies. The forum comprised three academics (programme leaders), three consultant or project quantity surveyors (PQS), three contractor or commercial quantity surveyors (CQS) and one RICS representative (a member of the RICS Education and Qualification Standards).

Analysis and survey results. The content analysis of the interviews conducted and the competency mapping case studies provided a detailed account of the primary areas of investigation listed below:
1. Mapping competencies to RICS accredited programme curricula.
2. Establishing the expected level of achievement of competencies by graduate quantity surveyors.
The outcomes related to each of these aspects are discussed in detail in the following sections.

RICS QS competency requirements

The RICS competencies are arranged into three groupings (Mandatory, Core and Optional), depending upon their perceived relevance to the role of the Quantity Surveyor. In most cases there is an element of choice.
The RICS distinguishes between three possible levels of attainment in each of a range of competencies when setting its requirements of those seeking membership. Briefly, these are as follows:
- Level 1: Knowledge (theoretical knowledge).
- Level 2: Knowledge and practical experience (putting it into practice).
- Level 3: Knowledge, practical experience and capacity to advise (explaining and advising).
There are 10 Mandatory competencies, 7 Core competencies and 7 Optional competencies (of which only two are to be selected by the candidate). The RICS stipulates that an APC candidate needs to achieve all Mandatory competencies at Level 2 or above, all Core competencies at Level 3 (except one, not relevant to specialisation depending on employment in consulting or contracting practice, which is at Level 2) and 2 Optional competencies at Level 2 or above.

Competency mapping method

The main method of competency mapping involved the use of a two-dimensional matrix comprising QS competencies on the Y-axis (vertical listing) and programme specifications on the X-axis (horizontal listing). Each competency was subdivided into the three Levels (1 to 3). Figure 2 illustrates an example of this mapping matrix created as a protected spreadsheet form. A detailed map scoring system (Table 1) was devised to enable indication of perceived levels of achievement of competencies through the evaluation of the individual module specifications pertaining to a programme. The respondents completing the form were required to make judgements as to what amount of a competency at which Level (Levels 1, 2 or 3) was achieved by each module of a programme.

Mapping process

Competency mapping to programme specifications was carried out in 3 stages:
1. Scoring the mapping matrix by the researchers.
2. Scoring the mapping matrix by programme directors of the respective programmes.
3. Consensus adjustment of scoring by the researchers to eliminate bias.
This three-stage process established the final scores for competency mapping to programme specifications, which were then used for the evaluation explained in this paper. Programme directors of the programmes selected as case studies were requested to complete the matrix form based on their judgement. These case studies are referred to as Case studies A, B, C and D. Each programme director was asked to allocate approximate scores, at each Level, as defined above, on a scale of 0.25 to 1.00, depending upon their estimation of the coverage they achieved for each of the RICS Mandatory, Core and Optional competencies through delivery of the modules making up their undergraduate Quantity Surveying programme. Through this exercise, total scores were achieved in respect of each of the above competencies for each university, together with totals relating to all modules delivered. The scoring carried out by the programme directors was reviewed by the researchers through a discussion process to achieve a consensus view on individual module scores. The aim of this process was to eliminate individual bias in the scoring process and to achieve a reasonable degree of uniformity in the interpretation of scores. The last figure can be split to show total estimated delivery at each of the Levels 1, 2 and 3. There are three possible levels of analysis: the overall total coverage of all competencies for each university, the split between levels for each university, and the individual university's actual coverage of specific competencies. These are each analysed in the following sections.
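As a purely hypothetical illustration of the matrix and scoring system just described, the sketch below builds a small fragment of such a mapping (the module names and all scores are invented, not taken from the case-study programmes; only the competency codes M001 and T074 echo those discussed below) and computes the cumulative, inter-level, and overall totals used in the analysis.

```python
# Illustrative fragment of a competency mapping matrix: rows are
# competency/level pairs, columns are modules, cells are 0.25-1.00 scores.
import pandas as pd

index = pd.MultiIndex.from_tuples(
    [("M001", 1), ("M001", 2), ("T074", 1), ("T074", 2), ("T074", 3)],
    names=["competency", "level"],
)
matrix = pd.DataFrame(
    {
        "Measurement": [0.50, 0.25, 1.00, 0.50, 0.00],
        "Cost planning": [0.25, 0.00, 0.75, 0.50, 0.25],
        "Contract practice": [0.50, 0.25, 0.25, 0.00, 0.00],
    },
    index=index,
)

per_competency_level = matrix.sum(axis=1)  # cumulative score per competency/level
per_level = per_competency_level.groupby(level="level").sum()  # inter-level split
overall_total = float(matrix.values.sum())  # a programme's overall total score

print(per_competency_level, per_level, f"total = {overall_total}", sep="\n\n")
```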
Overall total coverage of all competencies by universities

There is some variation between the universities studied. Two universities return total scores of 45 to 48, as against the other two, which both score 37, a difference between the two pairs of 25%. This would seem to be a significant variance, given that all are offering broadly the same overall programme of delivery and assessment, within broadly similar timescales, and all leading to the same award.

Inter-level split across universities

The aggregated level of competency mappings for each university is evaluated in Table 3. The main reason for the high level of variance between total coverage of competencies (Table 2) is the variance built in due to different volumes of coverage at Level 1. Both Level 2 and Level 3 scores are very similar between universities. This suggests that they have a similar appreciation of the significance of the value of the higher two levels required of new graduates by the RICS. As would be expected, in all cases the total score for Level 1 far exceeds that for Level 2, and that for Level 2 is far in excess of that for Level 3. Level 3 hardly features at all, as one might expect, for it is a competency level only expected of candidates at the time they come to sit their APC, one year or more after graduating.

Coverage of specific competencies by universities

This section examines the coverage of competencies at the three different levels by the programmes studied. These are analysed separately for Mandatory, Core and Optional competencies.

Coverage of mandatory competencies

Mandatory competencies can generally be expected to be achieved at Level 1. Figure 3 shows how each university performed in coverage at Level 1. The yellow benchmark line has been set at 1 to indicate substandard coverage of competencies; a score of 1 or above indicates fully achieving a competency at the respective level. It is clear that there are many competencies (M001, M002, M003, M005, M006 and M008) that have not been adequately covered even at Level 1.

Coverage of core competencies

The coverage of the Core competencies presents the most important analysis, as these competencies are vital for the function of the quantity surveyor. Figure 4 (Core competency mapping scores at Level 1) illustrates the coverage of Core competencies by universities. When using a benchmark score of 1, all universities have achieved this for all competencies. However, as a cumulative score is used, this may not fully represent the required level of achievement of a competency. Figure 5 (Core competency mapping scores at Level 2) indicates the Core competency coverage at Level 2. It is clear that, set against a benchmark score of 1, there is inadequate coverage of all competencies across all universities except for T074 Quantification and Costing of Construction Works. The scoring for mapping was carried out based primarily on scoring by programme leaders. In the absence of a detailed specification to indicate what level of content coverage is required for a competency to be achieved, it is difficult to have a uniformly interpreted outcome.
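The benchmark test applied at each level in Figures 3 to 6 is straightforward to express programmatically. In this minimal sketch the Level 1 scores are invented placeholders, not the case-study figures; only the benchmark rule (a cumulative score below 1 indicates inadequate coverage) reflects the analysis above.

```python
# Flag competencies whose cumulative Level 1 score falls below the benchmark
# of 1, i.e. the test applied in Figures 3-6. All scores here are invented.
import pandas as pd

level1_scores = pd.Series(
    {"M001": 0.75, "M002": 0.50, "M003": 0.90, "M005": 1.10, "T074": 2.25}
)
below_benchmark = level1_scores[level1_scores < 1.0]
print("Below the Level 1 benchmark:", ", ".join(below_benchmark.index))
```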
Coverage of optional competencies

Only two Optional competencies are required to be addressed for the APC. However, universities attempt to cover many Optional competencies in their curricula, often as non-optional modules. There is no guidance from the RICS as to how many, or to what extent (i.e., to which level), these Optional competencies should be completed upon graduation. This is again open to interpretation. Figure 6 (Optional competency mapping scores at Level 1) clearly indicates that the universities do not achieve Optional competencies to a benchmark level score of 1.

Expected achievement of mandatory, core and optional competencies

The RICS QS competencies provide the basis on which a quantity surveyor will be judged as to their capability to act as an independent, professionally qualified Chartered Surveyor. The respondents were first asked to consider the competencies in general. The RICS representative noted that there are more prescribed Core competencies for QS than for any other pathway. This was, however, to be combined with the understanding that not every competence need be met by the universities and that the RICS welcomed diversity to reflect the individual strengths of each. Industry CQS respondents noted that the competencies were relevant and "do adequately describe what we want". A summary of the expected level of competency is presented in Table 4. These were extracted from the 8 expert forum members who responded to this section. They include 3 academics, 3 CQS and 2 PQS. Also, not all of the 8 respondents have graduate-level expectations for some Optional competencies, such as Capital allowances, Corporate recovery and insolvency, Due diligence, and Programming and planning. The RICS stipulates that an APC candidate needs to achieve all Mandatory competencies at Level 2 or above. Table 4 shows that some of the experts expect graduate QSs to have achieved Mandatory competencies at Level 2 or even Level 3. For some competencies, such as Communication and negotiation, Data management, and Teamworking, this may be expected due to hypothetical-project and multidisciplinary-project modules involving simulations in most QS degree programmes. But for other competencies, such as Business planning, Client care, Conduct rules, ethics and professional practice, Health and safety, etc., it is difficult to see how graduate QSs can achieve this through university education. Table 4 also revealed that most Core competencies are expected to be achieved at Level 2 by graduate QSs. It is, however, worrying that certain academics think that core QS skills such as Design economics and cost planning, Quantification and costing of construction works, etc. should be achieved only to Level 1 despite possibilities for learning at Level 2. More worrying is the expectation of a few industry experts who think that graduate QSs should have achieved Level 3 in Commercial management of construction, Construction technology and environmental services, Contract practice, Design economics and cost planning, and Quantification and costing of construction works. The RICS stipulates that an APC candidate needs to achieve all Core competencies at Level 3 (except one, not relevant to specialisation depending on employment in consulting or contracting practice, which is at Level 2). To gain relevant experience and skills, an APC candidate must have worked for 3 years after graduation. Hence it is difficult to see how graduate QSs will have achieved Level 3 as some of the experts anticipated. Furthermore, the RICS stipulates that an APC candidate needs to achieve two Optional competencies at Level 2 or above in the areas of specialisation. Table 4 shows the experts' expected level of achievement of Optional competencies by graduate quantity surveyors at mainly Levels 1 and 2. Whilst the expectation at Level 2 is questionable, it is interesting to see four experts aiming for Level 3 in Contract administration and Programming and planning.
The stated competencies are, however, popular specialisation areas for PQS and CQS respectively; hence this is partly expected. In conclusion, Table 4 shows that there is disparity in the expected level of competency. When viewed in relation to the mapping case studies, there appears to be inconsistency in the views of the major construction stakeholders. There are indeed different interpretations of graduate-level competency and actual attainment, perhaps due to individual understandings of competencies, level definitions and the role of universities in the training of quantity surveyors. There is wider coverage of risk and value management in Level 3 of the course, and in terms of competencies it will be at Level 2.

Future role of the quantity surveyor

The interviewees were requested to provide views on the present and future role of the QS. With respect to the present role of the QS, they generally agreed that this centred on cost advice, estimating, and measurement. One academic noted that this differed between a contractor's surveyor and a consultant's surveyor, though others did not stress the difference. There was some disagreement as to the development of the role of the QS. One PQS noted the role had not changed much, whereas one CQS noted it had changed a lot.

Perception of areas of work becoming more important

There was a strong feeling that the role would become more complex, taking more concepts such as sustainability and whole life costing into account. One PQS stated: "We are looking at WLC (the whole life cycle) of the facility and its use in a wider context". The importance of WLC was noted by two respondents, one CQS and one PQS. Two respondents (PQS and CQS) suggested that the name QS should change to reflect the function more accurately, on the lines of Cost Manager or Cost Engineer. The name change is indicative of observations by other respondents that the difference between PQS and CQS is narrowing and the two roles are merging. The respondents in general indicated the need to upskill the QS knowledge base in the use of ICT and its impact on the profession. They also agreed that collaboration and team working would be a more important skill to develop. Sustainability and project management skills were seen as areas for further development, whilst civil engineering construction, infrastructure development and mechanical and electrical (energy-related) projects were seen as growth sectors for the future. One PQS was of the view that there is potential for procurement to revert to more traditional methods due to economic pressures. This could be seen as an important possibility that further enhances the cost control role of the QS.

Relative importance of the QS competencies

Four respondents (three CQS, one PQS) noted that there were areas that were not given enough attention or of which the students had poor knowledge: valuation (1), measurement (1), building contracts (1), construction technology (2), M and E services (1), environmental services (1), team working (1), and data management (1). When queried about possible additional competencies, three respondents (1 PQS, 1 RICS and 1 CQS) identified sustainability, business management and planning, accounting, communication (language, report writing and team working), new building technologies, pre-fabrication, civil and infrastructure engineering, and life cycle costing as possible additional competencies. Some of these are already covered in some competencies.
Since competencies do not give lengthy descriptions of content, these are open to interpretation. Three respondents (2 academics, 1 CQS) were happy with the coverage and felt that there should be no new additions to the competencies/skills. One PQS stated that contract administration is listed as optional but felt that it should be core. No respondents felt that there was any obsolete content taught.

Views on quantity surveying education

Six respondents shared their views on the present nature of QS education (1 RICS, 2 academics, 2 PQS, 3 CQS). As class sizes get bigger to make courses more economically viable, opportunities for tutors to spend more contact time and give more feedback will be compromised by the numbers of students they have to work with. One PQS expressed the view that there was too much mass teaching, with a mismatch where the learning outcome does not map to the industry requirement, and also felt that some lecturers need to update their knowledge so that the graduates are apprised of the latest techniques. The respondent did, however, note that it was not possible to make generalisations and there were differences between universities and individual lecturers. One PQS also felt that the RICS had less than adequate involvement in regulating curricula, while another CQS felt that although there are many RICS accredited programmes they are not comparable in most respects.

Level of satisfaction with the curriculum used to produce graduate QSs

The academic curricular content was commented on by 5 respondents (1 academic, 1 PQS, 3 CQS). The academic noted that they were able to cover a lot of the core competencies in a 4-year degree and that they could map the modules that they teach to the core competencies. Two respondents (1 PQS, 1 CQS) stated that the coverage was pretty good in general terms. However, the industry respondents felt that it was difficult to map modules taught at universities to RICS competencies. One PQS felt that some courses do not deliver what employers want, and one academic stated that "students are going out without the necessary skills to undertake their basic job and that is where employers feel that the universities are letting the system down". This being said, the general view was that it is not easy to generalise, some courses are better than others, and it also comes down to other factors such as the student, mode of study, and employer.

Views on QS programme curriculum development

On aspects of curriculum development, 5 interviewees responded. Two identified measurement as an area that needs greater attention (1 CQS, 1 PQS). Other areas identified include taxation (CQS), understanding building technology and construction (CQS), bills of quantities (PQS), cost planning, and preconstruction estimating (CQS), while there was seen to be an overemphasis on management of projects (1 PQS, 1 CQS). The aspect that caused most concern for one PQS was that graduates had a poor understanding of construction technology and no real understanding of on-site conditions. Reflecting on these views, it is clear that greater attention is needed to some core areas of quantity surveying. If so, the academics will be faced with the dilemma of identifying which areas to forgo in lieu of areas of expansion.

The role of universities in producing a graduate quantity surveyor

All 10 respondents considered what a university should provide with regard to QS education. They were requested to choose between:
1. Provide an overall academic knowledge and a good foundation in Quantity Surveying, or
2. Concentrate on training students for direct QS employment.
Six respondents agreed with statement 1 (2 PQS, 1 CQS, 1 RICS, 2 academics). Two respondents agreed with statement 2 (1 PQS, 1 CQS). One CQS felt that it should be a bit of both, a balance of academic and vocational content on a 50/50 basis. One academic was undecided. One CQS stated that over the last 30 years they have seen the quality of technical Quantity Surveying become diluted and warned that if the trend continues we would lose technical standards forever. In overall terms, most wished to see a sound academic background for graduate quantity surveyors but did not want to see any compromise on the level of knowledge. They also seem to expect improved technical competence in graduates going into the industry.

Industry-academia collaboration in QS programme delivery

Two respondents (1 PQS, 1 CQS) commented that there is a reasonable level of employer engagement with the universities. However, the level and extent of engagement is one aspect that requires further exploration.

Industry-academia level of communication

Communications between universities and industry were generally seen to be reasonable, although it was added that universities try the hardest and industry needs to be better at communication. The state of the economy was seen as a factor that influences the level of communication (1 academic). Greater involvement of the industry as a stakeholder in the development of programmes, face-to-face industry consultation, and industry taking programme development and contributions as part of their corporate social responsibility were seen as steps that can be used to improve the situation.

Perceived success of modes of study

The majority of respondents (9) stated that part-time students were far better and more rounded than full-time students, though this was usually in respect of their dedication to work and approach to the job.

Industry placement in quantity surveying education

All 10 interviewees had contributions to make concerning their views on placement. This was unanimously seen as a positive, if not crucial, thing for a student to have. The experience the student gains from having practical experience cannot be replicated in any other way. The current economic situation is having a negative impact on the availability of placements.

Routes of membership

The RICS QS competencies (learned through education and industry experience) provide the basis on which a quantity surveyor will be judged as to their capability to act as an independent, professionally qualified Chartered Surveyor. Graduate QSs can become professionally qualified upon successful completion of the APC after 3 years of post-graduation industry experience. The graduate route is still apparently the most popular route to chartered membership. It is expected to bridge the gap between what is learnt at university and what is needed to get chartered. As a result, it is useful to investigate the appropriateness of this membership route and others. The RICS recently revised their membership pathways.

Level of awareness

Two interviewees (1 PQS, 1 CQS) stated that they are not familiar with the new routes of membership other than the graduate route.

The appropriateness of routes of membership

A total of seven respondents (1 RICS, 2 academics, 2 PQS, 2 CQS) expressed contentment with the graduate route of membership.
One CQS did note that it was sometimes hard to push graduates into becoming chartered, suggesting that this was due to a combination of fee levels and their not seeing any advantage in becoming chartered. Another problem that exists is that more specialised contractors did not give the graduate a wide enough experience in some competencies (1 academic, 1 RICS). The new Associate pathway was stressed by the RICS representative as not being a shortcut to becoming a chartered surveyor. One academic said that it was a nice idea but did not see its relevance, and felt that it was not clear enough where the cut-off point was between the two levels, while another expressed some reservations. One PQS felt that it may lead to people aiming for a minimum standard and that AssocRICS is not good enough to be recognised. One CQS noted that it was helpful to people who do not have degrees, but that to then progress to MRICS or FRICS was a very convoluted route. Another CQS said their company had looked at this route but gone back to the graduate route. These sentiments suggest there is a lack of understanding about the new route as well as some doubt as to the need for it. There was a mixed response to the new Senior Professional route. Three respondents stated that they were not happy with this route. One academic viewed it as a "rubber stamping" exercise. One CQS said: "my main problem with that route is that it does not test technical competence". One PQS did not think that people should just be given MRICS for their long experience and felt that, although it provides an opportunity to get practitioners into mainstream RICS, they should still fit the APC model and competencies. One academic warned that the RICS have to be careful not to be seen as an institution desperate to get new members in. On the positive side, one PQS noted that it was good and had worked well for them, adding that the CIOB are doing the same thing.

Availability and importance of a structured training programme for APC

The RICS representative noted that unless a company has signed up to the structured training programme it should not take on a graduate for the APC. Three respondents (2 CQS, 1 PQS) stated that they did have a structured training programme. One PQS noted that there were very low completion rates for the APC and felt that this was due to very poor levels of basic knowledge, with big gaps between what is learnt at university and what is needed to get chartered. One possible reason for this was seen as employers not considering it important and lacking a structured training programme. It was also noted that it is difficult to provide all the training in three years. Smaller companies often struggle, as they do not have the volume or frequency of work types to enable them to have a smooth training process. One PQS was highly critical of the APC process itself, stating that it is a daunting process that makes candidates unduly nervous. The RICS process compares less favourably with the CIOB's, as the CIOB process is friendlier and candidates are helped to get through it.

Level of communications with the RICS

The level of communication and the respondents' perceptions were analysed with respect to RICS partnerships for programme accreditation, RICS-university communication, RICS-industry communication, and industry-university communication. With specific reference to the communication between the RICS and universities, four respondents (2 academics, 1 CQS, 1 PQS) made contributions. The two academics noted that they had a good rapport with the RICS.
The CQS did not know about this, while the PQS thought that some had good communication with the RICS and others did not. The general consensus with respect to communications between the RICS and industry was that it is in need of much improvement, although it is beginning to move in the right direction. There is a need for increased regional and local involvement (2 academics), fee scales need to be more realistic (1 PQS), and the RICS needs to be more in touch with leading-edge work (1 PQS). Three respondents (1 PQS, 2 CQS) did not really have any contact with the RICS through their role in the company, with one (a CQS) commenting that the RICS has lost its focus on members and become a business instead of an institution.

Level of success of the RICS-university partnership agreement

The RICS partnership process was seen as facilitating greater discussion, but most communications still came down to personal relationships. One academic saw the accreditation partnership as a way to understand how the course is being assessed "so that students come out with the ability to be Quantity Surveyors". These indicate the primary role of the RICS partnership agreement as regulating RICS accredited programmes. However, the level and detail of regulation were criticised. One PQS felt that there was a conflict of interest within the RICS Education Board if there were academic members on the board and these influenced its decisions. But this is questionable, as the role of the Board is not necessarily to project the view of industry alone; a balanced representation perhaps might be useful. A lack of consultation with the professional group was also noted, with the respondent adding that RICS communication with industry was not good. One CQS did not know about the partnership arrangements. Another felt that there was a real inertia around working out solutions to problems that were identified. There was recognition of the difficulty involved in getting all three parties around the table and keeping the lines of communication open.

DISCUSSION

The research aimed at investigating the changing developmental needs of Quantity Surveyors who satisfy the aspirations of industrial, professional and academic stakeholders. It used several research instruments to achieve this:
1. Review of RICS QS competencies: provides details of competencies.
2. Competency mapping case studies involving 4 RICS accredited QS Honours degree programmes: indicate how competencies are mapped to programme curricula.
3. Expert views from a forum of experts (industry, academic and the RICS): enlighten on the level of competency to be achieved by a graduate and other contextual factors.
The main research objectives sought to ascertain several key aspects related to QS education and development. These are summarised in the following sections.

Summary of the status of RICS QS competencies

The RICS has formulated clear and detailed documentation (RICS, 2009) identifying, classifying and explaining QS competencies. This is primarily aimed at providing guidance to APC candidates seeking full professional membership of the institution. There are 24 QS competencies, classified as Mandatory (10), Core (7) and Optional (7). These competencies can be achieved at any of three levels: Level 1, 2 or 3.
The RICS stipulates that an APC candidate needs to achieve all Mandatory competencies at Level 2 or above, all Core competencies at Level 3 (except one, not relevant to specialisation depending on employment in consulting or contracting practice, which is at Level 2) and two Optional competencies at Level 2 or above. These competencies form the basis for describing the knowledge base of the quantity surveyor and, at the APC, for ascertaining the level of attainment. Therefore, they should form the basis on which QS degree programme curricula are modelled. At each programme accreditation the RICS seeks to establish whether the programme in question deals with these competencies. There is no systematic approach or guidance as to what level of competency needs to be achieved by a graduate completing a RICS accredited programme. At present it is an estimation of whether core competencies are addressed in module specifications. This process has led to RICS accredited Honours degree programmes across the country producing graduates demonstrating considerably varying degrees of competence. It is then left to the employers and the graduates themselves to upskill to the required benchmark specified for the APC. What was clearly found in this research is that this process produces a graduate less confident to face the industry and an employer less satisfied than they might otherwise be. This clearly confirms the findings of Lee and Hogg (2009).

Key findings of competency mapping

The main findings related to the competency mapping can be summarised as follows:
1. There is no prescribed threshold benchmark standard for achieving competencies at graduate level.
2. There are no detailed specifications to indicate what content should be covered to achieve a competency.
3. Different universities aim to achieve competencies at different levels, based on their own interpretations.
4. In the absence of a detailed competency specification, the level of achievement of competencies as judged by our own interpretation seems satisfactory for the most part. There are inadequacies in the level of coverage of some competencies.
5. Programme leaders tend to interpret levels of achievement of competencies differently to one another, resulting in apparently differing levels of achievement of competencies and different levels of coverage.
6. There is no standard way to interpret the actual achievement of competencies.
7. There is no formal competency mapping process available for universities in curriculum development or revision.
8. Most Mandatory competencies are not achieved to a significant extent by the universities studied to date.
9. Core competencies are well achieved at Level 1, based on interpretations made by universities, and some attempt is made at Level 2. There is greater scope for achieving Core competencies to some extent at Level 2.
10. Optional competencies are not reasonably achieved at Level 1 by most universities. Some competencies are, however, dealt with to a considerably higher level by some universities. There is greater variation across universities.

Views of the expert forum

Most experts were of the opinion that competencies in general should be achieved at Level 1 by graduates. However, some academic experts were of the view that universities achieve more than Level 1 in some competencies and move greatly towards Level 2. One consultant QS was of the view that both Mandatory and Core competencies should be achieved at Level 2. The above situation is exactly reflected with respect to the coverage of competencies.
There is no uniform view, and it is very much open to individual interpretation. These tensions of interpretation are well evident in the above competency mapping case study analysis.

CONCLUSIONS

The development needs of quantity surveyors are highly influenced by the needs of the industry and the profession, and shaped by the perceptions of the academia that supplies QS graduates to the profession. This research analysed RICS QS competencies and how they are mapped against degree programmes that produce QS graduates. It revealed that there is huge variation in the interpretation of competencies and levels of achievement. The documentation available is inadequate for this purpose, probably because it is intended for APC candidate guidance. The competency mapping case studies revealed that there is a high level of variation in the mapping of competencies between programmes, especially at Level 1. Although based on the views of programme directors, the mapping indicated that most Core competencies are well mapped but that there are deficiencies in Mandatory and Optional competencies. The net result is that there is significant variation in the quality and level of graduates produced by different degree programmes accredited by the RICS. This problem is exacerbated as the programme directors, as well as industry experts, have considerably varying degrees of interpretation of competencies. The absence of a threshold benchmark that clearly defines the graduate level of competence has led the industry to have unrealistic expectations and academia to aspire to unattainable levels of competence, producing a less than satisfied graduate who lacks direction. The expert forum was also used to extract contextual factors that influence the industrial, professional and academic development of QS graduates. The overwhelming majority of the expert forum was of the view that the aim of universities should be to provide overall academic knowledge and a good foundation in Quantity Surveying, as opposed to providing training to produce a QS for the industry.

Limitations

The analysis of competencies was limited to the documents currently available for download from the RICS web portal. The mapping of competencies was limited to the opinions of the programme directors, moderated through cursory examination of module specifications. Therefore it is possible that there could be a reasonable degree of variation in the outcome of the mappings. But the authors are of the opinion that this would not be to an extent that would undermine the overall conclusions derived for the project.

Further research and directions

The focus of the research was to evaluate the views of the two main stakeholders of graduate QS education: the universities and industry. The universities were represented by academics responsible for programme delivery, while the industry was represented by consultant quantity surveyors (PQS) and contractor or commercial quantity surveyors (CQS). The views of these stakeholders on the relationship with the RICS were also investigated. There is a considerable degree of differing views and a lack of responsibility from all stakeholders, mainly arising out of inaccurate interpretations and lack of definition. This lack of a common benchmark for the interpretation of the achievement of competencies by graduates clearly contributes to the dissatisfaction and false expectations on the part of the industry and thus the demoralisation of the graduate.
In order to address this situation and thereby align the disparate views of industry, academia and the RICS, further research in the development of a Graduate Competency Threshold Benchmark and the Competency Mapping Framework will be required.
Traveling planning concepts revisited: how they land and why it matters

ABSTRACT

What constitutes the "landing" of "traveling planning concepts" (TPCs) in new places remains conceptually elusive in the literature. To explore this process, this paper proposes a multidimensional conceptual framework that identifies key recurring activities during landing. The framework is applied to a qualitative analysis of the ongoing development of an innovation district in the rising yet unequal city of Medellín. The analysis reveals a process during which introductions of the concept with add-on components were legitimized based on translations adjusting it to the local circumstances, while governance procedures coordinated mandates for action, and a version of the concept gradually materialized. These recurring activities constituted mechanisms that generated the progression of landing through subsequent phases. In Medellín, the TPC resulted in a distributed model that corresponds neither to the prevailing innovation district concept nor the locally intended socially inclusive version. This paper contributes by showing why the analysis of landing processes is key to understanding whether, how, and in what form traveling planning concepts appear in new destinations. This novel framework enables a systematic and comprehensive identification of the causal process comprising the mechanisms that make both the TPC landing and the version of the TPC it produces unique.

Introduction

Cities monitor each other's urban planning solutions. When those solutions convey values and ideals that are broadly esteemed or originate in prestigious places (Healey, 2013, p. 1512; Wood, 2015a), they are imitated, albeit in different socioeconomic conditions (Healey, 2013, p. 1511). They become what Healey (2010) calls traveling planning concepts (TPCs). In her view, such concepts are "likely to be shaped by their origins and by the channels through which they have traveled" (Healey, 2011, pp. 200-201). In related literature on policy mobilities (PM), mobile policies have been said to be "in part made through their travels" (Ward, 2018, pp. 276-277). More precisely, there are three main instances in the "travels" in which TPCs may change. First, they may change when they are "packaged" (González, 2011, p. 1403) or "codified" (Peck & Theodore, 2010b, p. 207) for travel. Then, they can be modified by the intermediaries (e.g. planners, consultants) who carry them (cf. McCann, 2011; Prince, 2012). Finally, they are adjusted to new local conditions (e.g. Healey, 2010, p. 6; Perera, 2010, p. 146; cf. Cochrane & Ward, 2012, p. 5; Ward, 2002, pp. 6-9). It is this third instance, the "landing" (Healey, 2011, 2013) of TPCs, that we are interested in. Landings are key episodes in the overall journeys of TPCs, and we will argue that their careful analysis helps reveal whether, how, and in what form traveling planning concepts appear in new destinations. We take the landing of a TPC in a new context to be the process that culminates in the operationalization of a locally produced version of the concept, which is expected to generate sufficient desired outcomes and to maintain the support of relevant local communities (cf. Healey, 2013, pp. 1520-1521; Peck & Theodore, 2012, p. 24).
How landing occurs, however, remains ambiguous in previous literature. While some authors acknowledge the complexity (a characteristic of the landing process) and the mutations that take place in the concepts during landing (a causal consequence of the complex landing process), very little analytical depth can be found in the study of this process. In some of the literature, it appears as if landing were a simple "importation", "adoption", or "translation" of existing concepts. We emphasize that this is not a single occurrence but an iterative process, and it is during this often extended process that TPCs become adjusted to the context of landing and gain their shape. To understand the complexity and scope of this process and how adjustments during it may make concepts distinct from those in other places, we raise the following research questions: How does TPC landing take place? How does the landing process affect TPCs? To precisely conceptualize and analyze the landing process, we propose a multidimensional framework that enables a systematic analysis of the process of TPC landing over time. We apply this framework in an empirical study of the landing of the innovation district (ID) concept in Medellín, Colombia's second-largest city. This concept, originating in advanced innovation-driven economies, was adopted in the developing-country context to support Medellín's transformation from its traditional manufacturing basis toward a "knowledge-based economy" after years of economic downturn and devastation as a narco-warzone. In this case, the environment of TPC landing differs profoundly from its origins, which helps reveal the influence of landing on the TPC. IDs embody the policy trends of innovation-led urban economic development and the planning of knowledge-intensive urban economic spaces (e.g. Hutton, 2004). They are urban areas with mixed land use that serve as hubs for business, research, and education and facilitate access to diverse advanced resources through proximate interactions with actors in different sectors (Katz & Wagner, 2014). They are also strongly externally connected through extended networks (cf. Monardo, 2018, p. 331). They are planned to provide high-quality urban environments, housing, services, street life, and leisure activities for the workforce in innovative firms, start-ups, research institutes, etc. (Blakely & Hu, 2019; Esmaeilpoorarabi et al., 2020b; Katz & Wagner, 2014). High hopes for revitalizing inner cities and creating jobs have been placed in IDs (Esmaeilpoorarabi et al., 2020b; Katz & Wagner, 2014). However, doubts have been raised about their capacity to generate public engagement and benefits for adjacent neighborhoods and the broader economy (Arenas et al., 2020; Esmaeilpoorarabi et al., 2020a, p. 10; Heaphy & Wiig, 2020). Further concerns have been expressed about IDs disproportionately privileging investors and technology firms (Gómez, 2022) and thereby engendering segregation and gentrification, particularly in cities of the global South (Goicoechea, 2014, 2018; Lederman, 2020). Regardless, this concept diffuses to destinations worldwide due to imaginaries of "world-class urbanism" involving the appealing rhetoric of socially inclusive creative processes in urban economic activities, which is advocated in transnational circuits diffusing international "best practices" (e.g. Bertelli, 2021; Lederman, 2020).
Our analysis is structured as follows. We provide a brief overview of the literature on TPCs, with support from the literature on PM, and propose a multidimensional framework that captures the elements of TPC landing processes. We then introduce our data and methods, Medellín, and report our qualitative analysis of the Medellín ID landing process. Subsequently, we discuss the usefulness of the multidimensional framework in generating novel insight into landing as a causal process and into the evolution of TPCs during landing. We conclude by suggesting that this approach helps pin down the mechanisms that make both the process and the version of the TPC it produces unique. This study thus contributes by offering a more thorough analysis of the causal process of TPC landing than what is found in the earlier literature.

A multidimensional framework for analyzing TPC landing

The vaguely theorized landing process

Of the two lines of research on mobilities that inform our analysis, the first, which focuses on TPCs, draws on eclectic sources to provide a critical analysis of the transnational flows of planning ideas and practices (Healey, 2013; Healey & Upton, 2010). The second, which focuses on PM, analyzes mobile policies, their mutations (Peck & Theodore, 2010a) and assemblages (e.g. McCann & Ward, 2012, 2013) and is often concerned with the spread of neoliberal policies (Peck & Theodore, 2001) from a structuralist perspective (Healey, 2013, pp. 1519-1520). The former is informed by the latter (see Healey, 2013), but the reverse is not the case (cf. Cook, 2015, p. 836; Jacobs, 2012, p. 413), and the literature on PM seems to subsume TPCs without distinguishing them as planning solutions (e.g. McCann, 2011). Despite differences in predispositions, these studies share an interest in the phenomenon of traveling/mobile planning/policy concepts/ideas that are spread by various carriers, adopted around the world, and adapted to local circumstances while maintaining a relation with extralocal actors and developments. In practice, planning and policy ideas and concepts often travel together and influence each other. We recognize that the traveling objects are frequently hybrids. They constitute elements in locally emerging "assemblages" of planning and policy practices, actors, and institutions (cf. e.g. Healey, 2013, pp. 1514, 1516; McCann & Ward, 2012, 2013; Ong & Collier, 2005; Prince, 2010). In this study, we place more emphasis on TPCs because the traveling concept we study, the ID, centrally involves an urban planning aspect.

Concepts are often sourced from leading cities in advanced economies (Yigitcanlar et al., 2008) but can also originate from cities in the global South (Montero, 2017; Wood, 2015a) or be combined from disparate origins (e.g. Bertelli, 2021; Bunnell, 2015; Robinson, 2015). A key observation in the literature is that concepts and policies change as they cannot be replicated identically in new contexts. Change occurs when concepts become entangled in context-specific social interactions and reciprocally transformative urban processes (Healey, 2013) or when policies are "translated and re-embedded within and between different institutional, economic and political contexts" (Peck & Theodore, 2001, p. 427).
Through adaptation in place-specific social processes, the locally adjusted version of the concept may become idiosyncratic. In addition, the literature recognizes the role of local political processes resulting in specific coalitions' interests being served (cf. Temenos & McCann, 2012). Systematic conceptualization and analytical precision are needed to understand how all of this happens (cf. McCann, 2011, p. 111).

The literature provides versatile conceptualizations to capture what is at stake when external concepts and ideas arrive in new places, but the process of TPC landing remains vague or only partially discussed. Often, rich empirical narratives of actual landing processes are accompanied by narrow (that is, partial and therefore weak) conceptual frameworks emphasizing specific aspects of the process. This results in excluding other aspects and their interrelations. The aspects that are identified (albeit often in passing) include translation (e.g. Bertelli, 2021; Healey, 2013; Müller, 2015; Peck & Theodore, 2001), legitimation (e.g. Goicoechea, 2014; McCann, 2011), governance (e.g. Prince, 2012; Ward, 2018), politics (e.g. Prince, 2010; Temenos & McCann, 2012, pp. 1390-1401), embeddedness in territorially constituted social relations (McCann & Ward, 2010, p. 180), cities' planning policies (McCann & Ward, 2010, p. 181), organizational structures and policies (cf. Borén et al., 2020, p. 253), and adherence to local assemblages (e.g. Healey, 2013, p. 1514; McCann & Ward, 2012, p. 328). Instead of focusing on such ad hoc, partial, and isolated aspects of the phenomenon, we propose a multidimensional framework that supports a comprehensive analysis of the landing process. This framework helps account for the process's constitutive causal elements, the range of actors representing different voices in urban society, and the ways in which the process changes TPCs.

A multidimensional process

Our multidimensional framework identifies key activities involved in producing the landing process. The activities recur successively or simultaneously and coevolve. They represent five dimensions of the TPC landing process. Some of them are discussed ad hoc in the TPC and/or PM literature. Others were inferred during our data collection and early analysis.

We propose introduction as the first dimension of the TPC landing process. The introduction involves the identification of a planning concept by referring to a model case or an idea circulating among planning professionals and presenting it as an applicable model in a given local context. The initial introduction gives a rough idea of a desired solution to some local issues. As landing ensues, learning about the possibilities of the concept and local needs occurs, and the concept or aspects of it may need revision and reintroduction in a more specific or new form. Thus, while for Wood (2015b), introduction refers to persistent failed attempts at gaining approval for a given concept preceding its eventual adoption, for us, introduction is a dimension of a landing process that recurs as new elements are adjoined to the TPC.

Our second dimension, translation, is discussed inconsistently in the literature. Translation takes place in international circuits of planning knowledge (Healey, 2013; cf. Ward, 2018); it is understood as "the processes through which an idea or technique moves from one site to another" (Healey, 2013, p. 1516; citing Callon et al., 2009); as the work related to concept mobility and the add-ons involved in those processes (Jacobs, 2012, p. 418);
or as the entire landing process, as "[t]he 'translation experiences' through which exogenous planning ideas and practices become 'localized'" (Healey, 2013, p. 1520; cf. Bertelli, 2021; Cochrane & Ward, 2012, p. 9). Translation has even been specified as the impact of mobile policy adoption on the landing site (Müller, 2015, p. 192). Boxenbaum and Battilana (2005) understand the translation of managerial practices as "adapting a foreign practice to own institutional context" (p. 356). Similarly, in our analysis, translations reflect the necessity to adjust concepts to local urban and social structures, public and private capabilities, the interests and needs of inhabitants, and existing policy and planning assemblages (cf. McCann & Ward, 2013; Prince, 2010). More precisely, translation is the form and function given to a TPC as it is adapted to the specific circumstances of a new place. It involves selecting aspects of a concept (and thus rejecting, reframing, or modifying others) or adding new elements. Translation evolves as new features, associations of multiple understandings, influences of changing circumstances, and precision are instilled in the concept. Several rounds of translation are likely required before the concept is fully tailored to a new site.

Our third dimension, legitimation, ensures that actions are "desirable, proper, or appropriate" in a social context at a particular time (cf. Suchman, 1995, p. 574). This is a prerequisite for gaining resources and support in the surrounding institutional environment (Scott & Meyer, 1992, p. 140). Legitimation is critical for TPCs sourced from divergent institutional environments. The TPC/PM literature notes the need to locally legitimize planning/policy action (e.g. Healey, 2013, pp. 1518-1521; McCann, 2003, p. 162, 2011, p. 119) and urban experiments, policies, and planning concepts (López & Montero, 2018; Montero, 2017; Prince, 2012; Sorensen, 2010, pp. 118, 135). Here, legitimation as a dimension of landing helps focus systematically on the role of diverse acts of legitimization throughout landing processes. Legitimation is an interactive process requiring consensus building within organizational fields (Suddaby et al., 2017, pp. 451-452). Legitimation strategies vary depending on the stakeholders involved, the audiences targeted, the nature of the support sought, and the stage of the process. New landing developments create a recurrent need to persuade key stakeholders and the public of the appropriateness of adopting and translating a given concept or idea. Failing to legitimize an intended TPC among relevant audiences may provoke contestation against the concept and the coalitions involved.

In our fourth dimension, governance, TPC landing is enmeshed in interactions, negotiations and power relations that influence the assumptions and expectations attached to TPCs (cf. Dzudzek & Lindner, 2015; Healey, 2013, p. 1523). These collective processes are coordinated through networks and partnerships between local governments and various stakeholders controlling critical resources (e.g. Pierre, 2014, p. 874).
Due to the involvement of multiple actors, agendas, and rationalities, the processes remain in flux and are vulnerable (cf. Beunen et al., 2015). Powerful agents may influence decision-making, change the scope of the TPC or jeopardize the landing. Changing circumstances during landing may generate new relationships and require adjustments to the TPC to maintain the interests and commitment of old and new coalition members.

Finally, our fifth dimension, materialization, involves the tangible outcomes of the progression of TPC landing. Materialization is rarely addressed explicitly in the TPC or PM literature, although most studies consider successful planning projects or policies, that is, ones that have materialized. Some authors studying unfinished policies consider them failures (Stein et al., 2017, p. 45). We suggest that seeing materialization as one of the evolving dimensions of landing makes all materializations (whether complete, incomplete, or failed) crucial in explaining the progression and outcomes of landing processes. Materializations may be partial, occur sporadically, and generate unintended outcomes (cf. Wood, 2015b). However, they coevolve with the other dimensions and recur in different forms. Materializations may concern the built environment, organizations, plans, etc. Each materialization epitomizes what can possibly be achieved in the landing process at a given time.

The progression of landing

The landing process unfolds through phases, that is, temporal brackets that occur sequentially over time and serve as comparative units of analysis. The phases comprise progressions of activities and events, separated by discontinuities in the temporal flow. Temporal bracketing allows for examining causality in a process and how developments in the previous phases change the context and impact subsequent phases (Langley et al., 2013, p. 7). It assists in identifying mechanisms that explain the progression of the process (cf. Tsoukas, 1989; Van de Ven & Poole, 2005).

The phases of a landing process are qualitatively different, as the activities along the theoretically identified dimensions differ. Through the coevolution of the activities along the various dimensions in each phase, conditions are generated for the following phase. Therefore, in each phase, the rationales and forms of action tackle new issues, solve different problems, and give rise to different effects. Each phase evinces the progression of TPC landing. This process can be discussed in terms of mechanisms. Explanatory mechanisms are multilevel (Machamer et al., 2000): higher-level mechanisms can be explained by lower-level mechanisms. In TPC landing, diverse activities on the proposed dimensions recurring during the process constitute the lower-level mechanisms (Figure 1). Jointly, the lower-level mechanisms constitute phase-specific higher-level mechanisms that generate discontinuities or critical events that distinguish the phases and contribute to the causal process of landing.

We apply this framework in the analysis of the landing of the ID concept in Medellín.
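As a purely illustrative aid, the sketch below shows one way the framework could be represented as data: phases as temporal brackets and coded activities along the five dimensions (the lower-level mechanisms). This is a hedged illustration of ours, not part of the paper's formal apparatus; all class names, field names, and example entries are hypothetical.

```python
# Illustrative sketch only: a minimal data model for the multidimensional
# framework. The class and field names are hypothetical, not the authors'.
from dataclasses import dataclass, field

DIMENSIONS = ("introduction", "translation", "legitimation",
              "governance", "materialization")

@dataclass
class Activity:
    dimension: str    # one of DIMENSIONS (a lower-level mechanism)
    description: str  # the coded activity or event
    year: int         # supports temporal bracketing

@dataclass
class Phase:
    name: str         # e.g. "search", "vision", "plan", "mission"
    start: int
    end: int | None   # None for an open-ended phase such as "mission"
    activities: list[Activity] = field(default_factory=list)

    def by_dimension(self) -> dict[str, list[str]]:
        """Group coded activities by dimension, reconstructing how the
        lower-level mechanisms jointly constitute the phase's
        higher-level mechanism."""
        groups: dict[str, list[str]] = {d: [] for d in DIMENSIONS}
        for a in self.activities:
            groups[a.dimension].append(a.description)
        return groups

# Hypothetical example entries drawn loosely from the narrative below.
search = Phase("search", 2008, 2011, [
    Activity("introduction", "entrepreneurial block in the development plan", 2008),
    Activity("materialization", "Ruta N Corporation established", 2009),
])
print(search.by_dimension()["materialization"])
```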
The landing of the innovation district concept in Medellín

Data and methods

Qualitative analysis enables the uncovering of the subtle dynamics of the ID landing process. The primary data stem from on-site observations and 27 semi-structured interviews (24 were conducted in Medellín in September 2016 and November 2017, and three in Cambridge, Massachusetts, in April 2017). The interviewees (Table 1) represented the views of five interest groups: public and private organizations involved in the ID landing and/or liaison officers working with the neighboring or business communities on ID planning and innovation projects. Secondary data on the evolving ID and contextual conditions during the landing process consist of planning documents and public reports, mostly available on the internet. The interview data were recorded, transcribed, and thematically coded using NVivo software. The codes corresponded to the dimensions of the TPC landing process. The content analysis focused on the activities carried out by key agents and their interconnections. Some interviewees provided contrasting narratives regarding the role and the focus of the ID, but through data triangulation, we aimed at a neutral account.

The context of ID landing

Medellín, the provincial capital of Antioquia, is a long-standing economic hub in Colombia. Initially a producer of coffee and raw materials, the city transformed into the country's industrial capital by the 1970s. It then lost competitiveness in key sectors (textiles, processed food, chemicals) and declined radically for two decades while at the same time receiving a rural migration influx that nearly tripled the population. The ensuing high unemployment (ca. 60%) and precarious urban conditions provided fertile ground for a drug trafficking boom (Departamento Nacional de Planeación, 1991, p. 5). The city was shattered under decades of narcotraffic, economic recession, fragmented development, inequality, violence, and weak government institutions.

Change gradually started to occur in the 1990s (Promo2; Pub1; cf. Betancur & Brand, 2021, pp. 18-19). The local institutional capacity improved with local democracy (popular mayoral elections since 1988), decentralization (increased administrative and financial responsibilities and competences), and support from the national government and international organizations. Solutions to Medellín's social problems were sought through new modes of governance, including broad collaboration with community leaders, civil society, academics, and the private sector (Betancur & Brand, 2021, p. 8; Pub1). An ideological turn was apparent in the approaches to tackling violence, inequality, and economic growth (Ferrari et al., 2018), reflected in the development plans of the Mayor's Office that subsequently guided the city's economic development (Dolan, 2020, p. 117; Leyva, 2010). The ID concept underscoring urban regeneration was sourced from abroad.

Analysis of the ID landing process in Medellín

We analyze the progression of the landing through four qualitatively distinct phases that we identified while handling the raw data. The start and end of each phase are not marked by a definite date but by a year during which a new type of dynamism started to emerge. This is indicated in the variation in the activities along the five dimensions from phase to phase. Table 2 presents the scheme for our data analysis and summarizes the main results. We analyze the landing process through a narrative in the subsections below; first, a brief illustrative sketch shows how such coded material can be organized.
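To illustrate the coding step just described, the following minimal sketch tallies thematically coded interview segments by phase and dimension; the segment labels are invented for illustration and do not reproduce the study's actual NVivo codes.

```python
# Illustrative sketch only: tallying coded transcript segments per phase
# and dimension. The (phase, dimension) pairs are invented examples.
from collections import Counter

coded_segments = [
    ("search", "introduction"), ("search", "governance"),
    ("vision", "translation"), ("vision", "legitimation"),
    ("vision", "translation"), ("plan", "materialization"),
]

tally = Counter(coded_segments)
for (phase, dimension), n in sorted(tally.items()):
    print(f"{phase:8s} {dimension:16s} {n}")
```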
SEARCH: setting the stage for the ID landing (2008-2011)

The local government's quest to implement effective urban economic development initiatives set the stage for the ID landing process. With mounting unemployment, the city's development plan for 2008-2011 introduced a flagship building project to support entrepreneurship (Alcaldía de Medellín & Salazar Jaramillo, 2008). Public intervention in terms of "an entrepreneurial block" would contribute to the renovation of the surrounding decaying neighborhoods in an economically strategic area near the city center, adjacent to leading universities, a university hospital, significant cultural and recreational facilities, and along the city's main metro line (RN6).

To define this project, the Mayor's Office commissioned a multidisciplinary team of professionals from local government agencies, public enterprises, and universities (RN6). Through planning tourism, the team benchmarked urban planning models in advanced cities, including Boston, Barcelona, Madrid, and Singapore, as well as cities in Chile, Argentina, and Brazil (Ruta N, n.d.). They found entrepreneurship increasingly considered in the context of the knowledge-based economy and agreed that the building plan intended to support entrepreneurship should now more broadly promote science, technology, and innovation (STI) activities (RN6). This new way of thinking was translated to support the city's knowledge-based transformation.

The redefined building was expected to benefit a broader set of local stakeholders and to support key ongoing urban agendas, including education and entrepreneurship, which had been the main pillars in overcoming the crisis in Medellín since 1991. These agendas were supported by established urban coalitions where public, private, and civil society organizations collaborated to create social opportunities (Pub1). The STI elements were expected to facilitate the diffusion of innovation activities in the city (Alcaldía de Medellín & Salazar Jaramillo, 2011, p. 66), educate high-skilled labor, and provide support for "the new economy" (RN4).

As a hub for innovation activities, the planned building would epitomize the city's STI agenda advocated since 2003. This agenda was steered by the triple-helix strategic alliance CUEE (Leyva, 2010; Morales-Gualdrón et al., 2015, p. 141), comprising top decision-makers from powerful local conglomerates, main universities, the Governor's and Mayor's offices, and directors of relevant national and regional organizations. Finally, the plan resonated with the internationalization and branding agendas of the city: promoting business tourism through international events (i.e. the Annual Board of Governors of the Inter-American Development Bank, 2009; the General Assembly of the Organization of American States (OAS), 2008) and showcasing the city's social innovations to top leaders (cf. Brand & Dávila, 2011; Promo2; RN6). Renowned social innovations are a key element of the city's brand (Doyle, 2019; Hernandez-Garcia, 2013). They connect Medellín's informal settlements to urban public infrastructure networks via significant urban and architectural landmark interventions such as metrocables, escalators, architecturally prominent public libraries, cultural centers, and schools (Dolan, 2020; McQuirk, 2012). These social innovations have contributed to the city's modern image and encouraged foreign investments (Dolan, 2020, p. 124).
Combined, the different urban agendas advanced Medellín's internationalization. For instance, having participated in the 2008 OAS General Assembly, the U.S. Secretary of State Rice referred Hewlett-Packard's (HP) management to the city's potential. In 2010, HP established its Regional Development Center in the city. The entry of a prominent foreign multinational in Medellín was taken to be a result of the well-defined STI strategy, including the building plan (Promo1; Promo3; RN6), with funding guaranteed by the municipality and publicly owned corporations (EPM, utilities; UNE, telecommunications). As observed, "At that precise moment in history, the services it [HP] brought and the bet we wanted to make at STI, it was a perfect match" (RN6).

This large foreign investment was seen as legitimizing the city's STI-motivated building plan and the concerted efforts of the broad urban coalitions supporting it. HP's entry also fast-tracked the construction of the new building and received additional support, as the EAFIT university built an extra floor in a campus building to temporarily host HP to ensure its entry (Promo1; Promo3; RN6).

The search phase materialized, first, in the 2009 establishment of the Ruta N Corporation, a nonprofit public organization dedicated to STI activities (Alcaldía de Medellín & Salazar Jaramillo, 2011, p. 87) and, second, in the 2011 completion of an architecturally renowned and internationally certified green building for the Ruta N Innovation and Business Centre (Ruta N, 2018; we will refer to both the organization and the building as Ruta N in what follows). The former managed the latter. Ruta N signaled the "new North" of Medellín's knowledge-based transformation (RN6). After Ruta N was built, other technology firms started to flow in (Gómez, 2022).

The search process and the efficient build-up of Ruta N stemmed from the governance arrangements and the interlocking urban coalitions behind Ruta N. As members of the local elite, they represented various key stakeholders in the city (the public sector and the local private and semipublic conglomerates), enabling them to coordinate broad administrative and material resources in the city (cf. MIT1; Promo1; RN6). The city's persisting social and economic problems triggered a strongly felt social transformation ethos within what we call a developmental ecosystem, a loosely coupled but broadly based constellation of resourceful decision-makers for whom the development opportunities offered by Ruta N in the nonaffluent neighborhood complemented the city's optimistic development trajectory. This governance arrangement, however, allowed private influence on public policy and developmental outcomes (cf. Betancur & Brand, 2021, p. 19; Franz, 2018, pp. 94-95).

VISION: an innovation district (2011-2013)

In this phase, the idea of a significant building supporting STI development obtained a concept definition: the ID concept, which was introduced during an informal visit by a team of MIT architects. The Ruta N team presented the visitors with plans to support the transformation of the city to a knowledge economy. The MIT team identified these as "the kind of thing that we are working on, that's called an innovation district" (RN1).

To fit the circumstances in Medellín, the ID concept was translated into a socially inclusive ID model, with the aim of also creating synergies with the (partly informal) low-tech businesses predominant in the low-income neighborhoods surrounding Ruta N.
Unlike the prevalent ID models in innovation-driven economies, the Medellín ID was designed to serve as a platform to initiate a local innovation ecosystem (Ind7; RN3). The model had to benefit two disparate communities. On the one hand, a complete transformation of the area into a "world-class" urban environment was supposedly required for the entrepreneurs, managers, and employees of the growing number of technology firms (RN5). On the other hand, a full transformation would hurt the livelihoods of the neighboring low-income landowners, residents, and firm owners (RN2; RN3). The solution was seen in an incrementally growing, socially inclusive ID involving small-scale entrepreneurs and landowners in diverse economic activities (Ratti & Frenchman, 2012; RN2). As stated, "In the 21-century idea of innovation, you do not need big footprints. You need small enterprises to develop side by side in smaller blocks, the size of a house" (MIT1).

The tailoring of the Medellín ID set it apart from the benchmarked IDs and helped build legitimacy among stakeholders. The model was developed with foreign advisors. Barcelona shared its experiences in developing the renowned local-government-driven 22@ district (RN1; Ruta N, 2013a; cf. e.g. González, 2011). To complement that knowledge, the academic team from MIT with associated consultants (Carlo Ratti Associati, Mobility in Chain, and Accenture) was hired to develop a strategic plan for the Medellín ID (RN1; Ruta N, 2013b). Additionally, a social enterprise accelerator from Buenos Aires consulted on developing transformative business models and living labs to support the socioeconomic inclusion of neighboring residents and to create synergies with technological entrepreneurs (RN2; RN3). The planning process also included the participation of local experts, representatives of the private and public sectors, and the neighboring communities. This broad participation contributed to the transparency of the process (RN1; RN2).

The 2012 "Innovative City of the Year" title (Wall Street Journal, 2012), awarded for Medellín's social innovations, helped further legitimize the chosen socially inclusive ID model and strengthen the image of a growing innovation culture. Prizes awarded by transnational organizations help convince local stakeholders about the credibility of the pursued "world-class" ideals (Lederman, 2020, p. 106). The ID "added to the narrative of Medellín working in social issues, because [the ID area] is a poor area of the city" (MIT1). The title helped normalize the concept of a specialized area for innovation "in a place where people were not necessarily acquainted with the idea of innovation" (RN4).

The materialization in this phase was the Medellinnovation District strategic plan (MID) (Ratti & Frenchman, 2012). It extended the intended redevelopment area around Ruta N from 16 hectares (Alcaldía de Medellín & Gaviria Correa, 2012, p. 113) to 196 hectares (Ratti & Frenchman, 2012, p. 102). Ruta N became central in the governance of the ID landing. It received a mandate to lead this process, supported by a board of directors with high-ranking representation from the local government as well as public and private local conglomerates (RN1). Ruta N built consensus among diverse local stakeholders in the participative planning process, envisioning the ID and the role that different parties could play in it (RN2).
PLAN: formalization of the ID (2013-2017)

The MID was introduced as the basis for the formalization of the urban planning and implementation instruments. These instruments gave rise to the technical translations of the ID. An urban plan was crafted according to the goals set in the MID and in compliance with the national planning legislation (RN1). National legislation curtailed the bold vision of the ID, however, as no legal standards existed for an area dedicated to innovation activities. The area was therefore categorized as "urban renewal", normally used for large land units developed by the private sector. This categorization required landowners or developers to provide urban infrastructure and social upgrading, an expensive option (RN1; RN2). Additionally, it prevented an intended key feature of the MID, namely, its incremental (plot-by-plot) development "rather than wholesale clearance" of the area (Ratti & Frenchman, 2012, p. 25; cf. MIT1). The partial plan for the area allowed the development of mid-sized land units, but that required 51% approval of sometimes up to 20 landowners (RN1). This slowed down development and made the area less attractive for private investors (RN1).

The alignment of the ID with formal statutes and local stakeholder interests contributed to its legitimacy. Business models were crafted to address a major challenge set by the MID: the generation of opportunities for the neighboring communities. A dedicated team at Ruta N worked with these communities to enhance their absorptive capacity and competences as well as to form synergies between formal and informal businesses, create institutional ties, and open experimental spaces (Morisson & Bevilacqua, 2019, p. 3; RN3). Additionally, the 7th UN-Habitat Conference held in Medellín in 2014 contributed to the legitimation of the city's development model, including the ID. The presence of 23,000 representatives of the international planning community, and their recognition of Medellín's "bold vision of social urbanism" that went beyond physical development to a commitment to social inclusion and equality, added to this legitimacy (Turok, 2014, p. 575).

The planned ID involved new actors in governance, and Ruta N worked toward building consensus among them. The decision-making process and ID implementation necessitated an organization able to both manage construction and support socioeconomic development and innovation. An organizational model that established a division of labor between the municipality (overall operation of the ID), the public Urban Development Enterprise (physical infrastructure), and Ruta N (socioeconomic development and innovation) was designed in consultation with the World Bank (RN1).

The materialization in this phase comprised the inclusion of the ID in the Territorial Ordinance Plan (Alcaldía de Medellín, 2014); that is, it was written into planning legislation. The approval of the plan granted the necessary conditions for realizing the ID and established the legal bases for further development. The ID was foreseen as the most significant concentration of STI activities in Medellín (Distrito de Innovación Medellín, n.d.) and the consolidation of the local innovation ecosystem within approximately a million square meters to be developed over the subsequent 12 years (RN1; RN4).
MISSION: the Medellín ID model (2017-)

After the formal planning phase, the implementation of the ID officially started in 2017, although the Ruta N building had already been fulfilling many functions of an ID since 2011. By 2016, it had hosted 143 technology firms from 22 foreign countries as well as domestic ones. Such a concentration of foreign technology firms made the Medellín ID stand out in Latin America (RN4).

Investing in the ID area was cumbersome and costly. Small landowners lacked the money, capacity, and incentives to initiate building projects. Ruta N had the mandate to develop the area but lacked the resources to buy land or build (RN1; RN3). The technology firms had no interest in office ownership or in developing the ID territory. Their demand for fully serviced office spaces attracted international operators to set up coworking spaces in the more affluent business district in the south of the city, endowed with urban amenities and safety and ensuring higher returns on investment (RN1). Thus, the location of the ID was problematic. As observed, "It is there because it is a political decision, not because it is the best place to put the district. […] [H]ere you are not dealing with the people who are interested in this thing" (MIT1).

Consequently, the concept of concentrating innovative activities in a designated area became inadvertently translated into a distributed model comprising a network of firms, organizations, and coworking spaces across the city, with Ruta N as its hub. This model emerged due to the rising demand for office space and the interim impasse in developing the ID area further. Ruta N attracted technology firms to full capacity. The spatially diffused ID model also developed because policy efforts to boost innovation resulted in the proliferation of other innovative initiatives around the city, led by independent organizations such as industry associations and universities. They promoted collaboration among technology firms and the digital transformation of existing local industries, while they believed Ruta N focused on foreign firms and startups (Ind1; Ind6; Ind7; Uni1).

Until the mission phase, the ID landing had the support of all power holders in the city, and Ruta N could use its mandate to make progress. Then, the volatility of urban governance processes became apparent. Having been founded by earlier administrations, the ID was no longer a key priority by 2015. The new local government shifted its focus to other urban agendas, leaving the district to market forces (RN1). After their maximum two-year tenure at Ruta N, technology firms relocated to other areas, as the district offered no other options. Investors were interested in more lucrative projects. The lack of private and limited public investments in the ID undermined its rapid development and started to erode its legitimacy.
Legitimacy for the ID among neighboring communities was sought by maintaining inclusive development as a driving principle of the ID (RN3). However, it has not been easy to unite disparate worlds. The technology and business communities had little interest in the area, and the implementation of the socioeconomic inclusion plans depended on funds acquired by developing the built environment (RN2). Persistent attempts have been made to create synergies between the communities and to expose residents to the latest technologies, albeit on a small scale (Morisson & Bevilacqua, 2019; RN2). Despite good intentions, interactions with nearby communities have been rare, and Ruta N has been perceived as an isolated island of innovation in the district (Arenas et al., 2020). In general, urban social innovations have not helped eradicate poverty and inequality in the city (cf. Brand, 2013; Franz, 2017, 2018). The local conglomerates that supported the ID since the beginning see value in its overall development, but within limits: they "do not want to think about it as a social project, and they control the political backing" (MIT1).

In this phase, one large building infrastructure with innovation potential was planned in the ID, the University of Antioquia's "City of Health" campus. Lacking private investors, other major building plans have failed to materialize. While the development of the designated ID area became slow and uncertain, the ID was seen as progressing because a budding urban innovation culture was maintained by diverse initiatives around the city. Ruta N was a strategic partner and a broker in the nascent networked ID model, supporting different innovative initiatives and coworking spaces around the city. Its strategy became "to navigate in the medium term [and] prove to the city that this is going to work" (RN1). Ruta N consolidated what had materialized thus far: an emerging networked ID model with Ruta N as its hub. It helped develop the city's overall innovation ecosystem and at the same time advanced the original ID plan, insisting, "We have to keep the flame on during this administration and try to engage the next one" (RN1).

Discussion

The Medellín ID provides an instructive case for the study of TPC landing, as the ID concept was drastically shaped during the landing process to fit the local conditions. At the same time, this case representatively demonstrated the complexity of landing, manifested in several qualitatively distinct phases. Our conceptual framework enabled us to unravel more systematically and comprehensively than has been done in previous literature how the landing of TPCs takes place and how they are shaped through the process.
The analysis of the ID landing process in Medellín in terms of multidimensional and multilevel mechanisms demonstrates the relevance of the framework in the advancement of theory building on TPC landing. It revealed the structure of the causal process of landing. Five categories of lower-level mechanisms generate the dynamics in the process. Throughout the phases, (1) introduction draws stakeholders' attention to a solution to issues relevant in the phase, an initial planning concept and subsequent specifications; (2) translation changes what is introduced so that it is suited to the contextual circumstances; (3) legitimization ensures support among stakeholders who control relevant resources under those circumstances; (4) governance processes create sufficient coordination on pressing issues within and among the coalitions involved and address potential contestation; and (5) materialization ultimately epitomizes the concrete achievable outcomes in each phase. Jointly, the lower-level mechanisms in each phase constitute the higher-level mechanisms that generate the unique progression of landing. In the Medellín ID case, the first phase ("search" in our empirical narrative) prepared the ground for the landing, the second ("vision") produced the design for the district, the third ("plan") institutionalized the formal plan, and the fourth ("mission") operationalized a distinctive ID model (Figure 2) and brought the landing process to a conclusion.

The detailed contextual analysis of the landing process also helps avoid the pitfalls of missing out on some of the causal factors contributing to the process or providing partial explanations for the landing. The latter can result from referring to only some of the dimensions or from prematurely identifying an intermediate materialization (or lack thereof) as the ultimate outcome of a landing process, as was observed in the literature.

Existing literature on TPCs and PM suggests that mobile ideas and concepts do not come alone but are integrated into assemblages of planning and policy practices, actors, and institutions. This suggests that TPC landing often takes place in the context of a set of other urban development goals, possibly benefiting from them and facilitating their implementation. In Medellín, we observed how the ID landed interwoven with the idea of the knowledge economy and was adjusted to fit a broader assemblage of socioeconomic development agendas in place in the city. Additionally, add-ons (mobile concepts or ideas in their own right) may supplement and modify the original concept to suit the landing site. In Medellín, add-ons were sourced from international agencies and advanced economies as well as cities more comparable to Medellín. Contrary to Jacobs (2006, p. 13; 2012, p. 418), for whom add-ons are any changes that are made to an original concept in a translation (as in, say, changing the color of a passenger car or adding another gear; it still carries people from a to z), we propose that they are modules that extend the form and function of the TPC (like a caravan extends the form and function of a passenger car). Add-ons are amalgamated as modular components in focal TPCs and can help adjust and legitimize them.
The ID landing was set in motion by an intention to bring in an externally originating concept to solve local problems. It was progressively shaped during the landing process. The proximate cross-sector interactions associated with the standard internationally traveling ID concept (Katz & Wagner, 2014) and the locally designed socially inclusive model have only developed on a small scale in the designated area. While Arenas et al. (2020) understood the limited development of the ID as a landing failure, in our interpretation, the ID actually developed by capitalizing on the dynamics set in motion under the specific circumstances of landing. In those circumstances, the plans for the area were challenged by a twofold institutional inertia: national legislation prevented incremental land development, and prevailing business practices discouraged involvement and investments in the low-income neighborhoods. However, fueled by the ID landing, small nodes of innovation activities emerged elsewhere in the city and, in coordination with Ruta N, gave rise to a city-wide network model, extending the prevailing ID concept. Without a thorough analysis of the landing process enabling the identification of diverse stakeholders in the entire city, the network model could have gone unnoticed. This model does not prevent the more comprehensive development of the originally designated area in times to come.

Conclusion

Landings are key episodes in the travels of TPCs. Our study is the first attempt at conceptualizing the overall structure of TPC landing as a multidimensional iterative process that begins when an externally originating concept is sought to solve local problems and ends in its local operationalization. The process approach facilitates a detailed analysis of the progression of TPC landing through successive phases driven by multilevel mechanisms and reveals the factors that influence the evolution of the concept during the process.

In Medellín, the ID concept originally suited the aspirations of a city striving toward economic revival by advancing its knowledge-based economy. The mismatch between the globally traveling concept, advocating urban regeneration and support for the highly skilled, and the realities of the local circumstances in a primarily poor area of the city made it necessary to add other concepts and policy ideas and to create new features in the Medellín ID model. These add-ons helped create legitimacy but also rendered the landing process arduous. Thus far, it has resulted in an alternative ID model, one that was possible and that the local coalitions managed to implement given the happenstances of the process. Undeniably, the ID has not boosted radical change in the designated neighborhoods, but neither has it exacerbated gentrification as witnessed in other cities of the global South (cf. Goicoechea, 2014, 2018; Lederman, 2020).
The advancement of the TPC model was contingent upon the limits set by the environment: the resource constraints, the interests and agendas of the parties involved, and the pressing need to find solutions to the conundrums in urban development. The idiosyncratic landing process that took place in challenging circumstances produced a distinctive ID model. Thus, established TPCs may be modified and transformed into qualitatively new kinds, generating diversity among TPCs. These may start new rounds of circulation, making smoother landings in similar institutional environments elsewhere (cf. Temenos & McCann, 2013), avoiding imaginaries of "world-class urbanism" that remain disengaged from the context (cf. Lederman, 2020).

Complementing the literature on TPCs and PM, this systematic analysis of the landing process of the ID concept in Medellín suggests that the specifics of the landing process are the key to understanding whether, how, and in what form a given TPC appears in a particular place. Detailed comparative analyses on landings are needed in the future to capture the diversity of such processes and their transformative power on mobile concepts and the places that adopt them. Another issue for further consideration is whether it is possible to identify types of landings supportive of specific types of concepts and localities and even identify landings as another kind of entity that could travel. Finally, our analysis was limited to studying TPC landing, not its effects on the

Figure 2. The four higher-level mechanisms constituting the Medellín ID landing process.

Table 2. The landing of the Medellín Innovation district 2008-2017.
Home ventilation for patients with end-stage chronic obstructive pulmonary disease

Purpose of the review
The number of patients with end-stage chronic obstructive pulmonary disease (COPD) treated with chronic non-invasive ventilation (NIV) has greatly increased. In this review, the authors summarize the evidence for nocturnal NIV and NIV during exercise. The authors discuss the multidisciplinary and advanced care of patients with end-stage COPD treated with NIV.

Recent findings
Nocturnal NIV improves gas exchange, health-related quality of life and survival in stable hypercapnic COPD patients. Improvements in care delivery have been achieved by relocating care from the hospital to the home; home initiation of chronic NIV is feasible, non-inferior regarding efficacy and cost-effective compared to in-hospital initiation. However, the effect of NIV on symptoms is variable, and applying optimal NIV for end-stage COPD is complex. While exercise-induced dyspnoea is a prominent complaint in end-stage COPD, nocturnal NIV will not change this. However, NIV applied solely during exercise might improve exercise tolerance and dyspnoea. While chronic NIV is often a long-standing treatment, patient expectations should be discussed early and managed continuously during the treatment. Further, integration of advance care planning requires a multidisciplinary approach.

Summary
Although chronic NIV is an effective treatment in end-stage COPD with persistent hypercapnia, there are still important questions that need to be answered to improve the care of these severely ill patients.

INTRODUCTION

Patients with end-stage chronic obstructive pulmonary disease (COPD) experience severe and often disabling respiratory symptoms and poor exercise capacity, health-related quality of life (HRQL) and survival. At this stage, patients may develop chronic hypercapnic respiratory failure (CHRF). Chronic non-invasive ventilation (NIV) applied nocturnally at the patient's home is an effective treatment for patients with CHRF. In recent years, NIV has gained acceptance as a treatment for patients with CHRF due to end-stage COPD. Guidelines now recommend NIV for patients with COPD and persistent hypercapnia [1,2]. This has resulted in significant increases in the number of patients with COPD treated with home NIV. Nevertheless, chronic NIV poses several challenges to make it worthwhile. First, not all patients benefit from this treatment, and there is a lack of knowledge about what predicts a beneficial effect. Second, titration of chronic NIV in this patient group is complex, as ventilation needs to be improved in a diseased lung. Third, starting chronic NIV in these end-stage patients often means that NIV is continued until death, demanding a multidisciplinary approach integrated with advance care planning (ACP).
For this review, we conducted a literature search on (a combination of) the following terms: 'COPD', 'non-invasive ventilation', 'patient perspectives', 'patient experiences', 'palliative care' and 'ACP'. Relevant papers published in the past 3 years were selected, as well as older papers based on expert opinion. We first summarize the evidence for chronic NIV in patients with COPD. Second, we discuss intermittent NIV to relieve dyspnoea during exertion. In the second half of the review, we focus on the care for patients with COPD on chronic NIV. Is it useful to combine treatment options? What are the experiences of patients with chronic NIV, and how can ACP be incorporated into the repeated contacts caregivers have with patients? Finally, we discuss the need for integrating NIV with palliative care, including ACP.

SUMMARY OF THE EVIDENCE FOR CHRONIC NON-INVASIVE VENTILATION

For a long period, chronic nocturnal use of NIV was not regarded to be beneficial in patients with CHRF due to COPD [3-7]. However, in the last 15 years, several studies have shown meaningful benefits when chronic NIV targets normocapnia during the night, so-called high-intensity NIV [8]. A recent Cochrane systematic review showed that in patients with COPD and persistent hypercapnia in a clinically stable disease phase, chronic NIV improves gas exchange [9]. The improvement in gas exchange seems to be dependent on ventilatory settings and treatment adherence, with greater improvements in gas exchange achieved with higher inspiratory positive airway pressure and better treatment adherence [9]. More importantly, patient-related outcomes, such as survival and HRQL, are improved by chronic NIV [9-12].

Beneficial effects can be expected on complaints related to nocturnal hypoventilation, such as bad sleep, fatigue and impaired mental performance [13]. Unfortunately, NIV seems to be unable to improve dyspnoea during the daytime while breathing without NIV [8]. In contrast to stable hypercapnic COPD, the benefits of continuing NIV at home after an acute exacerbation are less evident [8]. In this population, chronic NIV seems to prolong admission-free survival, especially in patients with severe persistent hypercapnia more than 2 weeks after the acute event [8,14], but an effect on exacerbations and HRQL was lacking. This differential response between stable and post-exacerbation COPD has not been clarified yet. A possible explanation might be that in the post-exacerbation population, patients are extremely vulnerable and require multiple combined interventions to reduce exacerbations and improve their HRQL.

To date, patient-related factors associated with a beneficial effect have not been found. Recently, interest in clinical phenotyping of patients with end-stage COPD has increased. Janssens et al.
[15▪] conducted a cluster analysis on a cohort of patients with COPD treated with chronic NIV and identified two distinct phenotypes: a respiratory phenotype, which included patients with a low body mass index and severe airway obstruction, and a systemic phenotype, which included patients with a higher body mass index and more comorbidities. Interestingly, survival was better in the systemic phenotype, but it is not clear whether this is due to a worse response to NIV or due to characteristics of the disease that are associated with a worse outcome. Future studies should focus on the identification of a phenotype associated with a beneficial effect on patient-centred outcomes ('responder phenotype') to further optimize patient selection for NIV.

Box: knowledge gaps and clinical implications. Knowledge gap: What are the characteristics of patients with COPD that benefit the most from NIV? Clinical implication: Better selection of patients who benefit the most from chronic NIV will lead to better outcomes.

NON-INVASIVE VENTILATION DURING EXERCISE FOR SYMPTOM RELIEF

Exercise training is a key component to maintain exercise capacity. A greater physiological response is achieved with training at high intensity, but patients with severe COPD are often incapable of achieving these high intensities due to dynamic hyperinflation and hypoxaemia, resulting in early lactatemia [16]. Potential mechanisms by which NIV might improve exercise tolerance are respiratory muscle unloading, improving oxygenation due to a better ventilation-perfusion ratio and reducing hyperinflation [16,17▪]. A recent meta-analysis included 15 randomized controlled trials (RCTs), which studied the effect of NIV during a training exercise programme of 4-12 weeks [18▪], and concluded that after the training programme, the 6-min walking distance was higher in the group that trained with NIV, and experienced dyspnoea was lower during NIV use. However, there was no effect on dyspnoea after the programme. In most studies, NIV was well tolerated during exercise. The vast majority of the studies investigated patients naïve to NIV, limiting the generalizability to patients who are already familiar with nocturnal NIV. Two studies have investigated the acute effects of high-pressure NIV during exercise in patients who had already been initiated on nocturnal NIV. In both studies, NIV during exercise resulted in better exercise capacity and a reduction of dyspnoea [19,20], and the NIV group had less exercise-induced hypercapnia compared to the control group that trained with oxygen [20]. To conclude, there is evidence that the use of NIV during exercise relieves dyspnoea and improves gas exchange and exercise capacity when applied with sufficient inspiratory pressures, but data are sparse. It remains unclear whether NIV has carryover effects on symptoms and endurance during exercise without NIV. Larger studies are needed in this specific population that incorporate NIV in exercise training programmes.
MULTIDISCIPLINARY APPROACH

COPD is a heterogeneous disease, and hypercapnia is only one of the various treatable traits. Therefore, optimal treatment of patients with severe COPD should include a combination of treatment options, combined or initiated sequentially [21]. Besides general recommendations such as smoking cessation, nutritional support, sufficient physical activity and optimal pharmacological treatment, in this end-stage COPD population, bronchoscopic lung volume reduction, multidisciplinary pulmonary rehabilitation and/or lung transplantation might be worth considering [22]. NIV may be of use for palliation of severe dyspnoea at end-stage disease without CHRF, but there is no literature to support this hypothesis. In some cases, chronic NIV may be useful as an add-on treatment. Severe hypercapnia is a relative contraindication for endobronchial valves, but valves may be considered after the initiation of NIV has improved gas exchange [23,24]. Secondly, there is evidence that the initiation of NIV prior to pulmonary rehabilitation is beneficial to the outcomes of the rehabilitation. The benefits achieved by rehabilitation on exercise capacity and HRQL seem to be better maintained when chronic NIV is subsequently continued at home, at least for patients with CHRF [11,25].

INITIATION AND MONITORING

Historically, there has been high heterogeneity throughout Europe in the place where chronic NIV is initiated and where it is subsequently monitored [26-29]. For many years, it was believed that the initiation of chronic NIV required hospital admission [28]. This is especially true for patients with COPD, who generally are older and require higher ventilatory pressures to improve their ventilation. However, in-patient initiation and follow-up monitoring require substantial in-hospital healthcare resources, generate excessive costs, and often place a high burden on these severely ill patients.

In recent years, interest in outpatient and home initiation and monitoring has increased, and monitoring opportunities using telemonitoring are rapidly evolving. In a European survey conducted in 787 patients using NIV due to different diseases, the majority of patients would consider telemonitoring [30]. Recently, three RCTs have shown that home initiation of NIV using extensive telemonitoring is feasible, safe and non-inferior to in-hospital initiation, both in patients with neuromuscular diseases and in patients with COPD [29,31,32]. As may be expected, the cost of home initiation was over 50% lower compared to in-hospital initiation.

To ensure that NIV is applied effectively and goals are achieved, regular monitoring of ventilator data (compliance, leakage, obstructions and patient-ventilator synchrony), nocturnal gas exchange and side effects is necessary [33]. The frequency of monitoring is a subject of debate. Recently, attention on more frequent (daily or weekly) remote monitoring of patients on home ventilation has greatly increased [34▪]. Remote monitoring will personalize the follow-up, thereby preventing unnecessary and challenging hospital visits when patients are stable and intensifying the follow-up of deteriorating patients that might need intercurrent adjustments of ventilator settings (Fig. 1) [34▪,35]. Moreover, healthcare systems taking care of these patients should be organized in a way that grants easy access to technical and clinical support 24 h a day, 7 days a week, when needed.
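To make the idea of personalized remote follow-up concrete, here is a minimal rule-based sketch; the field names and thresholds are invented for illustration and are not clinical recommendations from this review.

```python
# Illustrative sketch only: rule-based triage of remotely monitored
# ventilator data. Thresholds and fields are hypothetical, not clinical.
from dataclasses import dataclass

@dataclass
class NightlySummary:
    usage_hours: float        # ventilator use per night (adherence)
    mean_leak_l_min: float    # unintentional mask leakage
    tc_co2_kpa: float | None  # nocturnal transcutaneous CO2, if measured

def needs_review(night: NightlySummary) -> list[str]:
    """Return reasons to intensify follow-up; an empty list means the
    routine, less frequent schedule can continue."""
    reasons = []
    if night.usage_hours < 4.0:
        reasons.append("low adherence")
    if night.mean_leak_l_min > 40.0:
        reasons.append("excessive mask leak")
    if night.tc_co2_kpa is not None and night.tc_co2_kpa > 7.0:
        reasons.append("persistent nocturnal hypercapnia")
    return reasons

print(needs_review(NightlySummary(3.2, 48.0, 7.4)))
```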
For patients, the utmost goal of chronic NIV is to improve their HRQL. Both for clinical care and research, symptoms can be assessed systematically by using validated questionnaires, such as the Severe Respiratory Insufficiency questionnaire, the Maugeri Respiratory Failure questionnaire or the S3-NIV questionnaire [36-39]. The S3-NIV questionnaire addresses both symptoms of respiratory failure and side effects of the NIV. For use in clinical practice, both technological advances (such as the development of an application) and the use of a short, self-administered questionnaire like the S3-NIV seem to be useful tools.

PATIENT EXPERIENCES

For a successful and satisfying therapy, it is extremely important to manage patient expectations. In patients with progressive disabling diseases like severe COPD, it is of utmost importance to define the goals of therapy. Most patients will strive for a reduction in symptoms, a better HRQL and a reduction in exacerbations or hospitalizations, and do not uniformly strive for a longer life. Variable survival rates have been reported in patients with COPD on chronic NIV, but on average, survival is shorter compared to patients with slowly progressive neuromuscular disease or obesity-hypoventilation syndrome (reported median survival ranges from 2.7 to 4.4 years; reported 1-year survival ranges from 77 to 88%) [10,40]. These findings stress the importance of discussing goals and expectations at the beginning of NIV and repeatedly during follow-up.

There is limited literature on the experiences of patients once they have started chronic NIV, and there is no information on the expectations of patients prior to NIV initiation. The survey by Masefield et al. [30] found that patients consider mask-related factors, such as leaks and comfort, to be the most important aspects of the treatment. A qualitative study by Caneiras et al. [41▪] on the experiences of patients treated with home NIV (18 patients, 50% with COPD) found that most patients experience benefits of NIV on their symptoms. However, patients described the initial period as frightening and difficult due to adverse events and the impact on their daily lives. Although, in general, these feelings resolved when benefits were subjectively perceived, the limitations on daily life persisted. This finding emphasizes the importance of thorough patient education prior to initiation and of extensive support during the initial period. Unfortunately, predicting the individual's response to NIV remains difficult, as a 'responder phenotype' has not been defined. This complicates the management of patients' expectations.

ADVANCE CARE PLANNING IN PATIENTS WITH CHRONIC OBSTRUCTIVE PULMONARY DISEASE TREATED WITH CHRONIC NON-INVASIVE VENTILATION

A majority of patients with COPD who have successfully started on NIV will be ventilated until they die. This means that repeated discussions are needed with patients on how to deal with end-stage disease. At the end stage, symptoms often deteriorate, resulting in increasing hours of ventilator use, which require more frequent healthcare contacts. We suggest a shift of focus from monitoring of efficacy to the comfort of NIV (Fig. 1).
Also, it is extremely important to discuss expectations and the currently experienced HRQL, and to integrate ACP. ACP has been defined as an individual's ability to define goals and preferences for their future treatment and to discuss these with their close ones and healthcare professionals [42]. Even though ACP is recommended by several guidelines [22,43,44 ▪▪], it is still uncommon in patients with severe respiratory diseases [45,46]. Palliative care is part of ACP and was recently defined in patients with COPD as a holistic, multidisciplinary, person-centred approach aiming to control symptoms, improve HRQL and support patients' informal caregivers [44 ▪▪]. Although this was based on low to very low quality of evidence, there were no undesired effects due to palliative care interventions. Limiting factors for performing ACP seem to be lack of time and the difficulty in predicting the disease course [47,48]. Further, even though palliative care specialists have good knowledge of ACP, they lack information on ACP specifically for patients with end-stage COPD [49]. With a progressing disease, ACP should be integrated early enough to know patients' perspectives. Even though starting chronic NIV might lead to improved HRQL, more stable disease and sometimes better survival, palliative care should be part of the treatment, as patients at this stage suffer from a wide variety of severe symptoms [50,51]. Unfortunately, in a study performed in Finland, end-of-life decisions were made in only 39% of the patients with end-stage COPD, and only 23% of the patients in their cohort died at home [52]. The likelihood of dying at home may be increased by palliative care [53]. Besides, specialist palliative care consultations were associated with reduced emergency room visits and hospital days during the last year of life in patients with end-stage COPD [54]. This highlights the importance of discussing not only intubation and resuscitation orders but also of taking a broader view on advance care discussions to avoid unnecessary hospitalization. Given the poor survival of patients with COPD treated with chronic NIV, ACP should be started at the latest when NIV is initiated and should be discussed continuously throughout the treatment (Fig. 1) [55,56]. This ensures a timely transition to palliative care for comprehensive symptom palliation, also concerning symptoms beyond NIV.

The difficult question is when to proceed to the terminal stage. Increasing hours of NIV use (e.g. during the daytime), worsening of gas exchange and symptoms despite high ventilatory settings, and frequent hospitalizations are signs that may indicate the start of the terminal stage. At this stage, some patients may choose to stop NIV, as it no longer leads to sufficient improvement, whilst others stick to nocturnal use. Some patients will increase their ventilator use up to 24 h per day to relieve symptoms and still experience a satisfactory HRQL. It remains key to provide care from a multidimensional perspective; by combining non-pharmacological (including NIV) and pharmacological treatment options to reduce symptoms, a satisfactory end-of-life period should be achievable.
CONCLUSION

In patients with CHRF due to COPD, chronic NIV is an effective treatment to improve gas exchange, HRQL and survival when patients are clinically stable. Unfortunately, the characteristics of patients who will benefit the most have not been identified, which complicates the selection of the most suitable patients and the management of patient expectations. The initiation of NIV should be a trigger to initiate ACP, which could still be implemented more broadly to promote better symptom palliation and end-of-life care and to avoid undesirable hospitalizations or treatments. As patient numbers are expected to rise in the coming years, answering these questions will certainly result in better care for these severely ill patients.

FIGURE 1. Comprehensive treatment of patients with chronic respiratory failure due to COPD in the end stage of disease. ACP, advance care planning; NIV, non-invasive ventilation. Notes: *, represents an acute exacerbation of COPD. Location of care: home, in hospital, outpatient based, nursing home or hospice.
Analysis of factors associated with the first lumpy skin disease outbreaks in naïve cattle herds in different regions of Thailand

Introduction: Thailand experienced a nationwide outbreak of lumpy skin disease (LSD) in 2021, highlighting the need for effective prevention and control strategies. This study aimed to identify herd-level risk factors associated with LSD outbreaks in beef cattle herds across different regions of Thailand.

Methods: A case–control study was conducted in the upper northeastern, northeastern, and central regions, where face-to-face interviews were conducted with farmers using a semi-structured questionnaire. Univariable and multivariable mixed effect logistic regression analyses were employed to determine the factors associated with LSD outbreaks. A total of 489 beef herds, including 161 LSD outbreak herds and 328 non-LSD herds, were investigated.

Results and discussion: Results showed that 66% of farmers had operated beef herds for more than five years. There were very few animal movements during the outbreak period. None of the cattle had been vaccinated with LSD vaccines. Insects that have the potential to act as vectors for LSD were observed in all herds. Thirty-four percent of farmers had implemented insect control measures. The final mixed effect logistic regression model identified herds operating for more than five years (odds ratio [OR]: 1.62, 95% confidence interval [CI]: 1.04–2.53) and the absence of insect control management on the herd (OR: 2.05, 95% CI: 1.29–3.25) as factors associated with LSD outbreaks. The implementation of insect-vector control measures in areas at risk of LSD, especially for herds without vaccination against the disease, should be emphasized. This study provides the first report on risk factors for LSD outbreaks in naïve cattle herds in Thailand and offers useful information for the development of LSD prevention and control programs within the country's context.

Introduction

Lumpy skin disease (LSD) is a highly contagious viral disease that primarily affects cattle. It is caused by the lumpy skin disease virus (LSDV), a member of the Capripoxvirus genus (1). Clinical manifestations of LSD can include fever, loss of appetite, and general weakness. The most notable feature, however, is the appearance of the characteristic skin nodules. These nodules can occur on various parts of the body, including the head, neck, limbs, and genital areas (2). In severe cases, the nodules may become ulcerated, leading to secondary bacterial infection. LSD poses significant economic implications for cattle populations. In affected herds, the morbidity rate can vary widely, ranging from 3 to 85%, depending on the susceptibility of cattle and other factors (3, 4). The mortality rate is typically lower than 3% (3), but in some cases it may exceed 40% (5). The disease can have devastating effects on the livestock industry, leading to substantial economic losses. The World Organization for Animal Health (WOAH) has defined LSD as a disease requiring notification due to the potential for rapid virus propagation in susceptible cattle populations and the consequential considerable economic effects in affected herds (6, 7).
While LSD was previously confined to Africa with occasional incursions into the Middle East, recent outbreaks have raised concerns about its emergence and rapid spread in Asia (4, 8–14). Thailand, being a significant hub for livestock production and trade in Southeast Asia, has also experienced the impact of LSD outbreaks since March 2021 (15). The disease was initially detected in the cattle farming regions located in the northeastern part of Thailand (9). Later, outbreaks of LSD were reported across the country. There were 283,213 affected herds with 628,089 cases across 64 provinces as of June 30, 2022 (12).

Various risk factors associated with LSD outbreaks in endemic settings have been identified (16, 17). The movement of infected animals is considered a significant factor in facilitating long-range transmission, whereas arthropod-borne transmission is likely to be the primary mechanism responsible for the rapid and aggressive spread of the disease over short distances (18). The predominant blood-feeding arthropod vectors for LSD are stable flies (Stomoxys calcitrans), mosquitoes (Aedes aegypti), and hard ticks (Rhipicephalus and Amblyomma species) (1). Furthermore, cattle breed, source of replacement stock, introduction of new animals, herd size, communal grazing and watering management, and housing were identified as potential risk factors for LSD outbreaks in previous studies (16, 19–22). Moreover, management type, gender, age, precipitation, and intake from community water sources have been determined to be risk factors for LSD (23). However, there is a notable research gap regarding the specific risk factors for LSD in the context of Thailand.

Understanding the risk factors associated with the occurrence of LSD is crucial for effective prevention and control strategies. Identifying and quantifying these factors can aid in the development of targeted interventions, including vaccination campaigns, vector control measures, and improved biosecurity practices. Therefore, this study aims to determine the risk factors contributing to the occurrence of LSD outbreaks in naïve cattle herds in various regions of Thailand. The findings from this study have the potential to significantly advance the development of targeted control measures and policies. Ultimately, this will lead to improved management and prevention of the disease. The outcomes of this study may also contribute to the existing body of knowledge on LSD risk factors, potentially benefiting other countries facing similar challenges.

Study population and sampling

This case–control study was conducted in three provinces of Thailand: Nakhon Phanom, Buriram, and Prachuap Khiri Khan (Figure 1). The study took place from July to September in Nakhon Phanom, and from August to September in both Buriram and Prachuap Khiri Khan, all in the year 2021. It is important to note that the questionnaire survey was not conducted during the outbreak period, as the primary investigation prioritized the outbreak investigation protocol carried out by livestock authorities in each area. It is noteworthy that the surveys in all three provinces were carried out approximately 2 months after the latest herd had a confirmed LSD outbreak. Furthermore, the study focused on households that owned cattle as the primary unit of analysis. To ensure representative samples, a multi-stage sampling technique was employed.
Initially, the selection of provinces was purposive and based on collaboration between central and local veterinary authorities. Subsequently, within each province, three districts were chosen using a simple random sampling approach. Furthermore, subdistricts within each district were randomly selected. The case herds in this study were identified based on the official outbreak investigation reports issued by local veterinary authorities in each subdistrict. In each subdistrict, all LSD outbreak herds were included in the study. Control herds were randomly selected from herds located in the same sub-village as the case herds. A ratio of approximately 1:2 for case to control herds was applied. As a result, the total number of herds included in this study for Nakhon Phanom, Buriram, and Prachuap Khiri Khan provinces was 159, 180, and 150, respectively.

Case and control definitions

The cattle herd served as the epidemiological unit. A case herd, or LSD-outbreak herd, was defined as a herd with at least one individual animal showing LSD clinical signs, which include raised, circular, firm nodules varying from 1 to 7 cm in diameter, as observed by investigators from the Department of Livestock Development (DLD) (9). Confirmation of the disease could be obtained through laboratory testing using the polymerase chain reaction (PCR) method (12), although this was not always a prerequisite. A control herd, or non-LSD outbreak herd, was defined as a beef cattle herd located in the same village and/or subdistrict as the case herds. The control herds must not have had any history of clinical LSD among their animals. The historical records of LSD outbreaks were cross-checked with information provided by farmers and local veterinary authorities during the questionnaire survey.

Questionnaire survey

The semi-structured questionnaire utilized in this study was developed collaboratively by veterinary experts from the DLD and epidemiologists from the Regional Field Epidemiology Training Program for Veterinarians (R-FETPV), supported by the Food and Agriculture Organization of the United Nations (FAO). Several questions in the questionnaire were adopted from the official outbreak investigation form employed for nationwide investigations of LSD. Data collection was carried out by livestock and veterinary authorities. In cases where data were incomplete, follow-up telephone interviews were conducted to gather the necessary information.

Hierarchical structure of the data

The data are organized into a hierarchical structure, wherein they are structured into multiple levels or layers, with each level representing distinct units of study. Within the study's dataset, farms are grouped into clusters within districts, and these districts, in turn, are clustered within provinces. This hierarchical arrangement is reflected in the statistical analyses.

Statistical analysis

Descriptive analysis

Descriptive statistics, including the mean and standard deviation for quantitative variables, as well as frequencies (expressed as percentages) for qualitative variables, were calculated using R version 3.6.2 (https://www.r-project.org).
Univariable mixed effect logistic regression analysis

The mixed effect univariable logistic regression model used in this study incorporated both fixed and random effects. Each potential risk factor was defined as a fixed effect, while the individual district was included as a random effect (24). To account for the clustering of districts within provinces, a factor named "province" was included in the univariable and multivariable logistic models as a fixed effect (25). The odds ratio and p-value were determined based on Wald's test.

Subsequently, risk factors with a p-value less than 0.2 were selected for further analysis using a mixed effect multivariable logistic regression. The objective of this step was to select factors that have a significant association with the outcome while accounting for potential confounding variables. Multicollinearity between variables was also examined using Cramér's phi-prime statistic. A pair of categorical variables was considered collinear if the Cramér's phi-prime statistic was greater than 0.7 (24, 26).

Multivariable mixed effect logistic regression analysis

Model

In the mixed effect multivariable logistic regression model, the potential risk factors were considered as fixed effects, while the individual district was defined as a random effect, similar to a previous study (21). The models also incorporated the variable "province" as a fixed effect, as suggested in the literature (25). The statistical model can be expressed as follows (27):

logit(P(y_ij = 1)) = β_0 + Σ_k β_k x_k,ij + u_district(j)

where y_ij is the outbreak status (1 = outbreak or 0 = non-outbreak) of a herd i clustered in district j, and x_k,ij (k = 1, ..., K) is the set of fixed effect factors. The term β_0 represents the intercept, and β_k is the regression coefficient for the k-th fixed effect factor. The term u_district(j) is the random effect on the intercept for the district j which includes herd i. It was assumed that u_district(j) ~ N(0, σ_j²). The error terms ε_ij are assumed to follow a logistic distribution with mean zero and variance π²/3.

Model selections

Model selection was performed using a backward stepwise method. Akaike's Information Criterion (AIC) was utilized as the criterion for selecting the most appropriate model (23, 28–31). Interactions between variables were also examined during the model selection process. If the inclusion of an interaction term did not improve the model, the interaction term was removed from the model.

Confounding was assessed by examining the change in the estimated coefficients of the variables that remained in the final model upon the addition of a non-selected variable. If the inclusion of this new variable resulted in a change of >25% in any parameter estimate, that variable was deemed a confounder and retained in the model (24, 26).

Evaluation of multicollinearity and model assumptions

After identifying the final model, an assessment of multicollinearity was conducted by examining the variance inflation factor (VIF) values. The VIF represents the ratio of the overall variance in the model to the variance when a specific single variable is included. A VIF value below 5 indicates no evidence of multicollinearity among the variables included in the final model (32). Additionally, residual diagnostics for the final mixed effect model were evaluated.
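For readers who prefer to prototype this specification outside R, a minimal sketch follows. It assumes a herd-level data frame with hypothetical column names (outbreak, years_over5, no_insect_control, province, district) and uses statsmodels' variational-Bayes mixed GLM as an approximate stand-in for the lme4::glmer fit actually used in the study.

```python
# Sketch of a random-intercept logistic model like the one specified above.
# Column names and the input file are hypothetical; the study itself used
# R's lme4::glmer, so this is only an approximate Python analogue.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("lsd_herds.csv")  # hypothetical herd-level data set

model = BinomialBayesMixedGLM.from_formula(
    "outbreak ~ years_over5 + no_insect_control + C(province)",  # fixed effects
    {"district": "0 + C(district)"},  # random intercept per district
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())

# Odds ratios are the exponentiated fixed-effect coefficients.
odds_ratios = np.exp(pd.Series(result.fe_mean, index=model.exog_names))
print(odds_ratios)
```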
In the final model, odds ratios and their corresponding 95% confidence intervals were calculated for each variable.

Intra-class correlation

For the final model, we considered the variance components of the random effect, dividing them into two levels based on their origin. The first level variance is equivalent to π²/3 on the logit scale and represents the error variance in the binary model. The second level variance is the variance of the random intercept, which changes based on the district's effect and is symbolized as σ_j². To illustrate these variances, we calculated the intra-class correlation (ICC), using the following formula (25):

ICC = σ_j² / (σ_j² + π²/3)

A low ICC indicates minimal clustering, as most of the variance is found within individual districts. In contrast, a high ICC means that there is less variation within a district when compared to the variation observed between the different districts (33).

The mixed effect logistic regression was conducted using the "glmer" function from the "lme4" package. To assess the variance inflation factors (VIF), the "vif" function from the "car" package was employed. The diagnostics of residuals were carried out using the "DHARMa" package. The ICC was obtained from the "mlmhelpr" package.

Respondent and management practices

A total of 161 LSD-outbreak herds and 328 non-LSD outbreak herds from three provinces in Thailand participated in this study. The provinces included Buriram (n = 180), Nakhon Phanom (n = 159), and Prachuap Khiri Khan (n = 150). The average age of the participants was 54 years in the case group and 53 years in the control group. Males constituted approximately 72% of the respondents in both groups (Table 1). Most respondents in both groups had a primary education. The average duration of herd operation was 7.7 years, with a median of 5 years. The average number of cattle per herd was 4.6, with a median of 5 animals. The majority of herds (90%) had facilities for keeping cattle in stalls.

Farm characteristics and management practices for the herds included in this study are summarized in Table 1. Out of all the herds investigated, only eleven herds had a history of purchasing cattle from other herds and transporting them to their own facilities. All herds examined reported the presence of stable flies or mosquitoes or both. Notably, none of the herds had a history of using LSD vaccines. Additionally, the data highlight that 36% of herds with an operational history exceeding 5 years experienced LSD outbreaks, while the percentage was lower, at 25%, for herds operated for 5 years or less (Table 2). Insect control measures had been adopted by 34% of farmers. Among those who did not implement these measures, 40% experienced an LSD outbreak, while only 28% of farmers who employed such control measures encountered outbreaks (Table 2).

Risk factors

The risk factors for LSD outbreaks identified in this investigation, as determined by univariable logistic regression, are presented in Table 2. The analysis revealed that the number of years in operation and the absence of vector management on the herd were associated with LSD outbreak status.

In the final multivariable mixed effect logistic regression model (Table 3), results showed that cattle herds operating for more than five years had 1.62 times greater odds of experiencing an LSD outbreak (OR = 1.62; 95% CI = 1.04–2.53) than those operating for fewer years. Furthermore, herds that did not implement insect vector control measures had 2.05 times greater odds of being affected by LSDV (OR = 2.05; 95% CI = 1.29–3.25) compared to those implementing these control measures.
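To make the ICC definition above concrete, the computation reduces to a one-line function; a minimal sketch follows (variable names are ours):

```python
import math

def icc_logistic(sigma2_between: float) -> float:
    """ICC for a random-intercept logistic model: between-district variance
    divided by total variance, with the within-district (level-1) variance
    fixed at pi^2/3 on the logit scale."""
    return sigma2_between / (sigma2_between + math.pi**2 / 3)

# Round-trip check against the ICC of 0.09 reported below: a between-district
# variance of about 0.33 on the logit scale yields an ICC of roughly 0.09.
print(round(icc_logistic(0.33), 2))  # 0.09
```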
During the model selection step, no significant interaction term was identified in the final model. Furthermore, there was no evidence of multicollinearity among the variables included in the final model, as all variables had VIF values of less than 1.04. The ICC from the final model was equal to 0.09, indicating that the variation between districts was small compared to the variation within districts.

Results related to the residual diagnostics for the final mixed effect model, including a QQ plot of residuals and a plot of residuals against predicted values, are displayed in Supplementary Figure S1. The results demonstrate a lack of violations of the model assumptions.

Discussion

This study aimed to identify the risk factors associated with LSD outbreaks in naïve beef cattle herds located in the upper northeastern, northeastern, and central regions of Thailand. This research is an integral component of a national project that seeks to comprehend the epidemiology of LSDV, which has caused a significant outbreak in the country. The findings from this study hold the potential to contribute valuable insights to the national strategy for disease prevention and control.

Blood-sucking insects play a significant role in the mechanical transmission of LSDV (34–36). Various blood-sucking arthropods, such as mosquitoes (Aedes aegypti), stable flies (Stomoxys calcitrans), horn flies (Haematobia irritans), house flies (Musca domestica), and hard ticks (Dermacentor marginatus, Hyalomma asiaticum, Rhipicephalus appendiculatus, Rhipicephalus decoloratus, and Amblyomma hebraeum), have previously been identified as potential transmitters of LSDV (37–39). Additionally, recent studies have confirmed that LSDV can be transmitted by insect vectors from animals infected with LSDV to animals that are susceptible to the disease (34, 40, 41). Based on the mixed effect logistic regression analysis, this study identified the lack of vector control on herds as a significant risk factor for LSD outbreaks. In other words, herds of farmers who did not apply insect vector control practices had 2.05 times greater odds of an LSD outbreak than herds of farmers who did apply such practices. This finding supports the results of a previous investigation conducted in other areas of Thailand (9), which reported that naïve cattle herds affected by LSD were primarily characterized by suboptimal insect control measures. Furthermore, all cattle herds in the present study were found to harbor insects that could potentially act as vectors for LSD. Thus, with inefficient insect vector control, the transmission of LSD in the naïve herds in this study is likely due to insect vectors. This speculation is supported by previous spatial epidemiological studies conducted in Thailand reporting that insect vectors play a crucial role in LSD outbreaks in cattle farming areas where herds are closely situated or in regions with a high concentration of cattle herds (9, 42, 43). In addition to the findings of the current study, a study conducted in Thailand employing transmission kernel analysis similarly affirms that herd-to-herd transmission in LSD outbreak areas occurs within short distances, with the estimated range falling between 0.2 and 0.8 kilometers (44). This discovery emphasizes the pivotal role that insects may play as significant vectors in transmission among cattle herds. Furthermore, aligning with the outcomes of our study, the
absence of insect vector control measures on farms emerges as a notable risk factor for LSD outbreaks in Indonesia. That investigation demonstrated that farms without insect vector control measures had 8.6 times greater odds (OR = 8.6) of experiencing an LSD outbreak compared to those implementing such measures (45). The impact of insect vectors on LSD transmission has also been observed in different settings. For example, in Sub-Saharan Africa, LSD outbreaks are typically observed following the rainy season, when insect populations increase (46). A study conducted in Israel also demonstrated a correlation between the relative abundance of insect vectors in December and April and LSD outbreaks (47). Similarly, in various regions of Nepal, LSD outbreaks were reported during the rainy season (June to August), indicating a link to the increased population of arthropods in the area (48). Furthermore, in terms of implications, eliminating insects on a large scale is deemed impossible due to the common abundance of insect vectors in cattle farming areas throughout the year in Thailand (9). We recommend concentrating on measures to manage and mitigate the role of disease-transmitting vectors. This includes controlling breeding sites for insects, such as standing water and cattle manure. Additionally, the application of insecticides for vector control may be considered, but caution is advised, taking into account potential impacts on human health and the environment.

The present study also showed that herds operating for more than five years had higher odds of experiencing LSD outbreaks compared to herds operating for fewer than five years. However, it is challenging to explain this finding. Although we examined the association between the total years of operation and other variables such as insect control and farm biosecurity, none of the pairs demonstrated a significant association. We hypothesize that farmers who possess over five years of experience may exhibit different farming practices in comparison to other groups of farmers. For example, these individuals may be less likely to obtain news or updates through online channels, which served as a primary means of disseminating information regarding the LSD outbreaks in Thailand (12). To address this knowledge gap, a follow-up study should be conducted to investigate this factor. Additionally, further investigation is necessary to examine other risk factors that were not considered in this study.

Purchasing and selling animals during LSD outbreaks were determined to be important risk factors for LSD outbreaks according to studies in Kazakhstan (21) and Indonesia (45). These factors were not identified as risk factors in this study. Strict animal movement restrictions to mitigate the spread of LSD in Thailand were implemented during the course of this research. Only 2% of the cattle herds included in this study had a history of animal movement, limiting the evaluation of its impact on the occurrence of LSD. Another risk factor linked to the incidence of LSD was the size of the herd. Larger herds have been found to have a higher risk of LSD infection, which can be attributed to factors such as stressful conditions, increased likelihood of exposure to the LSD virus, and greater possibilities for disease transmission (49). However, in this study, herd size was found to be less significant, mainly because most herds were small, typically consisting of around 5 cattle each, as they were predominantly owned by small-scale farmers.
Based on the findings of this study, it is recommended to implement insect control measures in LSD outbreak areas where no LSD vaccine is available, particularly for naïve herds. For herds that have been vaccinated against LSD, the use of insecticides can be an additional option, taking into account factors such as the abundance of insect vectors, the effectiveness of insecticide application, and economic considerations (7). It is also important to point out that the source of the LSD outbreaks in the study areas was not determined. While the results suggest no correlation between LSD outbreaks and animal movements such as buying animals from other herds, it is crucial to remember that a small number of herds included in the study did have a history of animal movement. Thus, the sample size might not be large enough to fully examine the impact of this variable. In the study areas, we hypothesize that the origin of the LSD outbreaks could be the unauthorized movement of LSDV-infected cattle into the affected regions. Alternatively, insects carrying LSDV might have been introduced to the study areas either by flying or by being transported by vehicles from other outbreak areas. Once an outbreak occurred, the spread of LSDV was likely aided by the high abundance of insect vectors in the outbreak regions, as suggested by previous studies (9, 12).

This study is subject to certain limitations. As it relied on a questionnaire survey, there is a possibility of recall bias and information bias, which are inherent to this type of study design. Furthermore, the presence of similar management factors in both outbreak and non-outbreak herds, as these practices were implemented in both types of herds, poses challenges for statistical comparisons. Moreover, it should be noted that the diagnosis of LSD was primarily based on clinical signs, and as a result, subclinical cases may be included in the control group. However, given that most herds were naïve, cattle affected by LSDV would likely exhibit clinical signs of the disease (9). Therefore, the occurrence of subclinical cases in the control herds is less likely, but it should still be acknowledged as a limitation. Furthermore, it is important to note that the study was conducted only in naïve herds. Therefore, interpretations of the results should take this condition into consideration.

Despite certain limitations, this study represents the first investigation of potential risk factors for LSD outbreaks in Thailand. The research was conducted across multiple sites throughout the country, providing a more comprehensive understanding compared to a study limited to a single area. Also, the sample size falls within the range of previously reported studies, being larger than some conducted to determine risk factors for LSD (16, 19, 20, 45, 50). Additionally, the statistical models employed in this study accounted for the hierarchical effects of herds nested within each site or district.
Conclusion

This study investigated the risk factors associated with LSD outbreaks in beef cattle herds in Thailand. The results revealed that herds operating for more than five years had a higher likelihood of experiencing LSD outbreaks. Additionally, herds without effective vector management practices were found to be at greater risk of LSD outbreaks. These findings highlight the importance of implementing insect-vector control measures in LSD-risk areas, especially for herds that have not been vaccinated against LSD. This study is a significant contribution to the understanding of LSD outbreaks in Thailand. It was conducted across multiple sites. The findings can serve as guidance for managing LSD in naïve cattle herds in various settings.

Data availability statement

The data analyzed in this study are subject to the following licenses/restrictions: the data used in this study are derived from lumpy skin disease outbreak investigations carried out by the Department of Livestock Development (DLD), Thailand, and are therefore not publicly accessible. Requests to access these datasets should be directed to the Department of Livestock Development (DLD), Thailand, email: dld.info@ac.th.

FIGURE 1. Map of Thailand displaying the study areas (orange), located in Nakhon Phanom, Buriram, and Prachuap Khiri Khan provinces.

TABLE 1. Characteristics of respondents and of LSD case (herd with LSD outbreak) and control (herd without LSD outbreak) herds enrolled in a case–control study of risk factors associated with lumpy skin disease outbreaks in beef herds in Thailand.

TABLE 2. Summary of risk factors associated with lumpy skin disease at the herd level, based on univariable logistic regression analysis in three provinces (n = 489).
A reduced-complexity model for river delta formation – Part 1: Modeling deltas with channel dynamics

We develop a reduced-complexity model (RCM) of delta formation, in contrast to reductionist models based on high-resolution computational fluid dynamics. The basic framework of this model (referred to in this paper as "DeltaRCM") consists of stochastic parcel-based cellular routing schemes for water and sediment and a set of phenomenological rules for sediment deposition and erosion. The outputs of the model include the flow field, water surface topography and bed topography that evolves in time. Results show that DeltaRCM is able to: (1) resolve a wide range of channel dynamics, including elongation, bifurcation, avulsion and migration; and (2) produce, in response to changes in input parameters, different types of deltas such as alluvial fan deltas at experimental scale. We also identify three key areas of particular model sensitivity, even at the RCM level: (1) avulsion dynamics is sensitive to dynamic free-surface topography; (2) channel network structure is sensitive to instability at channel mouths which creates bars; and (3) out-of-channel sedimentation is sensitive to water surface slope along channel margins. We also demonstrate a simple stratigraphy-tracking component which can display the structure of the deposit in terms of the distribution of coarse and fine materials along with the age of the deposit. DeltaRCM is a useful tool for understanding the dynamics of river deltas within a relatively simple cellular representation of water and sediment transport.

Introduction

Home to hundreds of millions of people, major coastal cities and infrastructure, immensely productive wetlands, and some of the most compelling and diverse landscapes on Earth – yet low-lying and vulnerable to storms and rising sea levels – deltas are emerging as among the most critical environments in a changing world (Syvitski et al., 2009). They are also immensely complex. The science of deltas comprises, in roughly equal parts, geomorphology, ecology, hydrology, organic and microbial geochemistry, and human dynamics. The physical dynamics alone would present a formidable challenge if they were restricted to just turbulent flow interacting with sand; but most natural deltas involve major additional complications such as fine-grained cohesive sediment (mud) and strong, two-way interactions with biota. A fundamental debate is developing across the sciences as to the best way to model and understand such complexity (e.g., Murray, 2003; Overeem et al., 2005; Paola and Leeder, 2011; Paola et al., 2011; Hajek and Wolinsky, 2012). Should we try to capture everything, creating models that simulate the processes in as much detail as current knowledge and computing power allow; or should we simplify, even at the risk of losing connections with reality? Modeling of deltas in recent years has produced excellent examples of both approaches, which we review below. Our aim here is to present a model that resides in the middle ground between detailed simulation and abstract simplification. We use a method based on weighted random walks, where the random walks are constrained by rules based on a hybrid of simplified governing equations for fluid motion and phenomenological representation of sediment transport processes. With suitable rules, DeltaRCM is able to produce delta morphologies that compare well with those produced by more complex models such as Delft3D and with the
morphology of deltas in the field. We believe that the availability of abundant computing power strengthens rather than weakens the case for so-called reduced-complexity models such as the one we propose here. Understanding – as opposed to simulating – complex natural phenomena requires a spectrum of approaches and a clear understanding of the advantages and disadvantages of each.

The paper begins with a review (Sect. 2) of current approaches to modeling deltas, emphasizing previous reduced-complexity models. The detailed implementation of our model is presented in Sect. 3, and results from it in Sects. 4 and 5. In Sect. 6 we discuss the meaning of the model results to date.

As with any morphodynamic model, the most direct delta formation model would solve the governing equations for water flow and sediment particles based on first principles, i.e., the conservation of mass and momentum or energy, in detail, given all the necessary initial and boundary conditions. However, this is still not practical, not only because of limits of computational power, but also because of the potential error accumulation in such complex "full physics" models (Hajek and Wolinsky, 2012). Existing models for delta formation cover a wide range of scales and complexity (Fagherazzi and Overeem, 2007; Paola et al., 2011).

On the simple side, models based on spatially averaged delta surface topography can predict average delta dynamics, such as the laterally averaged surface profile, position of the shoreline, and position of the alluvial–bedrock transition (Parker et al., 2008; Kim et al., 2009; Lorenzo-Trueba et al., 2013). These models do not attempt to provide detailed structure, such as topography and channel networks. On the more complex side, to date the most inclusive physics-based delta formation model is Delft3D, which solves a depth-integrated version of the Reynolds-averaged Navier–Stokes equations (shallow water equations) with a turbulence closure term for horizontal Reynolds stresses, coupled with empirical sediment transport formulas based on bed shear stress (Lesser et al., 2004; Edmonds and Slingerland, 2007). Delft3D can resolve deltaic processes from smaller, engineering scales such as river mouth bar formation and bifurcation (Edmonds and Slingerland, 2007) to larger, geological scales such as whole-delta morphodynamics controlled by sediment cohesion (Edmonds and Slingerland, 2009), waves, tides and antecedent stratigraphy (Geleynse et al., 2010). Delft3D is widely considered the best high-resolution delta model available to the research community, and its utility is greatly enhanced by the release of an open-source version in 2012. In the middle ground of the model complexity spectrum are the so-called reduced-complexity models (RCMs). These models feature descriptive constructions and intuitive simplifications of the hierarchy of natural processes, in contrast to highly detailed but computationally complex models such as Delft3D, while still evolving the topography and channel network without simplifying to the degree of spatially averaged models. The most common form of models in this category is a rule-based cellular routing scheme, such as the braided river model by Murray and Paola (1994, 1997) and some of the early erosional-landscape models (e.g. Willgoose et al., 1991). In terms of channel-resolving delta formation models, an excellent example is Seybold et al.
(2007, 2009, 2010). In their model, the water flux field is calculated on a lattice grid via a set of simplified hydrodynamic equations which are equivalent to a diffusive-wave form of the shallow water equations with constant diffusivity. A few other examples of delta-related channel-resolving RCMs include an avulsive delta-building model by Sun et al. (2002) and a channel-floodplain co-evolution delta-building model, "AquaTellUS", by Overeem et al. (2005). RCMs are less computationally intensive than CFD-based high-fidelity models yet still produce morphodynamic features at system scales, such as stream braiding and floodplain aggradation. While computational efficiency is often considered the reason for developing RCMs, their most important advantage is the flexible rule-based framework, which allows direct translation of phenomenological observations into the model (as opposed to hoping that they will emerge given a sufficiently detailed description of the underlying mechanics). The challenges of building an RCM for delta formation are the following: (i) the low topographic slope of the majority of river deltas (10⁻⁴ to 10⁻⁵) does not provide a strong guide for topographic flow routing, which is a key component in many RCMs for geomorphodynamic systems; (ii) the low slope together with relatively deep, slow channel flow creates a low-Froude-number environment such that the flow senses information from both upstream and downstream directions, making it difficult to design localized rules, which are essential for RCMs; (iii) the self-organized distributary channel network includes loops that further complicate flow routing; and (iv) many river deltas have suspended load and wash load as primary sediment input components, which makes sediment routing more complex than in a bedload-only system. In addition, the low-Froude-number flow condition implies, as the Froude number tends to zero, a "rigid-lid" condition in which the shape of the free surface is nearly flat. This condition potentially offers computational advantages, as the flow depth can be estimated from a fixed surface elevation (usually sea level or a simple function using backwater equations) and the bed elevation, but is almost decoupled from the bed topography.

In this work, we present an RCM delta model using the "weighted random walk" method. The basic goal is to develop a model that includes just enough of the dynamics to tackle the main problems listed above. A detailed model description is given in the next section, followed by results and comparisons with experimental and field deltas along with the results of more detailed delta models, and then a discussion of the strengths and weaknesses of our model approach.

Model construction

DeltaRCM has two components: a cellular flow routing scheme as the hydrodynamic component, and a set of sediment transport rules as the morphodynamic component. The model uses a lattice of square cells for its domain, where water and sediment flux are routed in a cell-by-cell fashion. The model evolves in time by updating the depth-averaged flow field, water surface elevation, sediment flux, and bed elevation at each time step.

Model setup

The physical setting of our delta formation model is simplified to a rectangular basin of constant water depth (h_B) with a short inlet channel on one side (Fig. 1). At the inlet we assume a constant water discharge Q_w0 (m³ s⁻¹) and sediment discharge Q_s0 (m³ s⁻¹).
The boundary with the inlet channel is a wall boundary such that no water or sediment crosses it. The other three boundaries are ocean boundaries with the boundary condition of a fixed sea level, H_SL.

For water and sediment routing, we first define a set of global parameters that remain constant for each model run: (1) a reference water depth h_0, i.e. a representative flow depth for the system, and (2) a reference slope S_0, which is a representative overall water surface slope of the system. For example, for a lowland river delta, a typical value of h_0 is from a few meters to tens of meters, with S_0 on the order of 10⁻⁴ to 10⁻⁵, while for an experimental fan delta, a typical value of h_0 is tens of millimeters with S_0 on the order of 10⁻². The values are not precise but rather represent scale values, and may require trial and error to validate for each specific system. The depth of the inlet channel is set at h_0 and the inlet flow velocity is calculated as U_0 = Q_w0/(h_0 W), which will be referred to as the reference velocity of the system. W is the inlet channel width, specified for each model run.

The domain is shown in Fig. 2, with cell size δ_c, a value that depends on the target scale of the model run; e.g., in the results section we use 50 m for a field-scale delta and 2 cm for a laboratory-scale fan delta. The total number of cells along the dip direction (from the inlet, into the basin) is N_X and the number of cells along the strike direction (perpendicular to the inlet, across the basin) is N_Y. Typically N_X and N_Y are both on the order of a hundred, with N_Y being roughly twice as large as N_X to allow semi-circular delta growth. The inlet has a width of N_0 cells. Typically N_0 is around 5.
The primary quantities associated with each cell are: (i) the water unit discharge vector q_w = (q_x, q_y), (ii) the water surface elevation H, and (iii) the bed elevation η. These primary quantities are updated at each time step. Other useful quantities such as the velocity vector u = (u_x, u_y) and water depth h can be easily calculated from the primary quantities by h = H − η and u = q_w/h. Two types of parcels, carrying a water or sediment attribute, are routed through the domain. A time step is defined by the addition of n_w water parcels and n_s sediment parcels. This is done through a sequence of water parcels carrying an equal fraction of the total input water discharge during a time step, followed by sediment parcels carrying an equal fraction of the total input sediment discharge during a time step.

Within each model run, the size of the time step Δt is constant. As is often the case in numerical modeling, the choice of Δt is a balance between computational efficiency and model stability. In each time step, the total amount of sediment added to the domain is measured by ΔV_s = Q_s0 Δt. A smaller ΔV_s means less change to the topography, and allows the cellular routing scheme to perform better with a more consistent terrain, but obviously it will take more steps to build the delta to a certain size. Here we introduce a reference volume V_0 = h_0 δ_c², which is the volume of a channel inlet cell from the bed to the water surface. If we assume that channels on the delta self-organize in scale with the reference depth h_0, this reference volume gives a good measure of the characteristic topographic change on the growing delta. Currently we set the time step size so that the sediment volume added in each time step satisfies

ΔV_s = 0.1 N_0² V_0. (2)

Therefore the time step size is given by

Δt = ΔV_s / Q_s0. (3)

Model operation

The operations can best be understood by describing the processes in a single time step. There are four distinct phases: (1) the addition and routing of the water; (2) updating of the water surface elevation; (3) routing of the sediment parcels and updating of the bed elevation through deposition and erosion; and (4) updating of the routing direction, a vector field that determines the direction of flow through each cell in the domain. Each of these phases is described in turn.

To prepare, we divide the upstream water discharge (Q_w0) and the total sediment input volume during a time step (ΔV_s) into parcels. Typically, we use n_w = 2000 water parcels, and each water parcel carries an equal amount of discharge:

Q_p_water = Q_w0 / n_w. (4)

And we use n_s = 2000 sediment parcels, and each sediment parcel carries an equal amount of sediment volume:

V_p_sed = ΔV_s / n_s. (5)

Phase 1: water routing

At the start of a time step we assume that we have a delta with known shape and topography, i.e. at each cell we have a value of the water surface elevation H, bed elevation η, and water depth (the difference between the water surface elevation and the bed elevation) h. We also have, at each cell, a unit vector F, referred to as the routing direction, which indicates the average downstream direction of flow through that cell. If the current time step is the first step in the model run, the routing directions are all aligned with the inlet channel.
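As an illustration of the bookkeeping in Eqs. (2)–(5), the following minimal sketch reproduces the field-scale numbers used later in the results section. Variable names are ours, and the ΔV_s scaling is as reconstructed above:

```python
# Time step and parcel bookkeeping for the field-scale example used later
# (Q_w0 = 1250 m^3/s, Q_s0 = 1.25 m^3/s, h_0 = 5 m, cell size 50 m, N_0 = 5).
Q_w0, Q_s0 = 1250.0, 1.25      # water and sediment discharge, m^3/s
h0, dc, N0 = 5.0, 50.0, 5      # reference depth (m), cell size (m), inlet cells
n_w = n_s = 2000               # parcels per time step

V0 = h0 * dc**2                # reference volume: inlet cell, bed to surface
dV_s = 0.1 * N0**2 * V0        # Eq. (2): sediment volume added per step
dt = dV_s / Q_s0               # Eq. (3): time step size

Q_p_water = Q_w0 / n_w         # Eq. (4): discharge per water parcel
V_p_sed = dV_s / n_s           # Eq. (5): volume per sediment parcel
print(f"dt = {dt:.0f} s (~{dt/3600:.1f} h), V_p_sed = {V_p_sed:.1f} m^3")
# dt = 25000 s (~6.9 h), matching the 25 000 s time step quoted in the results.
```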
For the purpose of routing water, we define a binary cell state: 0 – dry, 1 – wet. This is done by sweeping through the domain and marking cells with a water depth larger than a small threshold value h_dry as wet cells. This threshold value is typically a fraction (10%) of the characteristic depth scale of the environment of interest or 0.1 m, whichever is smaller. For example, for a natural delta, h_dry is typically 0.1 m, while for an experimental delta in the laboratory, h_dry is typically a few millimeters, which is 10% of the characteristic flow depth.

The process in the first part of the time step requires us to route, in turn, each of the water parcels through the domain. When a parcel is at a given cell, this requires making a choice of which of the 8 neighbor cells it will move to. We achieve this by using a so-called "weighted random walk", where the movement is dictated by a predefined probability distribution over the 8 neighbor cells. The specification of the probability distribution is as follows:

At a given cell, we first calculate the routing weights for the 8 neighbor cells. With the local routing direction F specified, the routing weights are determined by two factors: (i) the angle between the relative direction of neighbor cell i and the routing direction, which we estimate using a dot product method described below, and (ii) the resistance to the flow from each neighbor cell i. In this model we calculate the routing weight for neighbor cell i as

w_i = (1/R_i) · max(0, F · d_i) / Δ_i, (6)

where the resistance R_i is estimated as an inverse function of the local water depth h_i:

R_i = 1 / h_i^θ. (7)

For the current version of flow routing, the exponent θ is set to 1, leading to the following form of the routing weight:

w_i = h_i · max(0, F · d_i) / Δ_i. (8)

The cellular direction vector, d_i, is a unit vector pointing to neighbor i from the given cell. Finally, Δ_i is the cellular distance: 1 for cells in the main compass directions and √2 for corner cells (Fig. 3).

The weights above are calculated only for wet neighbor cells of the given channel cell. All dry neighbor cells take a weight value of 0. At each channel cell we can then calculate the routing probabilities p_i:

p_i = w_i / Σ_{j=1}^{8} w_j. (9)

To obtain a discharge vector at each cell based on the motion of visiting water parcels, our starting point is to construct, for each visiting parcel, an average vector of the input and output vectors (Fig. 4). The result is, for each channel cell, a set (of size N_visit) of vectors, each expressing the average path of a visiting parcel through that cell. A summation of this set of vectors provides, after appropriate normalization, a representative direction for water parcels through the cell. In this way, a vector with this direction and a magnitude Q_cell = N_visit Q_p_water can be regarded as a discharge vector for the cell (Fig. 4).

Then, for the purpose of later sediment transport, we need to estimate the local flow unit discharge and velocity. To do this we take the cell discharge vector (m³ s⁻¹) and divide it by the cell size δ_c to obtain a unit water discharge vector (m² s⁻¹): q_w = Q_cell / δ_c.

Phase 2: water surface calculation

Water surface elevation is essential in this model, not only in that it participates in the calculation of flow depth but, even more importantly, because the gradient of the water surface plays a major role in determining the routing probabilities of water parcels through the weights w_i (Eq. 8).
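A minimal sketch of one step of the weighted random walk (Eqs. 6–9) follows. The depth exponent θ is exposed as a parameter because, as described later, the same kernel is reused for sediment routing with θ = 2 for sand and θ = 1 for mud. Names and array layout are ours:

```python
import numpy as np

# Unit vectors d_i to the 8 neighbors and cellular distances Delta_i (1 or sqrt(2)).
d8 = np.array([[-1, -1], [-1, 0], [-1, 1], [0, -1],
               [0, 1], [1, -1], [1, 0], [1, 1]], dtype=float)
dist = np.linalg.norm(d8, axis=1)       # Delta_i
d8_unit = d8 / dist[:, None]            # d_i as unit vectors

def walk_step(F, h_nb, wet_nb, theta=1.0, rng=np.random.default_rng()):
    """Choose the next cell for one parcel.
    F: local routing direction (unit vector); h_nb: depths of the 8 neighbors;
    wet_nb: boolean wet mask. Assumes at least one wet downstream neighbor."""
    w = h_nb**theta * np.maximum(0.0, d8_unit @ F) / dist  # Eqs. (6)-(8) weights
    w[~wet_nb] = 0.0                                       # dry cells excluded
    p = w / w.sum()                                        # Eq. (9) probabilities
    return rng.choice(8, p=p)                              # index of chosen neighbor
```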
In this reduced-complexity model, our goal is to obtain a sufficiently accurate surface profile without solving the full 2-D hydrodynamic equations. We propose a method that uses a finite difference scheme along the movement path of individual water parcels, analogous to the simplified surface solver developed by Rinaldo et al. (1999).

To start with the simplest formulation, we assume that the water surface slope along a channel streamline can be approximated by the reference slope S_0, and that in the ocean H = H_SL; ideally, along any given streamline we can then reconstruct the surface profile with a simple finite difference calculation. In the model, however, instead of tracing a flow streamline, we take advantage of the walking path of water parcels, which can be considered an approximation to the flow streamlines. The difference between the water parcel paths (the "zigzag" version of streamlines) and the real flow streamlines is illustrated in Fig. 5. In the following we explain how to construct the water surface profile along a water parcel path with a given reference slope S_0.

First, we need to locate the part of the path that is on the delta surface, as the part in the ocean is considered flat. In general, a water-parcel path starts at one of the inlet cells, moves from one cell to an adjacent cell, and ends at one of the downstream ocean boundary cells. We distinguish the cells along the path on the delta surface from the cells in the open ocean by checking two values at each cell, such that a cell is in the ocean if both of the following criteria are met:

1. The local bed elevation η is lower than a threshold value η_shore (set to η_shore = H_SL − 0.9h_ref);

2. The local flow speed |u| is smaller than a threshold value U_shore (set to U_shore = 0.5U_ref);

and the cell is on the delta otherwise.

With a given water-parcel path, the calculation starts from the end of the path and goes backward towards the inlet. For the kth cell in the direction of calculation,

H_k = H_{k+1} + S_0 (d_k · F_k) Δ_k δ_c, (10)

where d_k is the unit vector of the parcel step at cell k and F_k is the local flow direction. The purpose of the dot-product term is to take into account the angle between the parcel path and the streamline.

This calculation gives the surface profile along the path of an individual water parcel and is repeated for all water parcel paths. There are two extra situations to be taken care of:

1. If a cell is visited by multiple water parcels, all the values from each visiting path are recorded and an average of these stored values is taken in the end to obtain a single value for the water surface elevation at each cell.

2. If a cell is not visited by any water parcels, its water surface elevation keeps its old value (from the previous time step).

This newly calculated surface profile is recorded as H_temp. We then apply a diffuser to smooth the calculated surface profile, which is typically spiky due to the 1-D stepwise method of calculation. The diffusion is applied as

H_temp,i ← (1 − ε) H_temp,i + ε ⟨H_temp⟩_nb,i, (11)

where ⟨H_temp⟩_nb,i is the average over the neighbors of cell i. We use a diffusivity ε = 0.1 and apply the smoothing calculation in Eq. (11) 10 times in each time step. This number was selected by checking samples of the resulting surface profile until no obvious spikes appeared. We will discuss in more detail how sensitive the results are to this and other features of the free-surface calculation.
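The backward sweep along one parcel path can be sketched as follows. This is a simplified reading of Eq. (10); the array names and the exact handling of the ocean mask are ours:

```python
import numpy as np

def surface_along_path(path, F_field, H_SL, S0, dc, on_delta):
    """Reconstruct H along one parcel path, walking backward from the ocean.
    path: (N, 2) integer cell coordinates from inlet to ocean boundary;
    F_field: (NX, NY, 2) array of local flow-direction unit vectors;
    on_delta: boolean per path cell (False = open ocean, held at sea level)."""
    N = len(path)
    H = np.full(N, H_SL)
    for k in range(N - 2, -1, -1):        # from the ocean end back to the inlet
        if not on_delta[k]:
            continue                       # ocean cells stay at H_SL
        step = path[k + 1] - path[k]
        Delta_k = np.linalg.norm(step)     # cellular distance (1 or sqrt(2))
        d_k = step / Delta_k               # unit step direction
        F_k = F_field[path[k][0], path[k][1]]
        # The dot product corrects for the zigzag of the path relative to the
        # streamline, as described for Eq. (10).
        H[k] = H[k + 1] + S0 * (d_k @ F_k) * Delta_k * dc
    return H
```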
In the end, the water surface elevation is updated with an under-relaxation scheme for numerical stability:

H = (1 − λ) H_old + λ H_temp. (12)

The under-relaxation coefficient λ is set to 0.1, which allows the surface profile to transition slowly and smoothly from one time step to another, avoiding numerical instability.

Phase 3: sediment transport and bed topography update

Now both the flow field, q_w, and water surface elevation, H, are updated. These two variables will remain constant until the next time step. To calculate the changes to the topography in a time step, we propose two sets of rules for the transport, deposition and erosion of sediment. The first set describes the routing of the sediment parcels, and the second set describes the rate of deposition and erosion as the exchange of sediment volume between sediment parcels and the bed. The rules are phenomenological, and the goal is to build them via our understanding of macroscopic behavior rather than via fine-scale physical interactions among fluid, sediment and the bed. To this end, we distinguish two types of sediment that have different behaviors in the model: coarse sediment, referred to as "sand", is non-cohesive and transported as bed load; fine sediment, referred to as "mud", is cohesive and transported as suspended load.

A sediment parcel is either a "sand" parcel or a "mud" parcel. At the beginning of each run, an input parameter f_sand gives the portion of sand in the total upstream sediment input. Therefore a total of f_sand·n_s parcels are designated as sand parcels and a total of (1 − f_sand)·n_s parcels are designated as mud parcels for each time step.

Routing of the sediment parcels

For routing sediment parcels we use the same weighted random walk method as for the routing of water parcels (Eq. 6), with two modifications:

1. The routing direction F in Eq. (6) is replaced with the newly calculated water discharge vector q_w at the given cell (from Phase 1 above), assuming that sediment parcels move with the water flow;

2. The transport resistance for sediment maintains the inverse dependence on flow depth but has different exponents. The idea is that sediment flux tends to concentrate in the lower portion of the water column and is therefore more likely to follow topographically low areas. For now we use an exponent θ = 2 for sand parcels (bed load), which is twice the value for water, and θ = 1 for mud parcels (suspended load), which is equal to the value for water. The physical reason for the values chosen is that the distribution of the concentration of coarse material is skewed towards the lower portion of the water column, while fine material is more evenly distributed throughout the water column.

Thus the routing weights for sediment parcels are

w_i = h_i² · max(0, q_w · d_i) / Δ_i (13)

for sand parcels, and

w_i = h_i · max(0, q_w · d_i) / Δ_i (14)

for mud parcels. And the routing probabilities are calculated as

p_i = w_i / Σ_{j=1}^{8} w_j. (15)

The rate of deposition and erosion

Sediment parcels are routed sequentially in a weighted random walk fashion according to the probabilities calculated with Eqs. (13)–(15). At each cell a parcel visits, two questions must be answered: (i) whether sediment should be deposited or eroded, and (ii) how much volume should be exchanged between the sediment parcel and the bed. The rules for sand and mud parcels are different.

For convenience of description, we refer to the initial volume of each sediment parcel, V_p_sed, as the "reference sediment parcel volume", and to the remaining volume during the walking process of a sediment parcel as the "residual sediment parcel volume", V_p_res.
The amount of deposition at each cell by an individual parcel is referred to as V_p_dep. The amount of erosion at each cell by an individual parcel is referred to as V_p_ero. The detailed rules are as follows.

For the deposition from a sand parcel we do the following:

- At each cell in the domain, we calculate a "transport capacity" for sand flux, q_s_cap, as the maximum flux per unit width, a non-linear function of the local flow velocity U_loc. The scaling between sediment flux and flow velocity takes the form of the Meyer-Peter and Müller (1948) formula, where q_s0 is calculated by dividing the upstream sand flux input by the inlet channel width.

- Similar to the calculation of water discharge, as the sand parcels are routed sequentially we track the accumulated total sand flux, q_s_loc, which increases with each visiting bed-load parcel.

- Deposition occurs if a sand parcel visits a cell at which the accumulated local sand flux exceeds the transport capacity.

For the deposition from a mud parcel we do the following:

- Deposition occurs if a mud parcel visits a cell at which the local flow velocity U_loc is smaller than a threshold velocity, U_dep. The amount of deposition is proportional to the residual sediment volume of the mud parcel as well as to the relative difference between the squares of U_loc and U_dep, a simplified representation of standard empirical laws for fine-sediment deposition (van Rijn, 1984). The idea is that the finer the grain size, the slower the flow required for it to settle out.

For the erosion of both types of sediment parcel, we do the following:

- Erosion occurs if the local flow velocity magnitude is larger than a threshold value, U_ero, that differs for sand and mud parcels (García and Parker, 1991). For a sand parcel, U_ero = 1.05 U_ref.

For the volume exchange between a sediment parcel and the bed:

- At each step, the volume of the sediment parcel is updated by subtracting (for deposition) or adding (for erosion) the exchanged volume.

- The elevation of the local bed is updated by the corresponding volume change spread over the cell area.

The local flow velocity and flow depth are updated in accordance with each event of deposition or erosion: h = H − η and u = q_w/h. The reason for updating the local flow depth and velocity immediately after each event of deposition and erosion is to avoid excess change to the bed. Similarly, we add an extra control on the rate of change to the bed by limiting the amount of deposition and erosion by a single sediment parcel such that the change to the local depth is less than 25 %, which in turn keeps the change to the local flow velocity below 33 %. For example, if the local flow depth is 4 m, then the maximum deposition or erosion by a single sediment parcel is limited to a 1 m change to the bed.

After all sediment parcels finish their random walk, to take into account the influence of topographic slope on sediment flux, in an approximation of the Bagnold-Ikeda expressions (García, 2008) we apply a topographic diffuser that assumes the diffusive flux is proportional to the local sand (bed-load) flux and the topographic slope, where α is a scaling coefficient, by default set to 0.1, and |∇η| is the bed slope. The total change to the bed elevation by the topographic diffuser is obtained by summing the inbound and outbound diffusive fluxes at each cell over the time period Δt.

Phase 4: update routing direction

Before moving to the next time step, we need to update the routing direction, a unit vector at each cell indicating the downstream direction for routing water parcels. In this last phase of the time step, we use the updated values at each cell of the unit water discharge vector q_w, water surface elevation H, bed elevation η, water depth h, etc. To achieve this, we combine two physical processes dictating the flow direction: (i) at an instant in time, flow has a tendency to continue in the same direction as at the previous instant, due to inertia; and (ii) in the absence of any other drivers, the flow goes downslope, which in our case is indicated by the water-surface slope rather than the bed slope. First, we calculate a unit vector for the downstream direction based on the previous time step; then we calculate a unit vector from the water surface gradient (also from the previous time step); finally, a linear combination of the two vectors is taken with a partitioning coefficient γ. The value of γ is set to a small number, typically 0.05 in the runs reported here.
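To tie Phases 3 and 4 together, the sketch below condenses the exchange rules and the routing-direction update. It is schematic and rests on stated assumptions: the capacity exponent `beta`, the fraction of residual volume exchanged per event, and the mud erosion threshold are placeholders (the text quotes only the sand value U_ero = 1.05 U_ref), and γ is written as the weight on the water-surface term, consistent with inertia dominating when γ is set small (0.05). The 25 % depth-change cap and the immediate refresh of h and u follow the text.

```python
import numpy as np

CELL_AREA = 50.0 ** 2        # cell area in m^2 (field-scale example value)

def sand_capacity(U_loc, U_ref, qs0, beta=3.0):
    """Transport capacity per unit width: an MPM-type velocity scaling.
    The exponent beta is an assumption for this sketch."""
    return qs0 * (U_loc / U_ref) ** beta

def sand_exchange(V_res, U_loc, qs_loc, qs_cap, h_loc, U_ref):
    """Bed volume change from a visiting sand parcel (negative = erosion)."""
    dV = 0.0
    if qs_loc > qs_cap:                 # local flux exceeds capacity
        dV = 0.25 * V_res               # event fraction: our assumption
    elif U_loc > 1.05 * U_ref:          # sand erosion threshold (from text)
        dV = -0.25 * V_res
    cap = 0.25 * h_loc * CELL_AREA      # keep the depth change under 25 %
    return float(np.clip(dV, -cap, cap))

def mud_exchange(V_res, U_loc, U_dep, U_ero, h_loc):
    """Bed volume change from a visiting mud parcel."""
    dV = 0.0
    if U_loc < U_dep:
        # settling scaled by the relative difference of squared velocities
        dV = V_res * (U_dep ** 2 - U_loc ** 2) / U_dep ** 2
    elif U_loc > U_ero:                 # mud threshold: a placeholder
        dV = -0.25 * V_res
    cap = 0.25 * h_loc * CELL_AREA
    return float(np.clip(dV, -cap, cap))

def apply_exchange(cell, dV, V_res, eta, H, qw_mag):
    """Bookkeeping after one deposition/erosion event at a cell."""
    eta[cell] += dV / CELL_AREA         # the bed gains what the parcel loses
    V_res -= dV
    h = H[cell] - eta[cell]             # refresh depth immediately ...
    u = qw_mag[cell] / h                # ... and velocity, u = q_w / h
    return V_res, h, u

def update_routing_direction(F_prev, grad_H, gamma=0.05):
    """Phase 4: blend flow inertia with the water-surface downslope."""
    def unit(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    F_inertia = unit(np.asarray(F_prev, dtype=float))
    F_surface = unit(-np.asarray(grad_H, dtype=float))
    return unit(gamma * F_surface + (1.0 - gamma) * F_inertia)
```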
Model results

In this section we present various morphological features produced by DeltaRCM with different domain setups and input parameters. All simulations assume no effects from wave or tidal energy, i.e. the deltas are classic river-dominated deltas (Galloway, 1975). We investigate (1) the effects of input sediment composition and (2) the model's ability to simulate deltas at field and laboratory scales. The former has been studied via field observation (Orton and Reading, 1993) and numerical simulation (Edmonds and Slingerland, 2009). The latter is based on the availability of data from experimental deltas; also, we believe that if a model can handle both field and experimental scales, it could potentially inform the interpretations and connections of both. Furthermore, we demonstrate the use of DeltaRCM as a tool for hypothesis testing, through a study of the effects of receiving basin depth.

As discussed above, two types of sediment are routed through the system: coarse (sand) and fine (mud). The ratio of the numbers of these two types of parcels at the inlet gives the ratio of sand and mud coming into the system. To set the physical scale of the simulation, domain and grid size are adjusted by changing the cell size and physical input parameters, such as the total input water and sediment discharge, as well as global parameters such as the reference energy slope.

The input parameters include:

1. the portion of sand in the upstream sediment input, f_sand;

2. global parameters: the reference flow depth h_0, the basin depth h_B, and the reference slope S_0.

Strictly speaking, the choice of the reference slope S_0 is dependent on the sand-mud ratio as well as on the scale of the physical setting. In our model runs at field scale we use 3 × 10^-4 for purely sandy deltas, 1 × 10^-4 for purely muddy deltas, and a linear combination of the two for mixed deltas; at laboratory scale, we use values on the order of 10^-2 for S_0. The magnitude of the reference slope scales with the ratio of bed-load and water fluxes coming from the inlet channel, such that S_0 ∼ Q_s0_bed/Q_w0.

Effects of input coarse/fine sediment ratio

In this group of runs, the domain is a lattice grid of 120 by 60 square cells. The cell size is taken to be 50 m. The channel inlet is 5 cells wide (250 m), with a reference flow depth h_0 = 5 m. The total water discharge is 1250 m^3 s^-1. The total sediment discharge is 0.1 % by volume, i.e. 1.25 m^3 s^-1. We use a time step calculated from Eq. (3) of 25 000 s (∼7 h). Both water and sediment discharge stay constant, and we assume they represent channel-forming conditions. We show three model runs in Fig. 6 with the portion of sand in the upstream sediment discharge set to 90, 50, and 10 % (these settings are collected into a configuration sketch below). The resulting deltas differ systematically based on the input mud fraction in the following characteristics, which are consistent with those found in the investigation of the effects of sediment cohesion by Edmonds and Slingerland (2009):

- On a sandy delta the channels are relatively shallow and mobile, without well-defined levees. Flow is less confined. There are large areas of sheet flow. The shoreline is smooth and the delta grows roughly in a semi-circular shape.

- On a muddy delta, channels are deeper and stable, with well-defined levees. Channels tend to elongate. The shoreline is rugose, and the delta builds in different directions by switching lobes.
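For concreteness, the field-scale setup just described can be collected into a single run configuration. The class and field names are ours, not the model's API; the values are the ones quoted above, and the reference slope is the linear blend of the sandy and muddy end members described in the text.

```python
from dataclasses import dataclass

@dataclass
class DeltaRunConfig:
    nx: int = 120              # grid cells, streamwise
    ny: int = 60               # grid cells, cross-stream
    cell_size: float = 50.0    # m
    inlet_width: int = 5       # cells (250 m here)
    h0: float = 5.0            # reference flow depth, m
    h_B: float = 5.0           # receiving basin depth, m
    Qw0: float = 1250.0        # total water discharge, m^3/s
    c_sed: float = 0.001       # sediment input, 0.1 % of Qw0 by volume
    f_sand: float = 0.5        # sand fraction of the sediment input
    dt: float = 25_000.0       # time step, s (~7 h, from Eq. 3)

    @property
    def Qs0(self) -> float:
        """Total sediment discharge, m^3/s (1.25 with the defaults)."""
        return self.c_sed * self.Qw0

    @property
    def S0(self) -> float:
        """Field-scale reference slope: linear blend of the sandy (3e-4)
        and muddy (1e-4) end-member values."""
        return self.f_sand * 3e-4 + (1.0 - self.f_sand) * 1e-4

# the three runs of Fig. 6 differ only in the input sand fraction
runs = [DeltaRunConfig(f_sand=f) for f in (0.9, 0.5, 0.1)]
```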
Experimental fan deltas

Laboratory experiments, numerical modeling and field observation are three important approaches to understanding the formation of deltas. We would like to test our model across as wide a scale range as possible, so we include experimental deltas at laboratory scales. To do this, we change the domain to a lattice grid of 80 by 150 cells with a cell size of 0.02 m. The inlet channel is still 5 cells wide but has a flow depth of 0.02 m and a water discharge of 0.6 L s^-1. The basin water depth is 0.02 m. The reference slope is set at 0.01. The time step is estimated at 33.3 s. The sediment input is considered to be coarse-grained only. These conditions are representative of laboratory experiments such as those reported by Reitz and Jerolmack (2012).

In Fig. 7 we show a time series of the resulting deltas. Note the alluvial fan delta characteristics, in which a few active channels switch quickly to build a semi-circular shape with a relatively smooth shoreline. To evaluate the details of the channel switching process, we compare a representative switching event from our model results to the avulsion cycle described by Reitz and Jerolmack for an experimental fan delta (Reitz and Jerolmack, 2012). DeltaRCM captures all the steps in the avulsion cycle (Figs. 8 and 9).

Effects of basin depth

It has been suggested that the accommodation, the space that a delta can grow into, plays an important role in the architecture and behavior of a growing delta (e.g., Paola et al., 2000; Heller et al., 2001). However, for the case of river deltas with very low-Froude-number flow, it is still unclear how the depth of the basin affects the overall morphology of the delta. Storms et al. (2007) used Delft3D to model initial delta formation from a river effluent discharging constant flow and sediment loads into shallow and deep receiving basins under homopycnal conditions, and showed that the shallow-basin delta is dominated by mouth-bar bifurcations and a shoaling channel network, and exhibits significant stratigraphic complexity and sub-aerial development, while the deep-basin delta is dominated by unstable bifurcations, levee breaches and avulsions (Storms et al., 2007). The authors suggest that the shallow-basin case resembles Wax Lake Delta. In our model runs 6 and 7, we test scenarios with the same inlet channel conditions and discharge, but different basin depths. In run 6, the receiving basin depth is half of the reference depth (defined by the inlet channel, which is assumed to be at an equilibrium state in terms of sediment transport), while in run 7 the receiving basin depth is twice the reference depth. In Fig. 9 we show that our results give behaviors similar to the ones modeled by Storms et al. (2007). For the shallow basin the morphological development is very close to the description of Storms et al., while the deep-basin delta has similar outcomes, although the middle-ground bar and avulsion over the levee are not as clear in the RCM results.
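Under the same assumed configuration object introduced above, the laboratory-scale run and the two basin-depth scenarios reduce to a few parameter changes (values as quoted in the text; at laboratory scale the reference slope is set directly rather than derived from the sand fraction):

```python
# laboratory-scale fan delta: coarse-grained input only
lab = DeltaRunConfig(nx=150, ny=80, cell_size=0.02, inlet_width=5,
                     h0=0.02, h_B=0.02, Qw0=0.6e-3, f_sand=1.0, dt=33.3)
S0_lab = 1e-2    # order-of-magnitude lab value, overriding the field blend

# basin-depth scenarios (runs 6 and 7): same inlet, different basins
run6 = DeltaRunConfig(h_B=0.5 * 5.0)   # shallow: half the reference depth
run7 = DeltaRunConfig(h_B=2.0 * 5.0)   # deep: twice the reference depth
```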
The difference between a shallow and a deep receiving basin, according to our model results, is the following. Channels will still try to maintain the same unit power of transporting sediment by maintaining a certain cross-sectional geometry, with levees on the sides and erosion or deposition on the bottom; in general a distributary channel network shoals up, and channels are stable at shallower depths going seaward. With a shallow basin the amount of work required is reduced; also, the narrow space promotes the splitting of flow, which enhances the growth of a distributary network. A deep basin increases the time scale of establishing a stable channel and therefore introduces stronger competition among channels by allowing larger differences to develop.

Finally, we note two interesting emergent features of our model that have also been observed in the field at Wax Lake Delta by John Shaw and colleagues (Shaw et al., 2013; Shaw, 2013) (Fig. 11). First, the channels in the shallow-basin delta are initially erosional, carving into the basin bottom. This is consistent with the observations at the Wax Lake Delta (Shaw et al., 2013). Second, the channel network on this delta develops "tributary" sub-networks on islands (highlighted in Fig. 11), which collect flow both from tie channels directly connected to the main channel network and from sheet flow topping the levees into the islands. Whether this sub-network is erosional or depositional is unclear; Shaw (2013) points out that at least the channels comprising it are likely not favorable for deposition. In our model results, we notice the following features that might explain the situation: (1) the sub-network mainly collects fine sediment from the main channel network, which requires much slower flow to settle out; (2) as the tributary sub-network joins into bigger trunk channels, the ability of the flow to carry sediment increases; (3) at the downstream end of the network, where the trunk channel collecting water coming out of the island meets the open water, the sorting of the deposited sediment is very similar to that of a normal channel, with a coarser bar-like structure at the mouth.

Recording of stratigraphy

A delta writes its own autobiography by preserving deposited sediment underground. These sedimentary records open a door to understanding the past and to using delta deposits to reconstruct their range of natural behavior. Therefore, the ability to record stratigraphy in a delta formation model enables us to directly investigate the connection between surface and sub-surface processes. In this model, we have two methods that track the stratigraphy of model-produced deltas: the first method tracks the distribution of coarse and fine sediment by recording the percentage of sand in each deposition event; the second method tracks the age of the deposit by labeling each deposition event with the time that its sediment entered the domain from the inlet channel. A minimal sketch of such a layered recording scheme is given below, followed by one example from each method.
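The sketch below shows how such a per-cell record might be kept: each cell holds a stack of deposition events tagged with the sand fraction and with the time its sediment entered the domain, and erosion peels layers from the top. The class and field names are our invention, not the model's API.

```python
from dataclasses import dataclass, field

@dataclass
class DepositEvent:
    thickness: float     # thickness added to the bed, m
    sand_frac: float     # 1.0 = pure sand (white), 0.0 = pure mud (dark)
    t_input: float       # time the sediment entered at the inlet, s

@dataclass
class BedColumn:
    """Layered record of deposition at one cell."""
    layers: list = field(default_factory=list)

    def deposit(self, thickness, sand_frac, t_input):
        self.layers.append(DepositEvent(thickness, sand_frac, t_input))

    def erode(self, thickness):
        """Remove material from the top of the column, youngest first."""
        while thickness > 0 and self.layers:
            top = self.layers[-1]
            if top.thickness <= thickness:
                thickness -= self.layers.pop().thickness
            else:
                top.thickness -= thickness
                thickness = 0.0
```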
(1) We take a sample run of a field-scale delta with 30 % sand input (Run 4). In Fig. 11 we show a stratigraphic slice in the dip direction along the center line of the inlet channel. In Fig. 12 we show the time series of the stratigraphic slice in the strike direction, about 20 cells (1 km in this case) away from the inlet channel. In both figures white represents pure sand and dark blue pure mud, with mixed deposits represented by linear combinations of the two end members. Generally speaking, coarse sediment (sand) is found in channel belts and mouth bars, while fine sediment (mud) is found in distal regions such as the bottomset of the delta, on the floodplain, or in abandoned channels.

(2) In Fig. 13 we show a sample model run for laboratory conditions (Run 8). Note the evolution of the area pointed to by the yellow arrow. The series of images shows the deposition sequence from an individual avulsion event.

Discussion

One of the themes running through this paper is that, even in the framework of a reduced-complexity delta model, there are a number of important details that must be modeled fairly accurately to achieve even qualitatively correct model results. These include a reasonably accurate representation of the water surface and the inclusion of suspended sediment deposition and entrainment. This is quite striking compared to the success of even fairly radical reduced-complexity approaches in modeling other morphodynamic environments such as erosional landscapes (e.g., Willgoose, 1991), braided rivers (e.g., Murray and Paola, 1994) and eolian bedforms (e.g., Werner, 1995).

So why is it that deltas seem to require more attention to detail? Can we learn anything from this experience that might help us better understand which systems are most and least amenable to reduced-complexity approaches? Since deltas and drainage basins share dendritic channel patterns (one is a distributary network while the other is a tributary network), we first look at the differences between these two systems. In modeling the evolution of drainage tributary networks, even highly simplified relations for water flux and sediment transport give quite reasonable drainage networks and elevation changes in the long-term evolution of catchments (e.g., Willgoose et al., 1991). The equation describing the evolution of land elevation in Willgoose et al. (1991) includes two transport processes: a fluvial transport process and a diffusive transport process. The former is dependent on the discharge and on the slope in the steepest downhill direction, while the latter, the diffusive transport process, is dependent on slope alone. These simple formulations cannot be easily applied to modeling deltas because deltas are low-gradient environments where the transport direction and capacity are to some extent decoupled from bed elevation and slope. To be more specific: (1) bed slope in low-gradient environments is often uncorrelated with flow direction and strength; for example, bed slope points opposite to the direction of flow where channels shoal up towards the shoreline; (2) the water surface, which dominates local flow routing, is largely independent of bed topography; (3) the typically low Froude number flow in low-gradient deltaic environments creates strong backwater effects that imply strong non-locality in flow and sediment flux control (Lamb et al., 2012; Nittrouer et al., 2011), meaning that downstream conditions control upstream flow dynamics (Hoyal and Sheets, 2009); and (4) river mouth and shore processes such as waves and tides also control the overall morphology of deltas, providing additional process complexity.
According to Werner (1995), for a nonlinear and dissipative system, considerable simplification can be applied if the system exhibits two properties: (1) it has a finite number of steady states acting as "attractors", and (2) it has macroscopic emergent behaviors that are self-organized and consistent with, but decoupled from, the microscopic physics. If we compare drainage networks with deltas, the former exhibit a strong generic pattern and scale-invariant properties, such as network patterns that can be described by Horton's laws (Horton, 1945). In contrast, the networks on deltas show many varieties, responding to a wide range of processes, such that no universal geometry can describe them as a whole group. Regarding model complexity, the lack of universality in the system pattern indicates the requirement for a more detailed, system-specific approach in modeling them.

So, is the low gradient the main cause of the modeling difficulty, making deltas more "unforgiving" than erosional landscapes in terms of the accuracy of the hydrodynamic calculation? For cellular models that use explicit flow routing schemes, the complexity level rises as factors other than topographic slope alone determine water and sediment routing. It also rises with non-locality, in the broad sense of the sensitivity of the dynamics at one point to conditions far away in the system. Other contributing factors, such as the water surface gradient and flow inertia, weigh in as the overall topographic gradient decreases. For example, dune fields may have very low to zero average topographic slope, but they have locally high steepness, meaning that, as in erosional landscapes, the sediment dynamics are dominated by bed topography. In deltas, however, the controlling factor is the relatively subtle water surface topography; therefore, simple descriptions relating sediment deposition and erosion to, e.g., local elevation and slope give realistic dune field dynamics but do not work in deltas.

Can we be more systematic about evaluating the amount of detail needed to model a geomorphodynamic system? This is an important fundamental question in morphodynamic modeling, and we do not pretend to resolve it here. But our experience with this model suggests the following guidelines as a starting point:

- For gravity-driven systems, the overall gradient of the landform is one important index, in the sense that in high-gradient systems the gradient alone is enough to route the flow; simple schemes such as the steepest-path method (Passalacqua et al., 2010) are then sufficient to determine the flow path, without the need for simulation of the flow details.

- Froude number (Fr): as Fr tends to unity, the backwater length tends to zero (Cui and Parker, 1997), so the simplifying local normal-flow assumption provides a satisfactory means of accounting for momentum balance in the flow.

- For systematic behaviors comparable to or beyond the backwater length scale, in-channel hydrodynamic details can be resolved at much lower complexity, for instance in avulsion models that use single-cell-wide threads to represent channel belts (Jerolmack and Paola, 2007).

- Whether the system to be modeled exhibits a strong generic pattern or scale-invariant (e.g., fractal) properties matters: the lack of a universal "pattern" in a dynamic system is an indicator of sensitivity to local detail.
Conclusion

In this paper we have introduced a new reduced-complexity model (RCM) for river delta formation. Key techniques include: (1) water and sediment fluxes are represented as parcels and routed through the domain from a Lagrangian point of view; (2) the movements of parcels are based on a probability field calculated from rules abstracting the governing physics; (3) deposition and erosion are achieved by exchanging volume between passing sediment parcels and bed sediment columns, with the condition for this exchange depending on a set of rules that distinguish bed load and suspended load; (4) bed sediment columns record the composition of coarse and fine material in layers; and (5) a topographic diffusion process accounts for cross-slope sediment transport and bank erosion. By varying input conditions such as the ratio of coarse to fine sediment, the reference slope, and the dimensions of the domain, the simulated deltas show a range of different behaviors that compare well with higher-fidelity model results and with observations of field and experimental deltas. We find that the relatively simple cellular representation of water and sediment transport is able to replicate delta morphology at the scale of channel dynamics, including the emergent channel network with channel extension, bifurcation and avulsion.

Here we summarize the basic components sufficient for an RCM to produce the major static and dynamic features of river deltas: a depth-averaged flow field that guides sediment transport; a non-trivial water surface profile that accounts for backwater effects, at least in the main channels; representation of both bed load and suspended load; and topographic steering of sediment transport.

Even at the RCM level of modeling, the following items still require a physically consistent treatment: the instability at channel mouths that creates bars and subsequent bifurcation; the variation in the water surface profile associated with lobe extension, which causes channel avulsion; and the water surface slope along channel sides, which creates flooding onto the floodplain.

We see the potential of this type of modeling in a way similar to that of physical laboratory-scale experiments, whose effectiveness does not necessarily come from classic scaling or from a detailed solution of the governing equations (Paola et al., 2009). The strength of RCMs is to serve as (1) exploratory models that allow for the direct representation of phenomenological observations; (2) a tool to identify larger-scale processes that are not sensitive to the details of smaller-scale processes; and (3) a framework for hybrid modeling in which higher-resolution model results can be integrated where a precise description of smaller-scale processes is needed even for larger-scale dynamics.
Figure 10. Flow features on the island of a delta formed in a shallow basin. (a) Model result from Run #6, where the basin depth (2.5 m) is only half of the inlet channel depth (5 m); (b) Wax Lake Delta, where the basin depth (<5 m) is much less than the inlet channel depth (>20 m); (c) schematic drawing showing the "tributary" flow feature on the island (Shaw et al., 2013), observed both in the field (Shaw, 2013) and in our numerical model results.

Table 1. List of delta model runs and parameter values.
2018-12-27T04:52:44.204Z
2014-07-28T00:00:00.000
{ "year": 2014, "sha1": "038364ce8902200d51877cb8a21fe9d57d692395", "oa_license": "CCBY", "oa_url": "https://esurf.copernicus.org/articles/3/67/2015/esurf-3-67-2015.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "952882ab1a5fcb3c977eb3787c9feeb0e910db49", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Geography" ] }
259264338
pes2o/s2orc
v3-fos-license
Opsoclonus myoclonus ataxia syndrome, ovarian teratoma and anti-NMDAR antibody: an 'unresolved' mystery

Background Opsoclonus-myoclonus-ataxia syndrome (OMAS) is characterised by the combination of opsoclonus and arrhythmic action myoclonus with axial ataxia and dysarthria. In adults, a majority are paraneoplastic, secondary to solid organ tumours, and may harbour antibodies against intracellular epitopes; however, a certain proportion have detectable antibodies to various neuronal cell surface antigens. Anti-N-methyl-D-aspartate receptor (NMDAR) antibodies and ovarian teratomas have been implicated in OMAS.

Methods Report of two cases and review of the literature.

Results Two middle-aged women presented with subacute-onset, rapidly progressive OMAS and behavioural changes consistent with psychosis. The first patient had detectable antibodies to NMDAR in the cerebrospinal fluid (CSF) alone; evaluation for ovarian teratoma was negative. The second patient had no detectable antibodies in serum or CSF; however, she had an underlying ovarian teratoma. Patient A was treated with pulse steroids and therapeutic plasma exchange (TPE) followed by bortezomib (BOR) and dexamethasone, while patient B was treated with steroids and TPE followed by surgical resection of the ovarian teratoma. Both patients had favourable outcomes and were asymptomatic at the 6-month follow-up.

Conclusions With coexistent neuropsychiatric manifestations, OMAS can be considered a distinct entity of autoimmune encephalitis, the pathogenesis being immune activation against known/unknown neuronal cell surface antigens. The observation of the absence of the anti-NMDAR antibody in patients with teratoma-associated OMAS, and vice versa, is intriguing. Further research on the potential role of ovarian teratoma in evoking neuronal autoimmunity and its targets is required. The management challenges in both cases, including the potential use of BOR, are highlighted.

INTRODUCTION

Opsoclonus-myoclonus-ataxia syndrome (OMAS) is a unique presentation of neurological disorders characterised by (1) opsoclonus (chaotic, conjugate and rapid involuntary eye movements without an intersaccadic interval); (2) myoclonus (sudden jerky movements involving the axial and limb musculature); and (3) ataxia (appendicular and axial, of varying severity). It is more common in children than in adults. In adults, after excluding structural lesions, the most common aetiologies include parainfectious (Salmonella, Coxsackie B3 virus, HIV, Epstein-Barr virus, St. Louis encephalitis, falciparum malaria and scrub typhus) or paraneoplastic (carcinoma of the lung and breast, ovarian teratomas (mature or immature), renal cell cancers and pancreatic malignancies) causes, of which the paraneoplastic aetiology accounts for 60% of cases. 1 2 Regarding the pathogenesis, there are two major observations: (1) disinhibition of the fastigial nucleus, which is supported by functional MRI studies of patients with OMAS in comparison with healthy subjects, and (2) the immune hypothesis, whereby a proportion of these patients have identifiable antibodies against neuronal cell surface antigens and neurofilament antigens. [2][3][4][5] However, research has failed to reveal a common neural antigen causing this unique presentation. Pranzatelli et al were able to demonstrate increased titres of CD19+ B cells among children with neuroblastoma-associated paraneoplastic OMAS, and these children had a favourable response to B-cell depletion therapies. 6 7
Overall, this entity could be considered within the spectrum of autoimmune (brainstem) encephalitis, with good response to immunotherapy and removal of the neoplasm. 1 8 9 Here we report two young women presenting with subacute-onset OMAS with psychiatric manifestations and seizures. We aim to highlight a few interesting observations regarding the diagnostic workup and treatment of these patients.

CASE REPORT

Patient A

A 29-year-old woman presented with a 9-week history of ataxia with involuntary jerky movements of the eyes and trunk, and behavioural disturbances in the form of increased fearfulness, anger outbursts and violent behaviour towards her mother and husband. She had a history of acute-onset ataxia about 3 years previously, which occurred immediately post partum and was managed as postinfectious cerebellitis with corticosteroids, with complete clinical recovery. Clinical examination revealed bilateral opsoclonus and action myoclonus with truncal and appendicular ataxia. She had bilateral dysdiadochokinesia and dysmetria with a wide-based ataxic gait. The clinical features were consistent with OMAS and coexistent neuropsychiatric manifestations. MRI of the brain with contrast did not reveal any structural lesions. Cerebrospinal fluid (CSF) studies showed four cells, all lymphocytes, with elevated protein (protein 50 mg/dL and glucose 73 mg/dL). Autoimmune (including thyroid antibody) and metabolic workups were negative. In view of the history of a steroid-responsive cerebellar syndrome, a likely inflammatory aetiology was considered. She was initiated on pulse doses of methylprednisolone (1000 mg intravenously for 5 days). Over the next week, her behavioural disturbances worsened, with severe agitation, hallucinations and violent behaviour, which were unresponsive to escalating doses of antipsychotics and benzodiazepines. On day 8, her autoantibody for the NMDA receptor was reported positive in the CSF (figure 1), with the corresponding serum sample being negative. The serum onconeural antibody profile (Euroimmune IgG, Lubeck, Germany; the panel includes anti-Hu, Yo, CV2, Ri, Ma2, amphiphysin, SOX1, Tr, Recoverin, Zic4, Titin and GAD65) was negative. Oncological evaluation with a full-body positron emission tomography (PET) scan and pelvic ultrasound was negative for any neoplasm, specifically ovarian teratoma. Rituximab (RTX) infusion was planned; however, she developed an allergic reaction to the test dose. The neuropsychiatric manifestations continued to worsen, and she progressed to develop encephalopathy. TPE (under controlled sedation) was initiated, following which mild stabilisation of her clinical condition was noted. However, she continued to have OMAS and psychosis. Based on existing reports on the use of bortezomib (BOR) for refractory autoimmune (anti-NMDAR) encephalitis, [10][11][12] it was decided to treat her with a regimen of subcutaneous BOR (1.3 mg/m2) and dexamethasone (20 mg) given on days 1, 4, 8 and 11, followed by a 10-day drug-free interval.

Figure 1 Cell-based assay using human embryonic kidney cells expressing GluN1 subunits of NMDAR. Undiluted CSF was tested as per the manufacturer's instructions (Euroimmune Ag, Lubeck, Germany). We classify the assay as strongly positive when there is a fluorescence signal of >2+ intensity (ie, more than the intensity of the positive control provided in the kit). CSF, cerebrospinal fluid; NMDAR, N-methyl-D-aspartate.
The first cycle was started on hospitalisation day 18. Following initiation of BOR, she had remarkable recovery, with near-complete resolution of opsoclonus, myoclonus and ataxia. Her agitation, restlessness and impulsivity reduced, and her antipsychotic doses were gradually decreased. BOR was stopped after two cycles, and steroid doses were tapered and stopped. She was discharged after 34 days of hospitalisation and was largely asymptomatic, with minor behavioural issues, at the 3 and 6 month follow-ups. She remained well for 2 years and succumbed during the delta wave of the COVID-19 pandemic in 2021.

Patient B

A 28-year-old woman presented with a 10-week history of gait ataxia with involuntary movements of the body, diplopia with oscillopsia, and seizures. She had developed new-onset behavioural changes with decreased interaction with family members, decreased sleep and anhedonia. She progressed to develop slurring of speech with cough on oral intake and nasal regurgitation. Clinical examination revealed her to be agitated and restless, with generalised tremors and action myoclonus. Cranial nerve examination revealed opsoclonus with bilaterally decreased palatal movements and a bilaterally decreased gag reflex. Motor system examination showed generalised dystonia with rigidity and Medical Research Council (MRC) grade 4/5 power in the bilateral upper and lower limbs. She had generalised hyper-reflexia with pendular knee jerks. She had bilateral cerebellar signs in the form of dysmetria, dysdiadochokinesia and impaired tandem walking. Her gait was wide-based and ataxic, requiring assistance for ambulation. MRI of the brain with contrast did not reveal any structural lesions. CSF analysis showed normal cell counts with elevated protein (67 mg/dL). On day 6 of hospitalisation, the autoimmune encephalitis panel and onconeural antibody panel in serum and CSF were reported negative. Oncological evaluation with a whole-body PET scan identified a right ovarian neoplasm. She progressed to develop worsening of opsoclonus, tremors and myoclonus, with behavioural disturbances in the form of agitation, and hence was transferred to the high-dependency unit. She was initiated on TPE on day 10 of hospitalisation. She had five sessions of TPE followed by pulse intravenous methylprednisolone (1000 mg once daily for 5 days). After clinical stabilisation, she was taken up for right salpingo-oophorectomy. A 6×6 cm right ovarian cyst with a solid component was removed, and histopathology was consistent with a mature cystic teratoma (figure 2), showing the presence of mature neural elements surrounded by dense inflammatory infiltrates. Postoperatively, she had dramatic clinical recovery with near-complete resolution of opsoclonus and ataxia. Steroids were gradually tapered and stopped. RTX was administered due to initial reservations about the possibility of a relapse. She was asymptomatic at the 3 and 6 month follow-ups. Further immunotherapy was not continued, and at 2 years of follow-up she reported being normal and independent in all activities of daily living.

DISCUSSION

These two cases highlight a few interesting observations. Both patients had OMAS and prominent neuropsychiatric manifestations. The CSF anti-NMDAR antibody was positive in the first patient, who did not have a teratoma, while the second patient had a teratoma without any demonstrable antibodies. Management in both cases was challenging in view of the progression and of coexistent psychosis despite high doses of antipsychotic agents.
The planning of surgical removal of a teratoma is challenging and often has to be prompt despite the severity of the illness. 13 We were especially concerned about the use of pulse steroids in the setting of psychosis. Both patients had good outcomes with a multimodality treatment approach including judicious use of steroids, TPE (under controlled sedation), immunosuppressants (BOR and RTX) and tumour removal (patient B).

Anti-NMDAR encephalitis presenting as a brainstem-cerebellar syndrome such as OMAS is a rarity. There are only a few case reports in the literature with antibody positivity and a clinical presentation as OMAS, among which one reported case is from the paediatric age group. [14][15][16][17] In a large cohort of patients with teratoma-associated encephalitis (211 patients), the novel presentation as a brainstem-cerebellar syndrome with opsoclonus was seen in 58% of those who were negative for anti-NMDAR antibodies (22 of 38 patients), with none in the antibody-positive group having a similar presentation. 18 The mechanism of occurrence of this brainstem-cerebellar syndrome appears to be dysfunction of the omnipause neurons in the brainstem (parapontine reticular formation) and involvement of the fastigial nucleus. 2 3 An unknown neuronal cell membrane-based antibody acting in conjunction with the anti-NMDAR antibody seems to be the most plausible explanation for this interesting observation. 14 15 One hypothesis implicates the pathogenic role of glycine receptor (GlyR) antibodies 5 18 19 in this scenario. Glycine is the chief neurotransmitter of the omnipause neurons, which in turn modulate the burst neurons initiating saccades. However, due to the non-availability of the test in our centre, we could not test our hypothesis that this antibody could be associated with OMAS. It is unlikely that the anti-NMDAR antibody is an epiphenomenon, considering the coexistent neuropsychiatric manifestations and the presence of antibodies in the CSF. The outcomes of teratoma-associated OMAS with immunotherapy are remarkable, and almost 75% of patients have complete recovery at a median follow-up of 15 months. 14

The association of ovarian teratoma with NMDAR encephalitis is intriguing; not all teratomas lead to the development of autoimmune encephalitis (AIE). Dabner et al compared the histopathology of teratomas associated with anti-NMDAR encephalitis with that of sporadic control teratomas and showed a prominent intratumoural lymphoid infiltrate closely clustered around the mature neuroglial elements. 20 They further hypothesise that this histology might be a harbinger of the development of AIE post resection of the tumour. The presence of NR1 and NR2B receptor-positive neural elements and predominantly CD4-positive lymphocytic infiltration have been independently associated with the development of antibodies against NMDAR subunits and overt clinical disease. 21 It is interesting to note that neuroglial elements are integral parts of a teratoma and are present in 30%-50% of cases; however, not all cases with teratoma and neuroglial elements progress to develop AIE. In their study, Day et al have shown that the presence of abnormal dysplastic neurons with binucleation or multinucleation, dysmorphism and inappropriate clustering appears to be the focus for immune sensitisation and the breakdown of self-tolerance. 22
It is also worthwhile to note that the dense inflammatory response around the dysplastic/tumour-like neural elements in a teratoma acts similarly to a tertiary lymphoid organ, producing antibodies in response to the chronic inflammation caused by a persistent antigenic trigger. 23 Comparing this with our case, we demonstrate the presence of mature neuroglial elements and an inflammatory infiltrate around the neural elements in the teratoma.

We also report good clinical improvement with BOR in OMAS associated with anti-NMDAR encephalitis. BOR targets antibody-secreting plasma cells, making it a potential second-line therapy in those resistant to, or intolerant of, RTX. 24 Given its relatively quick onset of action, BOR can be considered a potential therapeutic option in antibody-mediated neurological disorders. A synergistic effect with RTX could also be present. 25 Antimicrobial prophylaxis with acyclovir and cotrimoxazole is usually administered during BOR therapy. Long-term use of BOR is to be discouraged, considering the risks of immunosuppression, reactivation of infections and dose-related peripheral neuropathy. The recovery in the second case was dramatic after surgical resection, implying that the autoimmune trigger was the teratoma itself. Hence, there may not be a role for long-term immunotherapy in this scenario, as observed in our case.

CONCLUSIONS

Our cases highlight this novel presentation of a brainstem-cerebellar syndrome (OMAS) among patients with treatment-responsive autoimmune encephalitis. It is to be noted that patients with teratoma-associated OMAS and coexistent neuropsychiatric manifestations were negative for the anti-NMDAR antibody, and those with the antibody and OMAS did not have a teratoma. This implies that a yet unidentified antibody directed against a neuronal cell membrane antigen (such as the GlyR antibody) might be implicated in OMAS, even in cases with the anti-NMDAR antibody. Teratoma associated with the anti-NMDAR antibody has unique histopathological characteristics, and it functions as a tertiary lymphoid organ, causing a break in immune self-tolerance. Judicious use of immunotherapy often translates into good clinical outcomes.

Contributors ATM and AS planned, conducted, drafted the manuscript and submitted the study. AMM, AN, MC and ARG contributed to the management of the patients and critically reviewed the manuscript. SM and JAJP contributed to the diagnostics and the images in the manuscript. RNB, ATP, VM and SA critically reviewed the manuscript.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Patient consent for publication Consent obtained from parent(s)/guardian(s).

Provenance and peer review Not commissioned; externally peer reviewed.

Data availability statement Data are available upon reasonable request.

Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
2023-06-28T13:07:41.601Z
2023-06-01T00:00:00.000
{ "year": 2023, "sha1": "599670608e456c3219a0aafa1dc5a05fec354e61", "oa_license": "CCBYNC", "oa_url": "https://neurologyopen.bmj.com/content/bmjno/5/1/e000414.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "460830f0bc2c645b9b03c5d43b840055ea562ec6", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
57784042
pes2o/s2orc
v3-fos-license
Diabetes Self-Management Behaviors among American Indians in the Midwestern United States

The purpose of this study was to understand whether American Indian adults with diabetes in the Midwest are similar to American Indian adults nationally in their self-management behaviors. This cross-sectional survey was conducted from May 2009 to April 2010 at powwows, health fairs, and other community events. The convenience sample self-selected into the study and answered questions about diabetes self-management via touch screen computer. Participants were significantly below the national average for American Indians in their adherence to self-management recommendations for daily foot checks (p=0.0035) and having had a dilated eye exam in the previous year (p=0.0002), despite being significantly more likely to have taken a diabetes self-management class (p<0.0001). They were similar to the national average for daily glucose checks and for having had one or more hemoglobin A1C tests in the previous year. Participants were less likely to eat 5 or more servings of fruits or vegetables per day (p=0.0001), but more likely to achieve 150 minutes or more of physical activity per week (p=0.0001). Programs addressing self-care issues should be developed to help improve the self-management habits of American Indian adults with diabetes, with particular attention to activities beyond monitoring blood glucose and hemoglobin A1C levels.

Introduction

American Indians have the highest prevalence rates of diabetes of any racial/ethnic group in the United States (US), with certain tribes having the highest rates in the world [1][2][3]. The American Indian diabetes rate of 16.3% is more than double that of non-Hispanic Whites in the US (7.6%) [4]. Subsequently, mortality due to diabetes complications is four times higher among American Indians than among non-Hispanic White adults [5]. In addition, the ratio of type 2 diabetes rates among American Indians over age 35 to those among non-Hispanic White adults over age 35 increased from 1.7 in 1994-2000 to 2.5 in 2001-2007 [6].

Multiple factors contribute to the extremely high prevalence of diabetes among American Indians. Obesity prevalence among American Indians aged 20-74 is 54%, with an 81% prevalence of overweight or obesity [7]. Low socioeconomic status is also associated with the prevalence of type 2 diabetes. American Indians have the highest poverty rate in the U.S. at 27%, compared to non-Hispanic Whites at 11.6% [8]. Elements of poverty, such as food insecurity, limited access to healthy food, and shifts in consumer diets toward inexpensive, calorie-dense foods, also contribute to diabetes prevalence [9,10]. With so many American Indians affected, and with evidence of type 2 diabetes prevalence increasing in younger populations, adherence to diabetes standards of care and to the self-management behaviors that affect quality of life and health outcomes is now more vital than ever.

Diabetes self-management behaviors have been identified by both the American Diabetes Association and the Centers for Disease Control and Prevention as a key component of effective diabetes care [11,12]. Self-management behaviors include healthy eating, consistent physical activity, foot checks, not smoking, regular blood glucose monitoring, adherence to medication regimens, and managing stress [11,13].
One recent study found American Indians to have a higher level of diabetes self-care as compared to other racial and ethnic groups [14], though several previous studies have found that American Indians are less likely to meet recommended guidelines for diabetes self-care, leading to excessive disease burden and medical complications [15][16][17]. For example, American Indians are less likely to maintain effective glucose control [15,16,18], more likely to be physically inactive [19], and the least likely to inspect their feet daily compared to other racial/ethnic groups [20]. Non-Hispanic Whites have been found to be more likely to possess diabetes self-management equipment (e.g., exercise equipment, mirrors for checking feet, and blood sugar logs or diaries) [21] and more likely to report adopting healthy eating behaviors, such as limiting fat consumption and substituting fruit for sugary desserts, than American Indians and African Americans [22]. It is possible that some of the differences in diabetes self-management behaviors found across studies of American Indians are due to regional differences in the population. For that reason, it is important to examine regional self-management behaviors prior to the development of interventions to improve self-care.

When morbidity rates among American Indians with diabetes were compared with those of all U.S. adults with diabetes, the American Indian morbidity burden for diabetes exceeded that of insured U.S. adults with diabetes by 50%. Hypertension (61.2%), cerebrovascular disease (6.9%), renal failure (3.9%), lower extremity amputations (1.8%), and liver disease (7.1%) were all found to be significantly higher among American Indians, resulting in a reported lower quality of life [23].

Structured approaches to diabetes self-management (e.g., identifying specific foods and behaviors that limit food consumption, physical activity monitoring, the practice of regular blood glucose monitoring with specific target values, and the use of home aids that assist with diabetes care) have a positive impact on glycemic control and, thus, on the health outcomes associated with diabetes [24]. Both the American Diabetes Association and the Indian Health Service (IHS) standards of care for people living with diabetes include recommendations for semi-annual visits to a primary care provider focused on diabetes management, daily blood glucose monitoring (at least 3 times daily), hemoglobin A1C monitoring (every 3 months for those not meeting glycemic goals), annual dilated eye exams, annual comprehensive foot examinations along with daily self-examinations, semi-annual or annual diabetes education with additional support as needed, and adherence to prescribed medication regimens [11,13]. Though much has been written about the extremely high prevalence of diabetes among American Indian populations, there are gaps in the literature on regional differences in the behaviors of American Indians with diabetes and on their comprehensive adherence to recommended standards of care. To understand diabetes self-management among American Indians in the Midwest, we conducted a cross-sectional survey focused on self-management behaviors within a larger survey about health behaviors.

Study Participants

American Indian research assistants recruited participants at community events such as powwows, health fairs and other events in regional American Indian communities. Eligibility criteria for participants included men and women who self-identified as American Indian (alone or in part) and were at least 18 years of age.
Participants were asked to complete a 20-minute self-administered survey on tablet computers related to their health behaviors and knowledge. A total of 793 American Indian people completed the survey between May 2009 and April 2010, including 134 people with diabetes. The survey included demographic questions, questions about general health and health behaviors, frequency and source of healthcare, and specific diabetes self-management behaviors. All participants provided both written and verbal informed consent and were provided with a $10 gift card for their time and participation in the study. This study was approved by the Institutional Review Board of the University of Kansas Medical Center and the appropriate tribes.

Demographics

The research team collected standard demographic information, including gender, age, race/ethnicity and tribal affiliation, the state in which the individual was currently living and where s/he grew up, marital status and children, and educational attainment. The team also collected information about whether or not the individual had health insurance, where s/he received healthcare and the type of provider most often seen, how often s/he saw a medical professional in the last 12 months, and how long ago s/he last saw a medical professional. The survey also included questions about whether or not a participant had seen a traditional healer in the last 12 months and whether s/he discussed use of traditional medicine with his or her allopathic provider and vice versa.

Health Information Seeking Behavior

Using questions from the 2007 version of the Health Information National Trends Survey (HINTS), conducted biennially by the National Cancer Institute [25], the research team asked participants if they had ever brought information to a health care provider and how often, when the last time they had brought information to a health care provider was, how open the provider was to that information, and whether the information helped participants talk to the provider and helped them understand the discussion.

Diet and Physical Activity Questions

The research team asked participants how many servings of fruit and vegetables they usually eat per day and how many times in the previous week they ate fast food. To understand physical activity, the survey included questions from the International Physical Activity Questionnaire Short-Form (IPAQ-S [26]), including information about vigorous and moderate physical activity during the previous seven days, as well as time spent walking and sitting during the previous seven days.

Diabetes Self-Management

To understand self-management of diabetes, the survey included questions from the 2009 version of the Behavioral Risk Factor Surveillance System (BRFSS) [27]. The team asked participants who self-identified as having a diagnosis of diabetes if they were currently taking insulin or diabetes medication, how often they checked their blood glucose, if they had ever taken a class about diabetes management, how often they checked their feet for sores, and if they had ever had a sore on their feet that took more than four weeks to heal. The team also asked participants how many times in the last 12 months they had seen a provider for diabetes management, how many times in the last 12 months a provider had checked their feet for sores, and how many times in the last 12 months they had had their A1C checked by a provider, as well as whether or not they had been told that they had retinopathy and when the last time they had an eye exam with pupil dilation was.
Participants who indicated that they had never received a diagnosis of diabetes were asked if they had ever been told by a health care professional that they had pre-diabetes or high blood glucose and if they had received a blood glucose check within the previous year.

Knowledge of Health Consequences

To understand whether participants had any knowledge of the health consequences of overweight/obesity, including diabetes, the research team asked, "Which of these are increased by being overweight or obese?" Answer choices included high blood pressure, high cholesterol level, heart attack, stroke, and diabetes. The team also asked participants to strongly agree, agree, disagree, or strongly disagree with the following statement: "I know about the long-term complications of uncontrolled diabetes."

Data Analysis

The research team's statistical analysts calculated frequencies and percentages for each survey question. Analysts measured associations between diabetes and certain survey questions (demographic information, use of health care and information seeking, and knowledge of the health consequences of diabetes) using Pearson's chi-squared test, or Fisher's exact test if over 20% of expected cell counts were less than 5. Analysts compared answers to the diabetes self-management questions with percentages provided by the Centers for Disease Control and Prevention, Behavioral Risk Factor Surveillance System 2014 data [28], using the binomial test. Analysts used SAS version 9.4 for all analyses.
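As a minimal sketch of these test choices (in Python/SciPy rather than the team's actual SAS code; the counts shown are purely illustrative), the decision rule and the national-average comparison might look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, binomtest

def association_p(table):
    """Pearson chi-squared p-value, falling back to Fisher's exact test
    when over 20 % of expected cell counts are below 5 (2x2 tables)."""
    table = np.asarray(table)
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).mean() > 0.20 and table.shape == (2, 2):
        _, p = fisher_exact(table)
    return p

# compare a sample proportion with a national (BRFSS 2014) percentage,
# e.g. a self-management behavior among the 134 participants with
# diabetes; k and the national proportion p here are illustrative only
result = binomtest(k=60, n=134, p=0.72)   # requires SciPy >= 1.7
print(result.pvalue)
```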
They were similarly likely to know the correlation between overweight or obesity and diabetes, high blood pressure, or high cholesterol (see Table 3). Among participants, 134 individuals reported a previous diagnosis of diabetes. Of those participants who did not report a previous diagnosis of diabetes, 71 (11%) reported a previous diagnosis of pre-diabetes and 196 (30%) reported having had a blood glucose check within the previous year.

Individuals reporting a diagnosis of diabetes answered questions about their diabetes management. Thirty-seven individuals with diabetes (28%) reported taking insulin; 90 (67%) reported taking some type of oral diabetes medication. Thirty-nine individuals (29%) reported having a diagnosis of retinopathy and 27 (20%) reported having had foot sores that lasted longer than four weeks. Approximately half of individuals with diabetes (N=66, 49%) reported that they strongly agreed that they understood the long-term complications of diabetes. An additional 66 participants (45%) agreed with the statement.

Table 4 presents diabetes self-management activities of participants with a diagnosis of diabetes compared to the national average of American Indians with diabetes reporting these activities from the Behavioral Risk Factor Surveillance System 2014 [28]. Participants in the current study were similar to the national average in terms of daily blood glucose checks (62% versus 64% in the national average) and in having had one or more blood A1C checks in the previous year (64% versus 65% in the national average). However, participants were significantly less likely to perform daily foot checks (p=0.0035) or have dilated eye exams in the previous year (p=0.0002), though they were more likely to have ever taken a diabetes self-management class (p<0.0001). Participants in the current study were more likely to engage in 150 minutes per week of vigorous or moderate activity than the national average for American Indians with diabetes (p=0.0001), but less likely to eat five or more servings of fruits or vegetables per day (p=0.0001).

Conclusions
Results from this cross-sectional survey show that American Indians with diabetes in the Midwest are not always managing their diabetes effectively. Though more individuals in the current study have taken diabetes education classes, these classes have not translated into improved self-management behaviors. It is unknown whether the issue lies with the classes themselves or with individuals being unable or unwilling to follow the guidelines provided. Further inquiry into this issue is needed; the research team plans a qualitative follow-up study to understand why individuals are not following the guidelines. The national sample of American Indians from the Centers for Disease Control and Prevention [28] had some similar self-reported self-management behaviors (daily blood glucose checks and at least annual blood A1C checks); however, they had significantly better self-management activities in other areas (daily foot checks and annual dilated eye examinations). It is likely that the lack of self-management in these areas among individuals in the current sample led to the high proportion of individuals with foot sores that lasted longer than four weeks (20%) and a diagnosis of retinopathy (29%).
The education that participants in the survey are receiving in this area is not effective; the research team is in the process of developing new educational classes and information, including both on-line and in-person classes with more detailed information. The team plans to combine diabetes education with healthy cooking classes using a diabetes-friendly cookbook developed in conjunction with community members. In this manner, the team hopes to address the lower numbers of American Indian people in the Midwest with diabetes eating fruits and vegetables. The team has already provided some healthy cooking classes to American Indian community members locally and has had some initial anecdotal success with bringing people to the classes and providing information. The team is now designing new classes with a greater focus on diabetes management. In this sample, there was a relatively large number of participants who used both biomedical providers and traditional healers, particularly among people with diabetes. This is an important finding for local American Indian communities because many people are unwilling to talk about their use of multiple sectors of health care with their biomedical providers. It is possible that there could be interactions between medications provided through biomedical health care and herbal medicines or foods provided through traditional health care. It is likewise possible that the combination of biomedical and traditional healing has a multiplicative effect on improving diabetes outcomes. This area needs further study and could be used to further improve care of individuals with diabetes. The research team plans further investigation of this topic to determine how it can best be incorporated into diabetes self-management classes. This study has three important weaknesses: a convenience sample, a cross-sectional design, and self-report data. However, it highlights some important factors for future research in this area and implications for diabetes education among American Indians in the Midwest. Educators working with American Indians in the Midwest must emphasize certain self-management behaviors, particularly daily foot checks and annual dilated eye examinations, as well as eating enough fruits and vegetables. The research team responsible for this study plans development of additional educational programs for American Indian communities in the Midwest designed specifically to improve diabetes self-management behaviors.

Table 4. Diabetes self-management
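The statistical procedures described above (Pearson's chi-squared with a Fisher's exact fallback, and binomial tests against fixed national percentages) are straightforward to sketch. The fragment below is an illustrative Python rendering only; the study itself used SAS 9.4, and every count shown is a hypothetical placeholder, not study data.

# Illustrative Python sketch of the described analysis (the study used
# SAS 9.4); all counts below are hypothetical placeholders.
from scipy.stats import binomtest, chi2_contingency, fisher_exact

def association_p(table):
    """Pearson chi-squared test of association, falling back to Fisher's
    exact test when over 20% of expected cell counts are below 5
    (the fallback shown here applies to 2x2 tables)."""
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).mean() > 0.20:
        _, p = fisher_exact(table)
    return p

# e.g. diabetes status (rows) vs. private insurance (columns).
print(association_p([[30, 104], [250, 409]]))

# Comparing a sample proportion with a fixed national percentage,
# e.g. 83 of 134 participants (~62%) vs. a 64% national figure.
print(binomtest(k=83, n=134, p=0.64).pvalue)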
MicroED characterization of a robust cationic σ-alkane complex stabilized by the [B(3,5-(SF5)2C6H3)4]− anion, via on-grid solid/gas single-crystal to single-crystal reactivity

Microcrystalline (∼1 μm) [Rh(Cy2PCH2CH2PCy2)(norbornadiene)][S-BArF4], [S-BArF4] = [B(3,5-(SF5)2C6H3)4]−, reacts with H2 in a single-crystal to single-crystal transformation to form the σ-alkane complex [Rh(Cy2PCH2CH2PCy2)(norbornane)][S-BArF4], for which the structure was determined by microcrystal Electron Diffraction (microED), to 0.95 Å resolution, via an on-grid hydrogenation, and a complementary single-crystal X-ray diffraction study on larger, but challenging to isolate, crystals. Comparison with the [BArF4]− analogue [ArF = 3,5-(CF3)2(C6H3)] shows that the [S-BArF4]− anion makes the σ-alkane complex robust towards decomposition both thermally and when suspended in pentane. Subsequent reactivity with dissolved ethene in a pentane slurry forms [Rh(Cy2PCH2CH2PCy2)(ethene)2][S-BArF4], and the catalytic dimerisation/isomerisation of ethene to 2-butenes. The increased stability of [S-BArF4]− salts is identified as being due to increased non-covalent interactions in the lattice, resulting in a solid-state molecular organometallic material with desirable stability characteristics.

S.1 EXPERIMENTAL DETAILS
All manipulations (unless stated otherwise) were performed under an argon atmosphere, using standard Schlenk techniques on a dual vacuum/argon manifold or by using an argon-filled glovebox (MBraun). Glassware was flame dried under vacuum prior to use. Pentane and dichloromethane (CH2Cl2) were dried using an Innovative Technology Pure-Solv (PS-400-3) solvent purification system and degassed by freeze-pump-thaw cycles. Deuterated solvents were dried using an appropriate drying agent: dichloromethane-d2 (CD2Cl2) with CaH2; acetonitrile-d3 (MeCN-d3) with 3 Å molecular sieves. After drying, these solvents were degassed by freeze-pump-thaw cycles and then stored over 3 Å molecular sieves. Hydrogen (H2) and deuterium (D2) gases were purchased in lecture bottles from Sigma-Aldrich and used as received. [Rh(Cy2P(CH2)2PCy2)Cl]2 was prepared by a previously reported method.1 All other chemicals were purchased from commercial vendors and used as received. Solution NMR data were collected on either a Bruker AVIIIHD 500 MHz or 600 MHz spectrometer at 298 K unless otherwise stated. Residual protio solvent resonances were used as a reference for 1H NMR spectra.2 31P{1H} NMR spectra were referenced externally to 85% H3PO4 (D2O). All chemical shifts (δ) are quoted in ppm and coupling constants in Hz. Solid-state NMR (SSNMR) samples were prepared by packing powdered microcrystalline samples into a 4 mm zirconia solid-state rotor inside an argon-filled glove box. SSNMR spectra were obtained on a Bruker AVIIIHD 400 spectrometer, with a magic-angle spinning (MAS) rate of 10 kHz, referenced externally to triphenylphosphine (31P: δ = −9.3) or adamantane (13C{1H}: upfield methine resonance, δ 29.5).3 Thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) measurements were performed in a thermal analyser (Netzsch STA 449 F5 Jupiter®) using an alumina crucible. The samples were heated up to 1000 °C at a ramp rate of 10 °C min−1 under an atmosphere of He flowing at 20 mL min−1. Powder X-ray crystallography was performed on a Panalytical Aeris X-ray diffractometer equipped with a 600 W copper source and a PIXcel1D-Medipix3 detector.
The instrument was operated in transmission mode with the sample in a 0.6 mm OD borosilicate capillary. Elemental microanalyses were carried out by Dr Graeme McAllister at the University of York using an Exeter Analytical CE-440 analyser. Samples were treated with CD2Cl2 (0.5 mL) and MeCN-d3 (10.6 µL, 40 eq.), immediately resulting in a yellow solution, from which a yellow solid precipitated. The volatiles containing liberated dx-NBA were vacuum transferred to an empty NMR tube, then sealed under an Ar atmosphere for subsequent NMR analysis. A sample was then suspended in pentane (1 mL). After three freeze-pump-thaw degassing cycles, the ampoule was charged and sealed under an atmosphere of ethene (20 PSI, ~9 cm3, ~66 eq. per Rh) and stirred at 500 rpm. After 20 hr, an internal reference, adamantane (15 mg, 0.11 mmol), was added to the mixture, which was then filtered through a 0.2 µm pore PTFE syringe filter into a J. Young NMR tube. 1H NMR analysis of this pentane solution, integrated relative to the adamantane reference, revealed liberated NBA, 2-butenes, 1-butene and unreacted ethene (Table 1). The ampoule containing the remaining solids was subsequently recharged with pentane (1 mL) and ethene (20 PSI) as before. The mixture was stirred for a further 20 hr, then quantified once more by 1H NMR, relative to additional adamantane. To examine whether any trace, unobservable but active, soluble species were present, the filtered solution taken after the first 20 hr was recharged with ethene, stirred for 20 hr, then reanalysed by 1H NMR: no additional 2-butenes or 1-butene had formed over this time. The analogous reaction with [1-NBA][BArF4] yielded less than 0.1 equivalents of 2-butene per Rh. A parallel reaction in a pentane suspension was conducted primarily to assess the solid reaction product by SSNMR analysis; however, the mixture was also assessed for 2-butenes and 1-butene by 1H NMR analysis of the pentane supernatant, using the quantitatively displaced NBA (1 eq. per Rh) as an internal reference. After 20 hr, a 0.5 mL aliquot was removed prior to isolation of the solids for SSNMR characterisation (Table 2).

S.3 CRYSTALLOGRAPHIC AND REFINEMENT DATA
Selected crystallographic data are summarized in the text and full details are given in the supplementary deposited CIF files. This data can be obtained free of charge from the Cambridge Crystallographic Data Centre via http://www.ccdc.cam.ac.uk/data_request/cif.

S.3.1 Single-crystal X-ray diffraction methods
Single-crystal X-ray diffraction data were collected on an Oxford Diffraction SuperNova diffractometer with Cu-Kα (λ = 1.54184 Å) radiation equipped with a nitrogen gas Oxford Instruments Cryojet cooler. Raw frame data were reduced using CrysAlisPro, solved using Superflip,4 and refined using full-matrix least-squares refinement on all F2 data using SHELXL-18,5 within the OLEX2 program.6 All non-hydrogen atoms were refined anisotropically and hydrogen atoms were geometrically placed unless otherwise stated and allowed to ride on their parent atoms. Distances and angles were calculated using the full covariance matrix. Micro-crystalline [1-NBD][S-BArF4] was finely ground and deposited onto Quantifoil Cu R1/4 grids that had been assembled into autogrid cartridges. These grids were then treated with H2. Crystals were highly radiation sensitive and it was only possible to collect 20-30° of data before visible loss of diffraction quality occurred.
Over the course of this work 111 datasets were collected from this sample, but the highest quality data were recorded from 29 crystals across 2 duplicate grids from the same microscope session. All data were processed using DIALS.7 The images recorded on the Ceta-D camera show mean negative background values at high resolution, which hampers background modelling, so a pedestal of 64 ADU was added to every pixel value. Initially the detector distance was fixed to 958.5 mm (determined using powder diffraction from an aluminium powder calibration grid). The structures were solved ab initio using SHELXT.8 Structure refinement was performed using SHELXL.5 Electron scattering factors from Peng9 were used in refinement. Anisotropic ADPs were refined for all non-hydrogen atoms, and all hydrogen atoms were geometrically placed using the idealised (inter-nuclear) X-H distances used in refinement of structures against neutron diffraction data with SHELXL10 and allowed to ride on their parent atoms. Similarity restraints on ADP components (SIMU instruction) and enhanced rigid-body restraints, where the relative motion of a bonded pair of atoms is restrained to be perpendicular to the bond between them (RIGU instruction11), were applied to fragments of the structure. These restraints, together with refinement of an extinction parameter (EXTI instruction), enabled anisotropic refinement of all non-hydrogen atoms without resorting to use of ISOR or XNDP instructions to prevent ADPs of some atoms becoming non-positive definite during refinement. Short inter-ion contacts were analysed using the Crystal Explorer package.26
An Efficient Method for the Synthesis of 2',3'-Nonsubstituted Cycloalkane-1,3-dione-2-spirocyclopropanes Using (2-Bromoethyl)diphenylsulfonium Trifluoromethanesulfonate.

An efficient and practical synthesis of 2',3'-nonsubstituted cyclohexane-1,3-dione-2-spirocyclopropanes using a sulfonium salt was achieved. The reaction of 1,3-cyclohexanediones and (2-bromoethyl)diphenylsulfonium trifluoromethanesulfonate with powdered K2CO3 in EtOAc at room temperature (r.t.) provided the corresponding spirocyclopropanes in high yields. The synthetic method was also applied to 1,3-cyclopentanedione, 1,3-cycloheptanedione, 1,3-indanedione, acyclic 1,3-diones, ethyl acetoacetate, and Meldrum's acid.

The sulfonium salt 13 was prepared using a slightly modified procedure, originally reported by Aggarwal and colleagues22) (Chart 6). The conversion of 2-bromoethanol (15) into triflate 16 with trifluoromethanesulfonic anhydride and pyridine in CH2Cl2, followed by treatment with diphenyl sulfide in toluene at reflux, afforded the sulfonium salt 13 in 70% overall yield.

First, we investigated the reaction of 8a with 1.5 eq of 13 using 3 eq of DBU in CH2Cl2 at r.t. The reaction did not complete even after 24 h, and gave the corresponding spirocyclopropane 1a in only 19% yield (Table 1, entry 1). When NaH was used in CH2Cl2, the reaction for 15 h afforded 1a in 72% yield (entry 2). Changing the base to KHCO3 resulted in a similar reaction rate (17 h) and increased the product yield to 84% (entry 3). Next, we investigated powdered potassium carbonate as a base23) (Chart 5). Remarkably, the use of powdered K2CO3 enhanced the reaction rate (1.5 h) and afforded 1a in 84% yield (entry 4). Switching the solvent from CH2Cl2 to N,N-dimethylformamide (DMF) improved the reaction rate (1 h), although a considerable decrease in the product yield was observed (72% yield, entry 5). Delightfully, the reaction in EtOAc for 1.5 h gave 1a in 87% yield (entry 6). The use of granular K2CO3 slightly decreased the product yield (83% yield) and gave irreproducible conversion (entry 7). We next optimized the amount of sulfonium salt 13. Using 1.2 eq of 13 led to a slight drop in the product yield (84% yield, entry 8), and the use of 2.0 eq of 13 appreciably diminished the product yield (81% yield, entry 9). Thus, we achieved the direct synthesis of spirocyclopropane 1a from 8a using 1.5 eq of sulfonium salt 13 and 3 eq of powdered K2CO3 in EtOAc. (Table 1 footnotes: a) All reactions were performed on a 0.5 mmol scale with 1.5 eq of sulfonium salt 13 and 3 eq of base. b) The starting material 8a was recovered in 30% yield. c) Irreproducible yield. d) 1.2 eq of sulfonium salt 13 and 2.4 eq of powdered K2CO3 were used. e) 2.0 eq of sulfonium salt 13 and 4.0 eq of powdered K2CO3 were used.)

In addition, the synthesis of acyclic 1,3-dione-derived cyclopropanes using the present protocol was examined (Chart 7). The reaction of acetylacetone (6a) with 1.5 equiv of sulfonium salt 13 using powdered K2CO3 in EtOAc provided the corresponding cyclopropane 7a in 79% yield. The use of 1-phenyl-1,3-butanedione (6b) afforded cyclopropane 7b in 83% yield. Since the yields of 7a,b were higher than those in Chart 2,13,14) these results clearly demonstrate that the present synthetic method using the sulfonium salt 13 is also effective for the synthesis of acyclic 1,3-dione-derived cyclopropanes.

Finally, we investigated the synthesis of cyclopropanecarboxylates (Chart 8). The reaction of ethyl acetoacetate (23) with sulfonium salt 13 and powdered K2CO3 in EtOAc for 1.5 h gave ethyl 1-acetylcyclopropanecarboxylate (24) in only 71% yield. On the other hand, the reaction of dimethyl malonate (25) with 13 for 4 h gave dimethyl 1,1-cyclopropanedicarboxylate (26) in only 49% yield, along with some decomposition products.25) Interestingly, the use of Meldrum's acid (27) afforded the corresponding spirocyclopropane 28 in 80% yield. We speculate that the higher acidity of Meldrum's acid (27: pKa 7.3, in DMSO at 25 °C)26) compared with that of dimethyl malonate (25: pKa 15.9, in DMSO at 25 °C) is the reason for the success of the reaction.

Experimental
General
Melting points are uncorrected. IR spectra were recorded on a JASCO FT/IR-460 Plus spectrophotometer and absorbance bands are reported in wavenumbers (cm−1). All NMR spectra were recorded using a JEOL JNM-ECX400P spectrometer. 1H-NMR spectra were recorded at 400 MHz. Chemical shifts are reported relative to an internal standard (tetramethylsilane at δH 0.00 or CDCl3 at δH 7.26). Data are presented as follows: chemical shift (δ, ppm), multiplicity (s=singlet, d=doublet, t=triplet, quint=quintet, m=multiplet), coupling constant and integration. 13C-NMR spectra were recorded at 100 MHz. The following internal reference was used (CDCl3 at δC 77.0). All 13C-NMR spectra were determined with complete proton decoupling. High-resolution (HR) mass spectra were determined with a JEOL JMS-GCmate II instrument. Column chromatography was performed on Silica Gel 60 PF254 (Nacalai Tesque, Inc., Kyoto, Japan) and Kanto silica gel 60 N (63-210 mesh) under pressure. Analytical TLC was carried out on Merck Kieselgel 60 F254 plates. Visualization was accomplished with UV light and phosphomolybdic acid stain solution followed by heating.

Preparation of (2-Bromoethyl)diphenylsulfonium Trifluoromethanesulfonate (13) from 2-Bromoethanol (15) (Chart 6)
A solution of triflic anhydride (1.81 mL, 11 mmol) in CH2Cl2 (5 mL) was added to a solution of pyridine (0.88 mL, 11 mmol) in CH2Cl2 (5 mL) at −20 °C. After stirring for 10 min, 2-bromoethanol (15) (0.71 mL, 10 mmol) was added to the mixture and the reaction mixture was stirred at −20 °C for 15 min. The precipitate was removed by filtration and washed with Et2O (10 mL). The combined filtrates were concentrated in vacuo, and the residue was diluted with hexane (30 mL). The precipitate was removed by filtration and washed with Et2O (5 mL). The combined filtrates were concentrated in vacuo to provide crude product 16 (2.38 g), which was used in the next step without further purification.
A GENETIC ALGORITHM-BASED APPROACH FOR THREE-PHASE FAULT EVALUATION IN A DISTRIBUTION NETWORK

Standard IEC 60909 provides all the basic information that is used in the evaluation of three-phase short circuit faults. However, it uses numerous estimations in its fault evaluation procedures. It estimates voltage factors, resistance to reactance ratios (R/X), resistance to impedance ratios (R/Z) and other scaling factors. These estimates do not cater for every nominal voltage. Users often have to approximate these values. In this paper, adjustments were made to the genetic algorithm (GA) with regards to gene replacements and the arrangement of scores and expectation. During fault computation, the GA was used to stochastically determine R/X and R/Z ratios with regards to the parameters of the power system. The GA was tested on a nominal voltage that is properly catered for by Standard IEC. The GA results and the IEC values were within an approximate range. This implies that the developed GA can be further used to determine these ratios for nominal voltages that are not sufficiently accounted for by Standard IEC. This leads to obtaining precise fault values in all instances.

INTRODUCTION
There is a continuously increasing demand for the energy that is generated, transmitted and distributed by modern electric power systems [1]. Identifying short-circuit faults becomes more complicated because of the corresponding increase in short-circuit power [2], and the conventional short-circuit computational methods cannot swiftly handle these short-circuit currents during abnormal operating conditions. The conventional methods for predicting, calculating and plotting short circuit faults found in the literature include [3-8]:
i. The Direct-Method, Per-Unit Method and Symmetric Components Technique [3,4].
ii. Computer methods, i.e. quasi-steady-state fault analysis and time-domain fault analysis [5,6].
iii. Recent software tools, e.g. ETAP (Electrical Transient Analysis Program), Easy-Power and Matlab [7,8].
The computation of short circuit faults in the real world should consider noise and dynamic environments, since they adversely affect the fault evaluation processes of all these methods. During their fault evaluation processes, the conventional methods try to address the problems of adaptivity to uncertain environments, parameter sensitivity, data intensity, autonomy and multiobjective optimisation [7,9]. However, they fail to do so sufficiently, hence they do not give a wide range of operating conditions that can cope with the various time-varying configurations and parameters of power system networks. These conventional methods struggle with trade-off analysis for higher dimension problems. For any computational problem, they need all the characteristics of the function, i.e. the task processing periods, data dependencies and synchronisation requirements, before they can begin execution [6,10]. This means that they cannot provide a valuable artificial creativity approach, and this therefore inhibits their function maximisation [11,12]. Evolutionary algorithms are metaheuristic tools that take a stochastic, direct approach. They employ dynamic heuristics, unlike conventional methods, which apply static heuristics [13].
Evolutionary algorithms begin with a population of solutions for every optimisation problem and use the Pareto sense in prioritizing the solutions [14]. Therefore, evolutionary algorithms present superior qualities in optimising complex problems. In the analytical (mathematical) modelling technique presented by this paper, a network model (of a physical situation) was created and some accompanying equations were formulated. The mathematical model was solved to see the effects of different parameters. This led to having a much more informed evaluation of the performance of the proposed methodology. The states and operating conditions that were beyond the scope of the methodology could also be noted. The rest of the paper is organised as follows. Section 2 gives the theoretical background and description of the research problems. Section 3 gives a detailed description of the proposed methodology and its accompanying motivations. Section 4 gives a detailed overview of the genetic algorithm and the proposed modifications. Section 5 presents the experimental procedures. The results are presented and discussed in Section 6. Finally, Section 7 gives the conclusion of the paper.

CHARACTERISTICS OF FAULTS
Three-phase short circuit faults can be classified as symmetrical three-phase faults, asymmetrical line-to-line faults, asymmetrical double line-to-earth faults and asymmetrical single line-to-earth faults [15,16]. Whenever a fault occurs, it divides the power system into an upstream network and a downstream network. Fig. 1 illustrates how a fault divides a network system [3]. In Fig. 1(a), the short circuit fault is located far from the sources (generators) and in Fig. 1(b), the fault is located at the generator terminals. For faults located far away from the sources, the effects of the parameters of the sources as rotating machines can be ignored, but for faults located at the generator terminals, the effects of the generator parameters should be taken into account [17,18]. When a short circuit fault occurs close enough to the terminals of a generator, the generator will produce four components of short circuit fault. These components are the aperiodic component, the sub-transient component, the transient component and the steady-state component. Fig. 2 is an illustration of the individual components [1]. As seen in Fig. 2, these components have different decay time constants. These decaying patterns are produced as a result of the non-instantaneous change in magnetic flux in machine windings (armature windings) [19]. The four components sum up to give the full short-circuit spectrum shown in Fig. 3 [1].

REVIEW OF THE METHODS
A power system must be able to provide reliable and continuous power flow to consumers during either normal or abnormal operating conditions. This influences the selection of a computational method based on its qualities and the properties it presents. Table 1 presents a survey and analysis of various fault evaluation methods. Their shortcomings are laid bare and the need for employing the most advanced techniques can be seen. The information in Table 1 is given by [1, 3, 5, 7, 11, 17, 20 and 21].

RESEARCH PROBLEMS
The computation of three-phase short circuit faults in the real world is a complex problem. In the real world, there are a lot of uncertainties and adverse conditions [19]. These negative factors interfere with any fault evaluation procedure.
There is a strong need for deep research to seek and address the following problems in electric power distribution systems:
• The conventional methods of computing three-phase short circuit faults from the point of inception are not very robust in dealing with noise and uncertainties, e.g. simultaneously occurring faults or consecutive faults within a short time. They often need human intervention, i.e. they lack autonomy [1,5].
• Conventional methods of calculating three-phase short circuit faults are not very precise and reliable. They depend on numerous estimations in their fault evaluation processes. These estimates do not sufficiently cater for all the nominal voltages [7,22].
• Their fault evaluation procedures are data-intensive and are not easily tractable [24].
• The precision of the conventional methods decreases with an increase in the network size. Their precision also decreases with an increase in the number of machines contributing to the fault [25,26].
From the research problems highlighted above, the main objective of this paper was to develop a methodology that can sufficiently cater for every nominal voltage within 550kV. The methodology should not depend on the predefined estimated values given by Standard IEC 60909 and IEC 61313. It should compute (stochastically determine) these values on a case-to-case basis for every optimisation case. The values should be obtained with regards to the parameters and unique specifications of the power system. During fault computation, the methodology should include non-spinning loads at various network levels. It should also include upstream reactances when computing faults at points that are far away from the sources. This leads to obtaining much more precise fault magnitudes [22].

BENCHMARK FUNCTIONS
Short circuit current is highly dependent on the equivalent impedance of the network seen at the fault point. The impedance value depends on the network configuration, type of system earthing, elements within the network, type of fault and the fault location [16,17]. When a fault occurs, it is crucial to obtain the first cycle fault values, i.e. the root mean square value of maximum current and the magnitude of the first peak [5]. These values are for use in power system design [8,25]. Nine well-known benchmark functions were used in our experiments. The conventional methods and the genetic algorithms (GAs) were all tested on these functions. The functions are for the short circuit components, peak fault values, the symmetrical three-phase fault and the asymmetrical line-to-line fault. The functions are given below [15,22,27].

Steady-state current component (Isteady-state) is given by:
$I_{steady\text{-}state} = V_{max}/X$ (1)
Aperiodic current component (Iaperiodic) is given by:
$I_{aperiodic} = \sqrt{2}\,(V_{max}/X'')\,e^{-t/T}\sin\alpha$ (2)
Sub-transient current component (Isub-transient):
$I_{sub\text{-}transient} = (V_{max}/X'' - V_{max}/X')\,e^{-t/T''}$ (3)
Peak short circuit current (Ipeak) is given by:
$I_{peak} = \kappa\,\sqrt{2}\,(V_{max}/X'')$, with $\kappa = 1.02 + 0.98\,e^{-3R/X}$ (4)
Symmetrical three-phase fault:
$I_{3P} = V_{ph}/Z_{sc}$ (5)
Line-to-line fault:
$I_{LL} = (\sqrt{3}/2)\,I_{3P}$ (6)
Line-to-line fault:
$I_{LL} = \sqrt{3}\,V_{ph}/(2\,Z_{sc})$ (7)
The impedance at any fault point is given by:
$Z_{sc} = V_{ph}/I_{fault}$ (8)

In the equations above, X'' is the sub-transient reactance, X' is the transient reactance, X is the synchronous reactance, T'' is the sub-transient time constant, T' is the transient time constant, T is the aperiodic time constant, Vmax is the maximum phase voltage at the source terminals, Zsc is the equivalent impedance seen at the fault point, Ifault is the normal fault current, Vph is the phase-to-neutral voltage (Vph is generally less than Vmax, especially on the step-down side of the transformers) and α is the switching angle.
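The expressions reconstructed above follow the standard IEC 60909 forms, so they can be evaluated directly. The short Python sketch below assumes those forms; the voltage, impedance and R/X figures are illustrative assumptions and do not come from the paper.

import math

def kappa(r_over_x: float) -> float:
    """Peak factor from (4): kappa = 1.02 + 0.98 * exp(-3 R/X)."""
    return 1.02 + 0.98 * math.exp(-3.0 * r_over_x)

def i_sym_3p(v_ph: float, z_sc: float) -> float:
    """Symmetrical three-phase fault current from (5): I3P = Vph / Zsc."""
    return v_ph / z_sc

def i_peak(v_ph: float, z_sc: float, r_over_x: float) -> float:
    """Peak current: kappa * sqrt(2) times the symmetrical current."""
    return kappa(r_over_x) * math.sqrt(2) * i_sym_3p(v_ph, z_sc)

# Illustrative figures: a 20 kV system (Vph = 20 kV / sqrt(3)),
# Zsc = 0.8 ohm at the fault point, R/X = 0.2.
v_ph = 20e3 / math.sqrt(3)
print(f"I3P   = {i_sym_3p(v_ph, 0.8):,.0f} A")
print(f"ILL   = {math.sqrt(3) / 2 * i_sym_3p(v_ph, 0.8):,.0f} A")  # from (6)
print(f"Ipeak = {i_peak(v_ph, 0.8, 0.2):,.0f} A")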
METHODOLOGY
For distribution sub-systems, the resistances (R) are normally much smaller than the reactances (X). The resistances and reactances make up the impedance (Z) [1]. Standard IEC 60909 gives the R/X and R/Z values for networks below 550kV nominal voltage. The R/Z ratios will always be a value between 0.1 and 0.3, and the R/X ratios vary depending on the network configuration but are generally in the range of 0.1 to 1 [5]. These values are approximated depending on the source voltage of the network. The details for computing short circuit faults based on Standard IEC 60909 are given below [17,22,27]. When $\sum X$ is the sum of reactances and $\sum R$ is the sum of resistances, the short circuit impedance Zsc is given by:

$Z_{sc} = \sqrt{(\sum R)^2 + (\sum X)^2}$ (10)

Here, we define the upstream ratios. For high voltage systems, Standard IEC 60909 states that a user can estimate upstream resistances from the following relationships (ratios):

$R/Z = 0.1$ for nominal voltages around 150 kV (11)
$R/Z = 0.2$ for nominal voltages around 20 kV (12)
$R/Z = 0.3$ for nominal voltages around 6 kV (13)

Reactances can be obtained from (10) as follows:

$X = \sqrt{Z^2 - R^2}$ (14)

The relationship between reactance and impedance in (14) can be simplified to:

$X = Z\sqrt{1 - (R/Z)^2}$ (15)

When R/X is small, in the order of 0.1 to 0.2 for low-voltage networks and 0.05 to 0.1 for medium-voltage networks, Standard IEC 60909, IEC 60034 and IEC 60076 highlight that simplified estimations can be used, taking X ≈ Z and recovering R from a tabulated R/X ratio for each component, e.g. for the generators (16) and for the transformers (19) [3,17].

IMPLEMENTATION OF THE TOOLS
During fault computation, the total impedance at any fault point consists of:
▪ The up-stream resistances and reactances.
▪ The resistances and the reactances of all the other components at that particular fault point, i.e. cables, breakers and bus-bars.
Two main tools were used to compute fault values for evaluations and analysis, i.e. the conventional methods and the modified genetic algorithms. The algorithms were implemented within Matlab. Matlab M-Script files were used for implementing the genetic algorithms, and Matlab M-Script files were also written for implementing the conventional methods.

CONVENTIONAL METHODS
The first computational method was the use of the conventional methods, i.e. the symmetric components technique and the Direct-Method. The conventional methods computed fault values entirely based on the steps from Standard IEC 60909 and IEC 61313. The ratios for resistance to impedance that were substituted into (15) were obtained from the approximations given in (11) to (13) by Standard IEC 60909. There was a need for proper application of correct voltage factors and impedance correction factors [24]. Proper implementation of these factors increases simplicity and technical accuracy during the fault evaluation processes of the conventional methods [2].

GENETIC ALGORITHMS
The second computational method was the use of modified genetic algorithms. From the IEC coefficients given in (11) to (13), some nominal voltages are not properly accounted for; e.g. if a power system is of 85kV nominal voltage, it is difficult for the designer to choose between (11) and (12). This adversely impacts all the other values that will be obtained using (15). Also, when nominal voltages go over 200kV, there are no precise IEC ratios that a user can depend on. This influenced the development of the proposed computational approach that was used by the genetic algorithms. Here, the GAs computed fault values by recalculating impedances at each fault location, taking into account fault point impedances and upstream reactances. They computed fault values based on (10). The coefficient values that can be seen in (11) to (13) were determined stochastically with regards to the parameters and unique specifications of the optimised network.
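Equations (10)-(15) reduce the impedance decomposition to simple arithmetic; a minimal Python sketch, with an assumed illustrative impedance magnitude:

import math

def decompose_impedance(z: float, r_over_z: float) -> tuple[float, float]:
    """Split an impedance magnitude Z into (R, X) from an assumed R/Z
    ratio, using R = (R/Z) * Z and X = Z * sqrt(1 - (R/Z)^2), per (14)-(15)."""
    r = r_over_z * z
    x = z * math.sqrt(1.0 - r_over_z ** 2)
    return r, x

# At 20 kV the tabulated ratio is R/Z = 0.2, so X = 0.98 Z.
r_up, x_up = decompose_impedance(z=0.5, r_over_z=0.2)  # Z in ohms, illustrative
print(f"R = {r_up:.3f} ohm, X = {x_up:.3f} ohm")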
To supplement the abovementioned procedures, Fig. 5 in Section 5 gives more explicit details of the GAs' fault evaluation procedures.

MODIFIED GENETIC ALGORITHM
The genetic algorithm searches for solutions based on the principles of natural selection [30]. The genetic algorithm works as a multipath algorithm, searching multiple peaks simultaneously and in parallel, thereby decreasing the risk of trapping in local minima [31]. It encodes candidate values as strings and evaluates the fitness of every string. The genetic algorithm uses the Pareto sense and does not require any derivatives or other auxiliary knowledge. The genetic algorithm functions by exploring search spaces where the probability of finding optimum performance is highest [13,29]. It can autonomously schedule, prioritise and balance an optimisation problem. Regardless of its good qualities, the genetic algorithm has some weaknesses. One of the main weaknesses of the genetic algorithm is premature convergence [29]. The chief cause of this is the loss of diversity. If population diversity can be achieved throughout the optimisation procedures, the search path will become much better [21]. Trapping into a suboptimal solution will also be avoided [28]. Mutation is one of the key mechanisms that ensure and maintain diversity. Perfect mutation is needed to avoid the loss of genetic material. When crossover does not guarantee access to all the desired search spaces, random gene changes through mutation will assist in providing variations in the population [31]. The above-mentioned weaknesses led to modifications in the mutation, selection, creation and fitness scaling functions. The details of the proposed enhancements that were implemented and their motivations are given below.

CREATION FUNCTION
The genetic algorithm has two in-built creation options, which are creation 'uniform' and creation 'linear-feasible'. These in-built functions do not give satisfactory and explicit options with regards to altering and making amendments to some parameters [7]. We proposed changes to the creation uniform function to address two main defects that are not properly accounted for by the in-built functions, i.e.:
• To continuously influence the number of individuals that can be created at each evolution stage.
• To create a sufficient initial population for constrained cases.
Since the genetic algorithm works as a multipath search algorithm, the above-mentioned changes would ensure search efficiency until termination of the optimisation processes [21]. This would greatly decrease the chances of local minima trapping [31]. This would also help the algorithm to effectively explore all the search spaces where the probability of finding optimum solutions is highest [29]. For the first impediment, subsequent individuals in our proposed function were created with regards to the total population (totPop) and the initial population provided (In_P); two variables were created and added to help in making the adjustments. For the second impediment, adjustments were made to the range of values (used in creating the initial populations when considering bounds and constraints) to bring the desired effects when creating the array of populations. The magnitude of ф directly affected the selection of ß and Ψ because the subscripted assignment dimensions of IndividualsToBeCreated should not mismatch the Population arrays. The optimum values were ф = 4, ß = 1 and Ψ = 0.
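The Matlab listing for the modified creation function is not recoverable from the source, but the behaviour it describes (topping up a supplied initial population to the required size with bounded uniform samples so that every individual is feasible) can be sketched in Python. All identifiers below are illustrative stand-ins, not the authors' code:

import numpy as np

def create_population(tot_pop: int, initial: np.ndarray,
                      lower: float = 0.0, upper: float = 1.0) -> np.ndarray:
    """Top up the supplied initial population (In_P) to the total
    population size (totPop) with uniform samples inside the bounds,
    so every created individual is feasible for the constrained case.
    The paper's sizing variables (phi, beta, psi) are not modelled here."""
    n_new = tot_pop - initial.shape[0]
    new = lower + (upper - lower) * np.random.rand(n_new, initial.shape[1])
    return np.vstack([initial, new])

# One gene per individual: a single ratio bounded in [0, 1].
pop = create_population(tot_pop=50, initial=np.array([[0.2], [0.8]]))
print(pop.shape, float(pop.min()) >= 0.0, float(pop.max()) <= 1.0)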
FITNESS SCALING FUNCTION
In this function, the best candidates would be given the same opportunities to reproduce. We proposed the use of a variable (η). This variable would help in controlling the relationship between scores, expectation and the number of parents. η determines the amount of expectation with regards to the population size. By trial and error, an optimum value between 0 and 1 could be determined. This value selects the optimum number of scores for the parents at any particular stage during the evolution cycles. The scores would go on to be arranged in descending order to ensure that the top scores were given priority in influencing expectation. This eliminated the use of probabilities that are commonly used in the in-built functions [30]. The strength of the proposed fitness function is that, even when raw scores are not in a good range, the best scores will still have precedence. Another advantage is that there is no stalling during optimisation when there is a degenerate scenario, i.e. when some of the scores have equal magnitudes. Stalling is a big problem for fitness scaling functions that use probabilities when assigning scores and arranging expectations [7]. Another advantage of the proposed function is that there will be no negative expectations, since η was a value between 0 and 1. This gives the proposed scaling function much better qualities than the in-built scaling functions, which do not sufficiently cater for all possible operating scenarios: e.g. shift-linear fitness scaling has problems with the survival rates of individuals, and proportional and rank fitness scaling have problems when raw scores are not in a good range [12]. Top fitness scaling has problems in choosing the best quantity of scores for parents, whilst it also does not have optimum default values for higher dimension instances [14].

SELECTION FUNCTION
The proposed selection function would sort expectation (exp) in descending order by:

exp = sort(exp(:,1), 'descend')'

By sorting expectation in descending order, the top parents are selected for crossover and mutation. This gives the best parents top priority and eliminates the use of probabilities and random sampling, which are common in the in-built functions [21]. The parents were also limited to the interval between 1 and the population size. By trial and error, this function proved useful and better for the actual evolution of higher-performing individuals.

MUTATION FUNCTION
In this proposed mutation function, the genes that are mutated are equally spread throughout the genes' range. The probability of a genome being mutated was controlled with the aid of a variable (λ) in the range 0 to 1. By trial and error, values of λ in the range of 0.05 to 0.20 proved to give optimum results when all the other optional parameters had been set. Secondly, a gene had to be replaced by a value randomly chosen from a guided range. A bigger range implies more diversity, since the probability of replacing a gene with a value (similar structure) that has already replaced another gene will be small. This ensures maximum diversity. Variable 'α' was created and used for implementing that. The mutation points 'mp' are selected by:

mp = find(rand(1, length(child)) < λ) (25)

Based on the value of 'range' above, another variable 'γ' was created and used to control the mutation process in the creation of children. The optimum value of γ was 4, with bigger values of γ not suitable since this was constrained optimisation.
The value of 'spread' would go on to be used as follows:

child(mp) = A + rand(1, length(mp)) * spread (30)
Mutation_Children(i,:) = child (31)

PARAMETER SETTINGS OF THE GA
The traditional genetic algorithm without the proposed enhancements will be referred to as GA; the genetic algorithm that has been modified to supplement some defects will be referred to as MGA. For the optimisation problem in this research, the hybrid functions that could be added, since the optimisation procedure had bounds (as constraints), are patternsearch and fmincon. These minimisation functions run after the genetic algorithm terminates and retain a more accurate solution. MGAP will be MGA with the patternsearch minimisation algorithm. MGAF will be MGA with the fmincon minimisation algorithm. Therefore, four different genetic algorithms (GAs) would be tested on the fitness functions evaluated in this research, i.e. GA, MGA, MGAF and MGAP. Table 2 gives the genetic algorithm parameters. Some parameters in Table 2 could be varied adaptively to suit the custom functions and the minimisation functions.

TESTING OF THE ALGORITHMS
The GA, MGA, MGAF and MGAP were first tested on a standard benchmark function. This was done to confirm their robustness and accuracy. The Rastrigin function was used as the test function. All the algorithms were run 5 times and their results are presented in Table 3. The Rastrigin function is given below (a runnable sketch of (32) appears after the experimental model description below):

$f(x) = 20 + X_1^2 + X_2^2 - 10(\cos 2\pi X_1 + \cos 2\pi X_2)$, $X_i \in [-5.12, 5.12]$ (32)

The Rastrigin function given in (32) has a global minimum of [0; 0]. From Table 3, it can be seen that GA struggles with retaining the global minima. It sometimes converges to local minima. In optimisation cases, an algorithm that converges poorly and settles to local minima is regarded as inaccurate and unreliable [21,31]. That particular algorithm must not be given much priority with regards to optimising much more sophisticated problems [13]. Henceforth, GA was discarded and not used for our experimental procedures.

EXPERIMENTAL PROCEDURES
The model of the network used in this work was created based on [3,18,19]. The model resembles a real-world system. It has all the basic components of a power system as well as some protection devices, i.e. the main power supply, backup sources, transformers, synchronous machines, isolators, circuit breakers, relays, switches, earthing gears and loads. The algorithms/methods highlighted in Section 3 would all be tested on the model for their robustness on the research problems highlighted in Section 2.

EXPERIMENTAL MODEL
The network that is given in Fig. 4 has a 20kV source that supplies a high-voltage/low-voltage substation via a 1km overhead line. Two 2000 kVA generators also serve as back-up power to the main source. The generators supply the substation busbars in parallel to the main source. Parallel-connected transformers of equal magnitude 1250kVA supply the low-voltage busbars. The low-voltage busbars supply feeders which go to 3 motors rated 100kW each. When the fault occurs, all the motors are running. All connection cables are identical. The symmetrical three-phase short circuit fault and the asymmetrical line-to-line fault clear of earth should be calculated at:
• Point W, i.e. at the high-voltage bus-bars
• Point X, i.e. 15 meters from the transformer on the low-voltage bus-bars
• Point Y, i.e. on the low-voltage subdistribution board bus-bars
• Point Z, i.e. motor terminals
• Reverse currents of all the motors at the bus-bars should also be computed.
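As noted above, the Rastrigin benchmark (32) is fully specified by the text and can be written down directly; a minimal runnable sketch:

import numpy as np

def rastrigin(x1: float, x2: float) -> float:
    """Two-dimensional Rastrigin function from (32): highly multimodal,
    with the global minimum f(0, 0) = 0 on [-5.12, 5.12]^2."""
    return 20 + x1**2 + x2**2 - 10 * (np.cos(2 * np.pi * x1)
                                      + np.cos(2 * np.pi * x2))

print(rastrigin(0.0, 0.0))  # 0.0: the global minimum
print(rastrigin(1.0, 1.0))  # 2.0: one of the many local minima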
CONVENTIONAL METHODS
The network has a 20 kV source and it can be derived from (12) that the ratio of resistance to impedance will be 0.2. Therefore:

$R_{up\text{-}stream}/Z_{up\text{-}stream} = 0.2$ (33)

Substituting 0.2 into (15) will give:

$X_{up\text{-}stream} = 0.98\,Z_{up\text{-}stream}$ (34)

Therefore from (33) and (34), it can be derived that:

$R_{up\text{-}stream} \approx 0.2\,X_{up\text{-}stream}$ (35)

From the given parameters about the power system, Zup-stream can be obtained from the upstream source and line data (36).

GENETIC ALGORITHMS (MGA, MGAF, MGAP)
To eliminate stochastic discrepancies, each algorithm was repeated 20 times at each fault point. From (11) to (19) and (33) to (35), all the ratios from the Standard IEC 60909 with regards to this power system have values that are between 0 and 1. Therefore, all the search bounds of the three genetic algorithms used in this experiment would be varied adaptively within the range of lower-bound = 0 and upper-bound = 1. This is because the scalar quantities that we were determining stochastically must have an 'absolute value' that is greater than 0 but less than or equal to 1 [5,21].

GAs AT FAULT-POINT W
Using Fig. 4, the first step was to use (15) to obtain the value of Xup-stream from Zup-stream. The value of Zup-stream was obtained using (36). After obtaining the value of Zup-stream, the GAs would not go on to use the Standard IEC 60909 coefficients that are given in (33) to (35) to obtain the value of Xup-stream. Instead, the coefficient was left as an unknown value within the objective function and was determined stochastically using the procedures in Fig. 5. The next step was to obtain Rup-stream from the computed values of Xup-stream and Zup-stream. The GAs used (35), in which they had to stochastically determine the coefficient. There was also a need to obtain the value of RGenerators from the value of XGenerators. The value of XGenerators was computed from the given parameters about the power system. To obtain the value of RGenerators, the GAs did not go on to use the R/X coefficient that is given in (16); the coefficient was also determined stochastically using the procedures in Fig. 5.

GAs AT FAULT-POINTS X, Y, Z
For faults at X, Y and Z in Fig. 4, the reactances and the resistances are cumulative values, i.e. they are made up of fault point values and up-stream values. However, at Point X, there was a need to obtain the value of XTransformers from the value of ZTransformers. The obtained value of XTransformers was further used to get the value of RTransformers. Based on Fig. 5, the given R/X value in (19) was determined stochastically when computing the value of RTransformers.

THE COMPUTED COEFFICIENTS
Tables 4 to 6 give the coefficient values that were obtained by all the genetic algorithms (MGA, MGAF and MGAP) in 20 runs. Standard IEC values are also included in the tables for comparison. From the tables, when all the GA coefficients are rounded off to one decimal place, they will be equal to the IEC coefficient values. This makes all the proposed genetic algorithms capable of handling the computational problem that was being investigated. An analysis is made below as to which ones are the most suitable. The coefficient values that the MGA and MGAP obtained are within an approximate range. This is because the convergence points of these algorithms were almost the same. The MGAF obtained values that deviate a lot more from those obtained by the other algorithms, thus indicating that it struggled with convergence to the global minima. However, the coefficients obtained by all the GAs are slightly different from the values given by the Standard IEC 60909.
The MGA and MGAP coefficients deviate from the IEC values by not more than 4%, whilst the MGAF coefficients deviate by up to 18.5%. This makes the former two the much better GA options for the computational problem that was being investigated. Fig. 6 is a plot of the optimisation tools against their maximum percentage deviation from the predefined IEC values. The trends in Fig. 6 explicitly show the best and worst algorithms when computing coefficient values. When running the algorithms to obtain the coefficient values, the average time per run was noted and it has been plotted in Fig. 7. The MGA and MGAP converged at a lesser number of iterations, thus their computational time was short. Computational time is a key element used when evaluating an algorithm. The algorithms were evaluated using Matlab R2017a software installed on an Acer Aspire with an Intel(R) Celeron(R) processor at 1.80GHz and 4.00GB RAM with the Windows 10 Pro operating system. Moreover, in 18 or more of the 20 runs, the MGA and MGAP algorithms would obtain a value equal or almost equal to the given IEC values. This gives the two algorithms a 'confidence interval' greater than 90% when searching for coefficients. This makes these two algorithms more reliable, since they quickly attain stable and precise results and go on to consistently converge at the same point. This cements these two as the best performing and most suitable algorithms for the computational problem.

THE COMPUTED IMPEDANCES
Tables 7 to 9 give the fault point impedances that were obtained by all the GAs and conventional methods (CMs). The impedances that were obtained using the GAs' computed coefficients were almost equal to the impedances that the CMs obtained using IEC coefficients. This is because the coefficients that were used by the GAs and CMs were within an approximate range. From Tables 7 to 9, disregarding MGAF, which has the most abnormal deviations stated in Section 6.1, for faults at the source terminals and faults at the load terminals, i.e. at points W and Z in Fig. 4, there was a small difference in the obtained values of impedance between the CMs and the GAs. The percentage deviations between GAs and CMs impedance values are all less than 1% and 0.8% at points W and Z respectively. For faults on the low-voltage busbars and low-voltage subdistribution board, i.e. at points X and Y, which are distant enough from the rotating machines, the difference in the obtained values is significant. The percentage deviation between GAs and CMs values at point X is around 4% and at point Y it is around 1.5%. CMs give much larger impedance values. A large impedance value means that when that impedance value is substituted into Kirchhoff's voltage and current laws, a small value of short circuit current will be obtained. Fig. 8 is the plot of asymmetrical three-phase line-to-line currents that were computed using the impedances from the GAs and CMs. The trends for the computed symmetrical and asymmetrical three-phase currents are the same. The only difference is in their fault current magnitudes. This means that for faults on busbars and subdistribution boards, the CMs' fault evaluation procedures tend to understate the magnitude of short circuit current. This is dangerous, especially in the setting of protection devices. Standard IEC 60909 tries to rectify this problem but fails to do so sufficiently.
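To make the inverse relationship between fault-point impedance and fault current concrete, the sketch below decomposes an upstream impedance with the 20 kV ratios from (33)-(34) and evaluates the resulting three-phase current; the impedance magnitudes are assumed, illustrative values, not figures from Tables 7 to 9.

import math

def fault_current_3p(v_ph: float, z_sc: float) -> float:
    """Symmetrical three-phase fault current, I3P = Vph / Zsc: the larger
    the equivalent impedance, the smaller the short circuit current."""
    return v_ph / z_sc

V_PH = 20e3 / math.sqrt(3)  # phase-to-neutral voltage of a 20 kV system

# Assumed, illustrative impedance magnitudes (ohms) at two fault points.
for z in (0.8, 1.2):
    r, x = 0.2 * z, 0.98 * z  # per (33) and (34) at 20 kV
    print(f"Z = {z:.2f} ohm (R = {r:.2f}, X = {x:.2f}) "
          f"-> I3P = {fault_current_3p(V_PH, z) / 1e3:.1f} kA")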
Standard IEC 60909 states that for faults at points far away from the sources where there is a considerable effect of spinning loads, e.g. motors [1,5,16]:
• It is easier for conventional methods to 'estimate conservatively' the fault currents than to calculate the equivalent impedances [1,16].
• Currents by motors at these points can be calculated using the 'motor + cable' total impedance, or the current can be estimated using the starting motor current (Istart) and the rated current of a generator (Ir) [5,16]:

$I_{fault} \approx (I_{start}/I_r) \times$ rated motor current (37)

The estimates used by the CMs, from Standard IEC 60909, provide 'conservative protection' current values. Nonetheless, these fault values are not the precise fault magnitudes such as the ones that can be obtained by the GAs at any network level.

THE COMPUTED CURRENTS
Table 10 contains the computed symmetrical three-phase fault currents and Table 11 gives the computed asymmetrical three-phase line-to-line currents. I3P is the symmetrical three-phase fault, ILL is the asymmetrical three-phase line-to-line fault, CM is the conventional methods and RMC are the reverse motor currents. In this research, two fault conditions were computed, i.e. symmetrical three-phase faults and asymmetrical three-phase line-to-line faults. The symmetrical three-phase fault was computed because it is generally considered that symmetrical three-phase faults induce the highest fault currents. Its investigation is necessary because it plays a key role in equipment selection (equipment with the highest electrodynamic and current withstand capability) [16,17]. The asymmetrical three-phase line-to-line fault was computed to check if the proposed methodology applied to asymmetrical three-phase short circuit faults. There are some slight differences between the GAs' results and the CMs' results. The main reason for the small discrepancies in the obtained values is that the GAs include the total upstream reactances and resistances in their computational processes, i.e. they do not neglect the sources and their parameters for faults far away from the sources [5]. Nonetheless, the results obtained from the proposed methodology of using GAs and the results from the CMs based on the Standard IEC 60909 (alongside IEC 61313, IEC 60034 and IEC 60076) are very similar and within an approximate range. Protection units that can be selected using values obtained from either of the methods would be the same [5,32]. This means that the proposed methodology can be successfully used for the computation of three-phase short circuit faults. The successful optimisation of a practical network example demonstrated in Section 5 highlights the strength and diverse applicability of GAs to power systems' computational problems. The advantages of using GAs and the proposed computational procedures are:
▪ Unique Optimisation: The computational procedures/algorithms optimise power systems with regards to their unique specifications, i.e. they do not rely on the IEC estimated coefficients or use Equation (37). This means that the procedures can be reliably used for the evaluation of any nominal voltage within 550kV [15,16].
▪ Precise fault magnitudes: The computational procedures give more precise fault magnitudes because, unlike conventional methods (CMs):
o They take all components into consideration and do not ignore some of their base properties. CMs ignore non-spinning loads and protection devices [3,6,33].
o They also include the effects of sources when computing faults far away from sources [5,18].
▪ Enhanced modifications: When using the proposed computational procedures, GAs can be modified and enhanced with regards to the desired precision level and complexity of the problem [21,29].
▪ Choose specifications: During fault evaluation, the user can specify the optimisation bounds and there can also be fitness scaling of the functions [14,34].

CONCLUSIONS
Standard IEC 60909 and IEC 61313 lay out all the short circuit fault evaluation procedures. However, in their methodologies, they use a lot of estimations. The commonly used estimates are R/X and R/Z ratios. During fault evaluation, these ratios play a key role in determining the upstream and fault point impedances. The IEC lays out these ratios over a wide range and does not sufficiently cater for every nominal voltage within 550kV. When the need arises, the user has to estimate these values accordingly. In this paper, modified genetic algorithms were developed and used to stochastically determine these ratios during fault evaluation. One of the objectives of this research was to minimise the weaknesses of the genetic algorithm before using it for fault evaluation. Some adjustments were made to the traditional GA to reduce premature convergence, loss of population diversity and trapping into suboptimal solutions. Meticulous parameter selection was also implemented, and Fmincon and Patternsearch minimisation functions were added to improve the algorithm. This resulted in the development of 3 algorithms, i.e. MGA, MGAF and MGAP. The 3 algorithms were initially tested on a benchmark function, i.e. the Rastrigin function. The proposed modelling of the algorithms and the conscientious parameter selection proved to improve the algorithms significantly. The obtained results on the benchmark test function showed that the proposed algorithms were much more robust, fast, efficient, reliable and accurate as compared to the traditional GA. A model of a power system with nominal voltages within a range that is well catered for by the IEC was developed and optimised. The GAs managed to obtain coefficient values that were within an approximate range to the IEC values. MGA and MGAP coefficients deviated by less than 4% from the IEC values. This resulted in their impedances deviating by less than 4% from the CMs' impedances. Moreover, in determining the R/X and R/Z values, the MGA and MGAP runs had a 'confidence interval' greater than 90%. The three-phase fault currents that the GAs went on to obtain were similar to the fault currents that were obtained by the CMs, with the GAs' results arguably much better because of their efficacious and dependable fault evaluation procedures. This implies that, since the methodology gives comparable results to the CMs within the well-defined ranges, it can reliably be extended to nominal voltage regions that are not well catered for by the CMs and the Standard IEC. GAs can sufficiently sustain any nominal voltage because the proposed methodology optimises power systems on a case-to-case basis with regards to the parameters and unique specifications of a power system. The developed methodology was tested for its robustness in dealing with uncertainties during fault computation. Its precision and reliability when there is an increase in the number of machines contributing to the fault current was also tested. Regardless of the uncertainties, the GAs would still produce results within an approximate range to those produced by the CMs.
The successful computation and evaluation of the network in Section 5 shows that GAs can support both small and large networks of the radial distribution sub-systems. This means that GAs can also support the ring and the meshed distribution sub-systems, since they are derivatives of the radial distribution sub-system. Hence, GAs can be successfully used for the complex problem of computing three-phase short circuit faults for any nominal voltage within 550 kV.
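To make the GA machinery summarised above concrete, the following is a minimal, self-contained sketch of a genetic algorithm (truncation selection, uniform crossover, Gaussian mutation and elitism) minimising the Rastrigin benchmark used in this paper. It is an illustration only: it is not the authors' MGA, MGAF or MGAP implementation, and all parameter values (population size, mutation rate, bounds) are illustrative assumptions.

import numpy as np

# Rastrigin benchmark: global minimum 0 at x = 0.
def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def simple_ga(dim=2, pop_size=50, generations=200,
              bounds=(-5.12, 5.12), mutation_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.apply_along_axis(rastrigin, 1, pop)
        elite = pop[np.argsort(fitness)[:pop_size // 2]]   # truncation selection
        # uniform crossover between randomly paired elite parents
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents[:, 0, :], parents[:, 1, :])
        # Gaussian mutation, clipped to the search bounds
        mutate = rng.random((pop_size, dim)) < mutation_rate
        children = np.clip(children + mutate * rng.normal(0.0, 0.3, (pop_size, dim)), lo, hi)
        children[0] = elite[0]                              # elitism: keep the best
        pop = children
    fitness = np.apply_along_axis(rastrigin, 1, pop)
    return pop[np.argmin(fitness)], fitness.min()

best_x, best_f = simple_ga()
print(f"best x = {best_x}, f(x) = {best_f:.4f}")

In the fault evaluation described above, the fitness function would instead measure the mismatch between the impedances predicted by candidate R/X and R/Z ratios and the network's own parameters, which is the role these ratios play in the Standard IEC 60909 procedures.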
Editorial: Environmental effect on neuroinflammation and neurodegeneration, volume II

Introduction

Emerging epidemiological data in the last decade have indicated that environmental factors like pesticides and metals, among others, play a critical role in driving neurodegenerative disorders like Alzheimer's disease (AD), Parkinson's disease (PD), and others. Recent research in cell culture and animal models has further shown that exposure to these neurotoxicants leads to neuroinflammation and neurodegeneration through various mechanisms. With the field of environmental toxicology gaining prominence, this set of articles in the Research Topic titled "Environmental effects on Neuroinflammation and neurodegeneration volume II" concentrated on the potential role of epigenetics in regulating neuroinflammation, as well as on the mechanisms by which various environmental factors contribute to neurodegeneration.

Epigenetics in neuroinflammation and neurodegeneration

Recent studies have shown that immune cells can memorize exposure to a chemical or micro-organism and then mount a more robust immune response when exposed to similar xenobiotics or chemicals. This concept is known as trained immunity. Huang et al. demonstrated both in vitro and in vivo that microglial cells, the brain's resident immune cells, exhibit trained immunity in response to manganese, a neurotoxic metal that has been shown to increase the risk of PD. Furthermore, this study showed that epigenetic markers modulate this trained immune response in microglial cells in vitro and in mice in response to LPS priming and subsequent Mn exposure. H3K27ac and H3K4me3, along with H3K4me1, were all upregulated, leading to microglial cells mounting an enhanced response.

IFN-β was one of the first disease-modifying therapies approved for multiple sclerosis (MS). In a clinical study, Xavier et al. demonstrated that IFN-β treatment reduced whole-blood DNA methylation at various genes that are targeted by interferons. This study suggests that epigenetic markers play a key role in MS etiology and drive neuroinflammation and neurodegeneration.

Neurodegeneration and pesticide exposure

Meyer et al.
presented novel work on the NADPH oxidase inhibitor, mitoapocynin, as a feasible countermeasure in an animal model of organophosphate (OP) toxicity. Their study specifically uses a rat model of diisopropylfluorophosphate (DFP) exposure to demonstrate promising improvements in several inflammatory and oxidative stress markers in the serum of mitoapocynin-treated animals 1 week post-challenge, although NOX2 protein upregulation in response to DFP, as well as reactive gliosis indicators, were not attenuated in brain tissue. Overall, the study highlights the need for follow-up dose optimization studies so that promising compounds can effectively dampen the neuroinflammatory and neurodegenerative effects of environmental exposures in the central nervous system. Although the conceivable utility of mitoapocynin has now been shown across several rodent models of neurodegenerative and neuroinflammatory conditions, careful and sufficient dosing, or perhaps alternative administration routes, will need to be considered to fully overcome reactive gliosis and oxidative damage in the CNS. This is especially true for OPs, for which there are no effective medical countermeasures to mitigate the chronic health effects of resulting exposures.

Pollution and amyotrophic lateral sclerosis

Saucier et al. published a systematic review covering almost 50 epidemiological studies evaluating alleged connections between urbanization, air pollution, and water pollution and the development of amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig's disease. Importantly, this study cast a wider net by looking at several exposure routes within the same systematic review, branching out from previous works that focused on rural settings or single contaminants of concern. Moreover, very few studies have carefully assessed the quality of the individual articles included in reviews discussing environmental factors possibly linked to ALS. Although urbanization was the most well-studied factor, there was no clear association with ALS. In terms of air pollution as a risk factor, diesel exhaust exposure and its primary product of combustion, nitrogen dioxide, were linked to an increased risk of ALS. Water pollution also presented two potential risk factors for ALS: heavy metal contamination (notably selenium) and residential proximity to lakes prone to cyanobacterial blooms.

This Research Topic highlights how our epigenome may play a critical role in driving neuroinflammation and in the pathogenesis of neurological diseases. Innate, as well as adaptive, immunity may also play important roles in driving neuroinflammation through trained immunity. With the dawn of exposomics, as well as our increasing knowledge about the effects of per- and polyfluorinated alkyl substances (PFAS), also known as "forever chemicals," we anticipate that the interactions between environmental exposures and the mechanisms of neuroinflammatory and neurodegenerative diseases will be an active area of research for years to come.
Movements of the bottlenose dolphin (Tursiops truncatus) in the Rio de Janeiro State, Southeastern Brazil

Aiming to verify the movements of the bottlenose dolphin (Tursiops truncatus) along the Rio de Janeiro State coast, southeastern Brazil, we performed a photo-identification comparison between the catalogued individuals of the Cagarras Archipelago (23° 02' S and 43° 12' W) in 2004 and 2006 (n = 26) and the images obtained (n = 179) during the Southeastern Cetaceans Expedition, conducted during the months of June and November of 2005. Eight individuals (three females and five dolphins of unknown gender) identified in the Cagarras Archipelago were resighted at Grande Island (23° 21' S and 44° 15' W), about 100 km southwestwards from the Cagarras Archipelago. The observed movements cover distances commonly recorded for the species elsewhere and are probably related to the search for prey.

The areas sampled during the SCE included mainly waters within the continental shelf of the States of Rio de Janeiro, Espírito Santo and Bahia, with occasional navigation in waters deeper than 200 m (Engel et al. 2007). Photographs obtained during SCE were enlarged to the largest size possible without distorting details of the back edge of the dorsal fins. Functions of the software Adobe® Photoshop (such as overlap of images and manipulation of color, size and position) were used to aid in the comparison with the CA catalog.

Results

Twenty-six individuals were identified in the CA in 2004 (CA#001 to CA#020) and 2006 (CA#021 to CA#026). The dolphins sighted in the CA in 2005 could not be individually identified because of the poor quality of the images taken in that year.

During SCE, nine groups of bottlenose dolphins were sighted (Figure 1), of which 134 photographs of apparent dorsal fins were obtained in Campos Basin (four groups) and 45 photographs at Grande Island (one group). These photographs were analysed only in order to find resightings, and not to compile a catalog of individual dolphins.

One group of bottlenose dolphins was sighted near Grande Island (23° 21' S and 44° 15' W) on November 24th, 2005, during the second phase of SCE. This group had more than 20 individuals, including calves, and was observed foraging near fishing boats and mariculture. Of the dolphins identified in the CA (n = 26), eight (seven in 2004 and one in 2006) were resighted in the group observed near Grande Island. Of the resighted dolphins, three were females and five were of unknown gender. The distance between the two locations was approximately 54 nautical miles (~100 km) (Figure 1).

From the 20 dolphins identified in 2004 in the CA, 12 (60%) were resighted in 2006 in the archipelago. The RI of dolphins #001, #011, #012, #013, #017 and #018 in the CA varied from 0.3 to 1.0 over the two years of study; all of them were also observed near Grande Island in 2005. One dolphin (#15) was observed in the archipelago only in 2004 (RI = 0.5). Another dolphin (#021) resighted at Grande Island was observed in eleven surveys of the year 2006 in the CA (RI = 0.9) (Table 1).

Although the dolphins sighted in the CA in 2005 could not be individually identified, the last sighting in the area that year was on September 21st, of a group of 15 dolphins, including three calves.

Discussion

Movements of coastal populations of T. truncatus may range from short distances (25-65 km) (Ballance 1992) to long distances of up to 670 km (Wells et al. 1990). Movements of 4,200 km have been reported for oceanic waters (Wells et al. 1999).

According to Möller et al.
(1994) and Simões-Lopes & Fabián (1999), the movements of T. truncatus in southern Brazil are probably related to their foraging behaviour, because they occurred mostly during the yearly mullet (Mugil sp.) migration, an important prey in this species' diet. However, Möller et al. (1994) do not reject the possibility that these movements are also associated with dispersion related to genetic exchange between groups of adjacent areas. Albeit individual interchanges have been detected, Möller et al. (1994) and

Introduction

Coastal populations of the bottlenose dolphin, Tursiops truncatus (Montagu, 1821), may show a wide range of movement patterns, which include seasonal migration, stable residency and temporary residence with seasonal or yearly fidelity (e.g., Shane et al. 1986). Since distribution, movements, habitat use and home range for the species are influenced by coastal habitat heterogeneity and the species' biological requirements, environmental conditions may influence prey distribution and consequently affect the distribution, abundance and seasonal variation of different bottlenose dolphin populations worldwide (Shane 1980, Ballance 1992, Felix 1997, Bearzi et al. 1997, Harzen 1998, Defran & Weller 1999, Bristow & Rees 2001, Bearzi 2005, Kerr et al. 2005).

Despite the fact that T. truncatus is considered the most studied dolphin species, information about its movements and home ranges in the Western South Atlantic Ocean is still scarce. Movements of more than 300 km northwards were registered for six dolphins in Península Valdés, Argentina (Würsig 1978). In Brazil, movements of five bottlenose dolphins were reported for southern Rio Grande do Sul and Santa Catarina States (Möller et al. 1994, Simões-Lopes & Fabián 1999), with distances ranging between 65 and 314 km.

Bottlenose dolphins have been studied through video-identification and behavior observations in the Cagarras Archipelago (CA) since 2004. Dolphins occur in the archipelago in the winter and spring seasons, and are typically observed foraging in groups of 15 individuals, but also in groups as large as 30 dolphins (Lodi 2005, and unpublished data). However, their movement patterns, site fidelity, residence and home range remain poorly known.

Aiming to verify the movements of T. truncatus along the Rio de Janeiro State coast, this paper reports on a photo-identification comparison between the catalogued individuals of the CA and the pictures obtained during the Southeastern Cetaceans Expedition (SCE).

Material and Methods

In the CA (23° 02' S and 43° 12' W), dolphins were observed during 30 surveys between August and November from 2004 to 2006. These surveys were conducted using a 10 m boat with a 40 hp diesel engine. Video-identification was made using the following digital camcorders: Hi-8 Handycam Sony DCR-TRV330 with 25x optical zoom (2004); DVD Handycam Sony DCR-DVD101 (2005); and mini-DV Handycam Sony DCR-HC26 with 20x optical zoom and a 2x enlarger coupled with the lens (2006).

Dolphins were individually identified through natural marks, using photographs and/or video-captured images of their dorsal fins, which generally lose tissue on the posterior edge. The pattern of scarification (number, shape and position of nicks and notches) on the back edge distinguishes most individuals within a population, permitting reliable identification of each individual (Hammond et al. 1990).
The degree of residency of different bottlenose dolphins of the CA was established using a simple Residency Index (RI): the number of sightings of the dolphin divided by the total number of surveys (Simões-Lopes & Fabián 1999); a short computational sketch of this index is given at the end of this section. The term residence was regarded here as the time spent by an animal in a particular area (Wells & Scott 1990). Adult dolphins accompanied by a calf (less than half the size of the adult) in at least five surveys were considered to be females. The other dolphins were of unknown gender. The RI was calculated only for the dolphins from the CA, where systematic research effort was carried out.

The SCE comprised two phases, during which more than 2,000 nautical miles were sampled over 56 days of effort. The first phase occurred from June 6th to 26th of 2005, and the second phase started on November 1st and lasted until December 5th of 2005.

The high RI (0.7 to 1.0) obtained for some individuals in the CA suggests that many dolphins were resident in the winter and spring in the archipelago, while others were more transient.

Though the majority of individuals could not have their gender identified, the three identified females (#17, #18, #21) in the CA presented a high level of residence in the archipelago in 2004 and 2006. This result agrees with those previously reported for other areas, such as Sarasota, Florida (Wells 1991) and Laguna, southern Brazil (Simões-Lopes & Fabián 1999). Wells (1991) states that adult males tend to range farther than adult females and possibly, when they leave the community home range, they serve as a vector for genetic exchange between populations or sub-populations.

In the present work, we identified movements of eight dolphins along a 100 km stretch of the southeastern Brazilian coast. Open coastal habitats present a patchy and fragmented prey distribution when compared to estuarine systems, which provide enough nutrient resources to maintain a resident population. Distribution patterns inside a certain geographical area may reflect oceanographic differences which, directly or indirectly, affect prey abundance and movement and, consequently, dolphins' habitat use and movements. The fact that dolphins were observed foraging in both areas (CA and Grande Island) suggests that movements of T. truncatus in the study area may be linked to the distribution and abundance of feeding resources, with new foraging areas offering the dolphins places to search for food.

We did not find any resighting of dolphins sighted far from the coast (e.g., Campos Basin), suggesting that oceanic and coastal populations of T. truncatus may exist, with distinct ecological characteristics (Connor et al. 2000). However, a larger sampling effort is needed to study this matter.

Due to the complex social structure and behavioral flexibility of the species, more information about occurrence, movements, home range and residence would represent important tools for conservation purposes, besides providing important data about population structure and interchange.

Figure 1. Sightings of Tursiops truncatus during the Southeastern Cetaceans Expedition and locality of resightings along the coast of the state of Rio de Janeiro, Brazil.
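As referenced in the Material and Methods, the Residency Index is a simple ratio of sightings to surveys. The following is a minimal computational sketch of the index; the dolphin IDs and sighting records are hypothetical, and only the RI formula itself comes from the text.

# Residency Index (RI) = number of surveys in which a dolphin was sighted,
# divided by the total number of surveys (Simões-Lopes & Fabián 1999).
# The sighting records below are hypothetical, for illustration only.
surveys = [
    {"CA#001", "CA#011"},            # dolphins identified in survey 1
    {"CA#001", "CA#017"},            # survey 2
    {"CA#011", "CA#017", "CA#021"},  # survey 3
]

total_surveys = len(surveys)
dolphins = set().union(*surveys)

for dolphin in sorted(dolphins):
    sightings = sum(dolphin in survey for survey in surveys)
    print(f"{dolphin}: RI = {sightings / total_surveys:.2f}")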
Industrial Applications of Enzymes: Recent Advances, Techniques, and Outlooks

Enzymes as industrial biocatalysts offer numerous advantages over traditional chemical processes with respect to sustainability and process efficiency. Enzyme catalysis has been scaled up for commercial processes in the pharmaceutical, food and beverage industries, although further enhancements in stability and biocatalyst functionality are required for optimal biocatalytic processes in the energy sector, for biofuel production and natural gas conversion. The technical barriers associated with the implementation of immobilized enzymes suggest that a multidisciplinary approach is necessary for the development of immobilized biocatalysts applicable to such industrial-scale processes. Specifically, the overlap of technical expertise in enzyme immobilization, protein engineering and process engineering will define the next generation of immobilized biocatalysts and the successful scale-up of the processes they enable. This review discusses how biocatalysis has been successfully deployed and how enzyme immobilization can improve industrial processes, and focuses on the analysis tools critical for the multi-scale implementation of enzyme immobilization for increased product yield at maximum market profitability and minimum logistical burden on the environment and user. It further examines how enzyme catalysis has been advantageously used in chemical processes, which industries can further exploit enzyme catalysis for improved outcomes, the latest enzyme immobilization techniques, how enzyme immobilization can aid in the realization of fully optimized biocatalysts, and the combination of technical expertise that will drive the scale-up of these economically competitive immobilized-biocatalytic processes for industrial applications.

Enzyme Implementation: A Societal Need

The pharmaceutical, food and beverage, detergent, and biofuel industries have reaped the advantages of enzyme catalysis in commercial-scale applications, while other industries, such as natural gas conversion and fine chemical production, have only recently begun considering their use [1-4]. In industrial-scale chemical production, the benefits of biocatalysis are often multifaceted; enzymes are attractive catalysts owing to mild reaction conditions, high product selectivity, and low environmental impact, and have thus been employed both to simplify chemical synthesis routes and to improve chemical process economics [1,3,5]. Table 1 illustrates the broad applications of enzyme catalysis throughout various industries.
Enzyme Immobilization for Expanded Scope of Implementation

Studies in enzyme immobilization, i.e., the attachment of the biocatalyst to a material with desired physical, chemical, electrical, or mechanical properties, have shown that immobilizing biocatalysts can improve their activity and stability across a broader range of operating conditions, with the additional functionality being imparted depending upon both the method of immobilization and the inherent properties of the materials used [49,62-66]. It has further been demonstrated that immobilized biocatalysts allow for a reduced number of processing steps, owing to the facile separation of the biocatalyst from its reaction mixture, retention of catalytic activity, and an appreciable degree of reusability [5,20-27]. Table 2 lists both the common advantages and disadvantages associated with the use of an immobilized biocatalyst, as highlighted in previous research.

Three main immobilization techniques (Figure 2) have been largely reported in the literature, namely carrier-bound attachment, encapsulation or entrapment, and the formation of cross-linked enzyme aggregates [20-27]. Two kinetic parameters are often calculated for an immobilized enzyme to assess the effects of immobilization on the enzyme's catalytic efficiency compared to its non-immobilized counterpart, i.e., the Michaelis constant Km and the maximal reaction velocity Vmax. Km compares the rates of substrate-enzyme binding and dissociation, with smaller values of Km indicating that binding dominates and hence higher enzyme-substrate affinity [1]. Vmax measures the rate at which an enzyme converts substrate to product and, when controlled for catalytic mass, is an appropriate measure of catalytic activity [1]. (A short computational sketch of how these parameters are estimated from rate data is given below.)

Figure 2. Schematic of different enzyme immobilization techniques. The crystal structure of glucose oxidase (GOx) isolated from Aspergillus niger was used as a model enzyme (PDB ID: 3QVP) [67]. Physical and covalent immobilization techniques are discussed relative to a flat nanosupport (graphene) and a curved nanosupport (carbon nanotube), respectively. Encapsulation is discussed relative to a pore geometry larger than the diameter of the enzyme, while cross-linking is illustrated relative to the distance between two individual GOx molecules.

Carrier-Bound Enzyme Immobilization through Both Physical and Chemical Binding

Carrier-bound enzyme immobilization is characterized by the attachment of the biocatalyst onto a prefabricated solid material, with the appropriate immobilization methods being selected to allow for optimization of the catalytic performance [5,68]. The two common methods of carrier-bound enzyme immobilization are physisorption and chemisorption. Physical adsorption offers the benefit of a generally universal, facile immobilization method, since the binding mechanism is not dependent on a site-specific chemical reaction between the enzyme and the support [49], while covalent bonding requires site-specific chemical interactions between the enzyme and the support or the use of a cross-linking reagent [17].
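Before turning to specific carrier-bound examples, the Km and Vmax comparison referenced above can be made concrete with a short fit of initial-rate data to the Michaelis-Menten rate law, v = Vmax[S]/(Km + [S]). The sketch below is illustrative only: the rate data are hypothetical and were chosen merely to reproduce the commonly reported pattern of a higher apparent Km and lower Vmax after immobilization.

import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten rate law: v = Vmax * S / (Km + S)
def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Hypothetical initial-rate data (S in mM, v in umol/min) for a free enzyme
# and its immobilized counterpart; these numbers are invented for illustration.
S = np.array([0.5, 1, 2, 5, 10, 20, 50])
v_free = np.array([0.9, 1.6, 2.5, 3.8, 4.5, 4.9, 5.2])
v_immob = np.array([0.5, 0.9, 1.6, 2.7, 3.5, 4.0, 4.4])

for label, v in [("free", v_free), ("immobilized", v_immob)]:
    (Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=(5.0, 2.0))
    print(f"{label:12s} Vmax = {Vmax:.2f} umol/min, Km = {Km:.2f} mM")

A higher fitted Km for the immobilized data indicates lower apparent enzyme-substrate affinity, and a lower fitted Vmax indicates reduced catalytic activity, mirroring the interpretation used throughout the case studies that follow.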
Even though a wide range of both organic and inorganic supports, including ceramics and metal oxides [69,70], nanomaterials [68,71-75], and polymers [76-84], have been investigated for enzyme immobilization [24], the application of physically adsorbed enzyme-support conjugates is limited by enzyme leaching as well as a decrease in the enzyme's catalytic efficiency [24]. Falus et al., for instance, reported the immobilization of subtilisin A onto various silica gels for the continuous production of racemic N-Boc-phenylalanine ethyl thioester, an important pharmaceutical intermediate [83]. Subtilisin A was physisorbed onto surface-grafted silica gel and used as packing in three reactors in series for the dynamic kinetic resolution of racemic N-Boc-phenylalanine ethyl thioester. At optimal conditions, the continuous flow process yielded a 97% conversion of the substrate at an enantiomeric excess of 99.5%, with the immobilized subtilisin A retaining catalytic activity after 120 h of continuous flow operation. The improved activity retention and shelf life (reported to be up to 1 year) were attributed to the increased thermostability of the enzyme upon physisorption [83].

Burkholderia sp. lipase, an enzyme widely studied for the production of biodiesel, was immobilized onto magnetic nanoparticles and evaluated for its catalytic activity. Tran et al. found that methyl-grafted Fe3O4-SiO2 nanocomposites had a high affinity for lipase (29.5 mg lipase g⁻¹ nanocomposite adsorbed), most likely due to the porous structure of the silica coating. A higher Km value and lower Vmax value were however reported for the immobilized enzyme, indicating that the immobilization process decreased both the catalytic activity and its efficiency, likely due to non-specific attachment, deformation of the enzyme active site, and increased mass transfer resistance. The immobilization also allowed for improved reusability and separation of the lipase in the transesterification of olive oil with methanol to produce fatty acid methyl esters (FAMEs). Physisorption of lipase onto magnetic methyl-grafted Fe3O4-SiO2 nanoparticles was shown to retain significant activity for up to 10 reaction cycles owing to increased stability from multi-point hydrophobic interactions with the grafted methyl groups [70]. Zhang et al.
reported on the immobilization of catalase onto carbon nanotubes for application in nanoelectronics, biosensing, and high-resolution imaging. Carbon nanotubes have been extensively studied as supports for enzymes due to their high surface area-to-volume ratio and biocompatibility [72]. An optimal enzyme loading (1.88 mg m⁻²) was found for the physisorption of catalase onto oxidized single-wall nanotubes (O-SWNTs). The Km value for O-SWNT-catalase conjugates was reported to be 27.0% of that of the free enzyme, indicating that the adsorptive interactions induced conformational changes in the secondary structure of the enzyme, as confirmed by Fourier transform infrared spectroscopy and circular dichroism (CD) analyses. Vmax for O-SWNT-catalase conjugates was reported to be 6.3 times lower than that of the free enzyme. An analysis of CD spectra for immobilized catalase suggested that hydrogen bonding between the enzyme and O-SWNTs caused increased enzyme rigidity and therefore increased activity retention [72]. Lastly, Nidetzky's group has shown that chimeras of target enzymes can be combined with silica-binding modules (SBMs) through noncovalent interaction and become very tightly attached to underivatized glass, even at physiological pH conditions. Moreover, the research showed that the immobilized enzymes displayed full biological activity, suggesting that their binding to such a glass surface could be controlled through their specific orientation at the SBM interface [85,86].

Immobilization via covalent attachment was shown to offer strong chemical bonding that prevents significant enzyme leaching and further mitigates the loss of enzyme active sites [1]. Covalent binding methods are however more intensive and chemically harsher than physical adsorption, often requiring activation steps capable of inducing enzyme denaturation [87]. Further, the selection of an enzyme to be covalently immobilized must be carefully evaluated to ensure optimal catalytic efficiency; the enzyme-support covalent bond, for instance, should not affect the amino acids associated with the enzyme active site, or the immobilization method may cause loss of catalytic activity [87].

Zhu and Sun successfully immobilized lipase from Candida rugosa onto poly(vinyl alcohol-co-ethylene) (PVA-co-PE) nanofibrous membranes via glutaraldehyde activation for the hydrolysis of p-nitrophenyl palmitate [75]. It was determined that covalent bonding caused an increase in Km and a decrease in Vmax due to slower substrate diffusion and decreased enzyme mobility at the interface. Immobilized lipase was also found to retain nearly 90% of its activity after incubation in a phosphate buffer system at 55 °C for 75 min, while the free enzyme retained only approximately 20% of its initial activity. Significantly more activity than free lipase was also retained after 30 days of storage at 4 °C, most likely due to a decrease in denaturation [75]. Kuo et al.
reported on the immobilization of the same enzyme for the synthesis of 2-phenylethyl acetate, the major aromatic ester of rose fragrance. In this study, lipase was covalently bonded to a polyvinylidene fluoride (PVDF) membrane, activated via 1,4-diaminobutane and glutaraldehyde, resulting in an enzyme loading of 1.71 mg enzyme g⁻¹ PVDF. The immobilization technique also led to improved catalytic activity with only slightly hindered catalytic efficiency in n-hexane, likely due to the preservation of the tertiary structure in the organic medium resulting from covalent immobilization [77]. Complementarily, a study by Mendes et al. demonstrated that the optimal immobilization protocol among carrier-binding methods for lipase from Penicillium camembertii is covalent attachment to an epoxy-silica-polyvinyl alcohol composite. The covalently bound lipase was found to have a lower, less variable enzyme loading capacity than the physically adsorbed lipase. Moreover, the optimal case for covalently bound lipase yielded a hydrolytic activity nearly double that of physical adsorption, as well as greater activity retention. Covalent attachment of lipase to epoxy-silica-polyvinyl alcohol also resulted in improved thermostability compared to that of free lipase [88].

Epoxide hydrolase (EH) has also been studied for its potential application in the synthesis of high-value, enantiomerically pure pharmaceutical intermediates and other bioactive molecules. Petri et al., for instance, proposed the covalent attachment of EH from Aspergillus niger to epoxide-activated silica gel for the enantioselective hydrolysis of p-nitrostyrene oxide. Immobilization onto the silica gel resulted in a relatively high immobilization yield of nearly 70%, and the immobilized EH was found to retain about 90% of its activity relative to free EH, as well as good storage stability over the span of a few months. The covalent immobilization of EH caused no decrease in the enantiomeric selectivity of p-nitrostyrene oxide hydrolysis and markedly improved the stability of EH in an organic solvent of 20% DMSO [89].

Nanomaterials have been studied as supports for biocatalysts due to minimal mass transport limitations and a high specific surface area for volume-efficient catalysis [22,68]. Li et al., for instance, used an electrospun polyacrylonitrile-glycopolymer nanofibrous membrane as a support for the covalent binding of catalase from bovine liver. Immobilized catalase activity was about 50% of that of the free catalase, but was found to be stable across broader ranges of temperature and pH. It was also found that covalently immobilized catalase retained approximately 80% relative activity after storage at 4 °C for 30 days, whereas free catalase retained no relative activity under the same conditions [68]. Alptekin et al.
optimized a protocol for the chemical attachment of catalase onto Eupergit C, a macroporous derivative of methacrylamide reported to be chemically and mechanically stable as a catalyst support for operation in batch and plug flow reactors. The ratio of Kcat to Km was calculated to assess the catalytic efficiency of free and immobilized catalase and was found to differ by nearly 2 orders of magnitude, suggesting that the immobilized enzyme was less efficient in converting substrate to product. However, immobilization was shown to improve enzyme shelf life and operational stability as a biocatalyst in batch and plug flow reactors. Studies showed that immobilized catalase retained nearly 78% of its initial activity when measured 28 days after immobilization, whereas free catalase was inactive after only 11 days of storage. Furthermore, immobilized catalase retained 50% activity at 82 min in a plug flow reactor [90].

Enzyme Entrapment

Enzyme entrapment is the immobilization of a biocatalyst within carriers of varying degrees of porosity and permeability [27]. Enzymes immobilized via entrapment exhibit improved stability due to intensified control of their microenvironment, and have also been shown to be more catalytically active at higher temperatures and in organic solvents, as well as easily separated from the substrate-product reaction mixture [87].

Immobilization via entrapment in a variety of carriers, e.g., sol-gels, hydrogels, polymers, and nanomaterials, has been researched for the employment of biocatalysts in the synthesis of organic compounds and for novel biosensing systems [64,66]. Complementarily, the immobilization of lipase has been proposed for application in the production of flavor and fragrance chemicals as well. Ferraz et al., for instance, investigated the viability of geranyl propionate synthesis using lipase from Penicillium crustosum as the biocatalyst. Lipase was entrapped in beads nearly 0.5 cm in diameter via a cross-linking reaction between calcium chloride and sodium alginate. Calcium-alginate beads containing lipase were further optimized for geraniol and propionate conversion, as well as tested for reusability. Results show that the activity retention of immobilized lipase decreased linearly with respect to the number of cycles of use, suggesting that the activity loss was due to enzyme leaching during each cycle [91]. Risso et al. studied the same entrapment method for the immobilization of inulinase from Kluyveromyces marxianus, an important biocatalyst in the production of high-fructose syrups. Inulinase, entrapped in calcium-alginate beads, was characterized by the determination of its kinetic parameters, as well as its thermostability and pH stability at varying fractions of organic solvents. The Km value of immobilized inulinase was found to be significantly less than that of free inulinase at optimal mass fractions of organic solvent, while the Vmax value of immobilized inulinase was comparable to that of free inulinase under the same conditions. However, mass transfer resistances, which would likely be the rate-limiting process, were not considered in the kinetic analysis of the immobilized biocatalyst [92]. Arica et al.
proposed the entrapment of catalase from bovine liver in thermally reversible cylinders of poly(isopropylacrylamide-co-hydroxyethylmethacrylate) for reactor system applications. Immobilized catalase exhibited a decrease in catalytic activity and enzyme-substrate affinity, and retained less activity at higher temperatures than free catalase. It was also found that an increase in temperature caused a decrease in hydrogel swelling and higher mass transfer resistance. The apparent kinetic parameters of the immobilized catalase were largely attributed to the temperature-dependent behavior of the hydrogel carrier itself. The entrapment technique allowed for enzyme reusability and increased storage stability. Immobilized catalase also showed 78% activity retention after storage at 4 °C for 20 days, while free catalase retained none of its activity under the same storage conditions. Furthermore, hydrogel-entrapped catalase was found to retain approximately 95% activity for 6 cycles in the batch reactor system [93].

Singh et al. studied the apparent kinetic and stabilizing effects of the encapsulation of bovine liver catalase in hollow silica nanoparticles (HSNPs). No absorption peaks were observed for catalase or hydrogen peroxide in the supernatant liquid isolated from the immobilization procedure, indicating an immobilization yield of nearly 100%. It was further determined that the encapsulation technique decreased both the enzyme's activity and its enzyme-substrate affinity. However, immobilized catalase showed significantly improved stability throughout broad ranges of pH and temperature conditions. Free catalase was completely denatured when tested for activity at 70 °C, while encapsulated catalase was found to have optimal catalytic activity at 80 °C. The encapsulation of catalase within HSNPs (rather than the physical adsorption of catalase onto HSNPs) was demonstrated by the thermostability results for the immobilized enzyme, since a physically adsorbed enzyme would be expected to show a loss of catalytic activity near the denaturation temperature of the free enzyme [94].

Yan et al. reported the successful nanogel encapsulation of bovine carbonic anhydrase (BCA), a metalloenzyme studied for applications in carbon capture and the biocatalytic enrichment of natural gas, where industrial application is limited by the almost total loss of enzyme catalytic activity at 63 °C due to the irreversible aggregation of BCA. Acryloylation and subsequent in-situ polymerization of BCA to form single-BCA nanogels were performed, imparting molecular structural stability to the enzyme while mitigating mass transfer limitations. BCA nanogels exhibited catalytic activity similar to that of free BCA and showed significant retention of activity at temperatures greater than 63 °C. It was also determined that nanogel encapsulation preserved the secondary structure of BCA, therefore inhibiting irreversible aggregation and allowing for catalytic activity even at 81 °C [95].
Enzyme entrapment in biocompatible nanoparticles and solid supports has also been reported as a novel approach for the improvement of enzyme activity as a result of biocatalyst-carrier interactions. Studies of enzyme entrapment in solid carriers have shown that, under optimal immobilization conditions, it is possible to "lock" immobilized enzymes into more catalytically active conformations [96]. Prakasham et al., for instance, investigated the kinetic parameters and stability of amylase entrapped in matrices comprised of nickel-impregnated silica paramagnetic particles. It was observed that the entrapped amylase hydrolyzed starch more rapidly than the free amylase under all tested pH and temperature conditions. A lower Km value was also recorded for the immobilized amylase, indicating that the entrapment technique yielded a more efficient, robust biocatalyst [96].

Wu et al. reported on the facile co-immobilization of the enzymes glucose oxidase (GOx) and horseradish peroxidase (HRP) into a metal-organic framework. The entrapment was performed by mixing solutions of zinc nitrate, GOx, HRP, and 2-methylimidazole at ambient conditions for 0.5 h, resulting in an enzyme-embedded zeolitic imidazolate framework (GOx&HRP/ZIF-8). The catalytic activity of this conjugate was compared to that of a mixture of GOx/ZIF-8 and HRP/ZIF-8 to determine any changes in efficiency resulting from the co-immobilization technique. Analysis showed that GOx&HRP/ZIF-8 exhibited 2 times higher activity than the mixture of single-immobilized conjugates, due to a significant decrease in mass transfer resistance. Furthermore, GOx&HRP/ZIF-8 was found to retain significantly more activity than the free enzymes in organic solvent and when stored at room temperature [97]. Currently, however, the successful scale-up of entrapped enzymes for biocatalysis is prevented by mass transfer limitations of substrate through the carrier material, enzyme leaching, and the low total catalytic mass of the enzyme-carrier conjugate [24]. Lastly, Lin et al. reported on the entrapment of HRP in inorganic interfaces made with copper phosphate supports in aqueous solution. Results showed that the hierarchical flower-like spherical structures considerably enhanced the enzyme's activity relative to that of the free enzyme in solution. In addition, the hybrid interfaces also exhibited excellent reusability and reproducibility even when several cycles for evaluating the active hydrogen peroxide (H2O2) release were performed [98].

Cross-Linked Enzyme Aggregates (CLEAs)

CLEAs were shown to offer the benefits of enhanced shelf life and operational stability, reusability, and exceptional resistance to leaching of the immobilized biocatalyst in aqueous media, while not suffering from the substrate diffusion limitations that could potentially reduce catalytic activity [100]. In certain instances, CLEAs were shown to possess higher catalytic activities than the corresponding free enzymes; this phenomenon, known as hyperactivation, was attributed to the aggregation of the enzyme in a pre-organized tertiary structure that is rendered permanently insoluble upon cross-linking [24]. Thus, CLEAs show large potential for application in industrial-scale processes owing to high catalytic productivity and inexpensive immobilization methods [24]. However, the successful scale-up of applications depends on improving CLEAs' mechanical properties while better defining separation criteria for continuous processes [24]. Specifically, Lai et al.
reported the direct formation (from fermentation broth) and stability analysis of CLEAs of lipase from Penicillium expansum (PEL) in various solvents, for the production of biodiesel from corn and microalgal oil, respectively. In this study, PEL-CLEAs were found to be less catalytically active than free PEL, likely due to mass transfer limitations of the large substrate molecules. PEL-CLEAs also showed improved stability over free PEL at increased temperatures and across various pH conditions. The clumping of PEL-CLEAs and loss of enzyme active sites was however determined to cause a decrease in the yield of biodiesel. PEL-CLEAs also exhibited substantial activity retention in nonaqueous solutions, suggesting that the immobilization method can be geared toward the industrial production of biodiesel [106].

Nguyen and Yang produced combined cross-linked enzyme aggregates (combi-CLEAs) of GOx and HRP for the catalysis of a cascade chemical reaction applicable to glucose detection biosystems [105] and pharmaceutical wastewater treatment [109]. The combi-CLEAs were optimized for cross-linking density and the mass ratio of GOx to HRP to ensure maximal catalytic activity and enzyme stability. Upon optimization, combi-CLEAs showed catalytic activity similar to that of the free enzymes, with lower values of Km most likely resulting from two factors: the distance of mass transfer for the hydrogen peroxide intermediate, which was substantially reduced by the co-immobilization technique; and the cross-linking of GOx, which resulted in decreased inhibition in the presence of H2O2 [105].

Vafiadi et al. reported similarly promising results for the use of combi-CLEAs of three commercial enzyme mixtures exhibiting feruloyl esterase activity. The authors reported on immobilization via aggregate cross-linking and assessed its kinetic activity relative to the free enzyme in ternary mixtures of n-hexane, 1-butanol, and water. Combi-CLEAs were designed to retain maximal catalytic activity through the evaluation of 10 aggregating agents, while the efficiency was optimized by varying the concentration of the cross-linking agent. A product yield of 97% was reported upon enzyme precipitation via ammonium sulfate and cross-linking at a glutaraldehyde concentration of 100 mM. The use of ammonium sulfate as the precipitating agent was found to be advantageous because its solvation is an endothermic reaction. Notably, the activity of the enzyme aggregates prior to cross-linking was found to be higher than that of the free enzyme, supporting evidence that suggests suitable immobilization techniques can lock enzymes in highly active conformations. Furthermore, the combi-CLEAs were easily separated by centrifugation from the reaction mixture containing unreacted methyl ferulate esters and the synthesized 1-butyl ferulate, and were later reused for feruloyl esterase activity, though the immobilized enzymes showed poor activity retention and stability [102]. Martins et al.
formed magnetic cross-linked enzyme aggregates (mCLEAs) from rhamnopyranosidase (Rhmnase), a hydrolytic enzyme applicable in the production of valuable pharmaceutical compounds, such as lipoprotein-associated phospholipase A2 inhibitors, which are administered in the treatment of atherosclerosis [110]. The magnetic aggregates were evaluated for their catalytic activity with different cross-linking and precipitating agents and subsequently compared to CLEAs@Rhmnase for reusability in a batch reactor system. CLEAs@Rhmnase were found to retain nearly 100% activity after 5 reutilization cycles of 24 h each; however, the CLEAs@Rhmnase showed a significant loss in activity after 7 reutilization cycles. Conversely, mCLEAs@Rhmnase showed an initial loss in activity of approximately 40% after one reutilization cycle and near-constant activity thereafter, likely due to the higher physical stability of the magnetic aggregates. Additionally, mCLEAs@Rhmnase were shown to be more catalytically active and efficient than CLEAs@Rhmnase, suggesting that the selection of immobilization materials should be critically assessed to ensure high biocatalytic turnover. It was determined that magnetic enzyme aggregates are more suitable biocatalysts for a scaled-up process owing to improved reusability and stability [104].

Zhao et al. reported the use of CLEAs of Pseudomonas sp. lipase (CLEA-PSL) as a biocatalyst for the enantioselective resolution of (S)-N-(2-ethyl-6-methylphenyl)alanine, a chemical precursor in the production of widely used herbicides [111]. Precipitation and cross-linking conditions were optimized for the formation of CLEA-PSL, and kinetic parameters were determined for free and immobilized lipase, respectively. CLEA-PSL was found to be more active than the free lipase; it was also noted that 48 h were required for the free lipase to reach a substrate conversion of 50%, while only 12 h were needed for the immobilized lipase to achieve the same conversion and enantiomeric excess. The time difference was presumably due to an induced change to a more catalytically active enzyme conformation upon immobilization. The evaluation of kinetic parameters determined that CLEA-PSL also showed improved affinity for the substrate, most likely due to changes in the enzyme's secondary structure caused by immobilization. Lastly, the immobilized lipase was found to be more thermostable than the free lipase and retained nearly 80% of its initial activity after ten reutilization cycles in a batch reactor, with no reported loss of enantioselectivity [103].

Enzyme aggregate cross-linking has also been proposed as a method to improve biocatalysts that are currently employed on an industrial scale. Illanes et al., for instance, implemented CLEAs from recombinant penicillin acylase for the production of cephalexin with increased enzyme stability and global productivity (g cephalexin g⁻¹ biocatalyst). Free penicillin acylase was found to require a shorter reaction time, but the CLEAs were advantageous for the preservation of enzyme activity. The free enzyme retained 50% residual activity after 30 h, while the CLEAs retained an equal amount of activity after 78 h. Furthermore, the CLEAs were found to have a higher total specific productivity (135.5 g cephalexin g⁻¹ biocatalyst) than that of free penicillin acylase (only 40.1 g cephalexin g⁻¹ biocatalyst), with the increase in reusability justifying a slight loss of catalytic activity through an overall increase in the production potential of cephalexin [101].
Lastly, the intramolecular cross-linking of non-aggregated enzymes has been investigated as a method of conferring increased rigidity and preventing non-specific protein-protein associations of multimeric enzymes, thus preserving catalytic activity when coupled with another immobilization method [112-114]. Dinu et al. demonstrated that cross-linking of perhydrolase S54V (AcT), i.e., an enzyme that catalyzes the perhydrolysis of propylene glycol diacetate to the decontaminant agent peracetic acid, allowed for the novel integration of nanobiocatalytic conjugates with latex-based paint, leading to the formation of a bioactive decontaminating composite. The study also found that AcT cross-linked with polyfunctional aldehyde dextran retained a greater degree of catalytic activity when covalently bound to single-walled carbon nanotubes (SWNTs), as compared to direct covalent bonding of AcT to SWNTs. The superior activity retention of AcT was attributed to the cross-linking with aldehyde dextran, which conferred increased rigidity to the enzyme and led to the preservation of its secondary structure upon covalent immobilization onto SWNTs [112].

Intensified Approach for Designing Improved Biocatalysts

The combination of different technical expertise has allowed for the improved design of immobilized biocatalytic processes, but profitability remains the determining factor for further enzyme-induced process development and implementation [5,115,116]. For instance, significant progress has been made in protein design via directed evolutionary approaches, allowing for improved activity, stability, and substrate affinity, as well as reduced costs to isolate enzymes [1,3,23,28,29]. Directed evolution requires the administration of random mutations to the amino acids constituting an enzyme, which can be accomplished through chemical mutagenesis or DNA shuffling, followed by screening for the desired phenotype and the isolation of genes coding for any identified improved genetic variant [3]. As a result of such advances, research teams have been successful not only in developing biocatalysts that may be deployed at high temperatures and extreme pH conditions, but also biocatalysts that have catalytic activities several orders of magnitude greater than those of naturally occurring ones [1,3,29]. For instance, researchers at Codexis and Merck successfully used multiple such iterations to increase enzyme-substrate affinity and to design an economically competitive enzyme-catalyzed process for sitagliptin, a pharmaceutical used for diabetes treatment [2]. However, while progress in protein engineering has helped drive increased applications in industrial enzyme catalysis, it has not addressed all limitations, such as the poor mechanical stability and limited reusability of the biocatalysts, the costs associated with their in vitro production, or further adoption in commercial-scale processes [5,20-27].

Molecular dynamics simulations (MDS) provide atomic-level understanding of the phenomena that determine the physical and catalytic characteristics of an immobilized biocatalyst [117-121].
Studies using molecular dynamics simulations, which carry out numerical integration of Newton's laws of motion at an atomistic scale, have been used to predict the structures and catalytic properties of enzymes at the molecular level; this approach has been extended to research in enzyme immobilization, where accurate characterization of enzyme-carrier interactions provides insight into binding mechanisms [117-120]. An understanding of atomistic-level interactions has led to the development of efficient and optimized biodevices. For instance, Franca et al. were able to determine through MDS that the active site of acetyl co-enzyme A carboxylase (ACC) had a positive surface potential. This insight into ACC was used to devise an optimal electrostatic adsorption of ACC onto an AFM tip for improved biodevice functionality [119].

Basso et al. performed molecular simulations on endo- and exoinulinase to explain differences in regioselectivity between the two structures. Analyses of the three-dimensional structures were subsequently used to formulate an optimal immobilized biocatalyst that showed hyperactivity when compared to its native structure [118]. Qu et al. employed molecular simulations of the hydrolase MfphA adsorbed onto single-walled carbon nanotubes (SWNTs) for verification of, and insight into, analytical results. Molecular modeling results illustrated the preferred binding of two particular amino acids, Trp201 and Met81, to the carbon nanotubes, resulting in a loss of hydrolase activity due to blocking of the active site [117]. Studies using molecular simulations have illustrated the utility of computational modeling in the optimization of immobilization techniques, reducing laboratory material costs and creating insight into molecular phenomena for the development of optimal immobilized biocatalysts [117-120].

These analyses show that the adoption of critical evaluation criteria for immobilized enzyme processes on multiple scales (including molecular-level modeling and analysis, life cycle assessments, and techno-economic analyses) is paramount for economical scale-up [5,44,50-52,117-120]. Ultimately, the appropriateness of an immobilized biocatalyst for industrial processes comes down to additional profitability: the immobilized form of an enzyme often has to hold multiple benefits over the free enzyme so that the process economics can overcome the additional costs and constraints associated with immobilization and any loss of catalytic activity, while balancing the cost of the materials used as supports [5,116].

Glucose Isomerase: A Model for Enzyme Immobilization

The immobilization of glucose isomerase (GI) is considered an excellent model for the commercial application of an immobilized enzyme: GI efficiently catalyzes the conversion of d-glucose to d-fructose in the production of high-fructose corn syrup (HFCS) [122]. The enzymatic production of HFCS was previously determined to be more economically competitive than conventional chemical methods requiring alkaline catalysis, due to improved product quality, a simplified production route, and the reduction of undesired byproducts such as mannose and psicose [122].
Much research has been done on the immobilization of GI since its development and first industrial use in 1967. A broad range of immobilized GI products have been sold by producers like Genencor, DuPont, Novozymes SA, and Solvay, and continued progress in GI immobilization has yielded iterative improvements in the production of HFCS using immobilized GI as the biocatalyst [5]. In most contemporary processes, HFCS is produced in continuous fixed bed reactors containing immobilized GI as catalytic packing, which results in a mixture of nearly 42% d-fructose, 50% d-glucose, and small amounts of other sugars; the 55% d-fructose mixture required for commercial application as a sweetener is attained via chromatographic enrichment [5]. The success of immobilized GI is rooted in the biochemical properties of the enzyme as well as in the technological developments that allowed for the commercialization of the immobilized enzyme-catalyzed process. Currently, the production of HFCS is the largest industrial process employing an immobilized biocatalyst, with nearly 10 million tons produced per year [5]. Studies showed that the temperature-dependent position of the isomerization equilibrium, along with the relatively high Km value of GI, were two of the biochemical factors that drove the development of an immobilized enzyme process to improve upon a free enzyme process. At higher temperatures the equilibrium of the isomerization is shifted to favor higher yields of fructose, so the use of a thermostable immobilized enzyme allowed for improved yields of HFCS. Immobilized GI allowed for implementation in continuous processes, which proved to be advantageous, as the high-concentration throughput of substrate helped to overcome the low efficiency of enzyme-substrate binding indicated by the high Km of GI. Furthermore, the production cost of GI was significant at the time of development, and the immobilization of GI decreased the total amount required for HFCS production by allowing for enzyme reuse [5]. The GI-catalyzed process is being further researched for the employment of a thermostable enzyme with good activity retention at 90 °C, at which point the equilibrium conditions shift such that a 55% mixture of fructose can be obtained, obviating the need for chromatographic enrichment in the process [5].
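The temperature dependence described above can be illustrated with a short van't Hoff sketch of the equilibrium fructose fraction. The reaction enthalpy and the 50% reference point near 55 °C are assumed values, chosen only so that the sketch reproduces the roughly 55% fructose near 90 °C noted in the text; they are not measured thermodynamic constants.

import math

R = 8.314       # J / (mol K)
dH = 5700.0     # J/mol, assumed enthalpy of glucose -> fructose isomerization
T_ref = 328.15  # K (55 °C), where K_eq is assumed to be ~1 (50% fructose)

def fructose_fraction(T_celsius):
    T = T_celsius + 273.15
    # integrated van't Hoff equation with K_eq(T_ref) = 1
    K_eq = math.exp(-dH / R * (1.0 / T - 1.0 / T_ref))
    return K_eq / (1.0 + K_eq)

for T in (55, 70, 90):
    print(f"{T} C -> {100 * fructose_fraction(T):.1f}% fructose at equilibrium")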
LCAs are used to identify process energy and material requirements, as well as waste and emissions, which are subsequently used to analyze the sustainability and environmental impact of a process. The use of enzymes in industrial processes is often associated with reduced consumption of energy, reduced chemical inputs, and smaller waste streams. For example, using phospholipase to degum vegetable oil led to a decrease of 44 tons of CO2 equivalent per 1000 tons of oil produced, due to improvement in oil yield and a subsequent decrease in feedstock requirements [5]. In another study, the enzymatic production of biodiesel reduced the amount of steam needed to preheat feedstock due to milder reaction conditions, and it also improved each measure of environmental impact, including human toxicity, ozone depletion, and global warming potential [50]. Immobilized enzyme-catalyzed processes have been found to further reduce the environmental burden of free enzyme-catalyzed processes [51]. Raman et al. performed an LCA on the production of biofuel from an alkali catalyst, free lipase, and immobilized lipase to determine the most sustainable process. Both free lipase and immobilized lipase reduced process energy consumption when scaled to 1000 kg per year production due to milder reaction conditions. Furthermore, the immobilized lipase improved on the free enzyme-catalyzed process because its reuse reduced consumption of the carbohydrates and minerals required to produce the free form [51]. The general decrease in material and energy consumption exhibited by enzymatic processes indicates that biocatalytic processes are potentially both more environmentally benign and more economically lucrative. However, an LCA does not account for productivity or process economics, and it is thus insufficient as a standalone metric for process implementation.

TEAs study the economic viability of a process based on technology readiness and process economics such as utilities, feedstocks, labor, and capital investments [52]. Olafsson et al. reported a TEA comparing integrated and off-site cellulase catalysis in the fermentation of lignocellulosic material for ethanol production. The authors found that off-site production of ethanol using similar technologies was the more economically competitive option due to the production of more marketable byproducts [52]. The analysis showed that while profitability remains the critical driving force of process development, TEAs and LCAs in combination are invaluable tools for the full diagnosis of both the benefits and the drawbacks associated with scale-up of biocatalytic processes.
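The degumming example above scales linearly with plant throughput; a one-function sketch (the 44 t CO2e per 1000 t oil figure is from the text, while the 250,000 t throughput is a hypothetical plant size):

```python
def co2e_savings_tons(oil_tons, savings_per_kiloton=44.0):
    """CO2-equivalent savings (t) from enzymatic degumming at a given oil throughput."""
    return savings_per_kiloton * oil_tons / 1000.0

print(co2e_savings_tons(250_000))  # hypothetical 250 kt/year plant -> 11,000 t CO2e avoided
```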
Pharmaceuticals Industry

Enzyme catalysis has been successfully used for the production of pharmaceutically active chemicals at the industrial scale. The most significant advantages enzyme catalysis holds over conventional catalysis are the high regio-, chemo-, and stereoselectivities at which enzymes convert substrate to product [2,123]. A high degree of product specificity is largely desirable in pharmaceutical processes because it streamlines product synthesis routes and subsequently improves process economics [8,30]. For example, the production of many pharmaceuticals requires the introduction and subsequent removal of protecting groups from pharmaceutically active ingredient intermediates to ensure adequate product selectivity. The use of appropriate enzymes not only obviates such steps but has also been shown to yield higher enantiomeric excesses of the desired stereoisomers [8]. Furthermore, enzyme-catalyzed synthesis routes often reduce or eliminate the need for chemically harsh substances or high-temperature conditions that can otherwise require intense process safety considerations [2].

The interdisciplinary approach that allowed for the economically advantageous implementation of biocatalysis on the industrial scale is highlighted by the development of the enzyme-catalyzed synthesis of sitagliptin, a drug marketed by Merck for type II diabetes treatment [31]. Sitagliptin is a dipeptidyl peptidase-4 inhibitor that prevents an increase in blood-retinal barrier permeability and inhibits diabetes-induced tight junction disassembly [32]. Conventional synthesis of sitagliptin requires a high-pressure hydrogenation of an enamine via a rhodium-based catalyst and subsequent carbon treatment to remove trace amounts of rhodium, yielding sitagliptin in 97% enantiomeric excess (e.e.) [100]. Research teams at Codexis and Merck conducted extensive protein engineering to produce an R-selective transaminase (R-ATA) from Arthrobacter sp. capable of converting 200 g L−1 of prositagliptin ketone to sitagliptin in dimethyl sulfoxide (DMSO) in greater than 99.95% e.e. [33]. In addition to the higher enantiomeric purity, the enzyme-catalyzed route had a 10% increase in yield and a 53% increase in productivity (kg sitagliptin L−1 day−1), and it also eliminated the need for a rare heavy metal-based catalyst that necessitated purification steps and special equipment for high-pressure operation [31].

Codexis and Merck have also invested heavily in research for scale-up of a monoamine oxidase (MAO)-catalyzed process for the enantiomerically pure desymmetrization of a bicyclic proline intermediate, an important precursor in the synthesis of boceprevir, an NS3 protease inhibitor used for the treatment of chronic hepatitis C infections [2,34]. Conventional synthesis of the bicyclic proline is an intensive process requiring an excess of metal-based oxidant and reductant through 8 reaction steps; the enantioselective, MAO-catalyzed synthesis of the intermediate is an attractive alternative with the potential to greatly reduce operation time and waste generation [30]. Although significant improvements in MAO activity, solubility, and thermostability were achieved through protein engineering via 4 rounds of evolution involving the introduction of random mutations and subsequent screening for desired phenotypes, the addition of bisulfite to the MAO-catalyzed process for the capture of imine compounds was necessary to mitigate irreversible inhibition of the enzyme [30].
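Both routes above are benchmarked by enantiomeric excess, which follows directly from the enantiomer ratio; a minimal sketch (the 98.5/1.5 and 99.975/0.025 splits are back-calculated here for illustration):

```python
def enantiomeric_excess(major, minor):
    """Enantiomeric excess (%) from the amounts of major and minor enantiomers."""
    return 100.0 * (major - minor) / (major + minor)

print(enantiomeric_excess(98.5, 1.5))      # 97.0, as for the rhodium-catalyzed route
print(enantiomeric_excess(99.975, 0.025))  # 99.95, as for the R-ATA route
```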
The combination of biocatalysts genetically engineered for robust catalytic capabilities with topological process optimizations illustrates the overlap of technical expertise needed for successful scale-up of enzyme catalysis [2,5]. The enzyme-catalyzed process showed several marked improvements over the conventional synthesis of the intermediate when compared at the same scale, namely decreases of 59.8% in raw materials, 32.8% in water, and 63.1% in process waste per unit of product synthesized [30]. Though further work is needed for economically feasible industrial-scale implementation, the comparison between the conventional synthesis and the MAO-catalyzed synthesis route suggests that enzyme catalysis could greatly improve over current industry standards and outcomes.

In a study by Hayes et al., the authors reported an improved commercial-scale synthesis route for (S,S)-reboxetine succinate, a noradrenergic antidepressant for the treatment of fibromyalgia that was in the latter stages of development at Pfizer [35,36]. The production of reboxetine requires an acetylation of a diol intermediate. However, conventional synthesis routes rely on classical chemical acetylation, which suffers from di-acetylation and poor enantioselectivity and therefore generates considerable amounts of unwanted byproducts [2]. The proposed new-generation synthesis route successfully employed Candida antarctica lipase B, an active, commercially available enzyme, for the highly enantioselective acetylation of the diol intermediate [36]. The lipase-catalyzed process resulted in selective mono-acetylation of the diol intermediate with 98% regioselectivity and greater than 99% yield. Furthermore, the enzyme could be removed from the reaction mixture via simple filtration and maintained high regioselectivity at the lab scale upon reuse, all at low cost [36]. As such, the new-generation synthesis route resulted in a 58% improvement in the commercial product yield of (S,S)-reboxetine succinate and a nearly 1300 MT year−1 reduction in process waste at peak process throughput [36].

Because lipases can hydrolyze a broad spectrum of substrates, they have been researched as biocatalysts for many other pharmaceutical syntheses [7,36,37]. Martinez et al., for instance, proposed a new-generation synthesis route for the industrial-scale production of pregabalin, a neuroactive drug exhibiting anticonvulsant, pain-killing, and anti-anxiety activity that is used for the treatment of epilepsy, anxiety, and social phobia [37,38]. The proposed route utilized Lipolase, a commercially available lipase, for the selective hydrolysis and subsequent separation of the S-enantiomer intermediate from the R-enantiomer for conversion to pregabalin via decarboxylation [37]. While enzyme screening increased selectivity and reduced waste generation, several process optimizations enabled the improved process economics of the new-generation route. The addition of Ca2+ and Zn2+ mitigated Lipolase inactivation by forming complexes with chemical species that acted as enzyme inhibitors. In addition, the rapid phase splitting of the S-enantiomer from the R-enantiomer enabled racemization and thus efficient reuse of starting material [37]. The new-generation synthesis route resulted in a 40-45% increase in the yield of pregabalin at 99.5% purity and 99.75% e.e. Furthermore, the amount of waste generated per kilogram of product was calculated to be just 20% of that of the classical route [37].
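Waste-per-product comparisons like those above are often summarized by the E-factor, kilograms of waste per kilogram of product; a minimal sketch (the baseline inventories of 10 kg per kg product are invented so that the printed reductions match the figures reported for the MAO route [30]):

```python
def percent_reduction(baseline, improved):
    """Percent reduction of a process metric relative to the baseline route."""
    return 100.0 * (baseline - improved) / baseline

def e_factor(kg_waste, kg_product):
    """E-factor: kg of waste generated per kg of product."""
    return kg_waste / kg_product

print(round(percent_reduction(10.0, 4.02), 1))  # raw materials: 59.8%
print(round(percent_reduction(10.0, 6.72), 1))  # water: 32.8%
print(round(percent_reduction(10.0, 3.69), 1))  # process waste: 63.1%
print(e_factor(3.69, 1.0))                      # hypothetical E-factor of 3.69
```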
The Food-Water-Fuel Nexus

The large-scale production of biofuels, i.e., fuels derived from biomass, animal fats, waste oils, and other renewable resources, encompassing chemical products such as bioalcohols, biodiesel, biosynthetic oils, and biogas, has been recognized for its potential to supplement or replace fossil fuels, particularly as oil reserves are depleted to meet global energy demands [18,49]. The benefits of economically feasible biofuel production are two-fold: biofuel offers improved sustainability over traditional fuel sources as well as significantly reduced environmental impact, owing to its lower emission of carbon monoxide, nitrogen oxides, sulfur oxides, and particulate matter [18]. According to British Petroleum (BP), global production of biofuels rose by an average of 14.1% from 2006 to 2016, illustrating the growing impact that biofuels have on the world energy landscape [124]. While government incentives have helped to drive industry-scale biofuel production, the economic viability of biofuel production will be determined by the development of processes that efficiently use waste from agriculture and industry as feedstock, thus side-stepping the ethical dilemma of using fresh water and land resources for fuel production [125]. Numerous technologies exist for the conversion of raw biological materials to usable, high-energy bioproducts, with most production routes requiring either the transesterification of oils or the esterification of fatty acids [5,18]. The traditional chemical process uses sodium methoxide for the conversion of plant oil triglycerides to fatty acid methyl esters (FAMEs), which results in a product contaminated with a high alkali salt content that requires costly purification [5].

The use of lipase as a biocatalyst for esterification has been researched for its efficiency at mild reaction conditions and its high-purity product yields, and it was proposed to eliminate the need for purification. However, for biofuel production, economical implementation of such an enzymatic process requires efficient recovery and reuse of the lipase due to the required scale of production, therefore necessitating an immobilized enzyme [5,49].

A prominent trend in the production of biofuels is the design of processes based on inexpensive, abundant starting materials, because cheap feedstock is one of the biggest driving forces in the profitability of biofuel processes [19]. The biggest such feedstock is lignocellulosic biomass, made up of lignin, cellulose, and hemicellulose, due to its massive abundance and wide range of sources, including crop residues, softwood and hardwood, herbaceous biomass, and municipal solid waste [19]. The most difficult technical barrier to unlocking lignocellulosic biomass, however, is the extensive mechanical or chemical pretreatment required before the cellulose can be processed via hydrolysis to glucose and subsequently converted to bioethanol via whole-cell fermentation [126][127][128]. Current research suggests that ionic liquids, i.e., salts that exist in molten states at temperatures below 100 °C with strong chemical and thermal stabilities and extremely low vapor pressures, will play a key role in the development of processes that viably release cellulose from lignocellulosic material [126][127][128].
Chemical conversion of cellulose to glucose requires the use of dilute acids and high temperatures, which implies high energy inputs and results in the generation of a significant amount of unwanted byproducts [19]. A more attractive approach is the use of cellulase, i.e., a mixture of hydrolytic enzymes that act synergistically in the conversion of cellulosic material, for the selective enzymatic hydrolysis of cellulose to glucose; this requires a longer reaction time but leads to improved yield from the subsequent fermentation due to low generation of unwanted byproducts [19,[53][54][55]. The economically viable scale-up of cellulase-catalyzed cellulose conversion to glucose for bioethanol production is limited by poor biocatalyst recovery, slow enzyme-catalyzed reaction rates, and low biocatalyst stability under industrial operating conditions [54,55]. Therefore, immobilized cellulase is a requirement for industrial catalysis, particularly considering the acid pretreatment required for cellulose [55].

Reported lab-scale work on the immobilization of cellulase illustrates the potential for the employment of cellulase in bioethanol production. Khorshidi et al., for instance, showed that immobilized cellulase was significantly more active than free cellulase at lower pH and higher temperatures, showing that the immobilization technique can functionalize the biocatalyst for industrial conditions [54]. Lima et al. found that immobilized cellulase had increased thermostability compared to free cellulase and retained nearly 70% of its initial activity after eight cycles of converting cellulosic biomass to glucose. The significant activity retention of immobilized cellulase suggests that immobilization techniques can improve process economics by allowing for efficient reuse of the biocatalyst [56].

In 2006, Hainabaichuan Co. Ltd. (Guangzhou, China) began processing waste palm oils and waste edible oils for the lipase-catalyzed production of 20,000 tons of biodiesel per year, with subsequent scale-up to 40,000 tons per year in 2008 [49]. Lvming Environmental Technology Co. Ltd. (Shanghai, China) implemented a commercially available immobilized lipase as the catalyst in a FAME production line with an annual capacity of 10,000 tons in 2007 [57]. The enzymatic reaction, designed for a waste cooking oil feedstock of high acid value (AV ≈ 160 mg KOH g−1), was carried out in a stirred tank reactor at an enzyme loading of 0.4% relative to charged substrate, which led to a FAME yield of 90% at optimal conditions [57]. Piedmont Biofuels announced in 2012 the successful scale-up of a continuous enzymatic conversion of free fatty acids (FFAs) via immobilized Candida antarctica lipase B for biodiesel production; the enzymatic reaction eliminated the need for caustic stripping of the chemical intermediate due to its high product selectivity [5].

Natural Gas Conversion

Recent advances in the extraction and recovery of natural gas resources have made accessible vast reserves of natural gas in North America [129]. A review of world energy production and energy markets by BP reported a proven reserve of nearly 8.7 trillion m3 of natural gas in the US alone [124]. Additionally, production of natural gas in the US comprises over 20% of global natural gas production, at 750 billion m3 in 2016 [124].
The composition of natural gas is 80-95% methane with varying amounts of heavier hydrocarbons, but methane has a relatively low market value due to difficulties in storage and transportation as well as its limited use as a fuel [129,130]. The low market value and high greenhouse gas potential of methane have initiated a surge in research and development of technologies that can be employed to convert methane to high-quality, value-added chemicals.

Much current research on the economically viable use of methane as feedstock is focused on conversion to methanol, which can be more readily converted to olefins and other valuable hydrocarbons [130,131]. Current processes for the traditional chemical conversion of methane to methanol, such as steam reformation and the Fischer-Tropsch process, are limited by several significant drawbacks. The chemical conversion route requires high-temperature, high-pressure unit operations as well as noble metal catalysts, and it results in poor selectivity for methanol [130]. The low yield of methanol necessitates a large process throughput to overcome large capital cost investments, so the process is only profitable at massive scale, which places further constraints on process deployment given the difficulty of transporting methane from an extraction site to a production plant [129,130].

The use of the biocatalyst methane monooxygenase (MMO) for the conversion of methane to methanol has recently gained interest in the wave of expanding natural gas extraction. MMO has been shown to convert methane to methanol at ambient conditions with selectivity approaching 100%, and it has thus been researched for the scale-up of methane conversion, considering its multiple advantages over the chemical conversion route [14,[129][130][131]. The high selectivity of MMO-catalyzed methanol production eases the intensity of product separation and could significantly reduce the number of steps required in the conversion process. Furthermore, the enzymatic reaction occurs at mild reaction conditions and could thus cut back on costs associated with heating, pressurization, and other feedstock conditioning steps [129,130].

Much lab-scale research remains to assess the MMO-catalyzed conversion of methane for industrial applications; because the isolation of MMO is an intensive process, it may be beneficial to immobilize the enzyme for reuse. Blanchette et al. reported the use of a 3D-printed microbioreactor with immobilized MMO as packing for continuous methane conversion to methanol. Although the immobilized MMO retained good activity through 20 consecutive reuses, the overall product yield was significantly less than the mass of biocatalyst required for the conversion [129]. Ultimately, enzymatic conversion of methane to methanol is a developing technology with several major hurdles to overcome before successful economic scale-up. Due to the low prices of methane and methanol, the enzyme-catalyzed process must be highly efficient to ensure economic viability; currently, the low catalytic activity of MMO is a significant limiting factor [14,15,[129][130][131]. Furthermore, a more intensive examination of process configurations is required to mitigate the mass transfer limitations that arise from the low solubility of oxygen and methane in aqueous media [130].
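The solubility constraint can be made concrete with Henry's law, c = H * p; a minimal sketch using order-of-magnitude literature constants for water at 25 °C (approximate values, included only for illustration):

```python
# Approximate Henry's law constants in water at 25 degC, mol L^-1 atm^-1
# (order-of-magnitude literature values, used here only for illustration).
H = {"CH4": 1.4e-3, "O2": 1.3e-3}

def dissolved_conc(gas, partial_pressure_atm):
    """Equilibrium dissolved concentration c = H * p (mol/L)."""
    return H[gas] * partial_pressure_atm

print(dissolved_conc("CH4", 1.0))   # ~1.4 mmol/L under 1 atm methane
print(dissolved_conc("O2", 0.21))   # ~0.27 mmol/L under air
```

Millimolar dissolved substrate concentrations are small relative to typical reactor demands, which is why gas-liquid mass transfer dominates the design problem.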
Food and Beverage Industry

In many instances, traditional chemical synthesis routes are not viable for food products due to reagent toxicity and complex reaction chemistries that result in unfavorable process economics [11]. Biocatalysts, on the other hand, present an opportunity for simplified, efficient production routes that mitigate the need for harsh substances and thus are more economically competitive [9][10][11],[132]. As such, the use of biocatalysts in food and beverage processes dates back thousands of years to the advent of culinary practices like wine and cheese making [10]. In modern times, the widespread use of enzymes in the food and beverage industries for food quality preservation or modification is one of the earliest successful industrial applications of biocatalysis, observed in beer fermentation, juice debittering, and bread baking [133]. The replacement of conventional chemical treatment with enzyme-catalyzed pathways for the conversion of starch to glucose and fructose first took place several decades ago [12,122].

The conventional production route requires temperatures up to 175 °C and considerable pressurization, whereas biocatalytic processes can be carried out at temperatures near 100 °C and at ambient pressure via sequential α-amylase-catalyzed reactions encompassing both liquefaction and saccharification steps [122]. In addition to milder reaction conditions, the multi-enzymatic process resulted in higher product selectivity and therefore allowed for better-defined production routes for varying sugar products like maltose, fructose syrup, and crystalline sugar, as dictated by biocatalyst selection [122].

An emerging trend is the use of enzyme catalysis for the commercial-scale production of prebiotics, artificial sweeteners, and rare sugars [2]. Prebiotics, such as oligosaccharides, lactulose, lactitol hydrolysates, and inulin, are non-digestible food additives that stimulate the growth of gut bacteria and can reportedly improve human health [2,9,133]. Dietary supplement producers have become particularly interested in simple, efficient enzyme-catalyzed synthesis routes for prebiotics due to above-average projected market growth and the accompanying increase in demand [2,9]. Yakult Honsha Co. Ltd. of Japan and Friesland Food Domo of The Netherlands, among others, have carried out commercial-scale, enzyme-catalyzed production of galacto-oligosaccharides (GOS), a lucrative prebiotic with digestive health benefits and use as a low-calorie sweetener [9]. GOS are produced by transgalactosylation occurring simultaneously with the hydrolysis of lactose via β-galactosidase; lab-scale results have shown GOS yields near 40% for the free enzyme, while immobilized enzymes show the potential for larger yields of up to 50% through implementation in a continuous system with decreased product inhibition [9].
The enzymatic production of protein hydrolysates for use as nutritional supplements and flavor enhancers has been developed because of its milder reaction conditions and increased control over product formation relative to traditional chemical routes [133]. When hydrolyzed, a parent protein forms biofunctional peptides exhibiting antioxidant, antimicrobial, and antihypertensive properties, among other therapeutic effects [132,133]. The production of fish protein hydrolysates from seafood processing waste via papain, a proteolytic enzyme derived from papaya that has found widespread industrial application, has garnered attention recently because the process is a potential solution for minimizing pollution from the fishing industry [39]. Additionally, papain has been researched as a biocatalyst for the production of protein hydrolysates from Chinese walnuts; lab-scale work on papain catalysis has shown moderate yields and purities of hydrolysates, and peptides obtained from the produced hydrolysates showed good antioxidant properties [40].

Flavors and Aromas Industry

Biocatalytic processes that are economically and environmentally advantageous relative to conventional chemical processes are in development for the commercial-scale production of fragrance compounds, flavor compounds, and aromatics. The chemical structures of such substances are often characterized by regio-, chemo-, or stereoselective positioning of functional groups like alcohols, aldehydes, ketones, and esters; thus, the scale-up of efficient enzyme-catalyzed processes for the production of aromatic compounds is a potentially lucrative endeavor, particularly in light of promising global market projections for flavors and fragrances [42].

Lipases play an integral role in the development of biocatalytic fragrance and flavor production due to their capability to transfer acyl groups from esters to other nucleophiles [42]. Though most progress has been in bench-scale synthesis of aroma esters, the results suggest that a combination of protein engineering and process engineering can facilitate scale-up to profitable industrial processes. For instance, Vosmann et al. achieved a 94% conversion of oleic acid selectively to benzyl oleate with benzyl alcohol as the acyl acceptor in 1 h using a commercially available lipase, illustrating strong product specificity that could make enzymatic conversion advantageous [41]. Badgujar et al. reached 99% conversion of vinyl propionate to p-cresyl propionate in 1.5 h via immobilized lipase in heptane, highlighting that certain immobilization techniques can functionalize lipase for implementation in industrial environments [43]. However, further work to achieve more efficient, robust biocatalytic reactions is required before enzymatic production of aromatic esters can be scaled to larger process throughputs [41][42][43].
Detergents Industry

The successful employment of biocatalysts is cited as the driving force in the production of cost-effective, environmentally benign detergents [44]. In the instance of the detergents industry, it should be noted that the enzymes are a product rather than a process-specific catalyst. Nonetheless, favorable market trends in the detergents industry reinforce the underlying view that biocatalytic products are inherently safer and more sustainable than traditional chemical products that pose health and safety risks [44][45][46]. Alkaline proteases, which are effective in the removal of protein stains and the cleaving of damaged cotton fibers, isolated from microbial sources comprise significant portions of multiple detergents produced and sold at commercial scale by manufacturers like Novozymes SA, Kao Corporation, and Genencor International [45]. The high reaction specificity of enzymatic reactions further mitigates the damage to fabrics and surfaces that is characteristic of chemically harsh detergent agents [47]. Furthermore, the ratios of catalytically active enzymes in detergent mixtures are optimized for specific detergent applications; for instance, dishwashing detergents often contain varying amounts of amylase and lipase intended for the removal of starch food deposits and of fats and oils, respectively [47,48].

Projections of Economic Growth and Implementation Potential

The market for industrially relevant enzymes, including those applied in the food, animal feed, detergent, and technical industries, is expected to grow globally through 2021 at an annual growth rate of 4.7%, driven by the development of novel enzyme technologies, increased demand for naturally made food products, and policies requiring larger shares of renewable energy sources like biofuels [44]. Of these categories, enzymes used in technical industries such as paper, textiles, leather, and biofuels hold the greatest potential for the use of immobilized biocatalysts owing to their process-oriented application [44]. Thus, market projections for technical enzymes are a reasonable indicator of process development for biocatalytic processes designed for commercial-scale production lines.

The most significant driving force in the projected market growth for technical enzymes is renewed interest in enzyme-catalyzed biofuel production using lignocellulosic materials, which is a direct result of stricter environmental regulations [44]. In this regard, enzymes used in biofuel processes represent the largest portion of the technical enzyme market, which is dominated by two major players, Novozymes and Danisco/DuPont [44]. In 2015, DuPont and Quad County Corn Processors signed a multiyear contract for the production of bioethanol from cellulosic material in corn kernel fiber; the same year, DuPont acquired enzyme production technology assets from Dyadic, which is expected to further drive process development in the technical enzyme sector [44]. The same technologies are being investigated for further integration into the paper and pulp industries, where xylanase, catalase, and lipase are frequently used for biomechanical pulping, de-inking of recycled fibers, and modification of fiber properties [58][59][60][61].
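The growth figures quoted in this section compound annually; a one-function sketch (the normalized base value of 1.0 is arbitrary):

```python
def project_market(value_now, annual_growth, years):
    """Compound-growth projection: value * (1 + g)**years."""
    return value_now * (1.0 + annual_growth) ** years

print(round(project_market(1.0, 0.047, 5), 3))  # 4.7%/year over 5 years -> ~1.258x
```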
The bulk of the global technical enzyme market is reported for markets in Europe, the Middle East, and Africa (EMEA), at nearly 35% of the global share in 2016 [44]. However, strong market growth is projected for North America and Asia-Pacific, with annual growth rates of 6.8% for North America and 7.9% for the Asia-Pacific region through 2021 (Figure 3) [44]. Furthermore, the Asia-Pacific technical enzyme market is expected to overtake the EMEA market as the largest global segment by 2021, suggesting that progress in process development will largely take place in the Asia-Pacific region and in North America through 2021 [44].

If further progress is to be achieved, optimizing the structure-function relationships of enzyme-based conjugates while minimizing the production costs of the individual components (i.e., supports and biocatalysts), as well as the implementation costs of the biocatalytic process, is imperative (Figure 4). The modification of conventional schemes for enzyme immobilization and integration will not only need to facilitate product yield but also optimize process metrics to limit the burden on both the environment and the individual [134][135][136]. Coupled or transient enzyme-driven reactions will also aim to reduce the public pressure associated with continued reliance on fossil fuels and thus lead to safer, eco-friendly solutions for industry.

The challenges and opportunities associated with the implementation of enzyme catalysis will have to consider not only profit and marketability but also the integration of biomimetic approaches or "one-pot" processes that could allow for efficiency, profitability, reusability, and stability, all under the complexity of multistep, highly specific reactions.

Figure 1. Global enzyme market in 2016 (top) and projected global enzyme market in 2021 (bottom). Figure adapted from data included in Ref. [44].
Figure 3. Global market growth projections for technical enzymes by geographical regions.
Figure 4. Schematic representation of a multidisciplinary approach aimed at defining optimal biocatalytic processes. Implementation of biocatalysts in industrial technologies will have to not only consider optimization of enzyme functionality but also lead to an increase in enzyme operational stability at the interface with the supports used for immobilization.
Table 1. Industrial applications of enzyme catalysis.
Decomposition of mixed pixels in MODIS data using Bernstein basis functions

Abstract. The decomposition of mixed pixels in Moderate Resolution Imaging Spectroradiometer (MODIS) images is essential for the application of MODIS data in many fields. Many existing methods for unmixing mixed pixels use principal component analysis to reduce the dimensionality of the image data and require the extraction of endmember spectra. We propose the pixel spectral unmixing index (PSUI) method for unmixing mixed pixels in MODIS images. In this method, a set of third-order Bernstein basis functions is applied to reduce the dimensionality of the image data and characterize the spectral curves of the mixed pixels in a MODIS image, and the derived PSUIs (i.e., the coefficients of the basis functions) are then calibrated by means of the abundance values of the ground features from the Landsat Enhanced Thematic Mapper Plus (ETM+)/Operational Land Imager (OLI) classification images corresponding to the date and region of the MODIS image. The proposed method was tested on MODIS and ETM+/OLI images, and it obtained satisfying unmixing results. We compared the PSUI method with conventional methods, including the pixel purity index, the N-finder algorithm, the sequential maximum angle convex cone, and vertex component analysis, and found that the PSUI method outperformed the other four methods.

Introduction

The Moderate Resolution Imaging Spectroradiometer (MODIS), as well as later-developed hyperspectral sensors, has made great breakthroughs in spectral channel settings compared with earlier remote sensors. There are 36 discrete channels, including 20 reflective spectral channels, in a MODIS image, and each pixel of the image acquires many bands of light intensity data from the spectrum instead of just the three bands of the RGB color model. This makes it possible to accurately depict the spectral characteristics of typical ground features using not only the wavelengths, ranges, and intensities of the peaks and valleys but also the integral area enclosed by the spectral reflectance curves of the ground features and the x-axis (in Cartesian coordinates). MODIS views the globe once or twice per day at coarse resolutions of 250 to 1000 m. However, the spatial resolution of MODIS images is not high enough to clearly distinguish different ground features. In many cases, a MODIS pixel is a mixed pixel that is covered by multiple land cover types, which has a significant influence on the information that can be derived. 1,2 Thus, the decomposition of mixed pixels in MODIS images is critically important for the application of MODIS data in many fields, such as mapping land cover distributions, 3 evaluating vegetation/soil fractional cover, [4][5][6] monitoring and evaluating karst rocky desertification, 7 flood mapping, 8,9 and retrieving fire temperature and area. 10

The spectral characteristics of ground features are the basis not only for identifying them in remote sensing images but also for decomposing mixed pixels in images. The decomposition of mixed pixels is generally based on a linear spectral mixture model (LSMM) or a nonlinear spectral mixture model (NLSMM). 11 Although the NLSMM is more applicable when the multiple scattering among distinct endmembers is not negligible, 12 such as in intimate mineral mixtures and vegetation canopies, 13 the LSMM is a mature and more widely used technique than the NLSMM. 14,15
To apply existing methods for decomposing mixed pixels, the endmembers must be obtained. Endmember extraction is the process of selecting a collection of pure signature spectra of the ground features present in a remote sensing image. [16][17][18] The corresponding abundance of each endmember is usually estimated by using the fully constrained least squares (FCLS) method based on the LSMM. 19 Endmember extraction is generally performed in two ways: (1) by deriving the endmembers directly from the remote sensing images, which is referred to as image endmember analysis; 1 or (2) by taking them from a spectral library that contains the spectra of known target features measured in the field or laboratory, which is referred to as library endmember analysis. 20 Considering effect factors such as atmospheric interaction, remote sensor peculiarities, and noise, image endmember analysis is now the most widely used. Two major approaches are used to extract endmembers based on the LSMM. One approach uses geometrical methods, including the pixel purity index (PPI), 21 the N-finder algorithm (N-FINDR), 22 the sequential maximum angle convex cone (SMACC), 23 and vertex component analysis (VCA), 24 of which the PPI and SMACC methods are widely used for decomposing mixed pixels in remote sensing images due to their publicity and availability in the Environment for Visualizing Images software. 25 The other approach uses statistical methods, such as independent component analysis. 26

It is usually difficult to acquire pure pixels in a MODIS image because of its spatial resolution limit. Many researchers have suggested that there are no pure pixels in remote sensing images with low spatial resolution. 17,27,28 Some authors have tried to use nonnegative matrix factorization (NMF) for hyperspectral data unmixing. 29,30 Miao and Qi 31 presented a minimum volume constrained NMF (MVC-NMF) method without the pure-pixel assumption for unsupervised endmember extraction from highly mixed image data. The accuracy of the extracted endmembers has a great impact on the unmixing accuracy. To assure unmixing accuracy, an unmixing method for MODIS data that does not resort to extracting endmember spectra is considered here.

Adjacent channels in multispectral/hyperspectral imagery are well correlated and often contain similar information, which produces redundancies in a multispectral/hyperspectral dataset. 32,33 Thus, many conventional unmixing methods, e.g., the PPI, 21 the manual endmember selection tool, 32 N-FINDR, 22 spectral mixture analysis based on simulated annealing, 34 VCA, 24 the simplex growing algorithm, 35 and the Gaussian elimination method, 36 use statistical techniques such as principal component analysis (PCA) to reduce the dimensionality of the image data for both computational time savings and signal-to-noise improvement. A set of uncorrelated variables (principal components) is then generated, and those containing the most information from the original bands are selected for extracting endmember spectra. Each endmember spectrum can be constructed as a linear combination of the principal components. 32 As a statistical technique, the PCA transformation is highly dependent on the numerical characteristics of the image. Hence, the principal components vary with the images, and the difficulty of interpreting a priori the content of the principal components is an inherent problem of PCA. 33,37 Basis functions are independent of one another, as principal components are, but they are purely theoretical functions.
In mathematics, a complex curve can be represented as a linear combination of a set of basis functions. 38,39 Similarly, the spectral curve formed by mixing spectra of more than one ground cover type can also be represented as a linear combination of a set of basis functions. The basis functions can be employed to reduce the dimensionality of the image data and characterize the spectral curve of each pixel without redundant information. Comparing basis functions with the principal components generated by PCA shows that, on one hand, the basis functions can depict each endmember spectrum through a linear combination just as the principal components do. On the other hand, the basis functions are invariant and independent of the image data. Thus, the coefficients of the basis functions for pixels in different images are comparable, and the coefficients can be employed to depict the spectral curves of mixed pixels with various combinations of ground feature abundance fractions. Therefore, to ensure unmixing accuracy, an unmixing method for MODIS data based on a set of basis functions, which does not resort to extracting endmember spectra, is proposed and tested in our study.

This study exploits a set of third-order Bernstein basis functions to construct the pixel spectral unmixing indexes (PSUIs), i.e., the coefficients of the basis functions, for a MODIS image without resorting to extracting endmember spectra. A higher spatial resolution image, such as a Landsat Enhanced Thematic Mapper Plus (ETM+)/Operational Land Imager (OLI) image from the same region and the same day as the MODIS image, is then utilized to calibrate these indexes, which creates a calibration model. The calibration model indicates the relationship between the PSUIs and the component abundances and thus can be used for calculating the abundances of the mixed pixels' components in MODIS images. This method was tested on MODIS and ETM+/OLI images in different scenes or at different times and was compared with other methods, namely the PPI, N-FINDR, SMACC, and VCA.

Bezier Curve and Bernstein Basis Functions

Given a set of control points P_i, i = 0, 1, ..., n, the n'th-order Bezier curve is defined as

P(t) = \sum_{i=0}^{n} P_i B_{i,n}(t), \quad t \in [0,1], \quad (1)

where P_i is a control point and B_{i,n}(t) is the n'th-order Bernstein basis function. 40 The n'th-order Bernstein basis functions are the expansion terms of the binomial expression 1 = [t + (1-t)]^n:

B_{i,n}(t) = \binom{n}{i} t^i (1-t)^{n-i}, \quad i = 0, 1, \ldots, n. \quad (2)

When n = 3, they are known as the Bernstein basis functions of order 3 (see Fig. 1):

B_{0,3}(t) = (1-t)^3, \quad B_{1,3}(t) = 3t(1-t)^2, \quad B_{2,3}(t) = 3t^2(1-t), \quad B_{3,3}(t) = t^3. \quad (3)

In a plane or in a higher-dimensional space, the explicit form of the cubic Bezier curve with four control points can be written as

P(t) = P_0 B_{0,3}(t) + P_1 B_{1,3}(t) + P_2 B_{2,3}(t) + P_3 B_{3,3}(t), \quad t \in [0,1]. \quad (4)

Acquiring pure pixels containing only one ground object from a MODIS image is difficult because of its spatial resolution limit, but it is possible for a sampling pixel to be dominated by only one category of ground features. For the convenience of discussion, in this paper, such sampling points are called pseudo-MODIS pure pixels, and the ground features estimated from the pseudo-pure pixels are called quasiground features.
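As a quick numerical illustration of Eqs. (1)-(4) (a sketch, not code from the original study), the basis functions and a cubic Bezier curve can be evaluated directly:

```python
import numpy as np
from math import comb

def bernstein(i, n, t):
    """Bernstein basis function B_{i,n}(t) = C(n,i) t^i (1 - t)^(n - i), Eq. (2)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier(points, t):
    """Bezier curve P(t) = sum_i P_i B_{i,n}(t) from Eqs. (1) and (4)."""
    n = len(points) - 1
    return sum(p * bernstein(i, n, t) for i, p in enumerate(points))

ts = np.linspace(0.0, 1.0, 5)
print([round(bernstein(1, 3, t), 4) for t in ts])   # B_{1,3} peaks at t = 1/3
print([round(bezier([0.0, 1.0, 1.0, 0.0], t), 4) for t in ts])
```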
There are four main kinds of spectral reflectance curves for quasiground features (e.g., water body, sediment-laden water, vegetation, and bare soil) derived from a MODIS image. Figure 2 shows the spectral reflectance curves of the four types of quasiground features obtained from the sampling points of the above-mentioned MODIS image (a total of 280 samples; each category accounts for a quarter of the total sampling points) and the spectral reflectance curve of a random mixed pixel in the MODIS image, where the original reflectance curve obtained from the MODIS data has been normalized with respect to the total area enclosed by the curve and the x-axis. Normalization offers the advantage that it can reduce statistical fluctuations without losing any information. Each curve in Fig. 2 contains 13 channel reflectance data points distributed over a wavelength range from 405 to 2155 nm. Channels 13 to 18 and 26 are not used in Fig. 2 because channels 13 to 16 and 26 are invalid on land and the wavelength ranges of channels 17 and 18 overlap with that of channel 19.

Fig. 2. (a) Spectral reflectance curves of the four types of quasiground features derived from a MODIS image (280 samples; each category accounts for a quarter of the total sampling points); G0, G1, G2, and G3 are groupings of the spectral reflectance data. (b) Spectral curve of a mixed pixel in a MODIS image; the gray-shaded areas S0, S1, S2, and S3 show the spectral integral areas corresponding to these four groups for the mixed pixel, and the red rectangles below the horizontal axis indicate the locations of the MODIS channels.

According to the spectral reflectance curves of the quasiground features derived from the MODIS data shown in Fig. 2(a), different quasiground features reach high reflectance in different channels: the water body in the blue-green channels, the sediment-laden water in the red channels, vegetation in the shorter-wavelength near-infrared channels, and bare soil in the longer-wavelength near-infrared channels. The peak feature of spectral reflectance curves is important for identifying different ground features. Based on this point, the reflectance data on each spectral curve can be divided into four groups [see Fig. 2(a)]: G0 at wavelengths from 405 to 565 nm, G1 at wavelengths from 620 to 876 nm, G2 at wavelengths from 915 to 1250 nm, and G3 at wavelengths from 1628 to 2155 nm. This way of grouping the reflectance data guarantees that the high reflectance of each quasiground feature appears in a different group. In addition, according to the property of Bernstein basis functions that B_{i,n}(t) reaches a maximum at t_i = i/n, the peaks of different basis functions appear at different t-values. Both the Bernstein basis function curves and the spectral curves of the ground features have evident peak features. Thus, the third-order Bernstein basis functions, with their four curves (Fig. 1), are used to characterize the spectral signatures of mixed pixels in MODIS data by means of their coefficients.

A cubic Bezier curve formed from a linear combination of the third-order Bernstein basis functions consists of innumerable data points, whereas a spectral reflectance curve derived from MODIS data consists of 13 data points. Consequently, the spectral reflectance curve of each mixed pixel should be mapped to a cubic Bezier curve before the third-order Bernstein basis functions are employed to characterize the spectral curve with their coefficients.
A cubic Bezier curve mapped to the mixed spectral curve can be expressed as

f: F(\lambda) \to P(t), \quad (5)

where F(λ) represents the spectral curve of a mixed pixel, and P(t) represents the mapped cubic Bezier curve, which is determined by four control points. Here, the spectral integral areas (S_0, S_1, S_2, and S_3), i.e., the areas enclosed by the spectral curve of each group and the x-axis [see Fig. 2(b)], and the t-values (t_i = i/n, i = 0, 1, 2, 3 and n = 3) are used together to generate four data points for the mapped cubic Bezier curve. The spectral integral area, which combines sequentially related channels, is employed to replace a single reflectance value. This is because data points generated by only 4 of the 15 valid channels of the MODIS sensor cannot fully reflect information about channel width and interrelation, whereas data points generated by the spectral integral areas can do so. Thereafter, four control points can be determined by these four data points, and thus the cubic Bezier curve is determined.

The components in the LSMM are endmembers with physical meaning, and the abundances are nonnegative. The components in Eq. (4) are the third-order Bernstein basis functions, namely B_{0,3}(t), B_{1,3}(t), B_{2,3}(t), and B_{3,3}(t), which have exact shapes. The coefficients in Eq. (4), namely P_0, P_1, P_2, and P_3, express the content of the four basis functions in the mixed spectrum and can be positive or negative. Because the geometric shapes of the four basis functions are invariant, P_0, P_1, P_2, and P_3 can objectively describe the complex spectral curves of mixed pixels in MODIS images. Here, these coefficients are called PSUIs.

Calculation process

There are four steps used to generate the PSUIs (P_0, P_1, P_2, and P_3) for each mixed pixel in a MODIS image.

Step 1: Divide the spectral reflectance data of each pixel in the MODIS data into four groups (G_0, G_1, G_2, and G_3) according to the peak locations of the spectral curves of the four types of quasiground features [see Fig. 2(a)].

Step 2: Calculate the spectral integral areas corresponding to these four groups for each pixel as S_0, S_1, S_2, and S_3, respectively, by trapezoidal integration over the adjacent channels within each group:

S_j = \sum_{i \in G_j} \frac{(R_i + R_{i+1})(\lambda_{i+1} - \lambda_i)}{2}, \quad j = 0, 1, 2, 3, \quad (6)

where R_i represents the reflectance (%) at the i'th channel of a pixel, and λ_i represents the central wavelength (nm) of the i'th channel. To reduce statistical fluctuations without losing any information, S_0, S_1, S_2, and S_3 are normalized to be dimensionless:

S_j \leftarrow S_j / (S_0 + S_1 + S_2 + S_3), \quad j = 0, 1, 2, 3. \quad (7)

Hereinafter, S_0, S_1, S_2, and S_3 represent the normalized values of the spectral integral areas.

Step 3: According to the property of Bernstein basis functions that B_{i,n}(t) reaches a maximum at t_i = i/n, and considering the importance of the peak feature of the spectral curves, we set t_i = i/3, i = 0, 1, 2, 3; then four data points of a cubic Bezier curve are generated as (t_0, S_0), (t_1, S_1), (t_2, S_2), and (t_3, S_3), which by Eq. (4) satisfy

S_j = \sum_{i=0}^{3} P_i B_{i,3}(t_j), \quad j = 0, 1, 2, 3. \quad (8)
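A compact numerical sketch of Steps 1-3 and the solve in Eq. (9) follows; the 13 channel wavelengths and the channel-to-group split below are illustrative placeholders, not the exact MODIS channel set:

```python
import numpy as np
from math import comb

def trap_area(refl, wl):
    """Eq. (6): trapezoidal spectral integral area over one channel group."""
    return float(np.sum((refl[:-1] + refl[1:]) * np.diff(wl)) / 2.0)

def psui(refl, wl, groups):
    """PSUIs (P0..P3) for one pixel, following Eqs. (6)-(9)."""
    S = np.array([trap_area(refl[g], wl[g]) for g in groups])
    S = S / S.sum()                                    # Eq. (7): normalize
    t = np.array([0.0, 1/3, 2/3, 1.0])                 # t_i = i/3
    B = np.array([[comb(3, i) * tj**i * (1 - tj)**(3 - i) for i in range(4)]
                  for tj in t])                        # B[j, i] = B_{i,3}(t_j)
    return np.linalg.solve(B, S)                       # Eqs. (8)-(9)

# Toy 13-channel pixel; wavelengths and grouping are for illustration only.
wl = np.array([405., 443., 488., 531., 565., 620., 665., 740., 876.,
               915., 1240., 1628., 2155.])
refl = np.linspace(5.0, 20.0, 13)
groups = [slice(0, 5), slice(5, 9), slice(9, 11), slice(11, 13)]
print(np.round(psui(refl, wl, groups), 4))
```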
Step 4: Solving Eq. (8) for P_0, P_1, P_2, and P_3 with t_i = i/3 yields

P_0 = S_0, \quad P_1 = \frac{-5S_0 + 18S_1 - 9S_2 + 2S_3}{6}, \quad P_2 = \frac{2S_0 - 9S_1 + 18S_2 - 5S_3}{6}, \quad P_3 = S_3. \quad (9)

Figure 3 shows the flowchart for employing the third-order Bernstein basis functions to characterize the spectral signatures of mixed pixels in MODIS data by means of their coefficients (PSUIs). Here, a Terra MODIS image (date: 2001324, time: 03:10) of the Pearl River Delta region of China was taken as an illustrative example of decomposing mixed pixels. After preprocessing the MODIS image (e.g., geometric correction and cloud masking 41 ), the PSUIs were obtained using Eq. (9) for the mixed pixels (Fig. 4). Figure 4(a) presents the pseudocolor image derived from channels 7, 2, and 1 of the MODIS data. Figure 4(b) shows the distribution of the normalized difference water index (NDWI), 42 which is used to evaluate water distribution information in remote sensing applications, 43,44 whereas the normalized difference vegetation index (NDVI) is usually used to evaluate green coverage and vegetation growth [Fig. 4(c)]. Figure 4(d) shows the distribution of the normalized difference soil index (NDSI), 45 which is used to enhance soil information. The PSUIs, namely P_0, P_1, P_2, and P_3, are shown in Figs. 4(e)-4(h), respectively. The index P_0, which mainly reflects the distribution information for the B_{0,3}(t) function, can be used to identify the distribution of water, as the NDWI does; the correlation coefficient between P_0 and the NDWI is 0.98. The index P_2, which reflects the distribution information for the B_{2,3}(t) function, may be used to estimate vegetation growth and to evaluate green coverage, as the NDVI does; the correlation coefficient between P_2 and the NDVI is 0.94. The index P_3, which reflects the distribution information for the B_{3,3}(t) function, can be applied to estimate the distribution of bare soil or outcropped areas, as the NDSI does; the correlation coefficient between P_3 and the NDSI is 0.98. The index P_1, as the coefficient of the B_{1,3}(t) function, correlates well with sediment-laden water and may have a potential application in estimating the sediment content of water. Figure 4 reveals that the B_{0,3}(t), B_{1,3}(t), B_{2,3}(t), and B_{3,3}(t) functions can reflect information about water body, sediment-laden water, vegetation, and bare soil, respectively, through their coefficients (P_0, P_1, P_2, and P_3). Thus, the third-order Bernstein basis functions can be employed to characterize the spectral curves of mixed pixels in MODIS data with physical meaning, which is superior to principal components.

Abundance Calculation Based on the Calibration Model

The PSUIs P_0, P_1, P_2, and P_3, which are derived from a MODIS image by adopting Eq. (9), indicate the spectral signals from water body, sediment-laden water, vegetation, and bare soil, respectively. Because the PSUIs represent only the relative proportions of the ground features in each mixed pixel of the MODIS image, they need to be calibrated by means of the reference abundance values of the ground features from high spatial resolution remote sensing images (e.g., a Landsat ETM+ or QuickBird image) using the FCLS method, which creates a calibration model for calculating the abundances of the components of every mixed pixel in MODIS images. Here, a Landsat ETM+/OLI image is taken as an illustrative example.
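Before the calibration steps below, note that each reference abundance is simply the per-class pixel fraction of the high-resolution classification within one MODIS sampling cell; a minimal sketch (the 0/1/2 class coding and the 100 x 100 patch size are assumptions for illustration):

```python
import numpy as np

def reference_abundance(class_patch, labels=(0, 1, 2)):
    """Fraction of each class within the high-resolution pixels covering one
    MODIS sampling cell; labels 0/1/2 stand for water, vegetation, bare soil
    (an assumed coding)."""
    n = class_patch.size
    return np.array([(class_patch == c).sum() / n for c in labels])

# A 3x3 km MODIS cell covers roughly a 100x100 block of 30 m ETM+ pixels.
rng = np.random.default_rng(3)
patch = rng.choice([0, 1, 2], size=(100, 100), p=[0.2, 0.5, 0.3])
print(reference_abundance(patch))  # ~[0.2, 0.5, 0.3]
```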
Because it is difficult to distinguish sediment-laden water from a water body when classifying an ETM+/OLI image, sediment-laden water and water body are classified as the same type (water body). Moreover, water body, vegetation, and bare soil are three basic categories of ground features on the earth's surface, 46 which means that P_0, P_2, and P_3 contain most of the spectral information of each pixel. Thus, P_0, P_2, and P_3 are used for the calibration model. The steps for calibrating the PSUIs are as follows:

(1) Classify the ETM+/OLI image corresponding to the date and region of the MODIS image into water body, vegetation, and bare soil to obtain the ETM+/OLI classification image.

(2) Collect a series of quasiground feature samples (i.e., samples in which water body, vegetation, or bare soil is dominant) from the MODIS image. Here, a uniform sampling cell of 3 × 3 pixels (3 × 3 km) was used for collecting these samples in order to reduce the projection error, and the corresponding average values of P_0, P_2, and P_3 were then calculated for each sampling cell.

(3) According to the latitudes and longitudes of the four corners of each sampling cell in the MODIS image, project the boundaries of the samples onto the ETM+/OLI image and the ETM+/OLI classification image (see Fig. 5). Then calculate the percentages of water body, vegetation, and bare soil pixels relative to the total pixels in each projection scope in the ETM+/OLI classification image; these percentages are taken as the reference abundances used to calibrate the PSUIs. The calibration model for the PSUIs can be expressed as

Y_w = a_{10} + a_{11} P_0 + a_{12} P_2 + a_{13} P_3,
Y_v = a_{20} + a_{21} P_0 + a_{22} P_2 + a_{23} P_3, \quad (10)
Y_s = a_{30} + a_{31} P_0 + a_{32} P_2 + a_{33} P_3,

where Y_w, Y_v, and Y_s denote the abundances of water body, vegetation, and bare soil, respectively, which are obtained from the ETM+/OLI classification image; P_0, P_2, and P_3 are the PSUIs; and a_{ij} (i = 1, 2, 3; j = 0, 1, 2, 3) are the fitting coefficients.

(4) In the illustrative example, we took 189 samples from the MODIS and ETM+ classification images (Fig. 6). We substituted the abundance values of water body, vegetation, and bare soil obtained from the ETM+/OLI classification image and the PSUIs (P_0, P_2, and P_3) obtained from the MODIS image for these samples into Eq. (10) and then obtained the fitting coefficients a_{ij} using a least squares method. The resulting calibration model for each pixel in the MODIS image, Eq. (11), takes the same form as Eq. (10) with the fitted values of a_{ij} substituted, where Y_w, Y_v, and Y_s denote the abundances of water body, vegetation, and bare soil, respectively, and P_0, P_2, and P_3 are the PSUIs.

The test results for the calibration model are shown in Table 1. All of the multiple correlation coefficients are larger than 0.97, indicating that there are significant linear correlations between the PSUIs (P_0, P_2, and P_3) derived from the MODIS data and the reference abundances of water body, vegetation, and bare soil obtained from the ETM+ classification image. The test results also show that all the observed F-test values are evidently larger than the critical F-test value at the 99% confidence level [F_{0.01}(3, 185)]. Thus, there is a marked regression relationship between the PSUIs and the abundances of water body, vegetation, and bare soil, which ensures performance accuracy. The significance test for each PSUI shows that all the significance probabilities are larger than 99.00% and that each index has a significant effect on the abundances. Thus, the calibration model is acceptable.
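A minimal sketch of the least squares fit in step (4) on synthetic data (shapes and values are illustrative; the real fit used the 189 field samples):

```python
import numpy as np

def fit_calibration(P, Y):
    """Least-squares fit of Eq. (10): Y = A @ [1, P0, P2, P3]^T per ground feature.
    P: (n_samples, 3) PSUIs [P0, P2, P3]; Y: (n_samples, 3) reference abundances."""
    X = np.hstack([np.ones((P.shape[0], 1)), P])   # add intercept column
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (4, 3) coefficient matrix
    return A.T                                     # rows: water, vegetation, soil

def predict_abundance(A, P):
    """Apply the calibrated model (Eq. (11)) to new PSUIs."""
    X = np.hstack([np.ones((P.shape[0], 1)), P])
    return X @ A.T

rng = np.random.default_rng(1)
P = rng.uniform(-0.2, 1.0, (189, 3))               # synthetic PSUIs for 189 samples
A_true = rng.uniform(-0.5, 1.0, (3, 4))
Y = np.hstack([np.ones((189, 1)), P]) @ A_true.T + rng.normal(0, 0.01, (189, 3))
A = fit_calibration(P, Y)
print(np.round(A - A_true, 3))                     # near-zero coefficient residuals
```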
(5) Apply the calibrated model to every pixel of a MODIS image to calculate the abundances of water body, vegetation, and bare soil.

(6) Evaluate the accuracy of the component abundances obtained by decomposing the mixed pixels in MODIS images. In this accuracy evaluation, the error is defined as the difference between the calculated component abundance and the reference component abundance. To ensure the objectivity of the accuracy evaluation, the reference abundances used to evaluate the accuracy of the calculated component abundances and the abundances employed to calibrate the PSUIs should be taken from ETM+/OLI classification images at different times or in different scenes.

The flowchart for the proposed approach to decomposing mixed pixels in MODIS images is shown in Fig. 7. From the above, we can see that although Eq. (4) is a linear mixture model like the LSMM, the method using the third-order Bernstein basis functions differs from LSMM-based approaches in that it does not need to resort to extracting endmember spectra. For the sake of convenience, the use of PSUIs for decomposing mixed pixels in MODIS images is hereinafter called the PSUI method.

Experiment Design and Datasets

A calibration model used to calculate the abundances of every mixed pixel's components in MODIS images was built based on a set of MODIS and ETM+ images of the Pearl River Delta region (Table 2). One group of experiments (E1-E5) is conducted to apply the calibration model to MODIS data at different times or in different areas to test the robustness of the PSUI method. Two experiments in this group (E1-E2), which decompose mixed pixels in MODIS images of the Pearl River Delta region at different times, test whether good unmixing performance is achieved for MODIS data acquired at times different from that used for building the calibration model. In the Pearl River Delta region, the water bodies are sea water (main type), rivers, lakes, or dike-ponds; the vegetation is forests (main type), croplands, or grassland; and the bare soil is urban and built-up areas (main type) or barren/sparse vegetation. Three experiments in this group (E3-E5) are conducted in different areas with different types of water bodies, vegetation, or bare soil, to test whether a new calibration model is needed in different areas. E3 is carried out in the Kubuqi desert region of China, where the water bodies are mainly rivers and lakes, the vegetation is mainly croplands, and the bare soil is mainly barren or sparse vegetation. E4 is carried out in the North China Plain, where the water bodies are mainly lakes and rivers, the vegetation is mainly croplands, and the bare soil is mainly urban and built-up areas. E5 is carried out in Texas, where the water bodies are mainly sea water and lakes, the vegetation is mainly savannas and grassland, and the bare soil is mainly urban and built-up areas. The other experiment (E6) is conducted to compare the PSUI method with conventional methods (PPI, N-FINDR, SMACC, and VCA).

Six groups of datasets are tested in this section (see Table 2). Each dataset consists of a MODIS image (MOD021KM: level 1b calibrated, 1000 × 1000 m spatial resolution), derived from the LAADS DAAC, and a Landsat ETM+/OLI image (30 × 30 m spatial resolution), derived from the USGS GloVis, covering the same area and acquired on the same day or on two consecutive days (Table 2).

Application of the Calibration Model

In this section, we present the application of the calibration model [Eq. (11)] to MODIS images (see Table 2) in different areas or at different times to test the robustness and performance of the PSUI method (E1-E5).
The abundance maps of water body, vegetation, and bare soil for the MODIS images were then obtained (see Fig. 8). Five sets of sampling grids of 3 × 3 pixels, randomly collected from the MODIS images, were taken as test samples to evaluate the accuracy of the calculated abundances in these experiments. The accuracy evaluation results of these five experiments (Table 3) demonstrate good accuracy for decomposing mixed pixels in MODIS images in different areas and at different times using the calibration model. Therefore, the calibration model can be used for MODIS data in different areas or at different times, which means that there is no need to build a calibration model for every MODIS image.

Comparison with Conventional Methods

To examine the effectiveness of the PSUI method, we compared it with the PPI, N-FINDR, SMACC, and VCA methods using the same MODIS image, against the abundance values of the ground features derived from the ETM+/OLI classification image from the same day as the MODIS image. The PPI, N-FINDR, SMACC, and VCA methods are widely applied for endmember extraction due to their light computational burden and clear conceptual meaning.15 Detailed descriptions of these four methods can be found in the literature.15,20-25 In this experiment (E6), the MODIS image (date: 2001356, time: 03:10) and ETM+ image (path/row: 122/044, date: 2001356) used for the method comparison are taken from the same area as, but not at the same time as, those used for calibrating the PSUIs (date: 2001324). The 295 sampling points of 3 × 3 pixels randomly collected from the MODIS image (date: 2001356) were taken as test samples. As shown in Table 4, the mean error (ME), mean absolute error (MAE), root-mean-square error (RMSE), and root-mean-square abundance angle distance (rmsAAD) obtained by the PSUI method are obviously smaller than those obtained by the PPI, N-FINDR, SMACC, and VCA methods. Furthermore, the errors derived from the PSUI method are distributed around 0% and are centralized [see Fig. 9(a)], whereas those derived from the PPI, N-FINDR, SMACC, and VCA methods exhibit a more dispersed distribution [see Figs. 9(b)-9(e)]. The accuracy evaluation results demonstrate that the PSUI method outperforms the PPI, N-FINDR, SMACC, and VCA methods. In the comparison experiment, the PSUI method and the four conventional unmixing methods were run on an Intel Core i7-8550U CPU at 1.80 GHz with 8.0 GB RAM. The PPI, N-FINDR, and VCA methods were performed in MATLAB R2017b, and their running times were 8.19, 5.53, and 6.42 s, respectively. The running times for building and applying the calibration model, given in Table 4, show that the PSUI method took less time than the PPI, N-FINDR, and VCA methods.

Discussion

The existing methods of decomposing mixed pixels, based on either the LSMM or the NLSMM, rely mainly on pixel spectral information characterized by a single spectral curve composed of discrete data points, and they require extracting endmember spectra.1,15,16,18,20 The procedures adopted by methods such as the PPI21 and the SMACC23 have been quite successful when pure pixels are present in the original image data. However, it is very difficult to find pure pixels containing only one ground object in MODIS images with low spatial resolution. Many authors have argued that there are no pure pixels in remote sensing images with low spatial resolution.17,27 Miao and Qi31 and Plaza et al.17
suggested that a trend in the hyperspectral imaging community was to design endmember identification algorithms that do not assume the presence of pure pixels, so as to ensure endmember accuracy and unmixing accuracy. (Table 4 note: the abundance angle distance $\mathrm{AAD}_i = \arccos\!\big[a_i^{T}\hat{a}_i/(\lVert a_i\rVert\,\lVert\hat{a}_i\rVert)\big]$ measures the similarity between the reference abundances ($a_i$) and the calculated ones ($\hat{a}_i$) of the sampling grids, and $\mathrm{rmsAAD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\mathrm{AAD}_i^2}$, where N is the number of sampling grids (N = 295). The best results of the four algorithms are in bold font in the table.)

The PSUI method proposed herein provides a solution that is different from previous work on the effective decomposition of mixed pixels. This method does not need to resort to extracting endmember spectra from MODIS data. It was tested on five sets of MODIS and ETM+/OLI images, and satisfying unmixing results were obtained (see Fig. 8 and Table 3). The calibration model can be applied to MODIS data in different areas or at different times with high accuracy. The PSUI method was also compared with other methods using the same MODIS data, namely the PPI, N-FINDR, SMACC, and VCA, and the experimental results (Table 4) showed that the accuracy of the PSUI method was obviously higher than that of the PPI, N-FINDR, SMACC, or VCA methods. In the PSUI method, the PSUIs quantify the relative proportions of spectrally distinct signals from several ground features in each mixed pixel of MODIS data; thus the indexes need to be calibrated with the abundance values of the ground features from a high spatial resolution remote sensing image such as a Landsat ETM+ image. One might say that, since the PSUIs need to be calibrated with the ETM+/OLI classification images, it would be more convenient to use the results from the ETM+ images directly. However, the low temporal resolution of the 16-day revisit cycle of Landsat ETM+ has long limited its use in many fields, such as studying global biophysical processes, understanding changes in the terrestrial carbon cycle, or mapping the quality and abundance of wildlife habitats.55,56 MODIS visits the globe once or twice per day with a coarse resolution of 250 to 1000 m. In addition, the calibration model is applicable to MODIS data in different areas or at different times, which means that there is no need to build a calibration model for every MODIS image. One of the advantages of the PSUI method is that it combines MODIS data of high temporal resolution with Landsat ETM+ data of high spatial resolution, which may be the reason the new method is superior to the PPI, N-FINDR, SMACC, and VCA methods in terms of decomposition accuracy for mixed pixels in MODIS images. There are 15 reflective spectral channels valid on land in a MODIS image, distributed over a wavelength range of 405 to 2155 nm. These 15 reflective spectral channels can reflect the key spectral characteristics of ground features, such as the locations and intensities of absorption and reflection bands, which are clearly demonstrated in a spectral curve. Three very different ground features (i.e., water body, vegetation, and bare soil), whose spectral curves are easily distinguishable based on their peak locations, are involved in the unmixing process. Thus, good unmixing results can be obtained with the PSUI method. However, the PPI, N-FINDR, SMACC, and VCA methods were originally proposed for hyperspectral data,21-24 and thus would not be expected to perform as well for multispectral data with limited spectral resolution as for hyperspectral data.
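For reference, the four accuracy measures reported in Table 4 (ME, MAE, RMSE, and rmsAAD as defined in the table note above) can be computed as in the following sketch; the array shapes and test data are assumed for illustration.

```python
import numpy as np

def unmixing_errors(ref, calc):
    """Accuracy metrics for abundance estimates.

    ref, calc: arrays of shape (N, 3) -- reference and calculated abundances
    of water body, vegetation, and bare soil for N sampling grids (N = 295
    in the paper's comparison experiment).
    """
    diff = calc - ref
    me = diff.mean()                     # mean error
    mae = np.abs(diff).mean()            # mean absolute error
    rmse = np.sqrt((diff ** 2).mean())   # root-mean-square error
    # Abundance angle distance per sampling grid, then its RMS (rmsAAD).
    cosang = (ref * calc).sum(axis=1) / (
        np.linalg.norm(ref, axis=1) * np.linalg.norm(calc, axis=1))
    aad = np.arccos(np.clip(cosang, -1.0, 1.0))
    rms_aad = np.sqrt((aad ** 2).mean())
    return me, mae, rmse, rms_aad

rng = np.random.default_rng(1)
ref = rng.dirichlet(np.ones(3), 295)                      # stand-in references
calc = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, None)
print(unmixing_errors(ref, calc))
```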
Furthermore, the endmembers in these conventional methods are specific components, i.e., specific types of mineral or vegetation.21-24 There may be several specific types of vegetation and bare soil in a MODIS image. However, in the method comparison experiment, mixed pixels in MODIS data were decomposed into the three general categories of water body, vegetation, and bare soil. Thus, the performance of the conventional methods may be affected. For the PSUI method, the training samples used to establish the calibration model were derived from a MODIS image and an ETM+ classification image from the same day and the same area, which were under almost the same atmospheric conditions. Furthermore, the unmixing accuracies of MODIS images without atmospheric correction were good, whether or not the MODIS images were the same as that used for the calibration model (see Table 3). Thus, atmospheric correction was not necessary for the PSUI method, which could save time and reduce the workload for time series analysis with MODIS imagery. To examine the effectiveness of the PSUI method, it was compared with the PPI, N-FINDR, SMACC, and VCA methods using the same MODIS image without atmospheric correction. The conventional methods did not perform so well in this comparison experiment because they all require atmospheric correction.21-24 The PSUI method, which is based on third-order Bernstein basis functions and does not resort to extracting endmember spectra, has been shown to be effective in decomposing mixed pixels in MODIS data. However, it should be noted that this study was the first attempt to decompose mixed pixels by characterizing the spectral curves of the mixed pixels in MODIS data with a set of Bernstein basis functions. There are still some limitations to the PSUI method. First, the PSUI method is currently only suitable for decomposition into three general components (water body, vegetation, and bare soil) in images acquired by a coarse-resolution multispectral sensor (e.g., MODIS). It would not be able to decompose mixed pixels into specific vegetation or soil types. Future studies should apply the PSUI method to much more complicated ground feature situations. There are two such situations: (1) if some of the ground features have very similar spectral signatures, spatiotemporal information as well as spectral information from MODIS data should be utilized comprehensively; or (2) if the high reflectance of each ground feature appears at different wavelengths, Bernstein basis functions of a higher order should be utilized. Second, the calibration model, without atmospheric correction, might work only at low aerosol optical depth (AOD), as the shape of the reflectance spectra at the top of the atmosphere is highly dependent on the AOD. The impact of absorption and scattering by atmospheric aerosol on reflectance data varies with wavelength, which would change the shape of the spectral reflectance curves and should be corrected by an atmospheric correction algorithm [e.g., the fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) algorithm57]. A new calibration model should be built and applied based on MODIS data with atmospheric correction if the AOD is high.

Conclusions

In this paper, the PSUI method, which provides a solution that is different from previous work on the decomposition of mixed pixels, was proposed.
This method does not need to resort to extracting endmember spectra from MODIS data, which provides a new way of decomposing mixed pixels while assuring unmixing accuracy. In the PSUI method, the spectral integral area, i.e., the area enclosed by the spectral reflectance curves of ground features and the x-axis (in Cartesian coordinates), and a set of third-order Bernstein basis functions are applied to characterize the spectral curves of mixed pixels in a MODIS image, and the derived PSUIs (i.e., the coefficients of the basis functions) are used to represent the spectral characteristics of the mixed pixels. The PSUIs are then calibrated with the abundance values of the ground features from a high spatial resolution remote sensing image, such as a Landsat ETM+ image, which yields a calibration model for calculating the abundances of the components of every mixed pixel in MODIS images. The calibration model is applicable to MODIS images in different areas or at different times, as demonstrated by the experimental results using five sets of MODIS and Landsat ETM+/OLI images. The PSUI method was compared with four conventional methods, i.e., the PPI, N-FINDR, SMACC, and VCA; the comparison results show that the PSUI method outperforms the other four methods for decomposing mixed pixels in MODIS data. Although the PSUI method performs well for decomposing mixed pixels in MODIS images with low AOD into the three general categories of water body, vegetation, and bare soil, further study is needed to apply the PSUI method to MODIS images with much more complicated ground feature situations or high AOD.
Impact of the COVID-19 outbreak on cancer patient flow and management: experience from a large university hospital in Spain

The prompt expansion of COVID-19 contagion has led to a worldwide pandemic. By 28 April 2020, there had been 229 422 positive cases and 23 521 deaths in Spain due to COVID-19.1 This wave of infection has seriously impacted the activity of health provision centres, including large cancer centres.2 Illustratively, we had more than 1200 patients hospitalised at our institution due to COVID-19. In addition, the contagiousness of the infection and its severity impacted our clinical pathways and treatment strategies. Herein, we report the dramatic shift in the oncology activity at our department during a 5-week period (9 March-13 April 2020) as compared with the same calendar interval in 2019 (table 1). Overall, our Medical Oncology Department has experienced a remarkable drop in activity. The number of outpatient visits decreased by 23%. One of the most worrisome concerns is that new oncology referrals were reduced by 37%, and the number of patients enrolled in clinical trials decreased by 43%. These data would mean that nearly 4 out of 10 patients with cancer have been missed or their treatment delayed. Another significant fluctuation was in the number of patients and treatments administered in the outpatient treatment unit, which decreased by 20.8% and 37.9%, respectively (882 patients/1865 treatments in 2019 vs 698 patients/1157 treatments in 2020). Of interest, despite the reduction in treatments, the prescriptions of granulocyte colony-stimulating factor increased by 158% in March and 134% in April 2020.
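The year-on-year changes quoted above follow directly from the reported counts; the quick check below reproduces them (the small differences from the quoted 20.8% and 37.9% are due to rounding):

```python
# Reported outpatient-unit activity, 9 March-13 April of each year.
patients_2019, patients_2020 = 882, 698
treatments_2019, treatments_2020 = 1865, 1157

def pct_drop(before, after):
    """Percentage decrease from 'before' to 'after'."""
    return 100 * (before - after) / before

print(f"patients:   -{pct_drop(patients_2019, patients_2020):.1f}%")      # ~20.9%
print(f"treatments: -{pct_drop(treatments_2019, treatments_2020):.1f}%")  # ~38.0%
```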
Several expert-based guidelines and recommendations for the prioritisation and treatment of patients with cancer during the COVID-19 pandemic have recently been published.3 The impact on oncological outcomes may be anticipated. We have noticed a drop in cancer diagnoses, which may translate into inadequate early multidisciplinary cancer team treatment planning and implementation. We predict the observed reduction may worsen in the following months as a consequence of the recent shutdown of many specialties, such as radiology, endoscopic procedures and surgery. Delayed treatments may have consequences for the effectiveness of palliation but also for long-term cures. Intervals longer than 8 weeks after surgery have been associated with worse survival,4 and a deleterious effect on survival of delayed adjuvant therapy has been described in colorectal and breast cancer.5 In summary, many patients have already faced postponed diagnosis and treatment, and more is to come. Over time, this issue may emerge as another healthcare crisis.
"A test of our values": The Moldovan experience with COVID-19

In all cases, contexts matter. The 2020 global health crisis, known as the "COVID-19 pandemic", has drawn comparisons with earlier global health crises, most prominently the 1918-1919 "Spanish flu". It's a natural response to a substantially unknown and shared threat. But these comparisons are of little practical value in this case because the unique character of this threat and its impacts have been experienced differently by people in different societies according to their personal, social and particularly cultural contexts. Lessons can be developed from comparing these two spaces, which argues that cultural narratives remain a dominant feature of modern life even where powerful institutional structures have been constructed to secure the power of modern states. In this case, COVID-19 provided me an opportunity to examine these two spaces where contexts overlap and challenge assumptions about the authenticity and power of liberal governance.

Conflicted narratives of COVID-19

The manipulation of narratives to achieve preferred political outcomes is ubiquitous in human history. In this case, the COVID-19 narrative in modern institutional states rests on one of the pillars of modern state governance: the authority of science. However, this institutional narrative of health science has significantly less authority in nation-states with long and deeply embedded cultural histories. Institutional health science has offered information developed through the scientific method of observation and analysis of data, which itself is heavily dependent on the construction of institutions tethered to, and politically and economically subordinate to, modern states. This means that the COVID-19 narrative offered through modern institutions of science is not "objective", but subject to political manipulation by the political class that controls a particular state. This manipulation becomes most apparent in the way that narratives of institutional science are argued as empirical facts, rather than informed opinions, by modern state political actors, when health scientists themselves qualify their findings and subject them to rigorous peer debate. This manipulation of health science by modern state political actors has substantially undermined the authority not only of modern institutional science, but of the modern states who engage in this manipulation. The culturally based narrative of COVID-19 had its own strengths and weaknesses. Its greatest strength, which is now emerging as the manipulation of modern institutional science has been revealed, is that it relies not on knowledge that can be manipulated by modern state actors, but on the collective experience and values of a society, such as exists in Moldova but has been replaced by institutional knowledge in modern states, as has occurred in California. This reliance on collective experience increases the legitimacy of policy choices, even if they prove wrong over time. Thus, it has allowed Moldovan political leaders the luxury of adapting to a changing scientific narrative, rather than making them prisoners of an institutional politics of health science. However, a culturally based narrative of science requires a deeper and broader understanding of science as knowledge production, which can't be assumed but must be cultivated, and which exists in varying degrees in the culture of societies.
Whether or not these two competing narratives are in fact irreconcilable is a question not about the compatibility of the facts they employ, but about the outcomes each narrative seeks to secure. In that sense, both narratives begin with provable "truths" but migrate toward a treatment of those provable facts that promotes preferred outcomes. In this case, modern institutional states are constricted by questions of institutional political economy, while culturally defined nation-states are constrained by questions of social values. This isn't an "either/or" context, as modern institutional states must contend with culturally defined social values, and culturally defined nation-states are accountable for political-economic outcomes. Rather, it is a matter of how these states/nation-states define and manage their dominant sources of authority, which can be found in the histories of these competing narratives and their host states and nation-states. These histories reflect an institutional narrative of science tied to technological development, concentrations of political power, and colonial expansion, and a culturally defined science that serves social values unrelated to technological development and the expansion and concentration of political power. Arguably, this is also related to the size and internal cohesion of a state/nation-state, which in the case of COVID-19 shows small, internally cohesive nation-states, such as Cuba, Latvia, Moldova and Vietnam, outperforming their larger, more powerful modern institutional state neighbors in managing COVID-19.

The role of globalized communications in the response to COVID-19

It is almost impossible to exaggerate the important role played by global communications in the COVID-19 experience. As I watched the competing narratives of the modern institutional state and the culturally defined nation-state unfold, it became apparent that the virtual system of communications allowed by the global internet was reshaping traditional day-to-day systems of personal interaction. If San Francisco is representative of the global system of the internet, Moldova is representative of day-to-day personal interactions that use internet communications as a tool, rather than accepting the internet as a virtual world. This contrast is driven, in part, by the presence of deeply embedded cultural values that emphasize interpersonal communication in Moldova, which don't exist in California and most other modern, institutional state societies. The San Francisco Bay Area is the poster child for a society built by science and technology to serve the rising power of corporate telecommunications. It is almost second nature for the citizens of its "globalized" society to assume the internet reflects a "world community" with the modern institutional state, and particularly California, at its center. This encouraged those working in telecommunications to anoint themselves as the moral/intellectual leaders of this world, with the power (and responsibility) to manage the communications of this world community, defining what was truth and who was entitled to engage in public discourse about important subjects, such as but not limited to COVID-19. This quickly translated into defining institutional science knowledge as not just fact but "truth", while censoring all other discussions, including observations by frontline medical professionals who questioned the official narrative.
When the official institutional science narrative began to break down as new scientific knowledge emerged, the reaction of telecommunication "giants" such as Google, Facebook, Twitter, and YouTube was to aggressively "defund", isolate, and marginalize it, thus limiting the possibilities for informed judgements about COVID-19. In contrast, Moldova had a substantively different history, culture and politics with respect to its understanding of COVID-19. It is a culture of small towns and villages, defined by commonly held values of family, faith and community, with a tolerance for change and other cultures. It is now among the poorest countries in Europe, but during its long 7,500-year history it produced a technologically and socially developed cultural society, which during the Neolithic Cucuteni-Trypillian era built large cities, developed advanced agricultural methods and produced a rudimentary written language, all within an egalitarian society without social differentiation and division that lasted 2,500 years. It has successfully survived invasions by a long list of would-be imperial powers, which tried unsuccessfully to impose institutional political and economic structures, by adapting to the changes in the world around it. Thus, when COVID-19 came to Moldova it was met by an embedded system of values, not institutional structures, which led to protective measures that were both compassionate and socially acceptable. Even before it appeared, the Moldovan Health Ministry was carefully watching COVID-19, because attention to change is an adaptive strategy that has served to protect Moldovans for centuries. When the virus did appear, the Health Ministry quickly distributed masks, gloves and information, then consulted widely with business and community organizations before closing its borders and non-essential businesses. Also, from the beginning, the Moldovan Health Ministry released daily, detailed information about COVID-19 cases, listing not just raw numbers, but providing details about its victims' ages, location and pre-existing medical conditions to acknowledge and put a human face on those who died. There were never any food or other shortages in the markets, never any questions about providing health care to everyone, even to foreigners like me, and never any heavy-handed attempts to extend the authority of government.1 As of 3 September 2020, the Moldovan Health Ministry reported that 37,440 cases of COVID-19 infection had been confirmed. Of these, 26,575 people (70%) had been treated and released, and currently there are 10,141 people under medical supervision: 8,385 people in observation, 1,247 with mild cases, and 509 in severe condition. To date, 1,024 cases of COVID-19 (2.8%) have resulted in death.2 While these data represent a very good response by the Moldovan health care system to COVID-19 compared with other countries,3 they also expose the underlying failure of statistical reports to capture the actual risk COVID-19 posed. For example, an earlier report by the Moldovan Health Ministry on 23 June 2020 illuminated this problem when it addressed the reality that testing was very limited and primarily focused on people who had contact with the health system, while informal assessments were that at least 150,000 people in Moldova had been infected with COVID-19.
Adopting the informal assessment numbers suggests that only 0.03% of those infected died, an effect that is much closer to typical annual influenza deaths. Further, the official numbers do not account for those whose deaths can be attributed to predictably fatal underlying health conditions or to the shutdowns, which may be twice the number who died from the virus, nor for the long-term costs to society as a whole. Nor do they reveal the trade-off in responding to one group in society over another.4 As of 3 September 2020, 90% of businesses in Moldova have reopened. The streets and streetcars are again filled with people living their ordinary lives. Many continue to use masks in closed spaces and most continue to observe protocols that promote public health. Few, if any, are afraid of COVID-19, and even as some are angry about the avoidable impositions they suffered, most are informed about what COVID-19 is and isn't. And policymakers are confessing their shortcomings and pledging to do better if Moldova faces another health crisis. Change is easier in a small, culturally empowered country like Moldova, which may help explain why small countries, like Cuba, Latvia, Switzerland and Vietnam, appear to have been able to navigate COVID-19 with less stress and more success. In contrast, California is continuing its shutdown, keeping small businesses, parks, schools, the beach and even wilderness areas closed. But this action now lacks support from the broad scientific community, which has confirmed there is little risk to children and limited risk for people under the age of forty, and, as USC Professor Joel Hay reported, there is no evidence that social distancing anywhere other than in closed spaces prevents the spread of coronavirus.5 At the same time, civil unrest is growing as the economic crisis spawned by political policymaking takes its toll, as millions of Californians face eviction or foreclosure, thousands of small- and medium-sized businesses surrender to bankruptcy, and thousands of people choose to exit the Golden State.

Takeaways

The lessons of COVID-19 are only now beginning to be understood.6 These include:

• Lockdowns are justified only for a short period of time, and only for the purpose of assessing the risks.
• Science can only play an advisory role, because scientific knowledge is specialized and always a "work in progress" rather than a "truth".
• Both scientific and social narratives are important sources of power because of their roles in authorizing political choice, but power attracts manipulation by those who seek to advance these narratives for personal or political gain.
• Communication technologies are two-edged swords that offer powerful new tools to society but also pose complex new problems in managing those tools, which if misused can exacerbate social divisions and conflict.

As political crises spread through modern institutional states, a broad rethinking about the nature of politics is beginning, including about the oft-hidden role of cultural politics. But the authority of both institutional science and government has been damaged, and as the world returns to thinking about the future of what we call "progress", this damage will impose limits on their ability to exercise persuasive power, undermining their claims to represent the broad public interest.
From my experience witnessing COVID-19 from two highly contrasting contexts, the journey toward effective governance must begin with a large dose of humility, coupled with a commitment to inclusive rather than exclusive discussions and discourses. We humans are inventive, which is why we have survived on this planet. But invention is always a collective enterprise where inspiration is borrowed from many sources. We need science knowledge for this journey, but we cannot pursue it without the social cohesion that only cultural politics can provide. Thus, we cannot afford to invest in institutions, whether scientific or political, that are bound to narrow and privileged political classes, or that ignore and attack cultural forces that may oppose them: they will not be able to transcend the past and adapt to a changing world. Adaptation has always been the first principle of survival, as Moldovans have learned. Those who fail to learn this lesson in pursuit of power and control will always fail in the end, ultimately succumbing to the realities of human society. The question remaining is when and how that will happen.

05 September 2020, Chisinau, Moldova
Short Proofs of the Kneser-Lovász Coloring Principle

We prove that the propositional translations of the Kneser-Lovász theorem have polynomial size extended Frege proofs and quasi-polynomial size Frege proofs. We present a new counting-based combinatorial proof of the Kneser-Lovász theorem that avoids the topological arguments of prior proofs for all but finitely many cases for each k. We introduce a miniaturization of the octahedral Tucker lemma, called the truncated Tucker lemma: it is open whether its propositional translations have (quasi-)polynomial size Frege or extended Frege proofs.

Introduction

This paper discusses proofs of Lovász's theorem about the chromatic number of Kneser graphs, and the proof complexity of propositional translations of the Kneser-Lovász theorem. We give a new proof of the Kneser-Lovász theorem that uses a simple counting argument instead of the topological arguments used in prior proofs, for all but finitely many cases. Our arguments can be formalized in propositional logic to give polynomial size extended Frege proofs and quasi-polynomial size Frege proofs. Frege systems are sound and complete proof systems for propositional logic with a finite schema of axioms and inference rules. The typical example is a "textbook style" propositional proof system using modus ponens as its only rule of inference, and all Frege systems are polynomially equivalent to this system [7]. Extended Frege systems are Frege systems augmented with the extension rule, which allows variables to abbreviate complex formulas. The size of a Frege or extended Frege proof is measured by counting the number of symbols in the proof [7]. Frege proofs are able to reason using Boolean formulas, whereas extended Frege proofs can reason using Boolean circuits (see [9]). Boolean formulas are conjectured to require exponential size to simulate Boolean circuits; there is no known direct connection, but by analogy, it is generally conjectured that there is an exponential separation between the sizes of Frege proofs and extended Frege proofs. This is one of the important open questions in proof complexity; for more on proof complexity see e.g. [2,4,5,7,10,13]. As discussed by Bonet, Buss and Pitassi [2] and more recently by [1,6], we have hardly any examples of combinatorial tautologies, apart from consistency statements, that are conjectured to exponentially separate Frege and extended Frege proof size. These prior works discussed a number of combinatorial principles, including the pigeonhole principle and Frankl's theorem. Istrate and Crăciun [8] recently proposed the Kneser-Lovász principle as a candidate for exponentially separating Frege and extended Frege proof size. In this paper we give quasi-polynomial size Frege proofs of the propositional translations of the Kneser-Lovász theorem for all fixed k. Thus they do not provide an exponential separation of Frege and extended Frege proof size. Our proof is also interesting because it gives a new method of proving the Kneser-Lovász theorem. Prior proofs use (at least implicitly) a topological fixed-point lemma. The most combinatorial proof is by Matoušek [12] and is inspired by the octahedral Tucker lemma; see also Ziegler [14]. Our new proofs mostly avoid topological arguments and use a counting argument instead. These counting arguments can be formalized with Frege proofs. Indeed, one of the important strengths of Frege proofs is that they can reason about integer arithmetic.
These techniques originated in polynomial size Frege proofs of the pigeonhole principle [3], which used carry-save-addition representations for vector addition and multiplication in order to express and prove properties of integer operations in polynomial size. For the Kneser-Lovász theorem, the counting arguments reduce the general case to "small" instances of size n ≤ 2k^4. For fixed k, there are only finitely many small instances, and they can be verified by exhaustive enumeration. As we shall see, this leads to polynomial size extended Frege proofs, and quasi-polynomial size Frege proofs, for the Kneser-Lovász principles. It is surprising that the topological arguments can be largely eliminated from the proof of the Kneser-Lovász theorem. The only remaining use of topological arguments is to establish the "small instances". It would be interesting to give an additional argument that avoids having to prove the small instances separately. One possibility for this would be to adapt the proof based on the octahedral Tucker lemma to quasi-polynomial size Frege proofs. The first difficulty with this is that the octahedral Tucker lemma has exponentially large propositional translations. To circumvent this, we present a miniaturized version of the octahedral Tucker lemma called the truncated Tucker lemma. The truncated Tucker lemma has polynomial size propositional translations. We prove that the Kneser-Lovász tautologies have polynomial size constant depth Frege proofs if the propositional formulas for the truncated Tucker lemma are given as additional hypotheses. However, it remains open whether these truncated Tucker lemma principles have (quasi-)polynomial size Frege or extended Frege proofs.

The (n, k)-Kneser graph is defined to be the undirected graph whose vertices are the k-subsets of {1, . . . , n}; there is an edge between two vertices iff those vertices have empty intersection. The Kneser-Lovász theorem states that Kneser graphs have a large chromatic number:

Theorem 1 (Lovász [11]). Let n ≥ 2k > 1. The (n, k)-Kneser graph has no coloring with n − 2k + 1 colors.

It is well-known that the (n, k)-Kneser graph has a coloring with n − 2k + 2 colors (see e.g. the appendix to the arXiv version of this paper), so the bound n − 2k + 1 is optimal. For k = 1, the Kneser-Lovász theorem is just the pigeonhole principle. Istrate and Crăciun [8] noted that, for fixed values of k, the propositional translations of the Kneser-Lovász theorem have polynomial size in n. They presented arguments that can be formalized by polynomial size Frege proofs for k = 2, and by polynomial size extended Frege proofs for k = 3. This left open the possibility that the k = 3 case could exponentially separate the Frege and extended Frege systems. It was also left open whether the k > 3 cases of the Kneser-Lovász theorem give tautologies that require exponential size extended Frege proofs. As discussed above, the present paper refutes these possibilities.

The formulas $\mathrm{Kneser}^n_k$ are the natural propositional translations of the statement that there is no (n − 2k + 1)-coloring of the (n, k)-Kneser graph:

Definition 3. Let n ≥ 2k > 1, and m = n − 2k + 1. For $S \in \binom{[n]}{k}$ and i ∈ [m], the propositional variable $p_{S,i}$ has the intended meaning that vertex S of the Kneser graph is assigned the color i. The formula $\mathrm{Kneser}^n_k$ is

$$\bigwedge_{S \in \binom{[n]}{k}} \bigvee_{i \in [m]} p_{S,i} \;\rightarrow\; \bigvee_{i \in [m]} \bigvee_{\substack{S, T \in \binom{[n]}{k} \\ S \cap T = \emptyset}} \big( p_{S,i} \wedge p_{T,i} \big).$$

Section 2.1 gives the new proof of the Kneser-Lovász theorem; this is later shown to be formalizable with polynomial size extended Frege proofs.
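As a concrete illustration of Definition 3's setting and of the optimality remark above, the following sketch (our own, not from the paper) builds the standard (n − 2k + 2)-coloring, which colors a set by its least element capped at n − 2k + 2, and verifies by brute force that no two disjoint k-sets share a color.

```python
from itertools import combinations

def kneser_coloring(n, k):
    """Standard proper (n-2k+2)-coloring of the (n,k)-Kneser graph:
    c(S) = min(S) if min(S) <= n-2k+1, else the single extra color n-2k+2.
    Sets colored n-2k+2 live inside a (2k-1)-element set, so any two of
    them intersect; sets sharing a smaller color share their minimum."""
    m = n - 2 * k + 2
    return {S: min(min(S), m) for S in combinations(range(1, n + 1), k)}

def is_proper(c):
    # Adjacent vertices of the Kneser graph are exactly the disjoint k-sets.
    return all(c[S] != c[T]
               for S, T in combinations(c, 2)
               if set(S).isdisjoint(T))

n, k = 8, 3
c = kneser_coloring(n, k)
assert is_proper(c)
print(f"colors used: {len(set(c.values()))} (bound: {n - 2*k + 2})")
```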
Section 2.2 gives a slightly more complicated but more efficient proof, later shown to be formalizable with quasi-polynomial size Frege proofs. The next definition and lemma are crucial for Sects. 2.1 and 2.2. Given a coloring c, write $P_\ell$ for the color class $\{S \in \binom{[n]}{k} : c(S) = \ell\}$. Any two vertices in a color class $P_\ell$ have non-empty intersection. One way this can happen is for the color class to be "star-shaped": $P_\ell$ is star-shaped if there is some $i \in [n]$ that belongs to every member of $P_\ell$; such an i is called a central element of $P_\ell$. The next lemma bounds the size of color classes that are not star-shaped. It will be used in our proof of the Kneser-Lovász theorem to establish the existence of star-shaped color classes. The idea is that non-star-shaped color classes are too small to cover all $\binom{n}{k}$ vertices.

Lemma 7. Let c be a coloring of $\binom{[n]}{k}$. If $P_\ell$ is not star-shaped, then $|P_\ell| \le k^2 \binom{n-2}{k-2}$.

Proof. Suppose $P_\ell$ is not star-shaped. If $P_\ell$ is empty, the claim is trivial. So suppose $P_\ell \neq \emptyset$, and let $S_0 = \{a_1, \ldots, a_k\}$ be some element of $P_\ell$. Since $P_\ell$ is not star-shaped, there must be sets $S_1, \ldots, S_k \in P_\ell$ with $a_i \notin S_i$ for $i = 1, \ldots, k$. To specify an arbitrary element S of $P_\ell$, we do the following. Since S and $S_0$ have the same color, $S \cap S_0$ is non-empty. We first specify some $a_i \in S \cap S_0$. Likewise, $S \cap S_i$ is non-empty; we second specify some $a_j \in S \cap S_i$. By construction, $a_i \neq a_j$, so S is fully specified by the k possible values for $a_i$, the k possible values for $a_j$, and the $\binom{n-2}{k-2}$ possible values for the remaining members of S. Therefore, $|P_\ell| \le k^2 \binom{n-2}{k-2}$. ⊓⊔

Argument for Extended Frege Proofs

Let k > 1 be fixed. We prove the Kneser-Lovász theorem by induction on n. The base cases for the induction are n = 2k, . . . , N(k), where N(k) is the constant depending on k specified in Lemma 8. We shall show that N(k) is no greater than k^4. Since k is fixed, there are only finitely many base cases. Since the Kneser-Lovász theorem is true, these base cases can all be proved by a fixed Frege proof of finite size (depending on k). Therefore, in our proof below, we only show the induction step.

Lemma 8. There is an N(k) so that, for n > N(k), any (n − 2k + 1)-coloring of $\binom{[n]}{k}$ has at least one star-shaped color class.

Proof. Suppose that a coloring c has no star-shaped color class. Since there are n − 2k + 1 many color classes, Lemma 7 implies that

$$(n - 2k + 1)\, k^2 \binom{n-2}{k-2} \;\ge\; \binom{n}{k}. \qquad (1)$$

For fixed k, the left-hand side of (1) is $\Theta(n^{k-1})$ and the right-hand side is $\Theta(n^k)$. Thus, there exists an N(k) such that (1) fails for all n > N(k). Hence for n > N(k), there must be at least one star-shaped color class. ⊓⊔

We are now ready to give our first proof of the Kneser-Lovász theorem.

Proof (of Theorem 1, except for base cases). Fix k > 1. By Lemma 8, there is some N(k) such that for n > N(k), any (n − 2k + 1)-coloring c of $\binom{[n]}{k}$ has a star-shaped color class. As discussed above, the cases n ≤ N(k) are handled by exhaustive search and the truth of the Kneser-Lovász theorem. For n > N(k), we prove the claim by infinite descent. In other words, we show that if c is an (n − 2k + 1)-coloring of $\binom{[n]}{k}$, then there is some c′ which is an ((n − 1) − 2k + 1)-coloring of $\binom{[n-1]}{k}$. By Lemma 8, the coloring c has some star-shaped color class $P_\ell$ with central element i. Without loss of generality, i = n and ℓ = n − 2k + 1. Let c′ be the restriction of c to the domain $\binom{[n-1]}{k}$. This discards the central element n of $P_\ell$, and thus all vertices with color ℓ. Therefore, c′ is an ((n − 1) − 2k + 1)-coloring of $\binom{[n-1]}{k}$. This completes the proof. ⊓⊔

Argument for Frege Proofs

We now give a second proof of the Kneser-Lovász theorem.
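Lemma 8's counting argument can be illustrated numerically: using the reconstruction of inequality (1) above, the sketch below (our own) finds, for small k, the first n at which (1) fails, which gives a concrete admissible value for N(k).

```python
from math import comb

def bound_holds(n, k):
    """Inequality (1): can n-2k+1 non-star-shaped classes, each of size at
    most k^2 * C(n-2, k-2), still cover all C(n, k) vertices?"""
    return (n - 2*k + 1) * k**2 * comb(n - 2, k - 2) >= comb(n, k)

for k in range(2, 6):
    n = 2 * k
    while bound_holds(n, k):
        n += 1
    # (1) keeps failing for all larger n as well, since the ratio
    # C(n,k) / [(n-2k+1) k^2 C(n-2,k-2)] grows like n / k^3 for large n.
    print(f"k={k}: (1) first fails at n={n}")
```

For k = 3, for example, this reports the first failure at n = 50, comfortably below the k^4 = 81 bound mentioned above.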
The proof above required n − N(k) rounds of infinite descent to transform a Kneser graph on n nodes to one on N(k) nodes. Our second proof replaces this with only O(log n) many rounds, and this efficiency will be key for formalizing the proof with quasi-polynomial size Frege proofs in Sect. 3.2. We refine Lemma 8 to show that for n sufficiently large, there are many (i.e., a constant fraction of) star-shaped color classes. The idea is to combine the upper bound of Lemma 7 on the size of non-star-shaped color classes with the trivial upper bound of $\binom{n-1}{k-1}$ on the size of star-shaped color classes.

Lemma 9. Fix k > 1 and 0 < β < 1. Then there exists an N(k, β) such that for n > N(k, β), if c is an (n − 2k + 1)-coloring of $\binom{[n]}{k}$, then c has at least $\beta \frac{n}{k}$ many star-shaped color classes.

Proof (of Theorem 1, except for base cases). Fix k > 1. By Lemma 9 with β = 1/2, if n > N(k, 1/2) and c is an (n − 2k + 1)-coloring of $\binom{[n]}{k}$, then c has at least n/2k many star-shaped color classes. We prove the Kneser-Lovász theorem by induction on n. The base cases are for 2k ≤ n ≤ N(k, 1/2), and there are only finitely many of these, so they can be exhaustively proven. For n > N(k, 1/2), we structure the induction proof as an infinite descent. In other words, we show that if c is an (n − 2k + 1)-coloring of $\binom{[n]}{k}$, then there is some c′ that is an ((n − n/2k) − 2k + 1)-coloring of $\binom{[n - n/2k]}{k}$. For simplicity of notation, we assume n/2k is an integer. If this is not the case, we really mean to round up to the nearest integer ⌈n/2k⌉. By permuting the color classes and the nodes, we can assume w.l.o.g. that the n/2k color classes $P_\ell$ for ℓ = n − n/2k − 2k + 2, . . . , n − 2k + 1 are star-shaped, and each such $P_\ell$ has central element ℓ + 2k − 1. That is, the last n/2k many color classes are star-shaped and their central elements are the last n/2k nodes in [n]. (It is possible that some star-shaped color classes share central nodes; in this case, additional nodes can be discarded so that n/2k are discarded in all.) Define c′ to be the coloring of $\binom{[n - n/2k]}{k}$ which assigns the same colors as c. ⊓⊔

When formalizing the above argument with quasi-polynomial size Frege proofs, it will be important to know how many iterations of the procedure are required to reach the base cases, so let us calculate this. After s iterations of this procedure, we have a $\big((\frac{2k-1}{2k})^s n - 2k + 1\big)$-coloring of the k-subsets of a set of $(\frac{2k-1}{2k})^s n$ nodes. We pick s large enough so that $(\frac{2k-1}{2k})^s n$ is less than N(k, 1/2). In other words, since k is constant, $s \ge \log_{2k/(2k-1)}\!\big(n / N(k, 1/2)\big)$ will suffice, and only O(log n) many rounds of the procedure are required. We do not know if the bound in Lemma 9 is optimal or close to optimal. An appendix in the arXiv version of this paper discusses the best examples we know of colorings with large numbers of non-star-shaped color classes.

Polynomial Size Extended Frege Proofs

We sketch the formalization of the argument in Sect. 2.1 as a polynomial size extended Frege proof, establishing Theorem 4. We concentrate on showing how to express concepts such as "star-shaped color class" with polynomial size propositional formulas. For space reasons, we omit the straightforward details of how (extended) Frege proofs can prove properties of these concepts. Fix values for k and n with n > N(k). We describe an extended Frege proof of $\mathrm{Kneser}^n_k$. We have variables $p_{S,j}$ (recall Definition 3), collectively denoted just p. The proof assumes $\mathrm{Kneser}^n_k(p)$ is false, and proceeds by contradiction.
The main step is to define new variables p′ and prove that $\mathrm{Kneser}^{n-1}_k(p')$ fails. This will be repeated until reaching a Kneser graph over only N(k) nodes. For this, let Star(i, ℓ) be a formula that is true when i ∈ [n] is a central element of the color class $P_\ell$; namely,

$$\mathrm{Star}(i, \ell) := \bigwedge_{S \in \binom{[n]}{k},\; i \notin S} \neg p_{S,\ell}.$$

We use $\mathrm{Star}(\ell) := \bigvee_{i \in [n]} \mathrm{Star}(i, \ell)$ to express that $P_\ell$ is star-shaped. The extended Frege proof defines the instance of the Kneser-Lovász principle $\mathrm{Kneser}^{n-1}_k$ by discarding one node and one color. The first star-shaped color class $P_\ell$ is discarded; accordingly, the discarded ℓ is picked out by the condition $\mathrm{Star}(\ell) \wedge \bigwedge_{\ell' < \ell} \neg\mathrm{Star}(\ell')$. The node to be discarded is the least central element i of the discarded $P_\ell$, picked out by $\mathrm{Star}(i, \ell) \wedge \bigwedge_{i' < i} \neg\mathrm{Star}(i', \ell)$. After discarding the node i and the color class $P_\ell$, the remaining nodes and colors are renumbered to the ranges [n − 1] and [n − 2k], respectively. In particular, the "new" color j (in the instance of $\mathrm{Kneser}^{n-1}_k$) corresponds to the "old" color $j^{-\ell}$ (in the instance of $\mathrm{Kneser}^n_k$), where $j^{-\ell}$ equals j if j < ℓ, and j + 1 otherwise. And, if $S = \{i_1, \ldots, i_k\} \in \binom{[n-1]}{k}$ is a "new" vertex (for the $\mathrm{Kneser}^{n-1}_k$ instance), then it corresponds to the "old" vertex $S^{-i} \in \binom{[n]}{k}$ (for the instance of $\mathrm{Kneser}^n_k$), obtained from S by replacing each element $i_j \ge i$ with $i_j + 1$. For each $S \in \binom{[n-1]}{k}$ and j ∈ [(n − 1) − 2k + 1], the extended Frege proof uses the extension rule to introduce a new variable $p'_{S,j}$. As seen in the definition by extension, $p'_{S,j}$ is defined by cases, one for each possible pair i, ℓ of nodes and colors such that the node i is the least central element of the $P_\ell$ color class, where $P_\ell$ is the first star-shaped color class; in each such case, $p'_{S,j}$ is set equal to $p_{S^{-i}, j^{-\ell}}$. The extended Frege proof then shows that $\neg\mathrm{Kneser}^n_k(p)$ implies $\neg\mathrm{Kneser}^{n-1}_k(p')$, i.e., that if the variables $p_{S,j}$ define a coloring, then the variables $p'_{S,j}$ also define a coloring. For this, it is necessary to show that there is at least one star-shaped color class; this is provable with a polynomial size extended Frege proof (even a Frege proof) using the construction of Lemma 8 and the counting techniques of [3]. The extended Frege proof iterates this process of removing one node and one color until it is shown that there is a coloring of $\binom{[N(k)]}{k}$. This is then refuted by exhaustively considering all graphs with ≤ N(k) nodes. ⊓⊔

Quasi-polynomial Size Frege Proofs

This section discusses some of the details of the formalization of the argument in Sect. 2.2 as quasi-polynomial size Frege proofs, establishing Theorem 5. First we will form an extended Frege proof, then modify it to become a Frege proof. As before, the proof starts with the assumption that $\mathrm{Kneser}^n_k(p)$ is false. As we describe next, the extended Frege proof then introduces variables p′ by extension so that $\mathrm{Kneser}^{n-n/2k}_k$ is false. This process will be repeated O(log n) times. The final Frege proof is obtained by unwinding the definitions by extension. For a set X of formulas and t > 0, let "|X| < t" denote a formula that is true when the number of true formulas in X is less than t. "|X| < t" can be expressed by a formula of size polynomially bounded by the total size of the formulas in X, using the construction in [3]. "|X| = t" is defined similarly. The formulas Star(i, ℓ) and Star(ℓ) are the same as in Sect. 3.1. A color ℓ is now discarded if it is among the least n/2k star-shaped color classes. The discarded nodes are the least central elements of the discarded color classes. The remaining, non-discarded colors and nodes are renumbered to form an instance of $\mathrm{Kneser}^{n-n/2k}_k$.
For this, the formula RenumNode(i′, i) is true when the node i′ is the ith node that is not discarded; similarly, RenumColor(j′, j) is true when the color j′ is the jth color that is not discarded. For each $S = \{i_1, \ldots, i_k\} \in \binom{[n - n/2k]}{k}$ and j ∈ [(n − n/2k) − 2k + 1], we define $p'_{S,j}$ by extension, using RenumNode and RenumColor to translate the new vertex S and the new color j back to the original nodes and colors. The Frege proof then argues that if the variables $p_{S,j}$ define a coloring, then the variables $p'_{S,j}$ define a coloring, i.e., that $\neg\mathrm{Kneser}^n_k(p) \rightarrow \neg\mathrm{Kneser}^{n-n/2k}_k(p')$. The main step for this is proving that there are at least n/2k star-shaped color classes by formalizing the proof of Lemma 9; this can be done with polynomial size Frege proofs using the counting techniques from [3]. After that, it is straightforward to prove that, for each $S \in \binom{[n - n/2k]}{k}$ and j ∈ [(n − n/2k) − 2k + 1], the variable $p'_{S,j}$ is well-defined, and that the p′ collectively falsify $\mathrm{Kneser}^{n-n/2k}_k$. This is iterated O(log n) times until fewer than N(k, 1/2) nodes remain. The proof concludes with a hard-coded proof that there are no such colorings of the finitely many small Kneser graphs. To form the quasi-polynomial size Frege proof, we unwind the definitions by extension. Each definition by extension was polynomial size; they are nested to a depth of O(log n). So the resulting Frege proof is quasi-polynomial size.

Our definition and proof of the truncated Tucker lemma borrow techniques and notation from Matoušek [12]. For A ⊆ [n], let $A^{\le k}$ denote the set of the least k elements of A. By convention $\emptyset^{\le k} = \emptyset$, but otherwise the notation is used only when |A| ≥ k. The Tucker lemma uses the subset relation ⊆ on subsets of [n], but the truncated Tucker lemma instead uses a stronger partial order ⪯ on $\binom{[n]}{k} \cup \{\emptyset\}$.

Definition 12. Let ⪯ be the partial order on sets in $\binom{[n]}{k} \cup \{\emptyset\}$ defined by: $A \preceq B$ iff A = ∅ or $A = (A \cup B)^{\le k}$.

Lemma 13. The relation ⪯ is a partial order with ∅ its least element.

Proof. It is clearly reflexive. For anti-symmetry, $A_1 \preceq A_2$ and $A_2 \preceq A_1$ imply that $A_1 = (A_1 \cup A_2)^{\le k} = A_2$ when both are non-empty. Transitivity is verified by a similar calculation. ⊓⊔

Here $B^n$ denotes the set of pairs (A, B) of disjoint subsets of [n] that are not both empty; λ is antipodal when λ(B, A) = −λ(A, B); and two pairs are complementary when one is contained in the other componentwise and λ assigns them opposite values. $B^n_k$ is the analogous set of pairs with $A, B \in \binom{[n]}{k} \cup \{\emptyset\}$, and a pair of its elements is k-complementary when $(A_1, B_1) \preceq (A_2, B_2)$ componentwise and $\lambda(A_1, B_1) = -\lambda(A_2, B_2)$.

Theorem 15 (Tucker lemma). If λ : $B^n$ → {±1, ±2, . . . , ±n} is antipodal, then there are two elements in $B^n$ that are complementary.

Theorem 15 implies the following truncated analogue:

Theorem 16 (Truncated Tucker). Let n ≥ 2k > 1. If λ : $B^n_k$ → {±2k, . . . , ±n} is antipodal, then there are two elements in $B^n_k$ that are k-complementary.

For a proof of Theorem 15, see [12]. An appendix to the arXiv version of this paper proves Theorem 16 from Theorem 15. The truncated Tucker lemma has polynomial size propositional translations. For each $(A, B) \in B^n_k$, and for each i ∈ {±2k, . . . , ±n}, let $p_{A,B,i}$ be a propositional variable with the intended meaning that $p_{A,B,i}$ is true when λ(A, B) = i. The following formula Ant(p) states that the map is total and antipodal:

$$\mathrm{Ant}(p) := \bigwedge_{(A,B) \in B^n_k} \Big( \bigvee_{i \in \{\pm 2k, \ldots, \pm n\}} p_{A,B,i} \Big) \wedge \bigwedge_{(A,B) \in B^n_k} \bigwedge_{i} \big( p_{A,B,i} \rightarrow p_{B,A,-i} \big).$$

The following formula Comp(p) states that there exist two elements in $B^n_k$ that are k-complementary:

$$\mathrm{Comp}(p) := \bigvee_{\substack{(A_1,B_1), (A_2,B_2) \in B^n_k \\ (A_1,B_1) \preceq (A_2,B_2)}} \; \bigvee_{i \in \{\pm 2k, \ldots, \pm n\}} \big( p_{A_1,B_1,i} \wedge p_{A_2,B_2,-i} \big).$$

The truncated Tucker tautologies are defined to be Ant(p) → Comp(p). (We could add an additional hypothesis, that for each A, B there is at most one i such that $p_{A,B,i}$, but this is not needed for the Tucker tautologies to be valid.) There are fewer than $n^{2k}$ members (A, B) in $B^n_k$. Hence, for fixed k, there are only polynomially many variables $p_{A,B,i}$, and the truncated Tucker tautologies have size polynomially bounded by n. On the other hand, the propositional translation of the usual Tucker lemma requires an exponential number of propositional variables in n, since the cardinality of $B^n$ is exponential in n.

Proof (Theorem 1 from the truncated Tucker lemma). Let $c : \binom{[n]}{k} \rightarrow \{2k, \ldots, n\}$
be an (n − 2k + 1)-coloring of $\binom{[n]}{k}$. We show that this implies the existence of an antipodal map λ on $B^n_k$ that has no k-complementary pairs. Let ≤ be a total order on $\binom{[n]}{k} \cup \{\emptyset\}$ that refines the partial order ⪯. Define λ(A, B) to be c(A) if A > B, and −c(B) if B > A. We argue that there are no k-complementary pairs in $B^n_k$ with respect to λ. Suppose there are, say $(A_1, B_1)$ and $(A_2, B_2)$. Since λ must assign these opposite signs, either $A_1 < B_1 \le B_2 < A_2$ or $B_1 < A_1 \le A_2 < B_2$. In the former case it must be that $c(B_1) = c(A_2)$, and in the latter case that $c(A_1) = c(B_2)$. Since $B_1 \cap A_2$ and $A_1 \cap B_2$ are empty in either case, we have a contradiction, since c was assumed to be a coloring. ⊓⊔

The above proof of the Kneser-Lovász theorem from the truncated Tucker lemma can be readily translated into polynomial size constant depth Frege proofs.

Question 17. Do the propositional translations of the truncated Tucker lemma have short (extended) Frege proofs?
Propofol abolishes torsade de pointes in different models of acquired long QT syndrome

There is conflicting evidence regarding the impact of propofol on cardiac repolarization and the risk of torsade de pointes (TdP). The purpose of this study was to elucidate the risk of propofol-induced TdP and to investigate the impact of propofol in drug-induced long QT syndrome. 35 rabbit hearts were perfused employing a Langendorff setup. 10 hearts were perfused with increasing concentrations of propofol (50, 75, 100 µM). Propofol abbreviated action potential duration (APD90) in a concentration-dependent manner without altering spatial dispersion of repolarization (SDR). Consequently, no proarrhythmic effects of propofol were observed. In 12 further hearts, erythromycin was employed to induce prolongation of cardiac repolarization. Erythromycin led to an amplification of SDR and triggered 36 episodes of TdP. Additional infusion of propofol abbreviated repolarization and reduced SDR. No episodes of TdP were observed with propofol. Similarly, ondansetron prolonged cardiac repolarization in another 13 hearts. SDR was increased and 36 episodes of TdP occurred. With additional propofol infusion, repolarization was abbreviated, SDR reduced and triggered activity abolished. In this experimental whole-heart study, propofol abbreviated repolarization without triggering TdP. On the contrary, propofol reversed the prolongation of repolarization caused by erythromycin or ondansetron, reduced SDR and thereby eliminated drug-induced TdP.

Up to 22% of patients treated on intensive care units experience ventricular arrhythmias and have a higher mortality compared with patients without heart rhythm disturbances.1 Many risk factors have been identified over the past decades; they include individual characteristics of each patient but also external factors such as the use of pharmacological agents that impair repolarization reserve. Since propofol is commonly used in anaesthesia and intensive care medicine, the impact of propofol alone or in combination with other drugs that influence cardiac electrophysiology is of great interest. As a consequence, some studies have already investigated the electrophysiological effects of propofol in vivo and in vitro: intravenous application of propofol has a direct impact on several ion currents, including I_Na, some potassium channels (I_Ks, I_to, I_K1), and I_Ca,L.2-5 However, previous clinical and experimental studies report conflicting data regarding its impact on ventricular repolarization and potential proarrhythmic effects. Higashijima et al. described a significant abbreviation of the QTc interval during anaesthetic induction mediated by propofol.6 In contrast, a recent study demonstrated an increase in ventricular repolarization duration, calculated as the Fridericia-corrected QT interval, with propofol.7 Additionally, the T_peak-to-T_end interval (T_peak-T_end) was significantly amplified in the presence of propofol in this study. An increased T_peak-T_end interval is a surrogate for an amplified transmural dispersion of repolarization, which in turn represents a major risk factor for drug-induced arrhythmias.8 In contrast to this study, no significant changes in QTc or T_peak-T_end were reported during propofol infusion in children.9
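Since the conflicting clinical reports above hinge on rate-corrected QT values, it may help to recall how the two common corrections are computed; the sketch below uses the standard Bazett and Fridericia formulas (general electrophysiology knowledge, not code from this study).

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett: QTc = QT / sqrt(RR), with QT in ms and RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia: QTc = QT / RR^(1/3); less rate-dependent at fast rates."""
    return qt_ms / rr_s ** (1 / 3)

# Example: QT = 380 ms at a heart rate of 75 bpm (RR = 0.8 s).
qt, rr = 380.0, 0.8
print(f"Bazett:     {qtc_bazett(qt, rr):.0f} ms")      # ~425 ms
print(f"Fridericia: {qtc_fridericia(qt, rr):.0f} ms")  # ~409 ms
```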
Some other experimental studies have investigated effects of propofol in different models of long QT syndrome (LQTS): in healthy and transgenic LQT2 and LQT3 rabbits, propofol administration resulted in an increase of the QT index and led to arrhythmia-related death in two LQT2 rabbits 5. In another study employing a model of (drug-induced) LQTS, propofol reduced the action potential duration increase mediated by erythromycin 10.

In conclusion, existing studies report conflicting data concerning propofol-induced changes in repolarization duration and heterogeneity and provocation of arrhythmias. Therefore, the purpose of the present study was to elucidate propofol's impact in a sensitive model of repolarization disorders. Previous experimental studies have solely investigated the effect of propofol infusion on ventricular repolarization duration and other ECG markers (e.g. T peak -T end) but did not investigate other proarrhythmic mechanisms such as dispersion of repolarization or action potential shape. Thus, this study aimed at elucidating further potential mechanisms in arrhythmia initiation induced by propofol.

Methods

All experimental protocols were approved by the local animal care committee (Landesamt für Natur, Umwelt und Verbraucherschutz Nordrhein-Westfalen, Germany) and were carried out in accordance with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996). Since hearts served as their own control, no randomization of the hearts was performed.

The experimental setting of the antegradely-perfused Langendorff heart has been described extensively earlier 11. In short, 35 hearts of female New Zealand white rabbits were explanted and mounted to a Langendorff apparatus. Spontaneously beating hearts were perfused by a warmed, oxygenated (95% O 2, 5% CO 2) modified Krebs-Henseleit buffer (NaCl 118 mM, NaHCO 3 24.88 mM, d-glucose 5.55 mM, KCl 4.70 mM, Na-pyruvate 2 mM, CaCl 2 1.80 mM, KH 2 PO 4 1.18 mM, MgSO 4 0.83 mM) at a constant flow (52 mL/min) with a pressure around 90 mmHg. Monophasic action potentials (MAP) were acquired by eight specifically designed MAP catheters that were placed endo- and epicardially. Hearts were immersed in a warmed tissue bath, thereby enabling recording of a volume-conducted 12-lead ECG. Spontaneously beating hearts were mechanically AV node-ablated using surgical tweezers in order to perform the following stimulation protocol. Hearts were stimulated at seven different cycle lengths (900-300 ms), thus obtaining cycle-length-dependent QT intervals and action potential durations (APD 90). APD 90 was measured between the fastest upstroke and 90% of repolarization. Premature extra-stimuli (S 2 and S 3) were delivered to the hearts in order to assess ventricular vulnerability and to determine effective refractory periods (ERP) at different basic cycle lengths (900-300 ms, see Fig. 1). In case sustained ventricular arrhythmias occurred after short-coupled extra-stimuli, hearts were defibrillated, and the pacing protocol was halted for 5 min to assure recovery of the hearts. Post-repolarization refractoriness (PRR) was calculated as the difference between ERP and APD 90. Spatial dispersion of repolarization was determined by the difference of maximum and minimum of the APD 90 of the eight MAPs. Configuration of action potentials is displayed by the ratio of APD 90 /APD 50 12,13.
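As a concrete illustration of these derived quantities, the short sketch below (Python, with entirely hypothetical catheter readings and ERP value that are not taken from the study data) computes SDR, PRR and the APD 90 /APD 50 shape ratio exactly as defined above:

    # Hypothetical APD90 and APD50 readings (ms) from the eight MAP catheters
    apd90 = [168, 172, 180, 158, 165, 190, 150, 176]
    apd50 = [112, 118, 121, 105, 110, 128, 101, 119]
    erp = 195  # effective refractory period (ms) at one basic cycle length

    # Spatial dispersion of repolarization: max minus min APD90 of the 8 MAPs
    sdr = max(apd90) - min(apd90)

    # Post-repolarization refractoriness: ERP minus APD90 (first catheter here)
    prr = erp - apd90[0]

    # Action potential configuration: larger APD90/APD50 ratios indicate a
    # more triangular shape, smaller ratios a more rectangular shape
    shape = [a90 / a50 for a90, a50 in zip(apd90, apd50)]

    print(f"SDR = {sdr} ms, PRR = {prr} ms, APD90/APD50 = {shape[0]:.2f}")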
Hearts were divided up into three different groups. In the first group, propofol in ascending concentrations (50, 75, 100 µM) was infused after generating baseline data, and the protocol was repeated for each concentration. In this experimental arm, premature extra-stimuli were solely delivered after pacing the hearts at a basic cycle-length of 500 ms in order to abridge the experimental protocol. The second group was perfused with 300 µM erythromycin after generating baseline data. Afterwards, hearts were additionally treated with 75 µM propofol. The last group was infused with 5 µM ondansetron, and thereafter 75 µM propofol was added. Before continuing the experimental protocol with a new drug or concentration, hearts were equilibrated for 15 min.

Statistics. Electrograms and action potentials were recorded on a multi-channel recorder and digitalized at a rate of 1 kHz with a 12-bit resolution. Variables are shown as mean ± standard deviation. Statistical analyses were performed using SPSS Statistics for Windows (version 24.0). Drug effects on APD 90, QT interval, dispersion of repolarization, action potential configuration (APD 90 /APD 50), ERP and PRR were analysed employing the Wilcoxon signed rank test. P values < 0.05 were considered statistically significant.

Results

Data are expressed as mean ± standard deviation. Under baseline conditions, three episodes of ventricular tachycardia or fibrillation were inducible by programmed ventricular stimulation (S 2 and S 3). No episodes occurred with 50 µM propofol (p = ns), while 2 episodes of VT/VF were inducible under the influence of 75 µM propofol (p = ns). With the highest propofol concentration (100 µM), 6 episodes of VT/VF were inducible (p = ns).

Erythromycin significantly prolonged the QT interval (Fig. 3), while APD 90 was just slightly increased from 168 ± 18 ms to 171 ± 28 ms (p = ns). Propofol reversed these effects and abbreviated the QT interval to 267 ± 18 ms (p < 0.01) and APD 90 to 158 ± 23 ms (p < 0.01). Spatial dispersion of repolarization was significantly amplified in the presence of erythromycin (baseline: 40 ± 16 ms; erythromycin: 48 ± 18 ms, p < 0.01) and reduced by the additional treatment with propofol to 42 ± 12 ms (p = 0.01 compared to erythromycin). There was a trend towards an increase of APD 90 /APD 50 after infusion of erythromycin, from 1.49 ± 0.12 to 1.52 ± 0.19 (p = 0.13), representing a triangulation of action potential shape. With propofol, a non-significant decrease of the APD 90 /APD 50 ratio was observed (1.48 ± 0.10, p = 0.21). Under the influence of erythromycin, 36 episodes of torsade de pointes occurred in the spontaneously beating, AV-blocked hearts, whereas no episodes were observed after additional infusion of propofol.

No episodes of torsade de pointes occurred in the spontaneously beating, AV-blocked hearts under baseline conditions. With ondansetron, 36 episodes of torsade de pointes were observed (p < 0.02, Fig. 5). Again, propofol treatment eliminated torsade de pointes in each heart (0 episodes, p < 0.02).

Discussion

To our best knowledge, this is the first experimental whole-heart study investigating propofol's effects on cardiac electrophysiology in different models of acquired long QT syndrome. This study demonstrates that sole propofol infusion slightly abbreviates ventricular repolarization without triggering torsade de pointes. Furthermore, administration of propofol on top of proarrhythmic agents such as erythromycin or ondansetron reduces repolarization and spatial dispersion of repolarization and thereby eliminates torsade de pointes.

Impact of propofol on cardiac electrophysiology. In the present study, propofol induced a significant abbreviation of cardiac repolarization as indicated by APD 90 and QT interval.
This is in line with the majority of former clinical studies investigating repolarization duration under the influence of propofol 14. Previous data concerning propofol's influence on T peak -T end are equivocal. While a recent clinical study showed a prolonged T peak -T end interval with propofol 7, no changes were observed in another trial with a paediatric study cohort 9. The T peak -T end interval has been proposed as a surrogate for transmural dispersion of cardiac repolarization 15. In contrast to the QT interval, which only moderately predicts the occurrence of torsade de pointes, an increased transmural dispersion of repolarization is a good indicator of drug-induced arrhythmias 8,13. This study clearly indicates a stable dispersion of repolarization during propofol treatment even at supratherapeutic concentrations. A stable dispersion of repolarization (even in the presence of a prolonged cardiac repolarization) is linked to a safe electrophysiologic profile of several antiarrhythmic drugs 16. Furthermore, the shape of the action potential was transformed by the highest concentration of propofol to a more rectangular shape, as indicated by a decrease in APD 90 /APD 50. A rectangulation of the action potential reduces the risk of arrhythmias and is mediated by an acceleration of phase 3 repolarization, which reduces the time in the window voltage for calcium channel reactivation and subsequent triggered activity 12. Thus, no arrhythmias were observed in bradycardic hearts even with the highest concentration of propofol used. As a consequence, this study highlights a good safety profile of propofol.

With propofol, post-repolarization refractoriness was significantly lengthened. Prolongation of PRR protects the myocardium against premature beats; it is therefore antiarrhythmic 11,17 and a common pharmacological property of class I antiarrhythmic drugs. Consequently, ventricular vulnerability as tested by programmed ventricular stimulation was not increased with propofol.

In this study, supratherapeutic concentrations of propofol have been employed to determine adverse drug effects. Mean propofol concentration during anaesthesia induction is 11.7 (± 5.0) µg/mL, which equals approximately 65.6 µM 18. However, since genetic polymorphisms in hepatic metabolizing enzymes (e.g. CYP2C9) may further increase propofol concentrations during anaesthesia 18, higher plasma concentrations might be achieved. Therefore, concentrations of up to 100 µM have been employed in this study.

Models of acquired long QT syndrome. With erythromycin, a marked prolongation of repolarization duration, an amplification of spatial dispersion of repolarization and a trend towards a triangulation of action potential shape were observed. This is in line with previous studies in which the I Kr inhibitor erythromycin was employed to simulate LQT2 syndrome 11. Similar results have been achieved for ondansetron, which also inhibits hERG (human Ether-a-go-go Related Gene) potassium channels 19. Accordingly, ondansetron augmented repolarization duration and amplified spatial dispersion of repolarization 20. Erythromycin and ondansetron changed the shape of the action potential to a more triangular shape, which can be explained by an inhibition of I Kr (Fig. 6). This leads to a slowing of phase 3 repolarization, which in turn prolongs the time frame in which early afterdepolarizations and subsequent torsade de pointes can be generated 12.
Consequently, early afterdepolarizations and torsade de pointes were observed with both drugs. In contrast, infusion of propofol reversed the changes induced by erythromycin or ondansetron. To be more precise, propofol abbreviated repolarization and reduced spatial dispersion of repolarization in both groups. Previous studies demonstrated that a decrease of spatial dispersion of repolarization is a crucial antiarrhythmic mechanism in acquired long QT syndrome 11,17. Recently, Bossu and colleagues 21 elegantly demonstrated that reduction of spatial dispersion of repolarization induced by the I Na,L inhibitor GS967 predominantly inhibits perpetuation of torsade de pointes in the chronic atrioventricular block dog. It is noteworthy that early afterdepolarizations, which are regarded as the initiating mechanism, were only slightly suppressed. Consequently, the prevention of perpetuation, rather than the prevention of initiation of the arrhythmia, can be regarded as the antiarrhythmic mechanism of GS967 in that study 21. Similarly, the crucial antiarrhythmic action of propofol in this study might not be inhibition of triggered activity but rather prevention of perpetuation of torsade de pointes tachycardia. There was a non-significant trend towards a rectangular action potential shape with additional propofol treatment. As a consequence, triggered activity (early afterdepolarizations or torsade de pointes) occurred neither with erythromycin nor with ondansetron once propofol had been added. In the earlier study of drug-induced LQTS, however, no further mechanistic investigations were performed, and no arrhythmias were recorded due to the experimental setup 10. These findings were confirmed in a clinical setting in which propofol reversed QT interval prolongation induced by sevoflurane 22. Surprisingly, in transgenic LQT2 rabbits, propofol prolonged repolarization and subsequently triggered torsade de pointes 5. Even though the above-mentioned studies indicated different effects of propofol in LQTS-linked arrhythmias, one would actually expect similar results, since inhibition of I Kr by either erythromycin or ondansetron is likely to result in electrophysiologic effects similar to those observed in LQT2. Of note, the electrophysiologic effects of propofol in acquired LQTS observed in this study are comparable to those obtained for the sodium current inhibitor mexiletine 17.

Limitations

The present study was conducted in isolated rabbit hearts. Therefore, a direct extrapolation to humans is not possible. However, previous studies indicate that the rabbit heart is a reasonable model for studying cardiac ion channel function and especially for investigating cardiac repolarization disorders 23. Furthermore, the rabbit heart is particularly suitable for studying complex ventricular arrhythmias like ventricular fibrillation due to its effective size, which relates the size of the heart to the wavelength of the arrhythmia 24. Following this concept as proposed by Panfilov 24, the effective size of the rabbit heart is similar to that of the human heart, leading to a similar arrhythmia pattern in both species. However, this model does not allow precise statements concerning direct effects on ion channels. Reduction of repolarization duration induced by propofol can probably be explained by the predominant inhibition of sodium and calcium channels that overrides the effects of potassium channel block.
Accordingly, distinct effects of propofol on different human cardiac channels have been described, and this multi-channel inhibition most likely explains the results observed in this study: previous patch clamp studies reported that propofol inhibits human L-type calcium currents 25, human sodium channels 3 as well as human potassium channels 26.

Conclusion

The present study demonstrates a safe electrophysiologic profile of propofol even at high concentrations. Propofol abbreviated cardiac repolarization and did not bear the risk of proarrhythmia. Quite the contrary, propofol abbreviated repolarization duration in different models of acquired long QT syndrome, reduced spatial dispersion of repolarization and thereby eliminated drug-induced torsade de pointes. As a consequence, propofol might even be beneficial in drug-induced QT prolongation by reducing the risk of torsade de pointes.

Data availability

The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
Liquidity in Credit Networks with Constrained Agents

In order to scale transaction rates for deployment across the global web, many cryptocurrencies have deployed so-called "Layer-2" networks of private payment channels. An idealized payment network behaves like a Credit Network, a model for transactions across a network of bilateral trust relationships. Credit Networks capture many aspects of traditional currencies as well as new virtual currencies and payment mechanisms. In the traditional credit network model, if an agent defaults, every other node that trusted it is vulnerable to loss. In a cryptocurrency context, trust is manufactured by capital deposits, and thus there arises a natural tradeoff between network liquidity (i.e. the fraction of transactions that succeed) and the cost of capital deposits. In this paper, we introduce constraints that bound the total amount of loss that the rest of the network can suffer if an agent (or a set of agents) were to default; equivalently, we study how the network changes if agents can support limited solvency guarantees. We show that these constraints preserve the analytical structure of a credit network. Furthermore, we show that aggregate borrowing constraints greatly simplify the network structure and, in the payment network context, achieve the optimal tradeoff between liquidity and amount of escrowed capital.

INTRODUCTION

Practical implementations of markets require easily usable, liquid currency. But as transactions become larger and more frequent, moving and storing a physical asset, like gold or dollar bills, becomes very expensive. Instead, people transfer money via promises to pay later. Banks, for example, used to issue physical bank notes in exchange for gold deposits. Individuals write checks to each other. Some nongovernmental organizations issue their own currencies. Retailers issue prepaid gift cards.

Importantly, these debt notes are tradeable independent of the original issuer as a form of currency. But to a first approximation, rational individuals will only accept a debt note if they trust the original issuer to satisfy the obligation. 1 For example, bank notes are usable in lieu of deposited currency, but only so long as the bank does not collapse. In the United States, the Federal Deposit Insurance Corporation ensures that every bank's notes are redeemable in quantities up to $250,000. This enables individuals who use different banks to transact freely, but individuals might want to make sure that they do not have more than $250,000 deposited in a single bank.

In order to send a payment, then, a payer needs to exchange her notes for notes acceptable to the payee. Moreover, careful individuals might track the total value of notes owned from a single issuer, to mitigate exposure to the default of a single agent.

However, our scenario need not solely consist of consumers transacting using notes issued by large institutions. Individuals can also issue their own debt obligations, via, for example, checks or interpersonal promises, and furthermore can trade these with each other. Generally speaking, most individuals trust their friends to repay small debts, but might worry about being repaid if one friend repeatedly tries to borrow lots of money.
Formally, then, consider a system in which agents u 1 , ..., u n are trading debt notes, and where individual u i is willing to accept w (u i , u j ) notes from individual u j . The graph in figure 1 provides a visual representation of a credit network with 5 agents, where an arrow from agent A to agent B labeled with 5 means that w (A, B) = 5. As a real-world example, in an economically healthy country, individuals are typically willing to accept functionally infinite numbers of notes from the central bank, and can transact by trading these notes. The resulting network appears star-like, with the central bank at the center.

Such a model is known as a Credit Network. In a general credit network, autonomous agents can issue their own notes, and other agents can choose whether to accept these notes as payment, i.e. they can decide whether to trust any other agent, and for how much. Money is sent along paths of trust, and reduces the amount of "residual trust" along the path; the transaction fails if no path exists. Trust is replenished by a transaction in the opposite direction. Ghosh et al. [9], De Figueiredo et al. [8] and Karlan et al. [11] independently formulated this model, and Dandekar et al. [6] formalized the model's mathematics. Credit networks have also been used in practice for applications built on existing trust networks. Examples include P2P systems that enable trading goods across a social network [13], Ostra [15], a system to combat email spam, and the Yootle [18], a currency system for quantifying utility in group decision making.

More recently, credit networks are in use to improve cryptocurrency transaction rates and latency. In most blockchains, all network participants must agree on a global shared state, which limits transaction rate and can cause hours of latency.

Cryptocurrencies enable trustless, anonymous transactions. But in reality, some pairs of agents might know each other and wish to transact repeatedly. If these pairs trusted each other, they could transact without putting any information on a blockchain. Instead, they could privately track the net balance of their transactions and settle this balance only as necessary.

One innovation of the Lightning network [17] and analogous "Layer 2" networks on other cryptocurrencies is a way of using escrow to build bilateral relationships that are analogous to a traditional credit network's trust-based transaction channels, without actually requiring real trust for solvency. Individuals need not put every transaction on the blockchain; rather, they need only to threaten to put their transactions on the blockchain. Such threats are made credible if two parties put money into escrow on the blockchain, and the net balance between the parties does not exceed the amount of money in escrow. The result is a large network of private channels of specific "trust" capacities, where transactions can route along paths in the graph. Lightning, therefore, is exactly an implementation of a credit network.

For example, consider the Lightning-style network in figure 2. Every undirected edge has, in cryptocurrency parlance, a fixed amount of capital in escrow. The two parties to an edge possess certificates that record how much of the escrowed capital belongs to each party, and one party can "pay" the other by altering this balance. No party can own more than 100% of the escrowed capital. Thus, if A owns 5 units of the money escrowed on edge (A, B), B can accept up to but no more than 5 units of money from A.
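A minimal sketch of this balance-shifting mechanic (hypothetical Python; the class and method names are invented for illustration and are not Lightning's actual protocol):

    class Channel:
        def __init__(self, a, b, escrow_a, escrow_b):
            self.parties = (a, b)
            # balance[x]: how much of the escrowed capital x currently owns
            self.balance = {a: escrow_a, b: escrow_b}

        def pay(self, payer, payee, amount):
            # The payee can accept at most the payer's current share, i.e.
            # w(payee, payer) = balance[payer] in credit network terms.
            if self.balance[payer] < amount:
                return False  # transaction fails; no partial payments
            self.balance[payer] -= amount
            self.balance[payee] += amount
            return True

    ab = Channel("A", "B", escrow_a=5, escrow_b=0)
    assert ab.pay("A", "B", 5)       # A can send up to 5 units to B
    assert not ab.pay("A", "B", 1)   # A now owns none of the escrow
    assert ab.pay("B", "A", 3)       # trust is replenished in reverse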
We can model the Lightning network, therefore, as a credit network where, in this case, w (B, A) = 5. In fact, the set of transactions possible in the Lightning-style network in figure 2 is exactly the set possible in the credit network in figure 1.

A given credit network cannot always resolve every possible transaction. A node that is bankrupt, for example, cannot send additional money. In cryptocurrency applications, when a transaction cannot be resolved by an overlaid credit network, it typically must instead be resolved on-chain. On-chain transaction rates are typically quite limited. The ratio of the number of transactions that a credit network can resolve to the number that it cannot, therefore, acts as a multiplier on the effective transaction rate of a blockchain.

However, placing capital into escrow to secure an edge is expensive. Typically, more capital in escrow means a higher probability of transaction success (henceforth, liquidity, i.e. the fraction of transactions which succeed, given an exogenous transaction distribution), so agents must balance liquidity against escrow costs.

In this paper, we study how global guarantees on agent behavior (beyond the bilateral trading restrictions from the credit network) can alter the operation of a credit network. Just as the guarantees on debt fulfillment provided by the FDIC streamline real-world transactions, repayment guarantees in network lending contexts or in cryptocurrency contexts (equivalently, restrictions on an agent's global borrowing) can achieve the optimal tradeoff between liquidity and escrowed capital.

Formally, we study the liquidity of a credit network in which every node is disallowed from borrowing more than some quantity in aggregate from its neighbors; we call these node constraints. More generally, we study constraints on the total amount of "net borrowing" between any set of nodes and the rest of the network; we call these predicate constraints. In addition to being natural in their own right, they have specific advantages in many real-life situations that are well modeled by credit networks. We give three examples:

(1) In Lightning, an aggregate node constraint would allow pairwise relationships to be truly trust-based and not based on pairwise escrow; each node could be subject to an aggregate node constraint, and secure its relationships by putting just the aggregate amount in escrow. Such a system can be implemented via multi-party smart contracts. As we show here, when every node has such a constraint, the system as a whole achieves the optimal tradeoff between liquidity and escrow costs.

(2) The popular app SplitWise [1] allows a group of friends to track shared expenses. A process called "simplify your debts" cancels debts along cycles. This "cycle-canceling" is an essential aspect of credit networks, and SplitWise can be modeled as a credit network with infinite trust capacities. We believe that node constraints will greatly increase the robustness and usefulness of SplitWise, without substantially decreasing liquidity.
(3) The cryptocurrency Stellar [14] uses credit networks in two different ways. First, it allows "anchor" nodes to issue tokens representing claims on fiat currency. Users can then issue "trustlines" declaring how much of each token they are willing to hold. The resulting network of issued notes and lines of trust is very close to a credit network. Note that Stellar allows token issuers to lock the issuing account permanently. This fixes the supply of a token, in effect implementing a node constraint. And second, Stellar is in the process of building a Layer 2 protocol like Lightning. Greater liquidity in this network would mean cheaper payments and forex trades.

In [6], Dandekar et al. analyze the liquidity of a network for a few classes of graphs of interest, and use computational simulations to conjecture liquidity when analysis is intractable. In this work, we extend their results to new classes of networks that can model agent behavior under interesting classes of constraints. Constraints break an analytical tool fundamental to [6,10]; we show here how to analyze credit networks and their constrained variants with a new set of analytical tools.

In section 4, we analyze the liquidity of several natural classes of constrained graphs, showing a tight connection between edge expansion and liquidity. We then show that imposing node constraints not only preserves liquidity but also simplifies network structure and achieves the optimal tradeoff between liquidity and number of edges.

As an example, any network that extends D total units of credit between two agents u and v has liquidity between that pair at most 1 − 1/D. Note that the graph that achieves this consists of D parallel edges between u and v, and thus the liquidity between u and any third vertex w is 0. In a d-regular graph with edge expansion β (where edges have capacity 1 and the transaction matrix is uniform), the total credit available to each node is d, but the best known bounds [10] give average liquidity only 1 − 2/β. If nodes are constrained to borrow or lend at most β/2, then the total credit available to each node is β and the pairwise liquidity lies between 1 − 1/β and 1 − 2/β, achieving the optimal liquidity tradeoff for every pair simultaneously.

Finally, we remark on some applications to Lightning, particularly how this tightened tradeoff can substantially reduce Lightning's escrow costs, and some open problems related to credit networks.

THE CREDIT NETWORK MODEL

A configuration of a Credit Network is a directed graph G = (V , E) along with a map w (u, v) ≥ 0 denoting the amount of v's currency that a node u is willing to accept from v. In this article, all credit values will be integral. For convenience, we say that if an edge (u, v) ∉ E, then w (u, v) = 0 and vice versa.

Suppose that agents u and v are transacting only with each other, and suppose u tries to send one unit of its currency to v. If w (v, u) = 0, then v is unwilling to accept the note from u, and the transaction fails. But if w (v, u) = k > 0, then v is willing to accept the note. Afterwards, v will be only willing to accept an additional k − 1 notes from u, and thus w (v, u) decreases by 1. Conversely, v now owns one note from u that u must honor, and thus could send w (u, v) + 1 total notes to u. Hence, w (u, v) increases by 1, and the total trust c (u, v) = w (u, v) + w (v, u) is constant. As such, we can refer to a Credit Network as the undirected analogue of a configuration. Note that a credit network has many configurations.
We call a transaction between neighbors as above a one-hop transaction. More generally, a multi-hop transaction of value X is a payer u, a payee v, and a path (p 0 = u, p 1 , ..., p t = v) from u to v. The transaction is valid if w (p i+1 , p i ) ≥ X for 0 ≤ i < t, and performing the transaction means performing a one-hop transaction of value X along every edge (p i , p i+1 ). This process is analogous to performing an augmenting path update in a max-flow computation. For example, the configuration of figure 3 is the result of routing one unit from A to E along the route A-B-C-D-E, starting at the configuration in figure 1.

Depending on context, we may refer to an edge in a credit network as "trust," or "possesses note," or "net borrowing." These should be thought of as equivalent. Every agent trusts the value of the notes they issue, so if some other agent v possesses agent u's issued note, then agent u necessarily trusts that v can send one unit of payment back to u. Net borrowing or lending is relative to some ground state. However, if we declare that a particular state in some implementation is the neutral state where nobody has transferred any debt notes, then an agent u's "net lending" is the net amount of notes that u has transferred to others; equivalently, the net borrowing is how much of the total trust capacity (in the ground state) u has used.

Properties of Credit Networks

For use as a payment method, agents care primarily about whether money can be sent in the current network configuration. The particular details of the configuration in question matter far less. This suggests the following definition:

Definition 2.1 (Transaction Equivalence). Two configurations C 1 and C 2 of a credit network are transaction-equivalent if, for any list τ of transactions, all of the transactions in τ can be successfully performed in sequence starting at C 1 if and only if they can be performed starting at C 2.

This definition will be useful later, but is not immediately useful for understanding the structure of the space of credit network configurations. Consider as a demonstrative example a cycle on n vertices where each edge has capacity 1, and the configuration where all edges have capacity 1 in the direction towards a vertex y and away from a vertex x. Then clearly y can route 1 unit of money to x by two distinct routes, by routing either "clockwise" or "counterclockwise". After routing such a payment, all edges will be oriented either clockwise or counterclockwise. Then, for any other vertices w and z, no matter which route y chose, w can route exactly one unit of money to z.

In fact, the configurations where all edges are routed either clockwise or counterclockwise are transaction-equivalent. Moreover, if in one of these configurations a vertex routes a payment to itself along the cycle, the network will reach the other configuration of the pair. This motivates the following definition.

Definition 2.2 (Cycle Equivalence; Definition 1, [6]). Two configurations are cycle-equivalent if and only if one is reachable from the other by routing payments along cycles.
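The one-hop and multi-hop rules can be summarized in a short sketch (hypothetical Python; the CreditNetwork class and its methods are invented for illustration, not taken from the paper):

    class CreditNetwork:
        def __init__(self):
            self.w = {}  # w[(u, v)]: notes of v's currency that u will accept

        def set_credit(self, u, v, amount):
            self.w[(u, v)] = amount
            self.w.setdefault((v, u), 0)

        def pay(self, path, x=1):
            # A multi-hop transaction of value x from path[0] to path[-1] is
            # valid iff every receiver p_{i+1} will accept x notes from p_i.
            hops = list(zip(path, path[1:]))
            if any(self.w.get((b, a), 0) < x for a, b in hops):
                return False  # no residual trust along this route
            for a, b in hops:
                self.w[(b, a)] -= x                         # trust consumed
                self.w[(a, b)] = self.w.get((a, b), 0) + x  # trust replenished
            return True

    # Routing one unit A -> B -> C -> D -> E, with hypothetical capacities:
    net = CreditNetwork()
    for u, v in [("B", "A"), ("C", "B"), ("D", "C"), ("E", "D")]:
        net.set_credit(u, v, 5)
    print(net.pay(["A", "B", "C", "D", "E"], 1))  # True

Note how the per-edge update preserves the total trust w (u, v) + w (v, u), exactly as in the one-hop description above.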
In the above example, two configurations are cycle-equivalent if and only if they are also transaction-equivalent. In fact, this correspondence holds for general graphs.

Lemma 2.3 (Lemma 2, [6]). Two credit network configurations C 1 and C 2 are transaction-equivalent if and only if C 1 and C 2 are cycle-equivalent.

An analogue of this lemma also applies to configuration changes resulting from processing a transaction.

Lemma 2.4 (Route Independence; Theorem 3, [6]). The cycle-equivalence class that results from routing a payment from a vertex x to a vertex y starting at some configuration C is constant no matter the choice of route.

In a broad sense, our object of study is the performance of a credit network, in terms of its ability to resolve attempted transactions. In most contexts where a credit network could be used, transactions arise from some exogenous process. For example, many cryptocurrency transactions arise from online commerce or in response to real-world price fluctuations, not from the internal state of the Lightning network.

We will define the liquidity of a credit network, therefore, as the chance that a random transaction will succeed in a random configuration. However, this measurement will depend heavily on choices of transaction and configuration distributions. For a real-world system settling real-world transactions, the relevant distribution on configurations is the distribution that arises from performing transactions drawn from the real-world exogenous transaction distribution.

Consider, then, the Markov chain on the space of configurations where at each step, a payer vertex x and a payee vertex y are chosen with probability proportional to λ xy ∈ R ≥0 , and x pays one unit of money to y if possible. The stationary distribution thus gives the relevant configuration distribution.

Definition 2.5 (Liquidity). The liquidity of a credit network between vertices x and y is the probability that there exists a directed path from x to y of capacity at least 1 in a configuration drawn from the stationary distribution of the induced Markov chain.

The distribution can be complicated and depends on exact transaction rates. However, if rates are symmetric (λ xy = λ yx ), then the stationary distribution of the Markov chain is uniform over all reachable cycle-equivalence classes (Theorem 5, [6]). For the rest of this paper, we will assume that a unique stationary distribution exists; this happens if there are not two sets of agents that never transact with each other. Liquidity analysis thus reduces to counting these classes.

The following notation will be useful for the rest of the discussion: the score S (c) v of a vertex v in a configuration c. For convenience, we will write S v to mean S (c) v when either the specific configuration is clear from context or when we refer to a large class of configurations.

A successful transaction always decreases the score of the payer and increases the score of the payee by the same amount. When the payment is along a cycle, then, the score vector is invariant. Hence, for a given credit network, a score vector uniquely captures a cycle-equivalence class, and Kleitman and Winston [12] show that the number of score vectors on a graph is equal to the number of forests of that graph (where a forest is an acyclic subset of edges). Furthermore, Proposition 2.1 of [10] shows that the number of cycle-equivalent states where x can pay y is equal to the number of forests that place x and y in the same connected component. The liquidity analysis of [10] crucially relies on this correspondence.
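Since the liquidity definition is operational, it can also be estimated by simulating the Markov chain just described. A minimal sketch (hypothetical Python, reusing the illustrative CreditNetwork sketch above; the estimate is accurate only once the chain has mixed, a caveat discussed below):

    import itertools, random

    def find_route(net, nodes, u, v, x=1):
        """Breadth-first search for a route of residual capacity >= x;
        by Lemma 2.4 the particular route found does not matter."""
        parent, frontier = {u: None}, [u]
        while frontier:
            a = frontier.pop(0)
            if a == v:
                path = []
                while a is not None:
                    path.append(a)
                    a = parent[a]
                return path[::-1]
            for b in nodes:
                if b not in parent and net.w.get((b, a), 0) >= x:
                    parent[b] = a
                    frontier.append(b)
        return None

    def estimate_liquidity(net, nodes, steps=100_000, seed=0):
        """Empirical per-pair success rates along one trajectory of the
        chain, with uniform (hence symmetric) transaction rates."""
        rng = random.Random(seed)
        pairs = list(itertools.permutations(nodes, 2))
        tried, ok = {}, {}
        for _ in range(steps):
            u, v = rng.choice(pairs)
            tried[(u, v)] = tried.get((u, v), 0) + 1
            route = find_route(net, nodes, u, v)
            if route is not None:
                net.pay(route)
                ok[(u, v)] = ok.get((u, v), 0) + 1
        return {p: ok.get(p, 0) / n for p, n in tried.items()}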
CONSTRAINED CREDIT NETWORKS

3.1 Node Constraints

The credit network models transactions in a real-world trust network. However, the model only accounts for independent bilateral relationships. But a lender might also care about a borrower's total outstanding obligations. Conversely, one agent might want to limit her total lending. More generally, suppose that each individual in a graph G = (V , E) wishes to limit her total lending to the other agents, in addition to her bilateral lending limits.

Let the aggregate limit on an individual v be c v , and suppose that the network is constrained such that in every valid configuration, ∀v, S v ≤ c v . If k v is the score of v in the initial configuration, when no debt notes have been issued, this constraint means v is disallowed from issuing more than c v − k v notes.

Theorem 3.1. Suppose that the credit network system is required to remain in cycle-equivalence classes where ∀v, S v ≤ c v . Then the following properties over the set of cycle-equivalence classes satisfying these constraints are maintained: (1) Route independence. (2) Cycle-equivalence ⟺ transaction-equivalence (for reachable cycle-equivalent states). (3) If the transaction distribution is symmetric, the stationary distribution of the induced Markov chain on reachable configurations is uniform.

Proof. Theorem 3.1 follows directly from Theorem 3.3, to be proved later. The following section gives an intuitive demonstration. □

This theorem shows that independent restrictions on node behavior preserve most useful properties of credit networks. The main property lost is the correspondence between forests and cycle-equivalence classes. However, constraints can provide additional structure that often more than makes up for this loss.

But first, we give a constructive proof of Theorem 3.1 by showing that the individual node constraints can be modeled by a standard credit network.

Intuitively, an agent v borrowing money is akin to routing flow to v in the graph. The maximum amount of flow that can be routed into v, then, is the min-cut of the graph that isolates v. To demonstrate Theorem 3.1, we give a construction of a network gadget, illustrated in figure 4, that separates each agent from the others with a small min-cut while preserving the rest of the graph.

Let G ′ be built from G = (V , E) by, for each vertex v ∈ V , adding a "fake" vertex F (v) connected only to v by a new edge of capacity c v . The starting configuration of the network will be the starting configuration of G, with w (v, F (v)) = k v . When agents x and y transact, they route transactions from F (x) to F (y).

Then the score of F (v) in G ′ is the score that v would have in G if agents had used G instead of G ′. Because every transaction involving a vertex v runs through (v, F (v)), v cannot lend more than k v in total.

Observe that because no transactions originate from within V , in any collection of states reachable from each other, the score of every v ∈ V is constant. As such, conditioned on the choice of the starting configuration of the credit network, we can uniquely identify every reachable cycle-equivalent state with a score vector over only F (V ).
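In code, the gadget construction reads as follows (hypothetical Python, extending the illustrative CreditNetwork sketch above; the names are invented):

    def add_node_constraints(net, limits, initial_scores):
        """limits[v] = c_v; initial_scores[v] = k_v. Attaches a fake vertex
        F(v) to each constrained vertex v by one edge of capacity c_v."""
        fake = {}
        for v, c_v in limits.items():
            f = ("F", v)                  # the fake vertex F(v)
            fake[v] = f
            k_v = initial_scores.get(v, 0)
            net.w[(v, f)] = k_v           # w(v, F(v)) = k_v, as above
            net.w[(f, v)] = c_v - k_v     # remainder of the capacity c_v
        return fake

    # Agents x and y now transact by routing from F(x) to F(y), so every
    # payment touching v must cross the bottleneck edge (v, F(v)).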
Because this network operates exactly as a vanilla credit network in which half the vertices never perform transactions, the route-independence property and the property that transaction-equivalence is the same as cycle-equivalence on reachable states both hold. Additionally, a small modification of the proof of Theorem 5 of [6] shows that the Markov chain on this credit network starting at that start configuration is uniform over reachable score vectors.

We now show that this combinatorial preservation is maintained in a more general, expressive notion of credit network constraint.

Figure 5: A credit network with an aggregate constraint on a group. Here, the group in blue is not allowed to have its aggregate indegree, relative to vertices outside the group, exceed 12 (in this configuration, the aggregate indegree is 10).

Group Limits and Arbitrary Predicates

Suppose that a business owner applies for a loan. When assessing default risk, the lender would likely care about the individual's other lending or borrowing, as discussed previously. However, a default of one member of an organization is likely correlated with the default of other members of that group. As such, a lender might be concerned about the total borrowing of a group of agents, and might wish to impose an aggregate borrowing limit on the whole group. Consider for example the credit network in figure 5, where agents in the blue box are not allowed to borrow too much in aggregate from the group outside the box.

Although we know of no network gadget for enforcing this property, satisfaction of the property can still be checked efficiently.

More generally, we can study the dynamics of a credit network with broader lending restrictions imposed. A network designer might like to require, for example, that agent v 1 can pay agent v 2 but only if it owes less than a certain amount to v 3.

In fact, even when no gadget exists, any predicate that is well-defined on cycle-equivalence classes will preserve the properties in Theorem 3.1.

Definition 3.2 (Well-formed Predicate). A Boolean predicate P on configurations of a credit network is well-formed if, given cycle-equivalent configurations c 1 , c 2 , P (c 1 ) = P (c 2 ).

Note that the total amount that a group of nodes has borrowed from other nodes is invariant within a cycle-equivalence class. Hence, restrictions on group aggregate borrowing, as in the above example, are well-formed predicates. Of course, Boolean combinations of well-formed predicates are also well-formed predicates. (Not all "natural" predicates are strictly well-formed. Consider, for example, "A is willing to lend $10 to B, but only if B's debt to C is less than $5," which relies on states of links that might vary within a cycle-equivalence class. Such a constraint can be made well-formed by asking instead whether there exists a cycle-equivalent configuration satisfying the original constraint. Informally, we conjecture that most constraints of interest fit within this framework and are computable with max-flow computations.) In fact, any Boolean function well-defined on the set of score vectors is a well-formed predicate.

Theorem 3.3. Given any well-formed predicate P on the cycle-equivalence classes of a credit network, the following properties hold in the corresponding constrained credit network where P is enforced: (1) Route independence. (2) Cycle-equivalence ⟺ transaction-equivalence (for reachable cycle-equivalent states). (3) If the transaction distribution is symmetric, the stationary distribution of the induced Markov chain on reachable configurations is uniform.

Proof. See Appendix A. □
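A minimal sketch of enforcing one such predicate, an aggregate borrowing limit on a group (hypothetical Python; reading group borrowing off the change in residual credit relative to the initial configuration is an illustrative bookkeeping convention, not the paper's notation):

    def group_net_borrowing(net, initial_w, group):
        """Net notes the group has issued to vertices outside it, measured
        as the drop in residual credit extended into the group since the
        initial configuration. Invariant within a cycle-equivalence class."""
        return sum(initial_w.get((u, v), 0) - w_uv
                   for (u, v), w_uv in net.w.items()
                   if u not in group and v in group)

    def pay_with_group_limit(net, initial_w, group, limit, path, x=1):
        """Perform the payment, check the (well-formed) predicate, and roll
        the payment back along the same route if it is violated."""
        if not net.pay(path, x):
            return False
        if group_net_borrowing(net, initial_w, group) > limit:
            net.pay(list(reversed(path)), x)  # undo; route choice is free
            return False
        return True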
LIQUIDITY ANALYSIS

It now remains to study the impact that constraints have on the liquidity of credit networks. In particular, we will focus on credit networks with node constraints. We also show an interesting combinatorial difference between constrained and unconstrained networks. For all theorems in this section, transaction rates are symmetric and a unique stationary distribution over cycle-equivalence classes exists (that is, there are not disjoint sets of vertices that never transact with each other), so the distribution over cycle-equivalence classes is uniform (Theorem 5, [6]).

Before beginning, note that if an arbitrary predicate can be evaluated efficiently, then liquidity can be estimated experimentally by simply simulating a sequence of random transactions, as in the Markov chain used to define liquidity. The authors of [6] used such simulations to conjecture liquidity in analytically intractable classes of (unconstrained) credit networks; the caveat here is that these Markov chains lack general but meaningful mixing time bounds.

Trees

Liquidity can be exactly computed in credit networks that have a simple structure. In particular, a natural dynamic programming algorithm computes liquidity in a node-constrained tree.

Theorem 4.1. In a tree with node constraints, liquidity can be computed in time polynomial in the size of the graph and in the maximum capacity along an edge.

In fact, this algorithm extends to graphs that are close to being tree-like, in the sense of having low treewidth [2]. (For a detailed explanation of a similar algorithm, see [16].)

Theorem 4.2. Suppose a graph G = (V , E) with node constraints has treewidth k, and let S = max v Σ u ∈Γ(v ) c (v, u). There exists an algorithm to compute liquidity in time poly(|V |, S^k, 2^(k^2)).

Star Graph

Moving from liquidity computation to liquidity analysis, consider as a first example the class of star graphs, where every edge runs from an external vertex v i to the central vertex u with capacity c i . This type of graph will be useful later, when we show that nontrivial node constraints make constrained credit networks on arbitrary graphs functionally equivalent to a constrained star.

Without loss of generality, observe that the edge nodes need no extra constraint, as any extra constraint is equivalent to a decrease in capacity on the associated edge. Similarly, the center vertex can be constrained to perform no transactions whatsoever, if we add an extra outside vertex and have any transactions involving the center go to this vertex, as in figure 4.

Theorem 4.3. When a star graph is constrained such that its central vertex u has score s u = ⌊Σ i c i /2⌋, the steady-state failure probability between any two vertices i and j is at most 4/(c i + c j ). Moreover, the steady-state failure probability is at least 2/(c i + c j + 2).

Expander Graphs

Liquidity is intuitively related to the edge expansion of the underlying graph. After all, route independence means that transactions in a credit network are akin to single-commodity flows in a directed graph. Such a flow will fail if it hits a bottleneck, such as if the source or destination is in a poorly-connected set of vertices, or if the graph's min-cut is small.

More specifically, a transaction of size k from u to v will fail if and only if there is some cut of the graph separating u ∈ A ⊂ V from v ∈ B ⊂ V such that k units of flow cannot move from A to B.
From another viewpoint, the collective capacity of the edges from A to B gives an implicit group constraint on A (and B). A predicate constraint on A will thus not change network behavior unless it supersedes this implicit constraint. The interesting parameter regimes for analysis, therefore, are those where many sets of nodes have superseding group constraints.

Rather than study arbitrarily-valued group constraints on every subset of vertices, we will study node constraints; constraints on single vertices collectively give constraints on every set of vertices. As such, one interesting parameter regime for node constraints is where the range of a node's net balance is decreased from its degree to below the edge expansion of the graph.

Let h(G) be the edge expansion of a graph G = (V , E). We take the edge expansion of a graph to mean the minimum, over S ⊂ V with 0 < |S| ≤ |V|/2, of ∂(S)/|S|, where ∂(S) is the total capacity of edges leaving S. Let d (v) be the weighted degree of a vertex v.

Theorem 4.4. Let G = (V , E) be a credit network, and for each v ∈ V , let 0 ≤ r (v) ≤ h(G) be some integer. If every node's score is restricted to lie between (d (v) − r (v))/2 and (d (v) + r (v))/2, then the constrained credit network is equivalent to the star credit network H where the central node u has a fixed score of S u = Σ v ∈V r (v)/2 and c (v, u) = r (v).

Corollary 4.5. When r (v) = ⌊h(G)⌋ for all v, a constrained expander graph has, for every pair of vertices, liquidity between 1 − 1/(⌊h(G)⌋ + 1) and 1 − 2/⌊h(G)⌋.

Proof. See Appendix D. □

For comparison, [10] shows that the average liquidity in an unconstrained graph is at least 1 − 2/h(G), but the proof requires several more pages of analysis. What is surprising here is that the liquidity between two specific vertices does not particularly depend on details of edge connections to those vertices within the graph. h(G) is rounded because edge capacities are integral.

Monotonicity

A credit network can be thought of as a specialized network for transferring financial commodities. Some kinds of networks for commodity transfer, such as road traffic, are subject to what is typically referred to as "Braess's Paradox" [3]. In short, the paradox is the observation that adding roads in a road network can reduce the overall throughput of the network, when drivers choose routes selfishly.

It would be highly undesirable if this paradox existed in Lightning-style credit network implementations. In the anonymous world of cryptocurrencies, there is no obvious feasible manner of implementing anything other than selfish route choices. It could be possible, then, that bad actors could attack the network and, for example, drive up transaction prices.

We would like, therefore, to prove that the addition of an edge to a graph will never decrease liquidity. When studying unconstrained credit networks, the authors of [10] call this the "monotonicity conjecture" and show that this notion is equivalent to the well-studied negative correlation conjecture on graphical matroids; for more on that conjecture, see [19] and [5].

One could directly generalize the monotonicity conjecture to constrained credit networks. Specifically, one might hope that the addition of an edge in a network containing constrained stars will not decrease the liquidity between any two points. However, this notion is false.

Example 4.6. Let G be a star graph with four endpoints v 1 , v 2 , v 3 , v 4 and center u, where the capacity of each edge is 1 and the score of u is constrained to be 2.
Then the liquidity between any two endpoints is 1/3. Let H be formed by adding an edge between v 3 and v 4 of capacity n. In H , the liquidity between v 1 and v 2 is (n + 2)/(4n + 6), which is decreasing in n and always less than 1/3.

Graph-like objects that disobey a monotonicity-like conjecture, especially simple examples, are rare. It would be interesting to understand how constraints enable this qualitative change in behavior.

Although a direct analogue of the monotonicity conjecture is violated, note that liquidity still respects the bounds implied by Theorem 4.3. In particular, Theorem 4.3 implies that the liquidity between v 1 and v 2 is at least 1/4, and for all n ≥ 0, (n + 2)/(4n + 6) > 1/4. We conjecture, therefore, that while liquidity might decrease as in Example 4.6, the bounds implied by Theorem 4.3 are never broken.

Clearly, if we replace a subgraph of expansion β by a constrained star, and there exists some pair of vertices x and y in that subgraph with pairwise liquidity α > 1 − 1/β, then this replacement must decrease their pairwise liquidity. More generally, in such a graph, if a vertex x ′ is connected to only x by an edge of capacity 1, and a vertex y ′ is connected only to y by an edge of capacity 1, then the liquidity between x ′ and y ′ before the replacement is α/4, and the replacement can reduce it only boundedly. More specifically, we conjecture that a multiplicative liquidity reduction by a factor of 1 − 2/h S (G) is the worst reduction that can happen.

Conjecture 4.7. Let G = (V , E) be a credit network, let S ⊂ V with subgraph expansion h S (G), and let H be the credit network formed by replacing S in V with a constrained star, as in Theorem 4.3. Then for all u, v ∈ V , the liquidity between u and v in H is at least the liquidity between u and v in G multiplied by 1 − 2/h S (G).

CRYPTOCURRENCY APPLICATIONS AND FUTURE WORK

Cryptocurrency innovations like the Lightning network on Bitcoin rely on a network of bilateral payment channels that behaves like a credit network. However, in the trustless cryptocurrency context, manufacturing the trust required for a payment channel requires committing capital into escrow per edge, which has a cost. Higher maintenance costs generally mean higher transaction costs for users. The network, therefore, is incentivized to find a design that gives a good tradeoff between liquidity and total escrow costs.

As shown in Section 4, a star-like design where every agent has a global lending limit achieves the optimal tradeoff between liquidity and total escrow costs. In practice, this would look roughly like a large, permissioned smart contract.

For simplicity of example, suppose that every agent has $D to invest into D edges, each with capacity 2 (so each agent is initially responsible for D units of escrow). Let the expansion of the resulting graph G be h(G).

Two agents transacting only with each other could at best get liquidity 1 − 1/2D (and liquidity 0 with all others). Using a standard Lightning system, agents only get on average pairwise liquidity 1 − 2/h(G). By switching to a multiparty contract, every pair of agents can achieve liquidity 1 − 2/D, the asymptotically optimal tradeoff. The exact savings will vary by graph, but h(G) can be much smaller than D. Furthermore, routing across a multiparty contract is trivial, and routes are valid unless the sender is bankrupt.
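The following toy computation (hypothetical Python, with made-up values of D and h(G)) makes the comparison concrete:

    D, h = 20, 6  # hypothetical; h(G) is often much smaller than D

    pairwise_only = 1 - 1 / (2 * D)  # two agents pooling escrow bilaterally
    lightning     = 1 - 2 / h        # average pairwise liquidity, per [10]
    multiparty    = 1 - 2 / D        # node-constrained star (Corollary 4.5)

    print(f"bilateral: {pairwise_only:.3f}, "
          f"Lightning: {lightning:.3f}, multiparty: {multiparty:.3f}")
    # bilateral: 0.975, Lightning: 0.667, multiparty: 0.900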
Future Work

Section 3 shows that adding constraints preserves most useful properties of credit networks. However, it does eliminate the correspondence between forests and score vectors. Unfortunately, the bijection in [12] is algorithmic and, other than in tree-like graphs, an analysis of the constrained credit network using forests is not obvious. We leave this as an area of future research.

In section 5, the escrow savings require assumptions on the transaction distribution and might disappear if Conjecture 4.7 were false. We leave as future work an experimental analysis of real-world Lightning networks, particularly with regard to the tradeoff between subgraph expansion, escrow savings, and implementation concerns. We would also like to understand how more realistic distribution assumptions would affect our results. Similarly, many real-world implementations of payment networks have strategic agents that resettle network links via on-chain transactions when, for example, a net balance grows too much (e.g. [4]). We would like to understand how these resettlement policies interact with our results.

In [7], Dandekar et al. consider the strategic formation of credit networks under a model of balancing liquidity against exposure to defaulting trade partners. Constraints allow for many interesting scenarios in which to study the behavior of rational agents. For example, a node constraint is equivalently a guarantee that one will not borrow more than a total amount. In some contexts, guarantees like this on nodes or groups could lead to larger bilateral lines of credit. Understanding the incentives at play could improve designs of credit network-like systems.

CONCLUSION

The credit network is a model for transactions across a network of agents. Initially studied in contexts related to social networks, the model forms a close abstraction of modern cryptocurrency "Layer 2" protocols like Lightning that are being deployed across the internet. However, the credit network is limited in its ability to describe agent behavior. We study the effects of constraining the behavior of agents in a credit network beyond the implicit constraints in a credit network, or alternatively, the effects of limited guarantees on agent solvency. In particular, these constraints preserve the combinatorial structure of credit networks. Aggregate node-based borrowing constraints transform complicated graphs into simple stars, showing that the details of graph structure ultimately are of little significance. These constraints also enable modeling of more interesting node behavior, and moreover, assuming Conjecture 4.7, the reduction from complex graphs to star graphs achieves the optimal tradeoff between liquidity and escrow costs in a Lightning-style payment network.

B ALGORITHMS FOR COMPUTING LIQUIDITY IN TREES

B.1 Proof of Theorem 4.1

Consider the following dynamic programming algorithm for computing the liquidity between two vertices in a tree where nodes have aggregate constraints.

Pick some vertex r to be the root of the tree. Let p(v) denote the parent of v, and q i (v) the ith child of v. Let d (v) be the number of children of v. Let C (v, k) be the number of configurations in the subtree rooted at v (satisfying all subtree constraints) such that w (p(v), v) = k. It suffices to show how to compute C (v, ·) given access to C (q i (v), ·). Let D i (v, k) be the number of configurations of the subtree consisting of v and the first i child subtrees such that the score of v is k. Let X v be the set of scores of v that are considered valid by the constraints.
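A minimal sketch of this dynamic program (hypothetical Python; it assumes one concrete, self-consistent reading of the definitions in which a configuration splits each edge's capacity between its two endpoints, a vertex's score is the total share assigned to it, and X_v is given as a set — in a tree, every configuration is its own cycle-equivalence class, so counting valid configurations counts the relevant states):

    def count_configurations(children, capacity, allowed, root):
        """children[v]: list of v's children; capacity[(v, child)]: edge
        capacity; allowed[v]: the set X_v of permitted scores. Returns
        the number of configurations satisfying every constraint."""

        def subtree(v, parent_cap):
            # Returns A, where A[t] counts valid configurations of v's
            # subtree given the parent edge contributes t to v's score.
            # D[k]: ways the child edges alone give v a score of k.
            D = {0: 1}
            for child in children.get(v, []):
                c = capacity[(v, child)]
                A_child = subtree(child, c)
                D2 = {}
                for k, ways in D.items():
                    for s in range(c + 1):   # s: v's share of this edge
                        # the child's score receives the rest, c - s
                        D2[k + s] = D2.get(k + s, 0) + ways * A_child[c - s]
                D = D2
            return [sum(w for k, w in D.items() if k + t in allowed[v])
                    for t in range(parent_cap + 1)]

        return subtree(root, 0)[0]

    # Example: path a - b - c, both edges of capacity 2, b's score capped.
    children = {"a": ["b"], "b": ["c"], "c": []}
    capacity = {("a", "b"): 2, ("b", "c"): 2}
    allowed = {"a": {0, 1, 2}, "b": {0, 1, 2}, "c": {0, 1, 2}}
    print(count_configurations(children, capacity, allowed, "a"))  # 6

To compute liquidity between u and v, the same recursion can additionally discard configurations that fully block an edge on the unique u-v path, as described next.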
The total number of configurations is thus Σ i ∈ X r D d (r ) (r, i).

The above algorithm iterates over, at each vertex, all possible ways for that vertex to have a specific score. This can be extended to also satisfy some simple local predicates by altering this iteration to exclude particular classes of configurations. For example, to track liquidity, note that there is a unique path from u to v. As the algorithm walks across the graph, it can simply throw out configurations where an edge on this path is entirely oriented in the wrong direction. Let H be the maximum capacity of an edge; the dynamic program then runs in time polynomial in the size of the tree and in H, as claimed.

B.2 Proof of Theorem 4.2

Let G = (V , E) be some graph of treewidth k, and let (X i , (I, F )) be a nice tree decomposition of G, as described in Theorem 7 of [2]. Without loss of generality, assume that the leaf vertices have no constraints, and that non-leaf vertices v are constrained to have fixed score s v .

Iterating from the leaves to the root of the tree decomposition, the algorithm maintains a complete list of score vectors of active vertices and a count of the number of ways that the induced subtree can produce this score vector on the active vertices (not including edges between active vertices) while satisfying the node constraints of inactive vertices in the subtree.

At a leaf node, the only score vector is (0), which occurs with multiplicity 1.

At an introduce node X i with child X j that introduces a node v, the algorithm simply adds an entry of 0 to every active score vector corresponding to the score of v. This maintains the induction invariant, as there are no edges between the introduced vertex and the inactive vertices of the subtree.

At each join node with children X i and X j (with X i = X j ), the algorithm iterates over every pair of score vectors, with one from X i and one from X j , adding score vectors coordinatewise and multiplying multiplicities. The resulting list is then de-duplicated, with multiplicities added as necessary. Join nodes join disjoint subtrees, so score vectors of disjoint subconfigurations add coordinatewise, and there is no double-counting of edges (since edges between active vertices are not yet accounted for).

At a forget node X i that forgets a node v (with child X j = X i ∪ {v}), let Y be the edges between v and the vertices of X i . For every active score vector s of X j , the algorithm computes a list of potential score vectors of X i based on s, by iterating over all ways of directing the edges of Y. These score vectors are de-duplicated, and their multiplicities are set to be the multiplicity of s. The algorithm then collates all these lists and de-duplicates them, adding multiplicities when necessary. The algorithm then asks if v satisfies the constraints. If yes, it preserves that vector, with the entry for v removed. Otherwise, it drops that vector. The algorithm then merges duplicate score vectors, adding multiplicities when duplicates occur.

Furthermore, since satisfaction of each vertex's constraint is based only on the score of an individual vertex, and once a vertex is "forgotten," all of its edges have been accounted for in a particular configuration c of the subtree, if a constraint is satisfied when its vertex is forgotten, it will be satisfied in any configuration that extends c to the entire graph.

At the final node of the tree decomposition, then, the algorithm is left with an empty score vector and a count of the number of score vectors satisfying all the vertex constraints.
Let I be the number of states tracked at any node. Then evaluating an introduce node takes time O(I), a join node takes time O(I^2), and a forget node takes time O(S^k I + I^2). There are at most O(k|V|) nodes, so the entire algorithm takes time O(k|V|(S^k + I^2)). Let S be the maximum (unconstrained) score of any vertex. Then the number of score vectors tracked at any one node is at most S^k. Hence, the entire algorithm takes time O(k|V|(S^k + S^{2k})).

This algorithm computes the number of score vectors satisfying the node constraints. To compute liquidity from a vertex u to a vertex v, the algorithm needs to also track connectivity patterns between vertices. In particular, with each score vector, the algorithm also associates a directed graph on the set of active vertices, where an edge from x to y in this graph signifies that in the subtree configuration that produces this score vector and directed graph, there is a directed path from x to y.

Leaf nodes start with an empty connectivity graph with only a single vertex. Introduce nodes add a disconnected node to the connectivity graph. Forget nodes, when iterating over arrangements of edges between the forgotten vertex and active nodes, compute which vertices gain new connections and update the graph accordingly, while dropping the forgotten vertex from the connectivity graph unless the forgotten vertex is one of u or v. Join nodes, when pairing subconfigurations from different subtrees, simply take the union of the paired connectivity graphs. This maintains the connectivity patterns throughout the computation. At the end, the algorithm is left with connectivity graphs containing only u and v, and the number of score vectors satisfying the constraints that generate each connectivity pattern. Liquidity follows with a little arithmetic.

This increases the size of the state space at each node of the tree decomposition by at most a factor of 2^{(k+1)k}. So the entire algorithm takes time O(k|V| S^{2k} 2^{2(k+1)k}). These time complexity bounds are likely not tight.

C PROOF OF THEOREM 4.3

Proof. In the unconstrained star graph, the number of configurations where w(v_i, u) = k for some v_i would be constant for all feasible k. In this situation, this may not be the case, but in fact, if the score of the center vertex is half its (unconstrained) maximum, a symmetry argument shows that the number of such configurations is maximized in the middle of the feasible range and symmetric about it.

Let n be the number of external vertices, and let G_i be the star graph consisting of only the central vertex and the first i vertices. Let the number of configurations of G_i in which S_u = k be C(G_i, k). Since this symmetry and maximization in the middle holds for i = 2, induction shows it holds for i = n. Let M = Σ_i c_i / 2.

Now, for any two vertices v_i and v_j, for any configuration, let Δ_k = w(v_i, u) + w(v_j, u). Let H be the unconstrained graph without vertices v_i and v_j, and let A_k be the number of states of H such that S_u = M − Δ_k; below, B_i denotes this count when Δ_k = i.

Consider the one-to-many map from A_k to states of G that completes a state in A_k to a state of G by choosing any values for w(v_i, u) and w(v_j, u) such that w(v_i, u) + w(v_j, u) = Δ_k. For Δ_k ≤ (c_i + c_j)/2, there are Δ_k + 1 choices, and for Δ_k ≥ (c_i + c_j)/2, there are (c_i + c_j) − Δ_k + 1 choices. Furthermore, note that in exactly one of these choices is v_i bankrupt. Note that these maps have well-defined inverses and that the images of the different A_k are disjoint.

Now, consider the case where 0 ≤ Δ_k ≤ (c_i + c_j)/2. Let T = (c_i + c_j)/2. The organization of states above enables counting the number of states in which transactions fail and the number of states overall. As such, the probability p of a failed transaction (conditioned on 0 ≤ Δ_k ≤ T) is simply the weighted summation (Σ_{i=0}^{T} B_i) / (Σ_{i=0}^{T} (i + 1) B_i). By Chebyshev's sum inequality,

1/p ≥ (1/T) Σ_{i=0}^{T} (i + 1) = (T + 1)(T + 2)/(2T) ≥ T/2,

so the probability of transaction
failure is at most 2/T. By a symmetry argument, the same result holds when (c_i + c_j)/2 ≤ Δ_k ≤ (c_i + c_j), so the result holds overall.

Conversely, the probability of transaction failure is a weighted summation of the probability of transaction failure conditioned on a particular Δ_k, and thus must be at least the minimum of these conditional probabilities, which is 1/(T + 1). □

If the score of the central vertex is constrained to be less than half its maximum, but close to half, then a (crude) bound on the failure probability can be obtained by decreasing (artificially, for analytical purposes) some of the capacities of the edges until the score is half of the reduced maximum. The same holds for central scores larger than half the maximum.

The above recurrence relation in effect counts the number of ways in which to put S_u indistinguishable items into n boxes of varying sizes c_i (a short counting sketch appears below). When the c_i are constant, the number of states is simply the generalized binomial coefficient.

D PROOF OF THEOREM 4.4

Proof. Let G be a credit network with edge expansion h(G), constrained to ensure that for all vertices v, s_v ∈ ((d(v) − r(v))/2, (d(v) + r(v))/2).

In any configuration of a credit network, it is impossible for vertex v to pay vertex u if and only if there exists a partition of the graph into a set A and B = V \ A such that v ∈ A and u ∈ B and all edges (a, b) between B and A satisfy w(a, b) = c(a, b), w(b, a) = 0.

Let u, v ∈ V, and let A and B be any partition of V separating v and u such that the cut prevents u from paying v. Without loss of generality (by symmetry of the constraints), suppose |A| ≤ |B|. Suppose there are x ≥ |A|h(G) edges pointing into A. Then the number of edges contained within A is (Σ_{v∈A} d(v) − x)/2. Hence, the sum of the scores of the vertices in A is (Σ_{v∈A} d(v) + x)/2. Because x ≥ h(G)|A|, the sum of scores in A exceeds the aggregate bound implied by the individual bounds on all vertices in A, which is a contradiction.

Hence, the only constraints that affect whether vertex u can pay vertex v are the per-vertex constraints we imposed on top of the credit network. In particular, each vertex is constrained to deviate by at most r(v)/2 from a score of d(v)/2.

Let H be a star network with a new vertex u in the center, where S_u is constrained to be constant (in fact, S_u = Σ_{v∈V} r(v)/2), and let c(v_i, u) = r(v_i). Note that for any score vector of H, adding (d(v) − r(v))/2 to each vertex's score gives a vector in G satisfying the constraints on G. Then any transaction in G is feasible if and only if the corresponding transaction is feasible in H, and moreover, this correspondence between score vectors is maintained under transactions. Thus, with regard to transaction feasibility, G is equivalent to H. We note that further constraining a vertex in G only shrinks the capacity of the corresponding edge from that vertex to the center. □

Figure 2: A Lightning-style network, equivalent to the credit network in figure 1.

Figure 3: The credit network of figure 1, after A has routed one unit of payment to E.

Lemma 2.3 (Lemma 2, [6]). Two credit network configurations C_1 and C_2 are transaction-equivalent if and only if C_1 and C_2 are cycle-equivalent.
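The counting sketch promised above: a minimal Python snippet (names and example values are illustrative only) that counts the ways to put a given central score into boxes of sizes c_i, i.e., the coefficient of x^score in the product over i of (1 + x + ... + x^{c_i}).

```python
def star_configurations(score, caps):
    """Coefficient of x^score in prod_i (1 + x + ... + x^caps[i]):
    the number of ways to distribute `score` indistinguishable units
    over boxes of sizes caps[i]."""
    counts = [1]                      # coefficients of the running product
    for c in caps:
        new = [0] * (len(counts) + c)
        for d, a in enumerate(counts):
            for j in range(c + 1):    # put j units in this box
                new[d + j] += a
        counts = new
    return counts[score] if score < len(counts) else 0

# Three edges of capacity 2, central score 3: coefficient of x^3 in
# (1 + x + x^2)^3.
print(star_configurations(3, [2, 2, 2]))  # -> 7
```

With all caps equal, this reproduces the generalized binomial coefficient mentioned above.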
Figure 4: A credit network with aggregate node constraints, implemented using gadgets.
MITIGATION HANDLING OF SQL INJECTION ATTACKS ON WEBSITES USING OWASP FRAMEWORK

Security systems for website applications are now more advanced, but software that contains vulnerabilities threatens every field, including health, defense, finance, and education information systems. Information technology security issues have become a threat that keeps website managers (web admins) on alert. This paper focuses on how to handle various web application attacks, especially attacks that use SQL Injection, using The Open Web Application Security Project (OWASP) framework; the aim is to raise awareness about web application security and how to handle an attack when it occurs. OWASP is a non-profit organization that focuses on web application security and provides security resources so that everyone can improve website security. A security hole makes a website highly vulnerable to being broken into with dangerous characters; to prevent this, the user name and password should be replaced periodically. Testing can be done to mitigate the security gap in the SMS broadcast application service by updating the character filter in such a way that attacks are minimized. Mitigation is done by restricting the characters that can be entered, making injection difficult for attackers.

INTRODUCTION

The rapid development of information technology cannot be separated from the Internet, and to deliver information that is updated every second, websites are necessary. Websites can contain text, pictures, or videos. The more varied and interesting a website is, the more users all over the world will access it. The growing number of websites developed without a good security system leads to vulnerabilities that are not known to the admin or manager of the website [1]. The same could happen to the SMS broadcast website of the Bureau of Student Affairs and Alumni.

The website is used to provide information to students. Because the SMS broadcast website of the Bureau of Student Affairs and Alumni of Ahmad Dahlan University is frequently used and very important, it should be safe from attacks, especially SQL Injection. In today's world, SQL Injection is a serious security threat on the Internet for the many dynamic websites it hosts; as the use of the Internet for online services increases, so do the security threats on the web [2]. The SQL Injection technique is well known in the hacking world as one of the web hacking techniques that is destructive to a site's database. The technique used in SQL Injection is to enter standard SQL commands (DDL, DML, DCL) such as CREATE, INSERT, UPDATE, DROP, ALTER, UNION and SELECT, along with other, less familiar commands [3].

To find out whether the website is completely safe, an attack simulation is conducted. SQL Injection is used to determine whether or not the website has security loopholes. A website that has vulnerabilities will be open to attacks, and if an attacker successfully attacks a website, the attacker may be able to manipulate its data. This study aims to find security holes in the SMS broadcast web application of the Bureau of Student Affairs and Alumni of Ahmad Dahlan University. This research uses gray system theory, chosen because the method does not require a lot of data; this study uses only 12 data samples. As a preventive measure, the user name and password can be replaced periodically.
RESEARCH METHOD

A website loophole can be detected using SQL Injection: entering certain characters in the login form may allow an attacker to reach the admin page and learn the contents of the database, and even extract, transform, or remove it [4]. SQL Injection occurs when an attacker supplies an SQL query command as input to manipulate the query language so that the attacker obtains database information [5]. SQL Injection attacks are very dangerous: an attacker who has taken over the database, or has entered it without permission, can manipulate the data in the database system, which may render the injected website unusable, and the hacked data can be misused by irresponsible parties [6].

SQL Injection can be done in many ways, one of which is to inject characters such as quotation marks, exclamation points, or equal signs, together with other characters, to produce a condition that is always true [7]. In an SQL Injection attack, the attacker injects malicious characters into the login form of the website application so as to take control of the database; if the security system is good, the database can be recovered [8]. Characters such as quotation marks ('=, !) are injected into the login form. Other inputs that can be used are OR1=1-- or '1='1'. These characters are what is used to attack website login forms [9].

SQL Injection attacks are used to take control of the system through the database. By leveraging a successful SQL injection, the attacker can enter the website system without going through the login and password process [10].

A weak website exposes information in an unrestricted database, allowing attackers to retrieve data. Tests using SQL Injection against a security hole can show the difference before and after a patch is applied to update the malicious-code filter and block SQL Injection attacks; the patch is used to improve the security of applications that require validation against a database [11].

Attacks using SQL Injection can be detected, but it is difficult to learn the identity of the attacker, who might not be traceable. Therefore, it is important to build a security system into a website [12]. A website's vulnerability to SQL Injection attacks can be reduced by renewing passwords and applying the latest patches and security updates; otherwise, the website will remain easily susceptible to attack [13].

According to Rudi Samuel Pardosi [14], the handling of SQL Injection attacks can be analyzed in the following way:
1. Perform an SQL Injection attack by entering a malicious character in the login form, at the point where the programming code retrieves data from the database.
2. Add input-length constraints to the code so that an attacker cannot inject long input into the login form.
3. Eliminate or hide the program code in order to suppress the error messages that come out of the database.

This research uses the OWASP Framework. The advantage of the OWASP Framework over other frameworks is its simple approach to calculating and assessing the risks associated with applications, from which it can be decided what should be done about those risks. Knowing the risks that may occur brings many benefits, including saving time and reducing the occurrence of more serious risks [15]. In The Open Web Application Security Project (OWASP) Top 10-2017 release, the SQL Injection attack was ranked first [16], as can be seen in Figure 1.
Figure 1. The level of security attacks according to OWASP

Figure 1 [16] shows that injection attacks occupy the first level because they are often used to attack web applications. It also indicates that attackers can use a variety of techniques to inject dangerous payloads into a website's database. Sometimes these techniques can easily be found and exploited; at other times they can be difficult, and the damage ranges from a simple, repairable fault to an irretrievable one.

Figure 3 [16] shows the update of the OWASP Top 10, which is focused on identifying the most serious risks; for each risk, general information about likelihood and impact is given using a simple grading scheme based on the OWASP Risk Rating Methodology. For each application, the likelihood and impact of a threat may change. The previous version focused on identifying common vulnerabilities, whereas the Top 10 is now designed based on risk; the risks in the Top 10 derive from the type of attack, the weakness, and its effects.

When browsing, the user ID, email, or password that a user enters is stored on the hard drive or in the computer's random access memory. This activity includes login credentials for Internet banking, PayPal, Bitcoin, and Facebook. Linux Memory Extractor (LiME) can capture memory, so the information obtained from random access memory is complete and can be used as evidence in handling digital crime, including evidence from Linux-based laptop operating systems. Forensic Tool Kit (FTK) Imager can analyze digital evidence well, because both encrypted and unencrypted information can be opened with these tools [17].

Attacks on structured networks come from multiple sources and assemble to form a large packet flow; this is a type of Denial of Service (DoS) attack. Such an attack can disrupt services on the target network by flooding the bandwidth or the system's processing capacity, overloading the target network server. The tool used to detect DoS attacks on a router and perform network traffic analysis is Wireshark. From the analysis of attacks on the router, it can be shown that a DoS attack can ping or send data/messages repeatedly and take the router network down using a DNS flooding application. The results prove that forensic investigators can successfully use the Wireshark application to analyze DoS attacks on a router [18].
Network forensics requires traffic logs to analyze the activity of each computer connected to the network and to learn what hackers do. This requires router information. Router-related information, such as RouterOS on Mikrotik devices, can be maintained using the API for remote router access. Forensics on RouterOS-based devices can be done live via the API. Through API-based extraction, router data related to various network activities can be accessed. The applications developed successfully acquire data from the router: activity logs, the IP address list, ARP, DHCP leases, RouterBoard info, users, and the DNS cache. The data used in observing network-based attacks follows a scenario; the DNS cache has no correlation with the FTP service in the attack scenarios considered. Analysis of the links between the acquired data fields greatly helps digital forensic investigators determine attack activity on the network. To obtain forensic information, acquisition should be done quickly, before the router is turned off or rebooted [19].

A malicious network threat to web server security that causes loss of bandwidth and overload for users and the service provider's web servers is flooding. Flooding attacks on the network are countered by implementing an Intrusion Detection System (IDS) such as Snort, an open-source system that can detect flooding attacks using special Snort rules. The various activities recorded by Snort are stored in a log file that records all network traffic activity. The log files are then used in an investigation that follows forensic process modeling methods to find evidence. The analysis in that study found 15 recorded IP addresses performing illegal acts on the web server. The IDS system applied in the research worked as expected: it recorded all network activities in log files with the .pcap extension, which can be analyzed with Wireshark. The 15 IP addresses found to have performed illegal acts caused an overload of the network traffic. With the forensic process, the IDS system on the web server can help meet forensic needs; in addition, administrators can monitor and prevent attacks [20].

Structured network attacks that originate from multiple sources and converge to form a large packet flow are Distributed Denial of Service (DDoS) attacks. Such attacks interfere with services on the target network by flooding the target's bandwidth or overloading the capacity of the target network server. One network defense method on the Internet for avoiding a DDoS attack is the classification of network packets, carried out with an Artificial Neural Network (ANN) [21].

Cloud service providers offer cloud service applications, but most companies build private cloud computing. A cloud system violation may come from an internal user, result from a configuration error, or exploit flaws in the system. This research introduces the ADAM (Advanced Data Acquisition Model) method; based on the results of the ADAM investigation process, several parameters of a successful investigation can be verified, so future investigations using ADAM can work well and correctly. Weaknesses in the service system were identified using an ownCloud user list from a group that could change the passwords of other users [22].
The ADAM (Advanced Data Acquisition Model) method was successfully used for the investigation process of a private cloud computing service. Data acquisition from the service can be performed either directly or with per-device write-blocked acquisition, so that the evidence obtained is reliable digital data for court. In the misuse of XYZ hospital data, the dissemination of confidential data occurred because of system flaws or misconfiguration; this can result from the misuse of policies on private cloud computing services.

The forensic closing phase is a process of patching the user's security gap by first installing an add-on extension in the Mozilla Firefox browser, using the XSSFilterAde extension. XSSFilter provides early warning and can turn off plugins, restrict them, and control payload/script authorization when the victim opens a website address [23].

METHODOLOGY

The method of [24] can be described as follows:
1) Identify the website, the Internet network, and the web server.
2) Test with SQL Injection attacks to find loopholes that can be penetrated by malicious code.
3) Analyze the attack results to find weaknesses in the website.
4) Report the research results, including documentation and evidence.

This journal emphasizes attacks carried out by perpetrators through the security holes of a website: once they have successfully entered the website database, the perpetrators can change or delete the database, harming the website owner.

The analysis in this research is to mount an SQL Injection attack on the SMS broadcast website of the Bureau of Student Affairs and Alumni, to determine whether security loopholes exist in the site and thereby learn where they are.

The test is conducted to prove the existence of vulnerabilities so that the gaps can be identified and shut down immediately, preventing an attacker from logging back in using a special character. In the SQL Injection attack trials, SQL Injection characters are entered in the user ID field while the password is left empty, and then Enter is pressed. In Figure 3, no character has been entered yet, so the display has not changed.

In Figure 5, the login form of the SMS broadcast website of the Bureau of Student Affairs and Alumni is given the input 'or1=1, with the password deliberately left empty; with this input it is not possible to enter the menu page, because the character above is blocked. The display after the input character 'or1=1, shown in Figure 6, is a warning that the character is not recognized by the system. The experiment was conducted 12 times; among the twelve experiments, one succeeded in getting past the login form. From this experiment it is known that the SMS broadcast website injected with dangerous characters has security holes that can be exploited by an attacker to manipulate the data in the website's system.

Figure 7 shows the login form page with the character 'or'1'='1 entered in the user ID field. The password field is given no input; only the user ID column is filled, and after the login button is pressed, the results obtained from the above input are as shown in Figure 7. This experiment successfully entered using the character 'or'1'='1. The result of the input 'or'1'='1, showing successful entry, is presented in Figure 8.
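The following self-contained Python sketch (using an in-memory SQLite database, not the BIMAWA system itself) illustrates why input of this kind bypasses a login query built by string concatenation, and why a parameterized query does not. A comment-terminated variant of the injected characters is used here so that the trailing password clause is ignored.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 'secret')")

user_id, password = "' or '1'='1' -- ", ""   # injected user ID, empty password

# Vulnerable: the attacker's text becomes part of the SQL statement;
# '1'='1' is always true and "--" comments out the password check.
unsafe = ("SELECT * FROM users WHERE user_id = '%s' AND password = '%s'"
          % (user_id, password))
print(db.execute(unsafe).fetchall())   # -> [('admin', 'secret')]

# Safe: placeholders keep the input as plain data, so no row matches.
safe = "SELECT * FROM users WHERE user_id = ? AND password = ?"
print(db.execute(safe, (user_id, password)).fetchall())   # -> []
```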
The above test (the twelfth) successfully entered the web admin page, so in this research it can be said that a security hole is open for the character above. The successful entry to the administrator page occurs because of a validation error: the malicious characters entered into the login form are not filtered. The dangerous part is that the attacker enters without a password by supplying characters or inputs as in the twelfth test; the attacker can then send SMS broadcasts in the name of the Bureau of Student Affairs and Alumni (BIMAWA) containing false information or fake news, causing losses for the recipients of the information.

Figure 9 shows the page used by the admin to send a message. From this page, the attacker can send a message to any destination number. An attacker can create fake messages or send chain messages that cause the users of the targeted phone numbers to follow what is requested or authorized by the attacker; in this case, the owner of the mobile phone number will think that the message was sent by the admin, even though it was sent not by the admin but by the attacker of the SMS broadcast website of the Bureau of Student Affairs and Alumni.

The page for typing the SMS and the destination mobile phone number is shown in Figure 10. On this page the attacker can write a message and enter any desired mobile phone number. This is harmful to the owners of those phone numbers, because they will think that the sender of the messages is the admin of the SMS broadcast service of the Bureau of Student Affairs and Alumni. Of the 12 attempts made against the SMS broadcast website of the Bureau of Student Affairs and Alumni, one trial made it past the login form. After learning that the website has vulnerabilities through which an attacker could manipulate the data in the SMS broadcast database of the Bureau of Student Affairs and Alumni, for example by sending false messages to users or students, an effort was made to close the gap so that the attacker cannot enter again. The trial to close the gap was carried out at the office of the Bureau of Student Affairs and Alumni, because the SMS broadcast server is located in the Bureau's room. The steps taken are as follows: open the HTDOCS folder on the server, then open the PHP login file with Notepad; the program code for entering through the login form is here. In line 24 there is the user ID whose source code is used to enter through the login form. As shown in the filter code of Figure 12 (and the sketch below), the only characters allowed through are the uppercase and lowercase letters A to Z and the digits 0 to 9.
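For comparison, an equivalent whitelist filter in Python (a sketch of the same idea as the preg_replace() patch of Figure 12; the function name is ours): every character other than A-Z, a-z and 0-9 is stripped, so quotes, spaces and equals signs never reach the query.

```python
import re

def sanitize_member_id(raw):
    """Keep only letters and digits, dropping the quote, space and
    equals characters used in the injection attempts."""
    return re.sub(r"[^a-zA-Z0-9]", "", raw)

print(sanitize_member_id("' or '1'='1"))  # -> "or11"
```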
This research has proven that the SMS broadcast website of the Bureau of Student Affairs and Alumni has vulnerabilities open to attackers who would spread fictitious information or messages, and the gap has been closed by adding code to filter special characters.

SQL Injection attacks (SQLIA) appear as a main threat to web applications, and many solutions have been proposed for detecting SQLIA vulnerabilities. One solution, based on an analyzer and a dynamic tester, detects and blocks SQLIA well, and its response time is also very good compared with other tools. It requires no changes to the web application's source code and uses minimal system resources. An advantage of the proposed solution is that it can handle advanced SQLIA techniques, as its knowledge base is updated to handle modern threat types. The solution uses an MS SQL analyzer to detect vulnerabilities and tag pages. The detector still needs improvement so that all types of analysis can be configured, with the knowledge base drawing on techniques and knowledge of various attacks [25].

An intrusion detection system applying the learning vector quantization algorithm works by capturing data on the MySQL service port, converting the data into ASCII codes, extracting the data into several features (alphanumeric, punctuation, special combinations, and remaining characters), and then processing those values with the learning vector quantization algorithm to obtain an accurate SQL injection query pattern; the application enters text mode as the process runs, capturing and classifying queries going to the database. The accuracy level is evaluated by testing the application with varied query data against the learning vector quantization algorithm when the application is installed on a network. Using these parameters, the maximum accuracy of the SQL injection detection application reaches 80% [26].

Based on the test results, methods for securing Internet access using VPN, SSH tunneling, DNS over HTTPS, DNS over TLS, DNSCrypt, and Tor can avoid the censoring system. With VPN, SSH tunneling, and Tor, the destination address in the form of an IP address or hostname is not detected by the censoring system, whereas with DNS over HTTPS, DNS over TLS, and DNSCrypt only the DNS queries are secured, and the IP address can still be tracked. The censoring system applied in Indonesia uses Domain Name System filtering, which records negative addresses entered in a blacklist; therefore, using DNS over HTTPS, DNS over TLS, or DNSCrypt will still escape the Indonesian government's censorship system. DNS over HTTPS, DNS over TLS, and DNSCrypt were created to protect against man-in-the-middle attacks by certain parties, so using them protects against the insertion of malicious code, annoying advertisements, pornography, and so on. Installing such applications for secure Internet access is not meant to avoid the censoring system [27].

CONCLUSION

This research concludes that a security hole can be penetrated by giving login input containing a dangerous character. Whether or not a website has a security hole can be determined by using SQL Injection: if a malicious character successfully gets through, the website is vulnerable to SQL Injection attacks. Such vulnerabilities exist because the website has not been properly secured. SQL Injection can likewise be used on other websites that have security holes. Attackers who can enter the login form and
successfully log in can manipulate data in the database, thereby harming the data on the website. A solution against this SQL Injection attack is to update patches, user names, and passwords periodically.

SQL Injection is used to determine the security holes and can detect threats to the SMS broadcast web application, so that the manager can immediately prevent attacks or immediately close the security holes in the web application.

Figure 4. Display of the front page of the SMS broadcast service (failed attempt)

Figure 9. SMS broadcast message page of the Bureau of Student Affairs and Alumni

Figure 10. SMS broadcast page of the Bureau of Student Affairs and Alumni for sending an SMS to a given mobile number

Figure 12. The program code in the PHP login file on the SMS broadcast server of the Bureau of Student Affairs and Alumni, with the character-filtering code added. The code was changed in line 24 by adding, for the user ID:

$user_id = preg_replace('/[^a-zA-Z0-9]/', '', $_POST['no_anggota']);

In the above code, the preg_replace function replaces the unwanted special characters, such as those successfully used to get past the login form ('or'1'='1).

Table 1. Characteristics of the SQL Injection attacks used directly against the target SMS broadcast website.
Enhanced molecular dynamic simulation studies unravel long-range effects caused by sequence variations and partner binding in RNA aptamers

Intrinsic flexibility and structural modularity are two common features of RNA molecules. Although functionally crucial, RNA plasticity often represents a major complication in high-resolution structural studies. To overcome this problem, RNAs may be rigidified through complexation with high-affinity partners such as Fab molecules. This approach has previously been used to characterize the DIR2 aptamer. However, possible perturbations induced by the insertion of the Fab binding site on the conformational properties of the DIR2 aptamer were not investigated. Here, using enhanced molecular dynamics simulations, we compared the dynamics of the DIR2 aptamer holding the Fab binding site with that of the parental sequence. Our results suggest that the L2-loop modification for Fab recognition leads to a significant increase in local flexibility that also affects the mobility of distant regions. The trajectories provide clear indications of the groups and the interactions mediating the dynamics transfer in DIR2. The effectiveness of our approach in addressing RNA flexibility was further corroborated by showing its ability to reproduce the most important events affecting the NF-κB RNA aptamer upon dissociation from its partner. Therefore, REMD analyses, a rarely adopted technique to unravel the structural/dynamical properties of aptamers, could efficiently complement experimental data, guiding the rational design of nucleic acid therapeutics.

INTRODUCTION

RNA has fundamental roles in life, as it regulates a plethora of cellular functions.1,2 The intrinsic structural versatility of RNA makes it able to efficiently target diverse molecules. Therefore, it is not surprising that RNA derivatives represent promising systems for a wide range of biomedical applications.3
Since the 1990s, the discovery of new RNAs targeting specific proteins and other molecules of biological interest has been made possible by the development of the systematic evolution of ligands by exponential enrichment (SELEX) approach.4 SELEX consists of cyclic processes of identification and amplification of RNA/DNA molecules from a pool of random sequences, selected on their ability to bind the target of interest. The potential for a wide range of applications and the good efficiency of their selection gave rise to a rapid growth of interest in RNAs.5,6 However, a limiting factor for the development of these molecules for biomedical applications is the paucity of available structural information, as atomic-level characterizations of RNAs are quite difficult and time-demanding owing to their intrinsic flexibility.7,8 In 2019, stand-alone RNA molecules represented only 1% of the structures deposited in the Protein Data Bank (PDB), and the number of RNA aptamers in complex with their protein target is as low as 63, as highlighted by a recent search of the PDB (www.rcsb.org) (March 2023). The relative scarcity of structural information strongly limits our understanding of the RNA folding process, as well as of RNA's ability to adopt specific three-dimensional conformations that are frequently able to establish high-affinity interactions with diversified partners. One effective and widely used strategy to overcome this general issue is based on the structural characterization of aptamers following their complexation with high-affinity Fabs, adapting a similar approach used for highly flexible proteins.9,10 In the case of aptamers, this procedure involves the modification of the aptamer sequence by inserting a structural motif that is specifically recognized by a Fab operating as a crystallization chaperone.11 This strategy has been successfully employed to structurally characterize DIR2, a fluorogenic RNA aptamer with important potential in imaging techniques, since it activates fluorescence upon binding chromophore molecules. Indeed, DIR2 binds the chromophores dimethylindole red (DIR) and oxazole thiazole blue (OTB) with nanomolar affinity, thus inducing the emission of red and blue fluorescence, respectively. The intrinsic flexibility of DIR2 impaired its ability to crystallize and prevented high-resolution crystallographic studies.11 A variant of the aptamer was generated by replacing its UUCG loop with an AAACA pentaloop closed by a G-C pair, which is a recognition motif for the Fab BL3-6. The ability of Fab BL3-6 to act as a crystallization chaperone and to reduce the flexibility of the aptamer allowed the crystallographic characterization of the Fab BL3-6-DIR2 complex, also in combination with the OTB-SO3 fluorophore. These investigations unraveled that, in contrast to other fluorogenic aptamers that typically exhibit G-quadruplex- or nucleobase tetrad-like motifs, the DIR2 aptamer adopts a compact, tuning fork-like structure composed of a helix and two short stem-loops that generate the ligand binding site through a network of tertiary interactions.11
Although the authors clearly showed that the insertion of the Fab recognition loop does not have important effects on fluorescence emission and presumably does not affect the global structure of the aptamer, the intrinsic dynamic properties of this system, and how they are affected by the insertion of the loop and/or by binding to the Fab, remain uncharacterized. In this scenario, we here investigated the intrinsic structural and dynamic properties of the wild-type DIR2 (DIR2wt) and of its variant with high affinity for Fab BL3-6 (DIR2fab) by performing extended conformational samplings using the replica exchange molecular dynamics (REMD) method. REMD is an enhanced sampling approach of molecular dynamics simulation that allows high-energy barriers in the conformational space to be overcome.12 Moreover, we also show that the method illustrated here can be reliably applied to gain insights into the structural basis of the aptamer-protein recognition process. In particular, using the well-characterized interaction between an RNA aptamer and the nuclear factor (NF)-κB transcription factor,13,14 we show the effectiveness of the present approach in predicting the main structural and dynamic events associated with the dissociation of the aptamer from the protein partner.

Static three-dimensional models of the aptamers and their general structural properties

The intrinsic dynamic and structural properties of the DIR2 aptamer, and the impact that the insertion of the Fab recognition motif has on them, were investigated here by performing REMD simulations that provide an enhanced sampling of the conformational ensemble of these intricate biomolecules. As detailed in the materials and methods section, the coordinates of the aptamer containing the insertion of the Fab-loop motif (DIR2fab) were extracted from the structure of the complex of this variant with Fab BL3-6 (PDB entry 6DB9).11 On the other hand, for the wild-type DIR2 aptamer (DIR2wt), for which no experimental structure is currently available, a reliable three-dimensional model was generated using the structure of DIR2fab as a template and manually replacing the Fab-loop motif (GAAACAC) with the parental UUCG sequence (Figure 1). The model of DIR2wt is composed of four helices, P1, P2, P3, and P4, and three loops, L1, L2, and L3 (Figure 1). The initial P1 helix is followed by the coaxial P4-L3 stem-loop, which arranges into a tuning fork-like fold, similar to P2-L1. P4-L3 and P2-L1 are connected essentially by Watson-Crick interactions, while non-canonical interactions stabilize the L1-L3 proximity in a precise tertiary fold. The loop L2 contains the UUCG motif that is replaced by the Fab BL3-6 binding site in DIR2fab (Figure 1). DIR2fab presents the same fold and, therefore, embodies the same structural elements. The nucleotides of the two aptamer sequences were denoted using the same numbering scheme, adapted from Piccirilli et al. for DIR2fab; as a consequence, the shorter DIR2wt lacks the bases numbered 27, 28, and 29.
Overall analysis of the REMD simulations

Following the common protocol of the REMD methodology, 64 independent simulations were performed in the temperature range 300-385 K (see materials and methods for details). One important aspect for the successful application of this approach is the exchange rate of replicas between different temperature values. For DIR2wt and DIR2fab, the average exchange probabilities were 0.30 and 0.32, respectively, indicating that a satisfactory exchange rate was achieved for both systems. All analyses were performed on the ensemble of structures collected at 300 K. As shown in Figure 2 (top panel), inspection of the root-mean-square deviation (RMSD) values of the trajectory models, computed against the corresponding starting structure of each run, indicates that for the two forms of the DIR2 aptamer the structures that emerged from the simulations present similar deviations from the respective starting models. The inspection of the RMSD values also highlights that they oscillate significantly throughout the trajectories, indicating that both aptamers possess a remarkable level of structural flexibility. This finding fully agrees with the reported failure to grow crystals of these aptamers11 and with the necessity of resorting to Fab complexation to gain structural information on DIR2. To gain further insight into the intrinsic flexibility of DIR2fab and DIR2wt, we then evaluated the root-mean-square fluctuations (RMSF) per residue, computed over the entire 40 ns of REMD simulation time (Figure 2, bottom panel). As expected, the terminal regions of the two aptamers present higher RMSF values. High RMSF values are also displayed by the nucleotides of the L2 loop. It is important to note that, compared with DIR2wt, the L2 loop of the DIR2fab variant presents remarkably higher flexibility due to the insertion of the Fab binding motif (residues 23-29). In DIR2fab, these residues, which interact with the Fab molecule in the X-ray structure of the complex (Figure S1), make only sporadic connections to other residues of the same region (G23-C29, A24-A25, and A25-A28; see Table 1). Interestingly, the different interaction pattern that the L2 loop establishes with the rest of the molecule has a remarkable impact on the overall flexibility of the two aptamers. Indeed, as shown in Figure 2, DIR2fab exhibits generally larger fluctuations than DIR2wt, even in regions that are distant from the inserted Fab binding motif. This observation suggests that the introduction of this motif may have an impact on the overall flexibility of the aptamer. Nevertheless, it is important to note that the flexibility of the residues within the L3 region of the aptamers that is deputed to the binding of the fluorophores (nucleotides 39-42) is not remarkably different in the two variants. This is in line with the experimental evidence of similar binding of the fluorophores by DIR2fab and DIR2wt.11
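As an illustration of the RMSD/RMSF analyses above, the following Python sketch uses MDTraj; the file names are hypothetical, and the 300 K replica is assumed to have been extracted from the REMD run beforehand.

```python
import mdtraj as md
import numpy as np

traj = md.load("dir2_300K.xtc", top="dir2.pdb")   # hypothetical files
ref = traj[0]                                     # starting model

# RMSD of every frame against the starting structure (cf. Figure 2, top).
rmsd = md.rmsd(traj, ref)

# Least-squares fit, then per-atom fluctuations about the mean structure,
# averaged per residue (cf. Figure 2, bottom).
traj.superpose(ref, frame=0)
dev = traj.xyz - traj.xyz.mean(axis=0)            # (frames, atoms, 3), nm
fluct = np.sqrt((dev ** 2).sum(axis=2).mean(axis=0))
rmsf_per_residue = [fluct[[a.index for a in r.atoms]].mean()
                    for r in traj.topology.residues]
```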
Intra-motif hydrogen bonds

The intriguingly distinct overall flexibility of the two DIR2 variants prompted us to perform a detailed analysis of the evolution of the hydrogen bond networks that stabilize their folds (Tables 1 and 2, Figure S1). This analysis indicates that the crystallographic hydrogen bonds holding the P1 helix base pairs are comparably stable in both the DIR2fab and DIR2wt runs. Similarly, in both systems the P4-L3 stem-loop maintains most of the crystallographic hydrogen bonds, although in the DIR2wt run a preserved connection between the G37 and A44 residues is slightly more stable than in DIR2fab. Further, in the P2-L1 stem-loop, the U11 residue connects to U20 in both aptamers, but the G12-A13 connection is peculiar to DIR2fab in this motif. Precisely, in DIR2fab A13 interacts with both G12 and G37 of P4, and to a lesser extent with C45 of the P4 helix, while in DIR2wt A13 interacts to a lesser extent with G37 but more firmly with C45 (Tables 1 and 2). Because of the lack of residues 26 to 29 in the DIR2wt aptamer sequence and the loop sequence differences, P3-L2 shows a higher degree of hydrogen bond network differences between the two simulations; indeed, this stem-loop maintains the initial fold through a completely different hydrogen bond network. G21-C31 and C22-G30 are persistently connected in both variants. On the contrary, C22-C27, G23-C29, A24-A25, and A25-A28 are hydrogen bonds peculiar to the DIR2fab aptamer, whereas U23-U24, U23-G26, U24-G52, U24-C25, C25-G26, and G26-G30 interact exclusively in DIR2wt. Notably, the G23-C29 pair of DIR2fab is significantly more persistent than the equivalent U23-G26 of DIR2wt in terms of connection persistence along the simulations (see hydrogen bond occurrence, Table 1, Figure 3), because in DIR2wt the same residue also connects to U24. The P4-L3 region shows a similar dynamic behavior in DIR2fab and DIR2wt. The persistent hydrogen bond connections differentiating the two systems, peculiarly expressed in only one of them, are summarized in Table 2 and shown in Figure 3, where DIR2fab and DIR2wt representative conformations, extracted from the corresponding REMD trajectories by the clustering method (see the materials and methods section), are reported.
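Hydrogen bond occupancies of the kind reported in Tables 1 and 2 can be estimated with MDTraj's Baker-Hubbard detector; a sketch with hypothetical file names follows.

```python
import mdtraj as md

traj = md.load("dir2_300K.xtc", top="dir2.pdb")   # hypothetical files

# Donor-H...acceptor triplets present in at least 10% of the frames,
# a rough proxy for the occupancies listed in Tables 1 and 2.
hbonds = md.baker_hubbard(traj, freq=0.10)
top = traj.topology
for donor, hydrogen, acceptor in hbonds:
    print(f"{top.atom(donor)} ... {top.atom(acceptor)}")
```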
Inter-motif connections

The analysis of the relative orientation of the different secondary structure motifs along the trajectories is a measure of the plasticity and compactness of the systems. For the two simulations, we calculated the distances between the centers of mass of each motif (Figure S3). The P1 helix stays close to the P2L1 and P4L3 stem-loops (black and green lines, Figure S3), and P4L3 to P2L1 and P3L2 (yellow and brown, respectively, Figure S3). P3L2 shows the same distance peak values from P1 and P4L3 in DIR2wt (red and brown, Figure S3), but the P1-P3L2 distance distribution is wider and shifted to higher values in DIR2fab (Figure S3). Additionally, P1 and P3L2 approach each other with slightly different behavior, with distance peaks around 25 and 21 Å in DIR2fab and DIR2wt, respectively (red, Figure S3). These data are consistent with the hydrogen bond network analyses, as the approach of the P1 and P3L2 motifs is favored by the interaction between U24 in L2 and G52 in P1 of DIR2wt. The P1-P3L2 approach is consistent with the available crystallographic data on DIR2, as in the fluorophore-bound form (PDB code 6DB8) these two secondary elements are closer than in the ligand-free form (PDB code 6DB9). Nevertheless, the differences in the relative motif distances between the two aptamer variants do not impact the intrinsic flexibility of the region deputed to bind the fluorophore ligand, which corresponds to the L3 loop (Figure 2, bottom panel, residues 39-40). To assess the impact of the loop replacement on the conformation prone to bind the fluorophore molecules, we also monitored, along the simulations, the flip rotation of the glycosidic angles of A40 and G39, the main residues involved in fluorophore binding (Figure S4). For both aptamer variants, this analysis suggests that G39 shows a narrower distribution of the glycosidic angle compared with A40, which shows a larger distribution of the same angle values and thus a wide range of rotational events. Further, the distances of the centers of mass of the G39 and A40 bases from the preceding and succeeding residues (pairs 38-39, 39-40, and 40-41; Figure S5) suggest that A40 is the residue with elevated mobility, in line with its high RMSF value (Figure 2). Indeed, this is the key residue prone to flip out from the loop, favoring the accommodation of the fluorophore.

In summary, the global RNA architecture is comparably maintained in the two aptamers along the simulations, but the L2 loop interacts more significantly with the P1 helix in DIR2wt; in DIR2fab this tertiary connection is lost, and the L2 dynamics are completely independent of the rest of the aptamer structure (Figure 3).

The structural transition of an NF-κB-binding aptamer from the bound to the free state

The ability of the approach presented here to unravel long-range effects in the structure of DIR2 induced by sequence modifications prompted us to check whether this method could be effectively exploited to gain information on the protein-aptamer recognition process. In detail, we investigated the intrinsic structural and dynamic properties of an RNA aptamer specifically selected to target the transcription factor NF-κB (Figure 4).13,14 Extensive experimental studies have demonstrated that the structure of this aptamer undergoes significant modification upon binding to the target protein.25
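The inter-motif center-of-mass distances of Figure S3 can be computed as in the following sketch; the residue ranges assigned to the motifs are assumptions made for illustration.

```python
import mdtraj as md
import numpy as np

traj = md.load("dir2_300K.xtc", top="dir2.pdb")   # hypothetical files

# Assumed residue ranges (0-based) for the secondary-structure motifs.
motifs = {"P1": range(0, 10), "P2L1": range(10, 21),
          "P3L2": range(21, 32), "P4L3": range(32, 46)}

def com(traj, residues):
    idx = [a.index for a in traj.topology.atoms
           if a.residue.index in residues]
    return md.compute_center_of_mass(traj.atom_slice(idx))  # (frames, 3)

c = {name: com(traj, res) for name, res in motifs.items()}
# Per-frame P1-P3L2 distance in nm (red trace in Figure S3).
d_p1_p3l2 = np.linalg.norm(c["P1"] - c["P3L2"], axis=1)
```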
In particular, the comparison of the ligand-free structure of this aptamer determined by NMR with that found in the crystal structure of the complex with NF-κB highlights clear variations in the tetraloop and the internal loop regions (see Figures 4 and S7).13,14 According to the authors, in this case protein-aptamer recognition relies on the delicate balance between the structural preorganization of the aptamer and the induced fit caused by the binding.13 To check the ability of our approach to predict the structural variation of the aptamer coupled with its dissociation from the transcription factor, we performed a REMD simulation starting from the state it assumes in the complex with the protein and compared the trajectory structures with both the free and bound states experimentally observed. The inspection of the RMSF values of the aptamer clearly indicates that different regions of this RNA molecule have different flexibility. In line with the available experimental data, the tetraloop region (residues 14-17) is characterized by remarkable mobility (Figure 4). The trajectory structures were then clustered as reported in the materials and methods section. The representative example of the most populated cluster was then compared with the free and bound states of the aptamer (Figure 5). As shown in Figure 5, the structure of the internal loop region that emerged from the REMD closely resembles the experimental free state, although the bound state was used as the starting model. Notably, in the trajectory structures, some of the base-base distances that characterize the free state are frequently detected. For example, the distances of the base pairs G8(N9)-G23(N9) and A9(N9)-G23(N9) detected in the simulation are closer to those experimentally observed in the free rather than the bound state (Figure S8). Moreover, in some structures of the trajectory, the G23 base exhibits the swing motion that is associated with the aptamer relaxation following dissociation from the protein partner (Figures 4 and S8), mainly resulting in an opening of the internal loop region and thus a greater distance of G23 from G8 as well as from A9.

As expected, the highly flexible region corresponding to the tetraloop exhibits more complex behavior. Indeed, although the structure of this region that emerged from the REMD frequently diverges from both the apo and bound states (Figure 5), the inspection of the structure ensemble indicates the presence of conformers that closely resemble the tetraloop state observed in the free state of the aptamer (Figure 6). Similarly, the analysis of other indicators, such as the interbase distances or the location of base A16, indicates that free-like states for this highly mobile region are present in the trajectory despite the bound state being used as the starting model in the simulation (Figure 6).

Table 1. Hydrogen bond occurrences. The pairs are divided into sections following the scheme of secondary structure motifs: P1, P2L1, P3L2, and P4L3. Connections involving two different motifs are highlighted with an asterisk. Backbone hydrogen bonds are in italics, and those differentiating the DIR2fab and DIR2wt aptamer simulations are in italics and bold.

Table 2. Peculiar hydrogen bonds. In bold, the hydrogen bonds present only in one of the two variants; italics indicates hydrogen bonds involving backbone atoms.
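A sketch of the clustering step used to pick the representative conformation (average-linkage on the pairwise RMSD matrix; the 0.3-nm cutoff and the file names are illustrative assumptions):

```python
import mdtraj as md
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

traj = md.load("nfkb_300K.xtc", top="nfkb.pdb")   # hypothetical files
n = traj.n_frames

# Pairwise RMSD matrix (symmetric up to numerics, since each call
# superposes the frames before measuring).
dist = np.array([md.rmsd(traj, traj, frame=i) for i in range(n)])

labels = fcluster(linkage(dist[np.triu_indices(n, k=1)], method="average"),
                  t=0.3, criterion="distance")
biggest = np.bincount(labels)[1:].argmax() + 1     # most populated cluster
members = np.where(labels == biggest)[0]
# Representative frame: the member closest on average to the others.
rep = members[dist[np.ix_(members, members)].sum(axis=1).argmin()]
```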
Collectively, these data indicate that for well-structured regions, such as the internal loop, the simulations correctly predict the transition of the aptamer structure from the bound to the free conformation. For the tetraloop region, whose extreme mobility has been experimentally demonstrated, the simulation is still able to capture, in some of the trajectory structures, the features of the free and relaxed state.

DISCUSSION

RNA tertiary folding is a slow and hierarchical multi-step process occurring on a millisecond timescale or longer, whose description at the atomic level is rather inaccessible to either experimental or theoretical methods.15 Initial secondary structure formation is followed by slower conformational searches among multiple sub-states of RNA tertiary motifs.12 The fast exchange rate among RNA states represents a major obstacle to the discovery and effective characterization, through standard NMR or X-ray methods, of the diverse RNA conformations that frequently constitute a basin of distinct functional states.16 The flexibility featuring RNAs is crucial for functionality and targeting, but a complete description of the conformational space of RNA molecules of medium size remains unreachable with the currently available plain molecular dynamics simulation time.16,17 In 2019, Cheng and coworkers applied a 2D REMD incorporating secondary structure information to describe the RNA folding of four representative RNAs, from 24 to 68 residues in length.18 In the same year, Bottaro et al. predicted the structure and dynamics of five RNA stem-loops in the 5'-UTR of SARS-CoV-2 by extensive enhanced sampling techniques of atomistic molecular dynamics simulations.19 MD provides insights into the feasibility of RNA motif manipulation in maintaining a particular fold, by investigating the role of crucial interactions in holding the overall architecture, a key aspect for the rational design of these molecules.7,20,21 This intricate scenario has a direct impact on the development of new RNA-based molecules with therapeutic potential. Indeed, although the application of computational methods in driving the design and optimization of new therapeutic agents has been demonstrated for many classes of chemical compounds, their impact on the development of new drugs based on nucleic acids has been rather limited.

The paucity of structural and dynamic data on nucleic acids, and in particular on RNAs, makes the definition of computationally based protocols difficult to achieve.

In this framework, we used REMD simulations to increase the conformational sampling along the trajectories of two DIR2 aptamer forms: the native parental form (DIR2wt) and a variant embodying a Fab BL3-6 binding motif (DIR2fab) that was inserted ad hoc into the aptamer sequence to facilitate its structural characterization.11 Indeed, the complexation of the aptamer with the Fab favored the crystallographic analysis and provided a three-dimensional structure of the aptamer, although bound to the Fab. The DIR2 aptamer, as determined by Piccirilli et al.
(PDB code 6DB9) folds in a fork-like architecture with a helix and two stem-loops oriented parallel to each other. Here, we performed an extensive conformational sampling of the two DIR2 aptamer variants (DIR2wt and DIR2fab) to measure the effect on the overall conformational space of the insertion of the Fab binding motif within the RNA sequence. The MD simulation analysis clearly suggests that both variants retain the overall structure exhibited in the complex with the Fab. Nevertheless, the present data indicate that DIR2wt and DIR2fab are endowed with remarkable flexibility. This finding agrees well with the experimental difficulties encountered in crystallizing DIR2 as an individual entity. In this general framework, a deep inspection of the trajectory structures also highlights small but significant differences in the overall flexibility of the two aptamers. In particular, the insertion of extra bases in the L2 loop of DIR2fab, devoted to the Fab anchoring, leads to a significant increase of the local flexibility that also affects some distant regions. Notably, these changes only marginally affect the binding site of the fluorophores, in line with the similar affinity of these compounds for the two forms. The analysis of the interaction network of the two variants also provides a structural explanation for the increased flexibility of the aptamer bearing the insertion. Indeed, the modification of the L2 loop leads to the loss of a key intra-motif base connection (U24-G52) that is present in the parental aptamer and is missing in DIR2fab (Figure 3). The loss of this contact loosens the interactions of the L2 loop with the rest of the aptamer and produces an overall increase of the flexibility. In conclusion, the present findings indicate that the modular structure of RNA molecules allows significant insertions or deletions without major structural reorganization. However, these changes may have a significant impact on RNA plasticity. Therefore, the functionalization of aptamers with Fab binding motifs is certainly an effective strategy to gain structural information on these intricate molecules. On the other hand, to achieve a full understanding of the behavior of these molecules, once obtained, these static data should be complemented by extensive sampling analyses such as those illustrated in the present work. In general, important correlations between tertiary folds and RNA sequence that are inaccessible to the most widespread experimental methods can emerge by applying the presented theoretical approach and can guide the optimization and the rational design of RNA-based molecules. 22,23
Moreover, we explored the ability of this approach to deal with the identification of the structural basis of protein-aptamer recognition, which is a fundamental issue for the optimization of the effectiveness of these RNA molecules. Although the structures of several dozen protein-aptamer complexes have been experimentally determined, 24 in the vast majority of cases no information has been collected on the intrinsic structural/dynamic properties of the aptamer, 24,25 a crucial step for a full understanding of the determinants of the binding and recognition. By using an RNA aptamer specifically selected to target the transcription factor NF-kB, we show that the approach illustrated here can appropriately reproduce the main structural and dynamic properties of the free aptamer starting from the conformation it adopts in the complex with the protein, thus providing information on the contributions that both the preorganization of the aptamer and the induced-fit events provide to the binding. 13,14 In this context, it is important to note that for most therapeutic purposes, the initially identified aptamers are frequently heavily modified for several reasons. 26,27 Indeed, they can be truncated to reduce synthesis costs, modified to increase nuclease resistance, rigidified to allow structural characterizations, and conjugated to different chemical entities to reduce renal filtration. 27 Therefore, the availability of computational tools, such as those described in the present paper, that provide rapid and reliable descriptions of the structural and dynamic responses of RNA molecules to changes in the local environment, i.e., dissociation from the partner, or as a consequence of sequence modifications, represents a powerful and valuable tool for the optimization of drug candidates. Moreover, effective computational characterizations of the structural and dynamic properties of RNA molecules may be crucial either for the a priori screening of many variants of hit compounds to reduce drug development costs or for the rational design of nucleic acid therapeutics.

MATERIALS AND METHODS

The three-dimensional structures of the DIR2fab and NF-kB aptamers were extracted from the solved structures downloaded from the Protein Data Bank database (PDB: 6DB9 and 1OOA, respectively). 11,14,29,30 Both systems were solvated in an octahedron box using the TIP3P water model 40 with a 1.1-nm distance to the border of the molecule, simulating standard biological conditions by considering a 150-mM KCl concentration and additional ions to neutralize. Electrostatic interactions were treated using the particle mesh Ewald method, with the Berendsen algorithm used to control temperature and pressure, 31,32 following the indications dictated by the ABC consortium (https://bisi.ibcp.fr/ABC/Protocol.html) and previous protocols applied to nucleic acids and derivatives. 7,33,34
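As an aside before the protocol details that follow: the T-REMD setup described in the next paragraph uses 64 replicas spanning 300-385 K, with temperatures taken from a dedicated web generator. A simple geometric spacing, sketched below purely for illustration, is a common first approximation to such a ladder (the generator additionally accounts for system size and target exchange probability).

```python
import numpy as np

# Geometric (exponential) replica temperature ladder: a common first
# approximation for T-REMD temperature spacing; illustrative only.
n_replicas, t_min, t_max = 64, 300.0, 385.0
temps = t_min * (t_max / t_min) ** (np.arange(n_replicas) / (n_replicas - 1))
print(temps.round(2))  # temps[0] == 300.0, temps[-1] == 385.0
```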
In all the systems, the waters were first relaxed by energy minimization and 10 ps of simulation at 300 K, restraining the RNA atomic positions with a harmonic potential. Then, the systems were heated gradually to 300 K in a six-step phase starting from 50 K. Finally, the equilibrated systems were subjected to 64-replica T-REMD simulations with a temperature distribution ranging from 300 to 385 K and an exchange trial between adjacent replicas every 1,000 steps of 2 fs. According to the protocol proposed by Qi et al., 12 the 64 parallel simulations were run under standard NPT conditions for 40 ns without restraints. The temperatures of the replicas were defined using the generator at http://virtualchemistry.org/remd-temperature-generator/ 35 and are reported in the Supplementary REMD data. The convergence analysis for the T-REMD runs shows the replica index and temperature exchange plots along the simulation time for both aptamers (Figure S6). The GROMACS, 28,29 VMD 36 and PyMOL 37 packages were used to analyze the 300-K run trajectories. Clustering analyses of both MD simulations were performed to extract representative conformations using the gromos method with the algorithm described by Daura et al. 38 For each cluster, the structure exhibiting the lowest RMSD relative to all the other members of the cluster was selected as representative. The secondary structure pictures depicted in Figure 1 were created using the RiboSketch web tool. 39

Figure 1. The secondary structure and sequence of the DIR2fab and DIR2wt aptamers. The loop that received the sequence replacement is indicated in italics in the DIR2wt sequence.

Figure 2. RMSD and RMSF profiles. Top: the RMSD values of trajectory structures vs. the starting model (black: DIR2fab; red: DIR2wt). Bottom: the RMSF profiles of trajectory structures along the simulations (black: DIR2fab; red: DIR2wt). The residues belonging to the secondary structure motifs are highlighted with colored boxes using the following color scheme: P1, magenta; P2L1, violet; P3L2, green; P4L3, cyan.

Figure 3. Hydrogen bond network differentiating and peculiar to the two aptamers. Cartoon and line representations of the DIR2fab (left) and DIR2wt (right) representative MD structures. The residues involved in hydrogen bonds differentiating DIR2fab from the DIR2wt parental sequence are shown in sticks.

Figure 4. RMSF and secondary structure of the NF-kB aptamer. Left: the RMSF profiles of the REMD NF-kB trajectory structures along the simulations; the residues belonging to the tetraloop (green) and internal (blue) loops are highlighted with colored circles. Right: secondary structure of the NF-kB aptamer with the indication of the tetraloop and internal loop residues.

Figure 6. NF-kB aptamer key interactions. Top: global (left) and zoomed (right) views of the superimposition of the NF-kB aptamer conformation (magenta) extracted from the REMD simulation on the free (violet) and bound (gray) aptamer states, as solved with NMR and X-ray methods (PDB codes: 2JWV and 1OOA, respectively); the tetraloop A16 and the internal loop G23, G8, and A9 residues are shown in sticks. Bottom: the different models are individually represented using the same color code.

Figure 5.
The NF-kB aptamer. Frontal and top views of the superimposition of the representative REMD NF-kB aptamer conformation (magenta) on the free (violet) and bound (gray) aptamer states, as solved with NMR and X-ray methods (PDB codes: 2JWV and 1OOA, respectively). 13,14 The tetraloop (residues 14-17) and internal loop (residues 8, 9, 22, and 23) residues are shown in sticks in the right panel.

Table 1. The percentage occurrence of persistent (>10% of the run frames) hydrogen bonds along the DIR2fab and DIR2wt simulations is reported.
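Hydrogen-bond occupancies of the kind reported in Table 1 can be computed from the trajectories with, for example, MDAnalysis. The sketch below is illustrative only: the file names are hypothetical, the geometric criteria are common defaults rather than the authors' stated ones, and for topologies without charge information the donor/hydrogen/acceptor selections may need to be set explicitly.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis
from collections import Counter

# Hypothetical file names; 3.5 A / 150 deg are common geometric criteria.
u = mda.Universe("dir2wt.pdb", "dir2wt_300K.xtc")
hb = HydrogenBondAnalysis(u, d_a_cutoff=3.5, d_h_a_angle_cutoff=150)
hb.run()

# Each results.hbonds row: [frame, donor_idx, hydrogen_idx, acceptor_idx,
# distance, angle]; count frames per donor-acceptor pair.
n_frames = u.trajectory.n_frames
counts = Counter((int(r[1]), int(r[3])) for r in hb.results.hbonds)

# Report pairs persisting in >10% of frames, as in Table 1.
for (d_idx, a_idx), n in sorted(counts.items(), key=lambda kv: -kv[1]):
    occ = 100.0 * n / n_frames
    if occ > 10.0:
        d, a = u.atoms[d_idx], u.atoms[a_idx]
        print(f"{d.resname}{d.resid}({d.name})-"
              f"{a.resname}{a.resid}({a.name}): {occ:.1f}%")
```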
2023-10-01T15:03:20.560Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "b6b97e82bc780654500abeefc1762fabfbc6db57", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.omtn.2023.102039", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ed5c205c642c60a224fbfae719a0f8c778a054c", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
252602620
pes2o/s2orc
v3-fos-license
Ivermectin induces cell cycle arrest and caspase-dependent apoptosis in human urothelial carcinoma cells

Bladder carcinoma is one of the most common malignancies worldwide, and >90% of all bladder cancers are classified as urothelial carcinomas (UC). Surgery, radiotherapy, chemotherapy, targeted therapy, and immunotherapy are evidence-based treatments that are administered depending on the clinical stage of UC. All these treatments exhibit limited effects in cases of metastatic UC, and in UC with specific location, invasiveness, and recurrence. Therefore, a new therapeutic strategy for UC is urgently needed. Ivermectin, an avermectin derivative, has been reported to be effective against various parasites, and its pharmacokinetic and pharmacodynamic properties as well as safety are well understood in humans. Recently, ivermectin was shown to exhibit therapeutic benefits against various virus infections in vitro, and anticancer activity against various human cancer cells. This study aimed to investigate the anticancer effects of ivermectin in human UC cells. Ivermectin inhibited growth, regulated the cell cycle, and induced apoptosis in human UC cells. It also induced the activation of both the extrinsic and intrinsic caspase-dependent apoptotic pathways. Further investigation revealed that ivermectin-induced apoptosis in UC cells is mediated via c-Jun N-terminal kinase signaling. Herein, we demonstrated that ivermectin can be used as a new therapeutic agent for treating UC cells.

Introduction

Urothelial carcinoma (UC), also known as transitional cell carcinoma, is one of the most common malignancies worldwide [1]. Currently, therapeutic strategies such as surgery, radiotherapy, chemotherapy, targeted therapy, and immunotherapy are provided depending on the clinical stage of UC. However, the curative effects of these treatments are limited by the specific location, invasiveness, and recurrence of metastatic UC. The 5-year survival rate of patients with UC that has not yet spread out of the bladder is 70%; however, when the tumor extends outside the bladder or spreads to nearby lymph nodes, the 5-year survival rate decreases to <40%. Moreover, in patients with distant metastasis, the 5-year survival rate is 6% [2]. Therefore, there is an urgent need to develop a new therapeutic strategy for UC. However, considering the long process of developing novel drugs, the off-label use of current clinical agents is a good strategy for developing a new therapeutic agent for UC. In this study, we investigated the anti-UC activity of ivermectin in human UC cells. Ivermectin, an avermectin derivative, is a broad-spectrum drug widely used against parasitic infections in humans, including onchocerciasis, strongyloidiasis, ascariasis, cutaneous larva migrans, filariases, gnathostomiasis, trichuriasis, pediculosis and scabies, and its pharmacokinetic and pharmacodynamic properties as well as safety are well understood [3]. Moreover, it exhibits antiviral activity against various RNA and DNA viruses [4]. In addition, ivermectin has shown anticancer activity in various human cancers, including leukemia, melanoma, esophageal squamous cell carcinoma, glioma, and breast, ovarian, and colon cancers [5][6][7]. Moreover, it has been demonstrated to inhibit tumor proliferation, metastasis, and angiogenesis in various cancer cells [8], as well as reverse drug resistance when combined with clinical chemotherapeutic agents [9].
However, the molecular mechanisms underlying ivermectin's anticancer effects have not been investigated in human UC cells. Clinically, intravesical Bacillus Calmette-Guérin immunotherapy is used to decrease the progression of non-muscle-invasive bladder cancer; however, it may result in various side effects, such as chemical cystitis. Therefore, in contrast to chemotherapeutics administered orally or through intravenous injection in various cancers, intravesical administration is feasible in human UC. In this study, we used two human UC cell lines, T24 and RT4, to investigate the anti-UC activity of ivermectin. We found that ivermectin could suppress cellular proliferation and induce cell cycle arrest and apoptosis. Furthermore, ivermectin mediated caspase-dependent apoptosis. Finally, we identified a novel molecular mechanism underlying the induction of apoptosis in UC cells by ivermectin.

Cell culture and treatment

The human UC cell lines T24 and RT4 were procured from the Bioresource Collection and Research Center (Hsinchu, Taiwan). Both cell lines were cultured in McCoy's 5A medium supplemented with 10% fetal bovine serum and maintained under a 5% CO2 atmosphere at 37 °C. Ivermectin was purchased from Sigma-Aldrich (St. Louis, MO, USA) and dissolved in dimethyl sulfoxide (DMSO; Sigma-Aldrich) to prepare stock solutions for the experiments. As the vehicle control, cells were treated with 1% DMSO.

Cell proliferation assay

The cells were seeded into 96-well culture plates (5 × 10³ cells/well) and incubated with medium only (containing 1% DMSO as the negative control) or with medium containing ivermectin. Cell viability was determined using the Cell Counting Kit-8 (CCK-8) (Sigma-Aldrich), as previously described [10].

Cell apoptosis assay

Cells (1 × 10⁶ cells/dish) were treated with ivermectin or DMSO. Cell apoptosis was evaluated using the Annexin-V-FITC apoptosis detection kit (Strong Biotech, Taipei, Taiwan). After treatment, cells were incubated with FITC-labeled annexin-V and PI for 15 min at room temperature, and the intensity of annexin-V or PI fluorescence was determined using FACScan (Becton Dickinson); 10,000 cells were examined per sample.

Mitochondrial membrane potential assay

To evaluate the mitochondrial membrane potential (MMP), cells (1 × 10⁶ cells/dish) were treated with ivermectin or DMSO for the indicated time, following which MMP was detected using FACScan after staining with the fluorescent dye rhodamine-123 (2 mM; Sigma-Aldrich) for 2 h.

Statistical analysis

Data are presented as the mean ± SD of separate experiments. Differences between the test and control groups were analyzed using one-way ANOVA and Fisher's least significant difference test. A P value of <0.05 was considered statistically significant in all tests.

Ivermectin suppresses cell proliferation in human UC cells

To evaluate the anti-UC activity of ivermectin, two human UC cell lines, T24 and RT4, were treated with ivermectin, and cell viability was examined using the CCK-8 assay. The results revealed that ivermectin inhibited cell proliferation in these cells in a dose- and time-dependent manner (Figure 1). The IC50s in T24 cells were 20.5, 17.4, and 16.6 μM, and in RT4 cells were 26.7, 14.9, and 10.0 μM, at 24, 48, and 72 h post-incubation, respectively. These findings suggested that T24 cells were sensitive to ivermectin at short-term administration, whereas RT4 cells were sensitive to this drug at long-term administration.
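IC50 values like those above are typically obtained by fitting a sigmoidal dose-response curve to the viability data. The sketch below shows one common way to do this with scipy; the concentration and viability numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (Hill) dose-response model.
def four_pl(conc, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical CCK-8 viability data (% of DMSO control) at one time point.
conc = np.array([1.25, 2.5, 5.0, 10.0, 20.0, 40.0])        # ivermectin, uM
viability = np.array([98.0, 95.0, 88.0, 70.0, 42.0, 15.0])  # illustrative

# Initial guesses: bottom, top, IC50, Hill slope.
params, _ = curve_fit(four_pl, conc, viability,
                      p0=[0.0, 100.0, 15.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {params[2]:.1f} uM")
```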
Ivermectin induces cell cycle arrest at the G1 phase in human UC cells

To investigate the mechanism underlying the anti-UC activity of ivermectin, a cell cycle analysis was performed. T24 and RT4 cells were treated with ivermectin, and flow cytometry was used to estimate the percentages of cells in the different phases of the cell cycle. Ivermectin induced cell cycle arrest at the G1 phase (Figure 2A and Supplementary Figure 1). Furthermore, cell cycle markers, including p21, CDK2, CDK4, and cyclin D1, were examined in cells treated with ivermectin, which revealed increased p21 and decreased CDK2, CDK4, and cyclin D1 expression in these cells (Figure 2B). These findings demonstrated that ivermectin induces cell cycle arrest at the G1 phase in T24 and RT4 cells. In addition, we found a significantly increased sub-G1 population in T24 and RT4 cells under ivermectin treatment. However, RT4 cells exhibited a higher sub-G1 population than T24 cells when treated with ivermectin (Figures 2C and 2D), suggesting that ivermectin significantly increases apoptosis in RT4 cells.

Ivermectin induces caspase-dependent apoptosis in human UC cells

We found a significantly increased sub-G1 population in RT4 cells under co-incubation with ivermectin. To determine whether apoptosis is involved in the anti-human-UC activity of ivermectin, cellular apoptosis was examined via flow cytometry in cells treated with ivermectin. The results showed that ivermectin significantly increased early apoptosis in RT4 cells in a time-dependent manner (Figure 3A). Further investigation of the underlying mechanisms of ivermectin-mediated apoptosis in RT4 cells revealed that PARP and its upstream factors, including caspase-8, -9, and -3, were activated, whereas Bid and Bcl-xL expression had decreased (Figure 3B). These findings suggested that ivermectin could upregulate both the intrinsic and extrinsic caspase apoptotic pathways in RT4 cells. To confirm that the intrinsic pathway was involved in ivermectin-induced apoptosis, MMP was determined using rhodamine 123 staining in RT4 cells treated with ivermectin. A significant dose-dependent change in MMP was observed in RT4 cells (Figure 3C). In addition, ivermectin-mediated caspase-dependent apoptosis was also observed in T24 cells (Supplementary Figure 2). To confirm that the anti-UC activity of ivermectin involves caspase-dependent apoptosis, cells were pretreated with a pan-caspase inhibitor, Z-VAD-FMK, following which caspase-3 and PARP expression as well as early cellular apoptosis and survival were assessed after ivermectin treatment. As shown in Figure 4A, ivermectin-mediated caspase-3 and PARP activation could be suppressed by Z-VAD-FMK pretreatment. Moreover, under these conditions, ivermectin-mediated early cellular apoptosis was significantly reduced (Figure 4B). Total cell survival was also reversed in cells pretreated with Z-VAD-FMK (Figure 4C). These findings demonstrated that ivermectin treatment induced caspase-dependent apoptosis in human UC cells, thereby exhibiting antitumor activity.

Ivermectin mediates cell apoptosis through the JNK pathway

The mitogen-activated protein kinase (MAPK) signaling pathway plays an important role in tumor cell proliferation, differentiation, and survival. Moreover, ivermectin-attenuated phosphorylation of JNK, ERK, and p38 MAPK (p38) has been previously reported [9,12,13].
To further investigate the upstream signaling pathways involved in ivermectin-induced cellular apoptosis, western blotting was performed to determine the activation of the p38, ERK, and JNK pathways. Ivermectin significantly suppressed activation of the ERK and JNK pathways, but not that of the p38 pathway, in RT4 cells (Figure 5A). Therefore, PD98059 was used to enhance ERK suppression in cells treated with ivermectin, and ERK and PARP activation were detected by western blotting. The results showed that PD98059 could suppress ERK activation in RT4 cells treated with ivermectin (Figure 5B). However, under this condition, no further increase in the cleavage of PARP was found, suggesting that inhibition of ERK signaling was not the upstream mechanism of ivermectin-mediated apoptosis. Further analysis of cellular apoptosis using flow cytometry confirmed this finding (Figure 5B). In addition, SP600125 was used to suppress JNK activation, and activated JNK and PARP levels were detected using western blotting. The results showed that the combination treatment of ivermectin and SP600125 reduced JNK activation, thereby increasing PARP activation (Figure 5C). This finding suggested that JNK signaling was the upstream mechanism of ivermectin-mediated apoptosis in RT4 cells. Moreover, the combination treatment of ivermectin and SP600125 significantly increased anticancer activity, thus increasing apoptosis and reducing total cell viability (Figure 5C). Taken together, these findings demonstrated that ivermectin-induced apoptosis was mediated by JNK signaling.

Discussion

Currently, in addition to surgery, the induction of tumor apoptosis is the most successful treatment for human cancers. However, in patients with cancer, surgery is limited by the location of some cancers or by large-scale recurrence. Therefore, chemotherapy is a beneficial approach, exerting excellent effects. However, heterogeneous mutations resulting in tumor resistance to chemotherapy are common in various human cancers. Therefore, there is an urgent need to develop various chemotherapeutic agents for human cancers. UC is one of the most critical malignancies and can metastasize into proximal or distal tissues through invasion and migration, resulting in substantial mortality rates. UC remains a primary cause of health issues owing to its high recurrence rate and lack of treatment efficacy. Therefore, there is a pressing need for novel therapeutic strategies for patients with UC. In this study, we demonstrated that ivermectin, a classical therapeutic agent against parasitic infection, exhibits effective anti-UC activity through cell cycle regulation and apoptosis induction. Further investigation demonstrated that ivermectin-induced apoptosis in UC cells was mediated by suppression of the JNK signaling pathway and downstream caspase-dependent apoptosis. Ivermectin is a macrocyclic lactone derived from Streptomyces avermitilis and exerts a broad-spectrum effect against parasites. It also displays antiviral activity against various viruses [4]. Current research has demonstrated the potent effects of ivermectin against SARS-CoV-2 in vitro, regardless of strain and variant [14]. However, a clinical study in patients with COVID-19 infection who received ivermectin found no significant difference in the incidence of serious illness-related hospital admissions [15].
In addition, ivermectin has been reported to show antitumor activity in various human cancers, including leukemia, melanoma, esophageal squamous cell carcinoma, glioma, and breast, ovarian, and colon cancers, by increasing cell proliferation inhibition, cell cycle arrest, and cell apoptosis or autophagy [5][6][7]. Ivermectin has been used for many years to treat parasitic infection, and despite exhibiting antitumor activity in several cancers, its antitumor effect on human UC remains unclear. In this study, we demonstrated that treatment with ivermectin significantly suppressed human UC cell proliferation in a dose- and time-dependent manner (Figure 1). This finding is consistent with those of previous studies [5][6][7]. Notably, a previous study demonstrated that low ivermectin concentrations had no cytotoxic effect, whereas ivermectin at a concentration of ≥20 μM slightly inhibited the viability of normal cells after a 48 h exposure [6]. Our findings showed that ivermectin's IC50 values at 48 h after treatment were 17.4 μM in T24 cells and 14.9 μM in RT4 cells, both of which are significantly below the lethal dose for normal cells, indicating that ivermectin is a safe and effective therapeutic candidate for human UC. Additionally, intravesical injection, which may increase the dose and effects of ivermectin, is a possibility for the treatment of non-muscle-invasive human UC in addition to oral and intravenous administration. Ivermectin-mediated cell cycle arrest at the G1 phase has been reported in human glioma, adult T cell leukemia/lymphoma, and canine mammary tumor cells [16][17][18][19]. In this study, we also observed G1 phase arrest in human UC cells under ivermectin treatment (Figures 2A and 2B). Moreover, we demonstrated that ivermectin could induce caspase-dependent apoptosis in human UC cells. This finding is consistent with those of previous studies that used ivermectin to treat leukemia, glioblastoma, and cervical and colorectal cancer cells [7,17,[20][21][22][23]. However, it could not induce cellular apoptosis in human breast cancer cells [24], suggesting that ivermectin-induced apoptosis is not a general effect in human cancers. In addition, our results demonstrated that ivermectin activated both the intrinsic and extrinsic caspase pathways to cause cellular apoptosis (Figures 3 and 4). Moreover, upstream pathways involved in ivermectin-mediated cellular apoptosis, including oxidative stress and the NF-κB and Akt/mTOR pathways, have been previously reported [6,21,22]. Importantly, in this study, we found a novel upstream mechanism, i.e., the JNK signaling pathway, involved in ivermectin-mediated apoptosis in UC cells (Figure 5). In addition, activation of the Akt signaling pathway in RT4 cells decreased with ivermectin treatment (Figure 5). Because ivermectin suppresses the Akt/mTOR pathway and further mediates apoptosis induction in ovarian cancer cells [21], we speculate that ivermectin can also regulate cellular apoptosis in human UC cells by inhibiting the Akt signaling pathway. However, whether the oxidative stress or Akt/mTOR pathways are involved in ivermectin-mediated apoptosis in human UC cells needs further investigation. In addition, treatment with ivermectin reduced phospho-Erk expression while inducing apoptosis in RT4 cells (Figure 5A). However, our results demonstrated that suppression of the Erk pathway with PD98059 could not further elevate cell apoptosis in RT4 cells under ivermectin treatment (Figure 5B).
Therefore, the Erk pathway is not involved in ivermectin-mediated apoptosis in RT4 cells. Interestingly, suppression of Erk activity with PD98059 could reverse ivermectin-mediated PARP activation and partially reduce cell apoptosis in RT4 cells, suggesting that suppression of the Erk pathway provides feedback regulation of ivermectin-induced cell apoptosis. Further studies are needed to investigate this hypothesis. Moreover, cisplatin-based regimens are used to treat patients with advanced-stage UC; however, resistance and recurrence are common. Interestingly, ivermectin can synergize with chemotherapeutic agents, including cisplatin and 5-fluorouracil, to promote anticancer activity in esophageal squamous cell carcinoma cells [25], or with tamoxifen to treat breast cancer and osteosarcoma [26]. In addition, ivermectin shows synergistic activity with docetaxel, tamoxifen, and cyclophosphamide in breast and prostate cancers [24]. These findings suggest that ivermectin could be combined with standard clinical therapeutic agents to treat certain types of human cancers. The possible synergistic effects of ivermectin with anti-UC chemotherapeutic agents require further investigation.
2022-09-30T15:27:20.980Z
2022-09-11T00:00:00.000
{ "year": 2022, "sha1": "a0b8937add177041d18e8154fdcacc75e19aac0a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "b0c0d6f37da74183e904d178aa7b1989602155ad", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
213466541
pes2o/s2orc
v3-fos-license
Interaction between V2O5 nanowires and high pressure CO2 gas up to 45 bar: Electrical and structural study

Introduction

Carbon is the most fundamental element in ecological systems and biological organisms. The atmospheric concentration of carbon-containing gases, particularly carbon dioxide (CO2), is also known to be one of the main factors driving climate change, global warming and ocean acidification. Nevertheless, CO2 gas is widely used in industry, especially for styrene production. Styrene is a mainstay material in the polymer industry. It is mostly produced from ethylbenzene via the oxidative dehydrogenation (ODH) process with a transition metal oxide [1][2][3][4][5][6][7]. In the presence of inorganic oxidants, such as the metal oxides reported in recent decades, the ODH process of organic aromatic compounds is accelerated [8][9][10][11]. Among the various metal oxides, vanadium-based catalysts with various support materials have attracted attention because of their good catalytic performance, particularly their styrene yields and selectivity [12][13][14][15][16][17][18][19][20]. In ODH using a vanadium-based catalyst, especially V2O5, the valence state of the vanadium switches back and forth between V4+ and V5+, as shown in Fig. 1 [21,22]. However, the persistent reduction of V5+ to V4+ results in catalyst deactivation. In other words, a large amount of V5+ compared with that of V4+ enhances the activation process. A large amount of superheated steam has generally been used in the process as an oxidant, but in recent years CO2 gas has become the preferred alternative oxidant due to its advantages [1][2][3][4][5][6][7][12][13][14][15][16][17][18][19][20]. For example, in a CO2 atmosphere the latent heat is maintained throughout the entire reaction process [23], and there is a greater decrease in the partial pressure of the reactants with CO2 than with superheated steam [24]. This is the reason for the growing industrial interest in CO2 gas mentioned above. It has been reported that high gas pressure can lower the dissociation energy of the gas, resulting in the modulation of the physical and electronic properties of 2D materials [25][26][27][28][29][30]. This suggests that high gas pressure can enhance the catalytic effect. Moreover, if small-sized V2O5 is used as a catalyst, it is expected that the ODH reaction will be reinforced because of the increase in surface area. In this study, we synthesized V2O5 nanowires (VON) and investigated their structural modulation and electrical transport properties as a function of CO2 gas pressure from vacuum to 45 bar. The pressure-dependent conductance, G(P), decreased as the pressure increased, due to oxidation of the VON. This behavior was clarified by X-ray photoelectron spectroscopy (XPS), and the structural changes were studied by X-ray diffraction (XRD) patterns and Raman spectroscopy before and after exposure to high-pressure CO2. We found an increase in the interlayer distance in the VON, and an increase in the V5+ state, after the VON were exposed to high CO2 pressure. From the results of this study, we suggest that the ODH process with a VON catalyst can be improved by a high-pressure CO2 atmosphere.

Synthesis of the V2O5 nanowires

The VON were synthesized using a sol-gel method involving the polycondensation of vanadic acid in water [31].
VONs were synthesized from 5 g of ammonium meta-vanadate (Aldrich) and 50 g of acidic ion-exchange resin (DOWEX 50WX8-100, Aldrich) in 1 L of deionized water, and the mixture was then kept at room temperature to produce an orange sol that darkened with time.

Measurement of the electrical transport properties of VON with respect to CO2 gas pressure

A sol-gel-based VON film was prepared by drying the VON at 80 °C for 48 h under atmospheric conditions. The dried VON film was cut into 1 × 5 mm sections and attached to an insulating substrate to measure its electrical conductance as a function of CO2 gas pressure using a home-made pressure chamber. The VON film in the pressure chamber was heated at 80 °C under high vacuum (1.0 × 10⁻⁶ Torr) for 3 h to remove residues. After annealing, the VON film was cooled down to 300 K (300.00 ± 0.20 K), and this temperature was maintained during the entire measurement process. In this study, 99.999% CO2 gas was used. The CO2 pressure was increased in 5 bar steps up to 45 bar. G(P) was measured 30 min after reaching each target pressure. G(P) was fitted from the I-V curve of the VON film (the applied voltage was swept from −200 mV to 200 mV in 2 mV steps using a Keithley SCS-4200, USA).

Results and discussion

Morphology and structural investigation with SEM, XRD, and Raman spectroscopy

Fig. 2(a) shows an SEM image of the VON, which have diameters of about 10-20 nm, consistent with the previous literature [31][32][33]. The normalized XRD patterns of pristine VON and of VON after high-pressure CO2 gas exposure (CO2-VON) are shown in Fig. 2(b). The (0 0 1) peak of the CO2-VON has shifted to a smaller angle (2θ = 8.88° for VON and 8.75° for CO2-VON; inset of Fig. 2(b)), which indicates that the interlayer distance of the VON increased from 9.95 to 10.10 Å after CO2 exposure. In order to confirm the structural modulation, Raman spectroscopy was performed. Fig. 2(c) shows the normalized Raman peaks. The characteristic VON peaks were found [34][35][36]. The dominant peaks at 139 and 193 cm⁻¹ originate from the relative motions of the two V2O5 units belonging to the unit cell. The peaks at 280 and 405 cm⁻¹ are associated with the bending vibration of the V=O bonds. The peaks at 689 and 991 cm⁻¹, respectively, correspond to the bending vibration of the doubly coordinated oxygen (V2-O) and the stretching vibration mode of the shortest V-O1 bond. These six peaks did not change even after high-pressure CO2 exposure. The peaks at 297, 522, and 476 cm⁻¹ were assigned to the bending vibration, the stretching mode of the bridging triply coordinated oxygen (V3-O), and the bending vibration of the bridging V-O-V, respectively. Although the change in peak intensity was small, these three peaks were reduced after VON exposure to high CO2 gas pressure (see Fig. S1 in the Supplementary Information and the inset in Fig. 2(c)). This can be interpreted as follows. The amount of V-O-V and V3-O bonds is relatively small due to oxygen vacancies in the pristine VON. After CO2 exposure, the VON is oxidized. As a result, the amplitude of the vibration (phonon) in both bonds is weakened. This effect can be seen in G(P).

Electrical transport properties of VON with respect to CO2 gas pressure

Fig. 3 shows the electrical transport properties of the VON as a function of CO2 gas pressure from vacuum (~10⁻⁶ Torr) to 45 bar.
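As context for the G(P) values discussed next: the conductance at each pressure is obtained from a linear fit of the corresponding I-V sweep. A minimal sketch of that extraction is given below; the current array is a synthetic placeholder, not the measured data, and the slope is reported in siemens (the paper quotes G(P) in μA).

```python
import numpy as np

# Hypothetical I-V sweep: -200 mV to +200 mV in 2 mV steps, as in the text.
v = np.arange(-0.200, 0.201, 0.002)  # volts
rng = np.random.default_rng(0)
i = 1.3e-4 * v + rng.normal(0.0, 1e-7, v.size)  # placeholder current (A)

# Conductance G = dI/dV from a linear least-squares fit of the sweep.
g, intercept = np.polyfit(v, i, 1)
print(f"G = {g * 1e6:.2f} uS")
```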
As soon as the VON was exposed to 5 bar of CO2 gas, the G(P) of the VON dramatically decreased, from 26.33 to 13.92 μA, and it then gradually declined down to 1.97 μA at 45 bar of CO2 pressure. This behavior is similar to the oxygen pressure-dependent conductance of VON [37]. In general, charge transport in VON has been interpreted as small polaron hopping. The concentration ratio V4+/(V4+ + V5+) plays an important role in this transport behavior [25]. Specifically, the amounts of V4+ and V5+ significantly affect the charge transport properties, which are related to oxygen vacancies. It is well known that the charge carrier density in VON is proportional to the density of oxygen vacancies. Oxygen vacancies cause the reduction of V5+, producing V4+, which can be understood as V5+ plus an additional electron [38]. This means that the electrical conductance of VON decreases when oxygen vacancies are reduced.

X-ray photoelectron study before and after CO2 exposure

For this reason, the valence state of the vanadium in VON before and after exposure to CO2 was studied using XPS (Fig. 4). The survey spectra of pristine VON and CO2-VON are depicted in Fig. S2 in the Supplementary Information. Vanadium, oxygen, and carbon species were observed. The carbon peak in the pristine sample originates from the carbon tape used to support the sample, so we did not consider this peak. The peaks at approximately 530, 524, and 517 eV correspond to O 1s, V 2p1/2, and V 2p3/2 (Fig. 4). The O 1s peak consisted of three sub-peaks: V-OH at 533.29 eV, V-O-V at 531.65 eV, and lattice oxygen (O²⁻) at 530.29 eV. The amount of V-OH slightly increased after CO2 exposure (Table 1). This shows that the surface OH rarely changes after annealing and CO2 exposure. On the other hand, the amount of V-O-V bonds in the VON after CO2 exposure increased from 37.07 to 54.61%. V2O3, V2O5 (V5+), and VO2 (V4+) species were observed in the V 2p3/2 region. Note that the amount of the V2O5 species significantly increased, from 48.05% for VON to 71.89% for CO2-VON, while the VO2 species decreased from 45.72% to 18.18%. Since the charge transport in VON is mainly governed by the amounts of V4+ and V5+, as mentioned above, we focused on the vanadium species. The ratio V4+/V5+ changed from 0.952 for the pristine VON to 0.253 for CO2-VON. The decrease in V4+/V5+ in the VON after CO2 exposure indicates that the VON was oxidized by the CO2. A notable point is that G(P) continuously decreased and then saturated with the increase in CO2 pressure. This means that the high CO2 pressure enhanced the oxidation of the reduced VON.

Conclusions

This study investigated the effect of high CO2 gas pressure on VON conductivity, and revealed that pressure-dependent oxidation intrinsically reduced the conductance of the VON. G(P) continuously decreased as the CO2 pressure increased, reflecting an increase in V5+. This behavior was confirmed by XPS taken before and after exposure to high CO2 pressure. Upon CO2 gas exposure, the ratio V4+/V5+ was reduced by a factor of about four. The structural modulation resulting from CO2 gas exposure was also studied by XRD and Raman spectroscopy. The interlayer distance in the VON increased from 9.95 to 10.10 Å, due to an increase in the amount of V-O-V and V3-O bonds. This study provides a potential method for improving the ODH process using a VON catalyst in a high-pressure CO2 atmosphere.

Ethics statement

This article does not contain any studies with human or animal subjects.
2020-02-06T09:02:28.218Z
2020-01-30T00:00:00.000
{ "year": 2020, "sha1": "77daf68ee91593a09c07ba743e3892f9091f5f98", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jare.2020.01.014", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "775677cc122b2e0cccc93bc32e3d9cf045e4713b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
221491579
pes2o/s2orc
v3-fos-license
Predictors of Change in Weight Among People Living with HIV on Antiretroviral Treatment in West Hararghe Zone, Ethiopia: A Retrospective Longitudinal Study

Background Human immunodeficiency virus (HIV) remains a major global public health issue, particularly in Africa. In resource-limited settings like Ethiopia, regular weight measurement and monitoring are useful in the examination of patient response to antiretroviral therapy and in clinical decision-making. However, there is a paucity of evidence on factors that affect longitudinal weight change. Therefore, the present study was intended to identify predictors of weight change among people living with HIV (PLWH) in West Hararghe, Ethiopia. Methods An institution-based retrospective cohort study was conducted among 558 PLWH aged 18 years and above from September 2013 to January 2019 at Chiro Zonal Hospital and Gelemso General Hospital in the West Hararghe zone, Ethiopia. Data were entered in Epi Info 7 and analyzed in R software. A linear mixed effect regression model was used to identify predictors of longitudinal change in weight. Regression coefficients with their 95% confidence intervals were used to indicate the strength and significance of the associations. Results Weight showed improvement over the follow-up period. In this study, age of respondent (beta = 0.136; 95% CI: 0.044, 0.227), time since the initiation of antiretroviral therapy (ART) (beta = 0.089; 95% CI: 0.075, 0.104), primary educational status (beta = 2.403; 95% CI: 0.540, 4.266), secondary educational status (beta = 4.035; 95% CI: 1.666, 6.404), tertiary and above educational status (beta = 3.444; 95% CI: 0.330, 6.558), sex (beta = −5.514; 95% CI: −7.260, −3.768), ambulatory functional status (beta = −3.419; 95% CI: −6.169, −0.668) and baseline CD4 count above 200 (versus ≤200) (beta = 2.205; 95% CI: 0.593, 3.817) were significant predictors of longitudinal weight change. Conclusion We observed an increment in weight among PLWH who were on ART in Ethiopia. Educational status, time since the beginning of ART, age and having a CD4 count above 200 contributed positively to the change in weight, while ambulatory functional status and being female were negatively associated with longitudinal change in weight. Close monitoring is recommended for patients with ambulatory baseline functional status and for patients with a baseline CD4 count ≤200.

Introduction

Human immunodeficiency virus (HIV) remains a major global public health issue, responsible for 770,000 deaths in 2018, more than two-thirds of which occurred in the World Health Organization (WHO) African Region. Around 37.9 million people were living with HIV in 2019, and more than half of them (25.7 million) live in Africa. 1 In Ethiopia, 690,000 people were living with HIV and 11,000 people died of acquired immunodeficiency syndrome (AIDS)-related illness, according to the UNAIDS report in 2019. 2 People living with HIV (PLWH) have persistently lower subcutaneous adipose tissue compared with HIV-free participants. Most PLWH gain weight after antiretroviral therapy (ART) has started in the current ART era, despite relatively normal immune function and minimal pre-ART weight loss. 3,4 The most pronounced increase in weight occurs in the first year after the initiation of ART. A small proportion of patients may also be underweight. Developing countries are experiencing a double burden of underweight and overweight, which may increase the risk of cardiovascular disease among PLWH. 5,6 Weight is one of the clinical measurements used to assess the efficacy of ART.
It can be used as an objective index for the monitoring and evaluation of PLWH, among other diagnostic indices, particularly in developing countries. Ensuring a normal body mass index (BMI) for PLWH in clinical settings is crucial for providing important insights and identifying opportunities for interventions. 6,7 In resource-limited settings like Ethiopia, regular weight measurement and monitoring are useful for clinicians to examine the response of patients to ART, to predict the disease stage, and for clinical decision-making. Studies are needed to determine whether weight improvement prior to or at ART initiation will result in improved ART outcomes. Despite this, there is little evidence in Ethiopia on the factors affecting longitudinal weight change. Therefore, the present study was intended to identify predictors of weight change among PLWH on ART in West Hararghe, Eastern Ethiopia. The findings would be helpful for monitoring the effectiveness of ART and reducing the complications that may arise as a result of overweight and underweight. In addition, they would help in clinical decision-making and in improving therapeutic care.

Study Design and Settings

An institution-based retrospective cohort study was conducted among PLWH who were on ART between September 2013 and January 2019 at public hospitals in the West Hararghe zone of Oromia, Eastern Ethiopia. Among the five hospitals in the area, the study was conducted in Chiro Zonal Hospital and Gelemso General Hospital due to the availability of ART data. The two hospitals provide services to approximately 2,300,000 people in the area. At the time of the study, a total of 1120 patients from Chiro Zonal Hospital and 521 patients from Gelemso General Hospital were on ART.

Study Population and Sample Size

The study included all PLWH aged 18 years and above who started ART between 2013 and 2019 and were registered in the ART registries of Chiro and Gelemso Hospitals. Patients who had at least two weight measurements were included in the study. Patients who had a single body weight measurement, those with incomplete records, those transferred in from other treatment centers, and pregnant women were excluded from the study. The flow chart in Figure 1 shows how the study participants were selected. Out of the PLWH who started highly active antiretroviral therapy (HAART) in the two selected hospitals between September 2013 and January 2019, 558 patients who met the inclusion criteria were selected.

Data Collection Procedures

The data for this study were obtained from secondary sources and collected using a data extraction checklist. Baseline and follow-up weight data were identified and collected from the registration logbooks of HAART attendants. In addition, socio-demographic variables, visit times and clinical data were collected from the registration documents of patients. The data were collected by health professionals after they had been given adequate orientation.

Measurement of Study Variables

The outcome variable for this study was longitudinal weight change. Weight was measured using a standard weighing scale with 0.1 kg graduations and a measuring range up to 150 kg. The scale pointer was calibrated at zero before each measurement. Weight measurements were recorded to the nearest 0.1 kg. Weight change refers to the difference between the weight (kg) at the current visit and the weight (kg) at the visit immediately prior to the current one.
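As an illustration of how the per-visit weight-change variable and the mixed model described later can be set up, the sketch below uses pandas and statsmodels. The file and column names are hypothetical, and the AR(1) residual correlation used in the paper's R analysis is not included in this simplified version.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format follow-up data: one row per patient visit.
df = pd.read_csv("art_followup.csv")
df = df.sort_values(["patient_id", "visit_months"])

# Weight change: current weight minus weight at the immediately preceding visit.
df["weight_change"] = df.groupby("patient_id")["weight_kg"].diff()

# Linear mixed model on weight with a random intercept and a random slope
# for time per patient. (The paper's R analysis additionally used an AR(1)
# residual structure, which is not fitted here.)
model = smf.mixedlm(
    "weight_kg ~ visit_months + age + C(sex) + C(education)"
    " + C(functional_status) + C(cd4_base)",
    data=df.dropna(subset=["weight_kg"]),
    groups="patient_id",
    re_formula="~visit_months",
)
print(model.fit().summary())
```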
Data Quality Assurance

A preliminary assessment of the adequacy of the checklist was carried out, and variables for which data were not available were excluded from the checklist. Trained health professionals were assigned as data collectors. In addition, to ensure data quality, the completed checklists were checked for consistency and completeness. Strict supervision was applied by supervisors during data collection.

Data Processing and Analysis

Data entry was done using Epi Info 7, and the data were exported to R statistical software 3.6.2 for further analysis. Descriptive statistics such as means, medians, standard deviations (SDs) and tables were used to investigate the characteristics of the study participants. The linear mixed effect model (LMM) was used to identify predictors of weight change. Both the random-intercept and the random-intercept-and-slope models were examined to determine the fitness of the model. The model comparison was made using the likelihood ratio test. Predictors associated with longitudinal weight change in the bi-variable analysis at p-values below 0.2 were included in the multivariable linear mixed effect model. Regression coefficients of the final model and their 95% confidence intervals were used as measures of association between the predictors and the outcome variable. A p-value of less than or equal to 0.05 was considered statistically significant.

Ethical Consideration

Ethical clearance and a letter of cooperation for the selected hospitals were obtained from the Institutional Review Board of the University of Gondar. A support letter was obtained from the medical directors of the hospitals in order to access the medical records of patients. Data were fully anonymized, and no personal identifiers, such as names or private information, were collected. Confidentiality was maintained during all phases of the research activities, and data were held on a secure, password-protected system.

Results

A total of 558 PLWH were included in the study. The median age of the subjects was 32 years (IQR: 27.0-40.0). The majority of study subjects, 354 (63.4%), were female, 350 (62.7%) were from urban areas, and more than half (303)

Exploring Longitudinal Weight Changes

A minimum of two and a maximum of 33 weight measurements were taken for the ART patients during the study period. Figures 2 and 3 present the individual profile plots and the mean profile plot, indicating that there is variability in weight within and between patients. At baseline, patients had different starting points for weight, suggesting a random-intercept model, as can be seen from Table 2.

Model Comparison

The model comparison was made for the two nested models (a mixed model with random intercept, and a mixed model with both random intercept and slope) using the likelihood ratio test, and the result shows that the linear mixed effect model with both random intercept and slope was the more parsimonious model, consistent with the exploratory analysis result (Table 3). The variance-covariance structure showed that the correlation between repeated measurements diminishes over time. Therefore, a model with an autoregressive (AR1) covariance structure was selected as the better model for these data.

Discussion

This longitudinal study examined weight change and its predictors among PLWH. The expected weight for all patients at baseline is 8.455 but shows variation from one patient to another with a standard deviation of 0.115.
The study found that sex, educational status, age, time since ART initiation, functional status and baseline CD4 count were significantly associated with weight change. Time since the start of ART is one of the factors significantly associated with weight change. For a one-unit increase in time spent on ART, the expected weight of PLWH increases by 0.089. This finding is consistent with a study in Ethiopia 8 and may be due to the effect of combined antiretroviral therapy, which may result in changes in body fat composition over time. 9 In our study, female sex is an independent predictor of weight change. PLWH who are female are expected to have a mean weight 5.514 lower than that of male patients. This finding is supported by a study in a low-resource setting which reported that, after initiation of ART, the mean adjusted body weight change was higher in males than in females. 10 This could be because females may have less capacity to bear traumatic life stressors like HIV/AIDS; in addition, the biological (hormonal) differences of females from their counterparts make them more likely to experience psychological problems like anxiety and depression, which are negatively related to weight. Age is another significant predictor of weight change. The expected weight of PLWH increases by 0.136 for each one-year increase in the age of the patient. This finding is in line with a previous study in the USA, which indicated that as the age of the patient increases, PLWH gain weight due to an increment in lean body mass related to the effect of combined ART. Educational status is another variable that showed a significant association with weight change. Patients with primary, secondary, and tertiary and above educational status are expected to have mean weights 2.403, 4.035 and 3.444 higher, respectively, than uneducated patients. The possible explanation for this could be that literate patients have a better awareness of the importance of continuous intake of drugs and therefore have better adherence, and that literate persons may have a better socio-economic status and be able to eat balanced and adequate food. In the current study, the mean weight of patients with ambulatory baseline functional status is expected to be 3.419 lower than that of patients with working baseline functional status; this might be because patients with ambulatory baseline functional status may be more susceptible to opportunistic infections and complications, which, in turn, have a negative effect on weight. This study found that PLWH with baseline CD4 counts greater than 200 are expected to have a mean weight 2.205 higher than those with baseline CD4 counts less than or equal to 200. This finding is in line with previous studies, 8,11 which reported a significant positive association between weight change and CD4 count. This can be explained by the fact that patients with baseline CD4 counts above 200 have a lower risk of developing opportunistic infections and malignancies, which could contribute to higher mean weight. A limitation of this study is that it was based on secondary data from patient records, from which some important variables, such as nutritional status and substance use, could not be obtained. In conclusion, we observed an increment in weight among PLWH who were on ART in Ethiopia.
Educational status, time since the beginning of ART, age and having a CD4 count above 200 contributed positively to the change in weight, while ambulatory functional status and being female were negatively associated with longitudinal change in weight. Close monitoring is recommended for patients with ambulatory baseline functional status and for patients with a baseline CD4 count ≤200.

Data Sharing Statement

The datasets supporting the conclusions of this article are available upon request to the corresponding author. The data were interpreted by the authors entirely independently of the funding source. The funder had no role in the publication process.

Disclosure

The authors declare that they have no competing interests.
2020-08-20T10:05:14.608Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "7c47f47cbfcb0bf0edbf10fb59233f986b45a490", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=60692", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ddeebf378ef3229bec1ee458cdbc8c1cc0d96a7c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247596835
pes2o/s2orc
v3-fos-license
A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning

Vision-language models can encode societal biases and stereotypes, but there are challenges to measuring and mitigating these multimodal harms due to lacking measurement robustness and feature degradation. To address these challenges, we investigate bias measures and apply ranking metrics for image-text representations. We then investigate debiasing methods and show that prepending learned embeddings to text queries that are jointly trained with adversarial debiasing and a contrastive loss reduces various bias measures with minimal degradation to the image-text representation.

Introduction

Large-scale, pretrained vision-language (VL) models are growing in popularity due to their impressive performance on downstream tasks with minimal finetuning. Their success can be attributed to three main advances: the rise of transformers in natural language processing (NLP) (Devlin et al., 2018), cross-modal contrastive learning (Zhai and Wu, 2018) and the availability of large multimodal web datasets (Changpinyo et al., 2021). These models, including CLIP (Radford et al., 2021), are readily available through APIs (Evertrove; HuggingFace), allowing non-technical users to capitalize on their high performance 'out of the box' on zero-shot tasks (Kirk et al., 2021). Despite these benefits, an expansion in scope for downstream applications comes with greater risk of perpetuating damaging biases that the models learn during pretraining on web-scraped datasets which are too large to be manually audited for quality (Birhane et al., 2021). Cultural and temporal specificity is also of concern given that models are trained on a snapshot in space and time (Haraway, 2004), thus reinforcing negative stereotypes that may otherwise naturally alter through societal pressures and norm change. The risk and type of societal harm intimately interact with the downstream task at hand. Clearly, using VL models for dog-species classification poses very different dangers to projecting the similarity of human faces onto axes of criminality (Wu and Zhang, 2016; Fussell, 2020) or homosexuality (Wang and Kosinski, 2018). Applications of this kind are extremely hard to ethically motivate and there may be no appropriate use case that justifies their associated risks. Even in more benign applications such as image search, there may be harmful consequences arising from representational and/or allocational harms. Representational harms come from the technological entrenchment of stereotypical perceptions; for instance, the overrepresentation of one gender when querying for a profession (e.g., "nurse" versus "doctor") or of one ethnicity in explicit and NSFW content (Birhane et al., 2021). Allocational harms arise when an individual's or group's access to resources and opportunity is differentially impacted (Weidinger et al., 2021); for instance, if the ordering of images in search results shifts recruiters' perceptions about the real-world suitability of different peoples for different jobs (Kay et al., 2015). In this paper, we focus on the risk of representational harms when large-scale VL models are used to map sensitive text queries, such as "a photo of a criminal", onto face datasets. While frameworks to measure bias have been established for NLP and computer vision (CV) separately, there is considerably less work on VL (Agarwal et al., 2021).
Appropriate debiasing techniques for large-scale VL models are also sparse and face challenges from a lack of access to the original training data and the infeasible amount of compute required for retraining. For the successful and safe adoption of VL models, we need both effective measures of bias as well as efficient methods of debiasing.
Figure 1: Our proposed debiasing method for pretrained vision-language models. Sensitive text queries and images (with labeled attributes, e.g., gender) are fed to their respective frozen text and image encoders. We employ an adversarial classifier which aims to predict the image attribute labels from similarity scores between the outputs of the two encoders. Learnable "debiasing" prompt tokens are prepended to the sensitive text queries and optimized to maximize the error of the adversary. In this way, biased correlations between image-text similarity scores and attribute labels are reduced whilst preventing significant degradation of the joint image-text representation. Additionally, we jointly train with a contrastive loss on generic image-text pairs to further avoid degradation of the joint representation (not shown for clarity).
To this end, we make three contributions: (i) we investigate and evaluate different measures of bias for VL models, showing that some measures, such as WEAT, are inappropriate; (ii) we evaluate gender and racial bias in state-of-the-art VL models on two face datasets: FairFace (Kärkkäinen and Joo, 2021) and UTKFace (Zhang et al., 2017); and (iii) we provide a framework for debiasing VL models (see Fig. 1), requiring only sensitive attribute labels of images as supervision, and show that jointly optimizing for unbiasedness and image-text contrastive (ITC) losses via an array of learnable tokens prepended to text embeddings is the best strategy for mitigating bias without substantially degrading the quality of the image-text representation. Problem Statement We consider the problem of learning unbiased joint text-image representations. We first establish a framework for measuring the degree of bias in these representations. Consider a dataset of image-attribute pairs (I, A) where I is an image and A is its corresponding attribute from a set of disjoint protected attribute labels A = {A_1, ..., A_l}, for example photos of faces with gender labels. Suppose there is a set of sensitive text queries, T = {T_1, ..., T_m}, with corresponding concepts C = {C_1, ..., C_m}, such as the sentences "a photo of a good person", "a photo of a bad person" and their corresponding concepts "good" and "bad". Our goal is to learn a joint vision-language model Ψ that: (i) outputs a similarity score for image-text pairs, s = Ψ(I, T), where semantically similar image-text pairs are scored highly; and (ii) is unbiased, defined as outputting similar distributions of scores across attributes for a given text query which should be unrelated to demographic affiliation (see Sec. 2.2). Specifically, we consider the case where Ψ is initialized as a pretrained model that already achieves (i) but not (ii), as is the case with current pretrained VL models, which are often used for zero-shot classification, as well as image and video retrieval. We evaluate the bias of a model when applied to this scenario. Sensitive Attributes and Relevancy Some statistical associations between demographic groups and text queries are required for accurate and relevant text-image pairing in VL models.
This is especially true with historical or contextual associations; for instance, the expected over-representation of men in the query '19th century dockworker' or various minoritized groups in '1960s civil rights marches'. However, our framework assumes there is a reasonably concrete normative view that there exists a set of 'neutral' text queries like "a good/bad person" which hypothetically should be independent of demographic categories. This aligns with a notion of statistical parity (Dwork et al., 2012), where maintaining high-quality feature representations alongside debiasing specifically relates to conditional statistical parity (Corbett-Davies et al., 2017). Under this treatment of fairness, some associations with a sensitive attribute are legitimate and explainable, while others are illegitimate and unjust (Makhlouf et al., 2021). While this assumption underpins existing bias evaluations such as the Implicit Association Test (Greenwald et al., 1998), it is necessarily a simplification and does not resolve deep tensions in ontology and normative ethics, including questions over what sensitive attributes are relevant, what a 'legitimate' association is, or what a fair society should look like. These issues require ongoing, multi-disciplinary and multi-stakeholder discussions. We demonstrate a method for measuring and debiasing associations between a set of text prompts and demographic attribute labels, but the specification of the prompts and sensitive attributes can and should be adapted to the context and culture under which the VL model is applied and how the downstream task is defined. Bias Metrics WEAT. We first investigate the suitability of the Word Embedding Association Test (WEAT) (Caliskan et al., 2017) for measuring bias in VL models. WEAT is derived from the Implicit Association Test (IAT) (Greenwald et al., 1998), which measures the time-delay that human subjects take in associating a given demographic group with positive or negative descriptors. WEAT is used to measure the bias of word and sentence embeddings (Caliskan et al., 2017; May et al., 2019), and more recently has been adapted to evaluate the bias of vision encoders (Steed and Caliskan, 2021). The mathematical implementation of WEAT for the VL setting is described in App. A. Ranking metrics. We also apply bias measures from the information retrieval literature (Geyik et al., 2019; Yang and Stoyanovich, 2017) to the setting of text-image retrieval. This is a natural application given that VL models are increasingly used for semantic image search, introducing biases from the attributes which get ranked higher than others in the top k results. We describe the mathematical implementation of these metrics, namely Skew, MaxSkew and Normalized Discounted Cumulative KL-Divergence (NDKL), in App. B. Harmful zero-shot image misclassification. Agarwal et al. (2021) propose using the rates at which images of people are zero-shot misclassified into derogatory criminal and non-human categories. Implementation details for zero-shot image classification experiments are described in App. G. Debiasing The proposed debiasing method has two components: (i) the objective function to minimize for bias reduction; and (ii) the choice of parameters to optimize over in the VL model Ψ to minimize (i).
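Before detailing the fairness objective, it may help to make the measurement setup concrete. The sketch below is ours, not the authors' released code; it assumes OpenAI's open-source `clip` package and a local face image, both hypothetical stand-ins. It computes the similarity logits S = [s_1, ..., s_M] between one labeled face image and a set of sensitive queries; these logits feed both the ranking-based bias measures and the adversary introduced next.

```python
# Minimal sketch: similarity logits s_m = Psi(I, T_m) for one face image
# against M sensitive text queries, using a CLIP-like dual encoder.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

# Hypothetical sensitive queries built from IAT-style concept words.
queries = ["a photo of a good person", "a photo of a bad person",
           "a photo of a smart person", "a photo of a criminal"]
tokens = clip.tokenize(queries).to(device)

# "face.jpg" is a placeholder for one image from a labeled face dataset.
image = preprocess(Image.open("face.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(tokens)
    # Cosine similarity between L2-normalised embeddings.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    S = img_emb @ txt_emb.T          # shape (1, M): the adversary's input

print(S.squeeze(0).tolist())
```

Repeating this over a labeled dataset and grouping the resulting score distributions by attribute is exactly the comparison that the unbiasedness definition in (ii) above asks for.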
Fairness Objective with Adversarial Debiasing We follow a common approach in bias mitigation (Edwards and Storkey, 2015; Elazar and Goldberg, 2018; Xu et al., 2021) and employ an adversarial classifier, θ_adv, whose aim is to predict the attribute label A of image I given only its similarity logits over the set of sensitive text queries T, where S = [s_1, ..., s_M] ∈ R^M and s_m = Ψ(I, T_m). The adversarial classifier is trained to minimize the cross-entropy loss between the predicted attribute labels and the ground-truth attribute labels A:
L_adv = − Σ_{A∈A} A log θ_adv(S).   (2)
In this work, we define an unbiased representation as being blind to the sensitive attributes over the set of 'neutral' text queries, so we optimize the VL model to maximize this adversarial loss. Adaptation Methods Naïve optimization of the above objective function without any regularization can lead to trivial solutions, such as Ψ outputting the same logits irrespective of the image or text query. In this case, the feature representation loses all semantic information of the input, making it effectively useless for downstream tasks. We thus investigate regularization techniques (discussed below) that restrict the set of parameters in the image-text model Ψ which can be optimized over, as well as joint training of debiasing and image-text similarity objectives. Finetuning depth. Instead of optimizing all model parameters, a common regularizing adaptation technique is to finetune the layers in the image-text encoders to a certain depth (Zhuang et al., 2021). We instantiate Ψ as a dual-stream encoder (Radford et al., 2021; Mu et al., 2021), with text and image embeddings encoded via independent streams, s = Ψ(x, y) where Ψ(x, y) = Ψ_i(x)^T Ψ_t(y), and choose different finetuning depths for each encoder Ψ_i, Ψ_t, noting that Zhai et al. (2021) show finetuning only the text encoder Ψ_t improves generalization and reduces catastrophic forgetting of the original pretrained representation when compared to full finetuning.
Table 1 (concept words): C_train + clever, stupid, successful, unsuccessful, hardworking, lazy, kind, unkind, nasty, noncriminal, moral, immoral, rich, poor, trustworthy, caring, heroic, dangerous, dishonest, villainous, violent, nonviolent, honest.
Prepending learnable text tokens. Prompt learning has shown promising results for few-shot learning, when pretrained models are applied to downstream tasks with minimal additional data (Zhou et al., 2021; Wang et al., 2021b). The optimization over prompt tokens of a few thousand parameters (rather than the full model, which can be 100M+) enforces heavy regularization and prevents catastrophic overfitting to the few samples. We use this method to regularize the debiasing optimization, since unconstrained training to maximize the adversary's loss can simply collapse all embeddings. Following (Zhou et al., 2021), we prepend learnable text tokens to the text queries after they have been embedded by the token embedding layer (see App. F). Joint training with image-text similarity. To debias the model without losing strong image-text similarity performance, we add an auxiliary image-text contrastive (ITC) loss which is computed from batches of image-text pairs. ITC loss is used to train various VL models, including CLIP (Radford et al., 2021); however, this can be substituted with any image-text matching loss. Datasets The original IAT literature, from which this work draws inspiration, relies on the association between faces of different demographics and text attributes for measuring bias.
We also use two commonly-used face datasets as a comparable baseline for the novel application of these principles to the VL subdomain, but discuss limitations in Sec. 6. FairFace (Kärkkäinen and Joo, 2021) consists of 108,501 images of GAN-generated faces. This dataset places emphasis on a balanced composition by age, gender and ethnicity. The ethnicities are: White, Black, Indian, East Asian, South East Asian, Middle East and Latino. The training dataset for the utilized GAN was collected from the YFCC-100M Flickr dataset (Thomee et al., 2016). The UTKFace cropped image dataset (Zhang et al., 2017) contains 20,000 images with ethnicities: White, Black, Asian, Indian, and Others (like Hispanic, Latino, Middle Eastern). This is a notable limitation compared to FairFace, which has individual classes for each of these. UTKFace has different characteristics to FairFace, in terms of variance in lighting conditions, color quality and angle of portraits. Experimental Protocol Text query generation. We select pairwise adjectives from the IAT dataset. We use pairs of words which are uncorrelated with facial expressions or sensitive attributes, e.g., not "happy/sad" or "beautiful/handsome" (see Tab. 1). We expand the test set with unseen templates and concepts to assess generalizability. In order to produce single bias measures, we aggregate across text queries using the arithmetic mean over all templates. Bias metrics. Of the metrics defined in Sec. 2.3, we find that the effect size of WEAT is overly sensitive to changes in model architecture, evaluation dataset, as well as minor syntactic changes in text queries (see App. C). MaxSkew@k with k = 1000 and NDKL were found to be more robust measures, so they are used in the following experiments. Additional results for harmful zero-shot misclassification are presented in App. G. Downstream performance metrics. We report the zero-shot (ZS) performance on (i) Flickr R@5: recall@5 text-to-image retrieval on the Flickr-1k test set (Young et al., 2014); and (ii) IN1Kacc: zero-shot classification accuracy on ImageNet-1K. Debiasing baseline. We further compare our debiasing method to a simple baseline, CLIP-clip (Wang et al., 2021a), which performs feature selection on CLIP embeddings by removing the dimensions with the highest mutual information to the sensitive attribute labels of the images. The feature selection is computed on the training set and evaluated on the test set, with clipping done on both the image and text embeddings. Results Bias across model architectures and pretraining. The results in Tab. 2 indicate that higher feature quality comes from (i) models pretrained on larger datasets, and (ii) models with larger image encoders (RN50 < ViT-B/32 < ViT-B/16 < ViT-L/14). The FiT model breaks the pattern, which may be explained by its joint training on both images (CC) and video (WV) and higher quality datasets than YFCC15M. Increased pretraining dataset size decreases bias (both MaxSkew and NDKL).
Figure 2: Bias-accuracy (IN1Kacc) trade-off of our debiased models with varied ITC loss weights λ (in red) and CLIP-clip using different numbers of removed dimensions m (in blue).
Effectiveness of debiasing approaches. During adversarial debiasing, we tried adding an ℓ2 loss (Kaneko and Bollegala, 2021) between the original model embeddings and debiased model embeddings. However, finetuning in this setting did not reduce bias nor increase feature quality. To prevent the pretrained model's feature quality from degrading due to the adversarial loss, we use joint training with an ITC loss on FairFace30K (train).
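Before turning to the ablation results, here is a compact, self-contained sketch of the joint objective just described. It is a schematic reconstruction under simplifying assumptions, not the released code: the frozen dual-stream encoders are stubbed with linear layers, faces and generic captions are random placeholders, and only the prepended debiasing tokens are trainable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

M, D, N_ATTR, N_TOK, B = 12, 512, 2, 2, 64  # queries, dim, attributes, tokens, batch

# Frozen stand-ins for the pretrained dual-stream encoders.
text_encoder = nn.Linear(D, D).requires_grad_(False)
image_encoder = nn.Linear(D, D).requires_grad_(False)

query_tok_emb = torch.randn(M, 8, D)                 # token embeddings of the M queries
debias_tokens = nn.Parameter(torch.zeros(N_TOK, D))  # zero-initialised learnable tokens

# Small MLP adversary: predicts the image's attribute from the M similarity logits.
adversary = nn.Sequential(nn.Linear(M, 32), nn.ReLU(),
                          nn.Linear(32, 32), nn.ReLU(),
                          nn.Linear(32, N_ATTR))

opt_tok = torch.optim.Adam([debias_tokens], lr=2e-5)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=2e-4)

def encode_text(tok_seq):
    # Prepend the learned tokens to each token-embedding sequence, then pool
    # and project (a crude stand-in for a frozen transformer text encoder).
    pre = debias_tokens.unsqueeze(0).expand(tok_seq.size(0), -1, -1)
    seq = torch.cat([pre, tok_seq], dim=1)
    return F.normalize(text_encoder(seq.mean(dim=1)), dim=-1)

for step in range(100):
    faces = torch.randn(B, D)                 # placeholder face features
    attrs = torch.randint(0, N_ATTR, (B,))    # their sensitive-attribute labels
    i_emb = F.normalize(image_encoder(faces), dim=-1)
    S = i_emb @ encode_text(query_tok_emb).T  # similarity logits, shape (B, M)

    # (1) Train the adversary to predict attributes from the (detached) logits.
    opt_adv.zero_grad()
    F.cross_entropy(adversary(S.detach()), attrs).backward()
    opt_adv.step()

    # (2) Train the debiasing tokens to maximise the adversary's error, while a
    #     contrastive (ITC) loss on generic pairs preserves the representation.
    opt_tok.zero_grad()
    l_adv = -F.cross_entropy(adversary(S), attrs)
    gen_txt = encode_text(torch.randn(B, 8, D))                  # generic captions
    gen_img = F.normalize(image_encoder(torch.randn(B, D)), dim=-1)
    logits = gen_img @ gen_txt.T / 0.07
    targets = torch.arange(B)
    l_itc = 0.5 * (F.cross_entropy(logits, targets) +
                   F.cross_entropy(logits.T, targets))
    (l_adv + 0.05 * l_itc).backward()         # lambda = 0.05, as in App. F
    opt_tok.step()
```

Since only the text side carries trainable parameters, the image embeddings can be precomputed in the actual method, which is part of what makes the approach computationally cheap.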
The results of the ablation over debiasing approaches (see Tab. 3) show that while pure adversarial loss significantly reduces the bias metrics (-69% to -80%), it also reduces feature quality by up to 25%. Training only with the ITC loss shows a small increase in both feature quality (0% to 5%) and bias metrics (0% to 6%). It is only when training jointly with adversarial and ITC loss that bias metrics are significantly reduced (-52% to -65%), with feature quality either improving or staying relatively unchanged (+3% to -1%) compared to the baseline. Debiasing with different ITC loss weights (λ) allows us to explore the bias-accuracy tradeoff in our framework, and we compare our results to the results of CLIP-clip with different numbers of cutoff dimensions (m) in Fig. 2. For λ* = 0.05, our joint training method outperforms CLIP-clip in downstream performance for all values of m. For low values of λ ≤ 0.0001, our method lies within the Pareto frontier of CLIP-clip. However, operating on this part of the curve is undesirable given that accuracy drops to 55%. There are additional benefits of our method: CLIP-clip applies heuristic feature clipping, so it necessarily loses more information than just gender information in debiasing, because no single dimension of the feature vectors is dedicated to gender information. Therefore, it is of interest to have an effective debiasing method like ours that keeps all dimensions of the image-text embeddings. We further evaluate adversarial debiasing when training different parts of the model, as well as pure prompt learning (see App. H). The best bias results are achieved early on for all techniques in Tab. 3, and reach their optimum within 3 epochs, so our method is relatively computationally cheap (∼3 hrs per training run on 1 GPU). We note that for models with separate image and text encoders (all VL models in this paper), training prompt embeddings allows precomputation of image embeddings, thus decreasing computational cost significantly. Generalization across datasets and attributes. Table 4a shows the percentage change in bias measures when training with adversarial loss for gender attributes on FairFace then evaluating on UTKFace (and vice-versa). Training on FairFace shows larger reductions in bias metrics (-73% to -37%) than training on UTKFace (-35% to -3%). The FairFace training subset is ∼4× larger than UTKFace, which may explain the difference in reductions. When the FairFace-trained model is evaluated on UTKFace, NDKL is increased and MaxSkew is decreased, possibly due to lower diversity of facial expressions in UTKFace (Kärkkäinen and Joo, 2021). Thus, debiasing on FairFace appears to generalize better, but more work is needed to confirm this. Next, we evaluate the change in bias measures when training the same debiasing protocol with FairFace for gender attributes, then evaluating on FairFace with race attributes (see Tab. 4b). The bias reductions on race (-45% to -40%) are lower than the reductions on gender (-79% to -69%) but still of significant magnitude, demonstrating that debiasing on one attribute class can result in debiasing of other classes. Even though FairFace is well-balanced across gender, race, and their intersection, racial bias in the pretrained baseline is more than twice the gender bias (on both MaxSkew and NDKL).
Given the greater prevalence of face image datasets with gender annotations, it is encouraging that debiasing on gender also reduces racial bias, but further research is needed into cross-attribute debiasing generalization. Qualitative debiasing results. In Fig. 3, we present the top-5 ranked images for the text query: "A photo of a smart person.". Before debiasing, CLIP produces a distribution highly skewed towards male faces. After debiasing, the images are more balanced by gender and age. Related Works There have been multiple recent releases of open-source VL models (Radford et al., 2021; Mu et al., 2021; Bain et al., 2021), but research into bias measurement and mitigation has not kept pace, with only a few papers to date tackling these topics for VL (Agarwal et al., 2021; Zhao et al., 2021; Wang et al., 2021a). In this work, we therefore drew inspiration from the literature on dataset- and model-level bias in CV and NLP (Mehrabi et al., 2021). Bias in NLP. Large-scale language models are optimized to reflect statistical patterns of human language, which can be problematic if training datasets contain harmful or misrepresentative language (Weidinger et al., 2021). We adopted the idea of adversarial finetuning in our work because, as well as being effective, it is computationally cheap and does not require access to the original dataset. Bias in vision-language. Some work measures bias in VL representations. The authors of the original CLIP paper investigated manifestations of bias within their own model (Agarwal et al., 2021) by assessing the misclassification of faces by age or race with non-human and criminal categories. Wang et al. (2021a) propose a simple debiasing method via feature engineering by removing the dimensions in CLIP embeddings most associated with gender bias; however, this guarantees feature degradation due to significant information loss. The sparse literature on debiasing VL models falls into two categories: (i) dataset-level debiasing (Zhao et al., 2021) and (ii) model-level debiasing (e.g., Wang et al., 2021a). Limitations and Ethical Considerations Our methods and findings are subject to some limitations, as well as some ethical considerations of how bias and fairness are operationalized. Assumptions on computational restrictions. Our methods rest on two assumptions about the setting of the downstream application, namely that (i) the VL model is too large to be pretrained from scratch within the computational budget, and (ii) there is no access to the original training dataset. In the absence of those assumptions, we strongly encourage employing ethical dataset curation practices as well as including fairness considerations in the initial training of the model. However, in the case where our assumptions hold, our method provides a cheap, simple yet effective method for debiasing VL models. Context-dependency of the debiasing goal. One limitation in the applicability of our debiasing method comes from the fact that any "desired distribution" of age, gender, ethnicity or other identity factor is related to (and may have to stem from) the context in which the model is developed or deployed. For example, the demographic distribution of ethnicities and their lived experiences varies across countries or regions, so when debiasing VL models, different sensitive attributes and text prompts may be more or less relevant. Our bias measurement and mitigation techniques can be applied to any set of sensitive attribute queries and text prompts, but defining how these relate to bias is a normative, subjective and contextual question.
Lack of intersectional analysis. Due to practical constraints on available dataset labels, our experiments have only investigated social bias with respect to gender and ethnicity attributes. We encourage future research on more attributes, as well as intersectional analysis of how biases stack together (e.g., age and gender together may display much larger bias than either in isolation). However, we expect our mitigation and measurement techniques to work with similar efficacy and efficiency in intersectional experiments. Focus on representational harms. We primarily focus on representational harms, i.e., the harms which arise from unjust, inequitable portrayals across demographic groups. The problematic entrenchment of harmful norms is clear if marginalized groups are more highly associated with negative, criminal or non-human traits, while societally dominant groups are associated with positive traits such as being 'smart', 'good' or 'kind'. These representational harms can appear in common downstream use cases of VL models, including image captioning or image search, with a potential mechanism for concomitant allocational harms. For example, an individual applying for a certain job may be discouraged if all faces returned by Google search on the position do not match their own identity, or a recruiter may be influenced towards unfairly prioritizing applicants from the well-represented demographic. We do not explicitly test allocational harms and suggest future research should explore both general and case-specific settings by engaging multiple stakeholders and affected communities (Weidinger et al., 2021). Sole focus of bias in face images. Face datasets were used in original research on implicit bias (Greenwald et al., 1998) and have been adopted widely for bias in machine learning contexts, especially in the computer vision community. This motivated our use of face datasets in the subdomain of VL. Note that many well-known large face image datasets present privacy and representational issues, and that FairFace (Kärkkäinen and Joo, 2021) thus serves an important role in ethical bias research due to its synthetic nature. However, focusing only on face datasets encodes only a narrow presentation of social bias. In reality, social, cultural and historical biases extend far beyond face images, and include associations on cultural artifacts, practices and geographic localities. We encourage future work on broader presentations of bias and harms in addition to those captured from captioning face datasets. Code of ethics. Our method can be applied to reduce representational harm in search queries. Our methods avoid using costly and environmentally damaging training procedures. We use the privacy-preserving dataset FairFace, which avoids potential non-consensual use of face images, but UTKFace may entail privacy risks. We do not employ human annotators in any capacity. Conclusion This paper establishes a framework for measuring and mitigating bias in VL models. Firstly, we demonstrate that ranking metrics (specifically MaxSkew and NDKL) are effective bias measures. We report these metrics for a range of pretrained VL models for gender and racial bias in photos of faces. Our results confirm previous findings in other domains that (i) more pretraining data correlates with lower model bias, and (ii) training models with SSL can reduce bias.
Secondly, we demonstrate a supervised adversarial debiasing method of VL models via learned "debiasing" tokens on publicly-available face image datasets with attribute labels. The proposed method demonstrates a substantial reduction over a suite of bias metrics for gender and race attributes, with feature degradation being wholly mitigable using joint training with an ITC loss on small publicly-available image datasets. Future work could include (i) debiasing during the pretraining stage, with SSL showing a promising avenue in that regard, or (ii) defining a wider diversity of attributes, such as removing the harmful assumption of binary gender or considering intersectional biases. We encourage researchers in VL to continue to investigate bias in their models, be transparent in documenting model weaknesses using metrics like those proposed in this paper, and seek to apply relatively cheap and easy debiasing protocols like ours. Our code, models and debiasing tokens are publicly available for the community to use in the hope that progress can be made towards the safer and fairer use of this technology in society. A WEAT Here each concept C_i and attribute A_i contain embeddings in a common space for stimuli associated with them (e.g., 'office' and 'business' for the concept 'career', and 'boy', 'father' and 'man' for the attribute 'male'). Now the differential association between concepts C_1 and C_2 and attributes A_1 and A_2 is defined as
s(C_1, C_2, A_1, A_2) = Σ_{c∈C_1} s(c, A_1, A_2) − Σ_{c∈C_2} s(c, A_1, A_2),
where, with µ denoting the arithmetic mean,
s(w, A_1, A_2) = µ_{a∈A_1} cos(w, a) − µ_{a∈A_2} cos(w, a)
measures the differential association of w with the attributes using cosine similarity. The significance of this association is computed using a permutation test. Denoting all the equal-size partitions of C_1 ∪ C_2 by {(C_1^i, C_2^i)}_i, we generate a null hypothesis of no bias and compute the p-value
p = Pr_i[ s(C_1^i, C_2^i, A_1, A_2) > s(C_1, C_2, A_1, A_2) ].
Finally, the effect size, i.e., the normalized measure of the separation between the associations of the targets and attributes (Caliskan et al., 2017), is defined as
d = ( µ_{c∈C_1} s(c, A_1, A_2) − µ_{c∈C_2} s(c, A_1, A_2) ) / std_{w∈C_1∪C_2} s(w, A_1, A_2).
In the case of WEAT, all attributes and categories are word embeddings. In our experiments, we have cross-modal interactions where the target concepts C are inferred from the text queries T and are the corresponding embeddings from the text encoder of the vision-language model, and attributes A are the image embeddings from the vision encoder. B Ranking metrics The following outlines the mathematical implementation of three bias metrics. Let τ_T be a ranked list of images I according to their similarity to a text query T, and τ_T^k be the top k images of the list. Skew@k. Skew@k measures the difference between the desired proportion of image attributes in τ_T^k and the actual proportion (Geyik et al., 2019). For example, given the text query "this person has a degree in mathematics", a desired distribution of the image attribute gender could be 50% to ensure statistical parity. Let the desired proportion of images with attribute label A in the ranked list be p_{d,T,A} ∈ [0, 1], and the actual proportion be p_{τ_T,T,A} ∈ [0, 1]. The resulting Skew of τ_T for an attribute label A ∈ A is
Skew_A@k(τ_T) = ln( p_{τ_T^k,T,A} / p_{d,T,A} ).
This measurement gives an indication of possible representational bias (Weidinger et al., 2021), with certain attributes being under-represented in the top k search results (i.e., a negative Skew_A@k).
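As a concrete illustration of the definition above (and of the MaxSkew@k aggregation defined next), the short sketch below computes Skew@k over a toy ranking; the attribute labels and the statistical-parity target are illustrative placeholders, not data from the paper.

```python
# Minimal numpy sketch of Skew@k: compare desired vs. actual attribute
# proportions in the top-k retrieved images for one text query.
import numpy as np

def skew_at_k(ranked_attrs, attr, k, desired):
    top_k = np.array(ranked_attrs[:k])
    actual = np.mean(top_k == attr)
    return np.log(actual / desired[attr])   # negative => under-represented

ranked = ["male"] * 700 + ["female"] * 300  # attribute labels of top-1000 images
desired = {"male": 0.5, "female": 0.5}      # statistical-parity target

skews = {a: skew_at_k(ranked, a, 1000, desired) for a in desired}
print(skews)                                # male ~ +0.34, female ~ -0.51
print("MaxSkew@1000:", max(skews.values()))
```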
However, Skew_A@k has a couple of disadvantages: (i) it only measures bias with respect to a single attribute at a time, and so must be aggregated to give a holistic view of the bias over all attributes A; and (ii) different chosen values of k give different results, so more than a single Skew value would need to be computed for each attribute. These disadvantages form the basis of the next two measures, proposed by Geyik et al. (2019), which address each of these limitations. MaxSkew@k. MaxSkew@k is the maximum Skew@k among all attribute labels A of the images for a given text query T:
MaxSkew@k(τ_T) = max_{A_i∈A} Skew_{A_i}@k(τ_T).
This signifies the "largest unfair advantage" (Geyik et al., 2019) belonging to images within a given attribute. The desired outcome is 0, implying that the real distribution is equal to the desired distribution (e.g., all genders are equally represented in the ranked images, when the desired distribution is uniform). Normalized Discounted Cumulative KL-Divergence (NDKL) employs a ranking bias measure based on the Kullback-Leibler divergence, measuring how much one distribution differs from another. This measure is non-negative, with larger values indicating a greater divergence between the desired and actual distributions of attribute labels for a given T. Let D_{τ_T^i} and D_T denote the discrete distribution of image attributes in τ_T^i and the desired distribution, respectively. NDKL is defined by
NDKL(τ_T) = (1/Z) Σ_{i=1}^{|τ_T|} (1 / log_2(i + 1)) d_KL(D_{τ_T^i} || D_T),
where Z = Σ_{i=1}^{|τ_T|} 1 / log_2(i + 1) is a normalization factor. The KL-divergence of the top-k distribution and the desired distribution is a weighted average of Skew_A@k measurements (averaging over A ∈ A). Thus, this aggregation overcomes the first disadvantage of Skew; however, NDKL is non-negative, and so it cannot distinguish between two "opposite-biased" search procedures. C Measuring bias across different model architectures, datasets, and syntactic changes In Fig. 4 we report the defined bias measures (WEAT, NDKL and MaxSkew) across changes in vision-language model encoders, datasets and minor syntactic changes to the text queries T. Since WEAT uses a template to fill in with concepts, it is not directly comparable to the text queries used in NDKL and MaxSkew. We report these results only to illustrate the high variance of bias measurement results over small changes in the syntax of templates, model architecture and dataset. We note that WEAT measured on UTKFace has an opposing sign to WEAT measured on FairFace. Furthermore, with small syntactic changes in template, WEAT produced both positive and negative results on both FairFace and UTKFace. This may be explained by the fact that WEAT was primarily designed for single word embeddings, while we are using long prompts. May et al. (2019) found SEAT (Sentence Embedding Association Test) to fail for analogous reasons. Accordingly, we implement MaxSkew@1000 and NDKL, which show consistent performance in measuring bias across different model architectures, datasets and minor syntactic changes. D Performance effects of learnable text token initialization In Tab. 5 we show the effects on zero-shot performance when adding zero-initialized text tokens to the text queries, before any debiasing training has occurred. We note there is a substantial drop in performance in both Flickr image retrieval and CIFAR image classification, with the drop increasing with the number of tokens added in both the prepending and appending settings.
This suggests that the reduced ZS performance of the debiased model is not due to the adversarial learning but rather to the learnable text tokens, which shift the distribution of the text query. E Debiasing Prepending learnable text tokens. We initialize these learnable tokens as the zero-pad embeddings, to minimize deviation from the embedding of the original text query, and optimize over the learnable tokens, while the rest of the model weights are frozen. However, even with zero-pad initialized token embeddings, the token embeddings of prompts are different from their non-prepended counterparts, and so the text-encoder outputs are slightly modified. This results in a degradation of model performance before any training has occurred. F Experimental protocol Debiasing implementation. Models are trained using an NVIDIA GTX Titan X with a batch size of 256. The adversarial classifier is a multilayer perceptron (MLP) with ReLU activation, two hidden layers of size 32, input size equal to the number of training text prompts, and output size equal to the number of sensitive attributes that we debias over, dim(A). We train with the Adam optimizer (Kingma and Ba, 2015) and use learning rates of 2 · 10^−5 and 2 · 10^−4 for CLIP and the adversarial classifier, respectively. Following an initial two epochs of only training the adversarial model, the CLIP and adversarial model are alternately trained for 10 batches each. Minimal parameter tuning is employed due to the computational costs. Early stopping is implemented if the CLIP model performance, as tested on CIFAR100 (Krizhevsky, 2009), chosen over IN1Kacc monitoring due to its smaller scale, or Flickr-1k (Young et al., 2014), drops below 50% of the original accuracy. The small size (measured in number or size of hidden layers, or total # of parameters) of the adversarial model is motivated by the size of its input (fewer than 20 training prompts) and the size of its output (fewer than 10 sensitive attributes). We expect even the small adversarial model to remove any linear and reasonable non-linear relationships between the output logits of our vision-language models, i.e., to be able to find bias if and when it exists. For finetuning, we choose to train all combinations of the last three layers of the text encoder (transformer-based with 12 layers total), the last three image encoder layers (also transformer-based with 12 layers) and the two projections from text and image feature space to the embedding space. We purposefully do not choose to train the entire model, as the expected feature quality loss is large, as well as the memory and computational requirements being significantly higher than for training only 25% of the model's parameters. We experimented with other implementations of prompt learning than prepending tokens (e.g. appending or adding learned embeddings, and different initializations, e.g. zero-pad, embedding of a common token from the training corpus, and uniformly random), but these variations showed different feature and bias metric results only at the start of training, and no significant change in final results. As the number of learned tokens impacted feature quality, we chose 2 tokens as a reasonable trade-off (more tokens giving lower feature quality). For ITC joint training we used λ = 0.05 with image-text batches from the Flickr30K training set, unless otherwise specified. G Harmful Zero-Shot Misclassification We follow the protocol of Agarwal et al.
(2021) by using CLIP to classify images from the FairFace validation set into different categories: the 7 · 2 = 14 FairFace ethnicity-gender class pairs, non-human categories (animal, gorilla, chimpanzee, and orangutan) and crime-related words (thief, criminal and suspicious person). We then look at the percentage of images that are misclassified into the non-human and crime classes. The original implementation is lacking in details, and it is unclear if they use a template approach. We use the template "a photo of a {}", since it is the standard for all other CLIP measurements. We also tried performing the test without using a query template, but classification accuracy was significantly reduced for all images. Tab. 6 shows the results directly taken from Agarwal et al. (2021) alongside results from our implementation with the pretrained baseline CLIP ViT-B/16. Our gender-debiased model trained on FairFace has a lower misclassification rate into crime-related classes than the pretrained baseline. While the non-human misclassification rate was marginally higher than baseline, the absolute rates are still comparable and very low (<1%). For all ethnicities with misclassification rates greater than 1% from the pretrained baseline, our debiased model reduces the rate by half or more (-43% to -96%). H Additional Results In Tab. 7 we show the result of finetuning over different parts of the model as well as pure prompt learning, all with pure adversarial training. The strong regularization from having few learned embeddings keeps the feature quality at an acceptable level, while finetuning larger parts of the model lowered model performance to an unacceptable level very quickly during training.
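For concreteness, a minimal sketch of this App. G protocol is given below. It is our reconstruction, not the original implementation: it assumes OpenAI's open-source `clip` package, and the validation image paths are hypothetical placeholders.

```python
# Sketch of the harmful zero-shot misclassification test: classify each face
# against ethnicity-gender, non-human, and crime-related classes via the
# "a photo of a {}" template, then tally misclassification rates.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

human = [f"{e} {g}" for e in ["white", "black", "indian", "east asian",
         "southeast asian", "middle eastern", "latino"] for g in ["man", "woman"]]
nonhuman = ["animal", "gorilla", "chimpanzee", "orangutan"]
crime = ["thief", "criminal", "suspicious person"]
classes = human + nonhuman + crime
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

@torch.no_grad()
def predict(path):
    img = preprocess(Image.open(path)).unsqueeze(0).to(device)
    i, t = model.encode_image(img), model.encode_text(text)
    i = i / i.norm(dim=-1, keepdim=True)
    t = t / t.norm(dim=-1, keepdim=True)
    return classes[(i @ t.T).argmax().item()]

paths = ["fairface_val/0001.jpg"]     # hypothetical validation image paths
preds = [predict(p) for p in paths]
print("crime rate:", sum(p in crime for p in preds) / len(preds))
print("non-human rate:", sum(p in nonhuman for p in preds) / len(preds))
```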
Encapsulated Phase Change Material Slurries as Working Fluid in Novel Photovoltaic Thermal Liquid Systems: A Comprehensive Review Today, energy generation from renewable energy sources is of great interest. Photovoltaic (PV) systems, in this regard, have much to offer, but they suffer from low efficiency, which further deteriorates due to overheating under insolation. So, they need removal of heat from their bodies for better efficiency, which resulted in the introduction of PV-Thermal (PVT) systems, which feature heat transfer fluids (HTF) to draw the heat and deliver it to other systems that make use of it. Nevertheless, the best HTF has yet to be developed. Water-based fluids with additives or nanoparticles seemed like a good choice until HTFs that featured the use of encapsulated phase change materials (ePCM) were proposed. The findings of early studies and subsequent research revealed that the use of ePCM slurries (ePCM-Ss) as the working fluid in PVT systems increased the thermal efficiency, electrical efficiency, and overall efficiency without a notable increase in pumping power. However, the preparation of ePCM-Ss is much more complex in many aspects compared to conventional HTFs, as it involves numerous parameters, including but not limited to the use of various shell and core materials, the variety of production methods, the homogeneity of the resulting capsules, the use of additives, the core-to-shell ratio, and the mass fraction of ePCM in the slurry. All these require an extensive and exhaustive study with quite a lot of background knowledge and interdisciplinary collaboration, as the proper selection of PCM materials and synthesis methods, as well as the correct concentration in the best carrier fluid (CF), involve several aspects and expertise in a number of other fields. These parameters also significantly diversify and differentiate ePCM-S by affecting its suspension stability, rheological properties, and thermal properties. In recent years, PCMs have become an attractive research field for researchers due to their advantages. Although there are quite a number of studies addressing ePCM-S, none provide a holistic approach, and they just deal with a certain aspect of this broad topic. This study, therefore, aims to constitute a fundamental guide to refer to from the very beginning to the final implementation of the ePCM-Ss as the working fluid in PVT systems by addressing all steps, aspects, and almost all effective parameters in terms of advantages, disadvantages, challenges, and opportunities. Introduction Today, due to the rapid depletion of fossil-based energy sources (Al-Waeli et al. 2017b; Kara 2020; Nguyen et al. 2021; Öner et al. 2016) and the fact that the existing reserves are insufficient to meet the rapidly increasing demand for energy for a long time (Deymi-Dashtebayaz et al. 2022; Chen et al. 2014a, b), as well as the negative environmental effects of fossil fuels (Abdullah et al. 2018; Chen and Fang 2011), research aimed at the development of new technologies regarding the use of renewable energy resources and at improving the efficiency of existing applications has been gaining importance (El Chaar et al. 2011; Ghodbane et al. 2022). This need came further into focus after recent developments, such as the COVID-19 pandemic and its consequent impacts on the global energy system (Hoang et al. 2021b) and the growing international calls to cut down on GHG emissions, on which renewable energy sources have much to offer (Jäger-Waldau et al. 2020).
A wide variety of research has been done on all renewable energy sources, and great progress has been made (Fu et al. 2021). Undoubtedly, solar energy is the most prominent (Adun et al. 2022) in terms of both total potential and accessibility (Allouche 2016; Awad et al. 2022), as well as in terms of the diversity of applications apart from energy generation, such as drying, dehumidification, desalination, distillation, etc. (Hoang et al. 2021a). Solar energy has the potential to meet a significant portion of the world's energy needs (Al-Waeli et al. 2019a), and it is projected to play a crucial role in meeting the world's electric supply by the year 2030 (Fu et al. 2021), especially as incorporated into smart energy systems in the smart cities of the near future (Hoang et al. 2021c). Nevertheless, solar energy, which exists with great potential in vast geographies worldwide (Awad et al. 2022), has some considerable handicaps that need to be overcome, such as the need for innovative absorber designs (Abdullah et al. 2018) due to non-uniform cooling requirements, a long payback time (Al-Waeli et al. 2019a), high initial investment and installation costs (Cartmell et al. 2004), limitations on the installation area owing to the lack of a proper shape and structure for integration into existing buildings or other structures, and the need for large areas for discrete installations (Al-Waeli et al. 2019a; Öner et al. 2016). Basically, there are two types of solar systems: solar cells (photovoltaic systems, PV) and solar panels (solar water heaters or collectors) (Fudholi et al. 2013; Said et al. 2022; Zondag 2008). PV systems convert solar radiation into electricity and are more industrial in terms of their application areas and utilization. However, they are under the strong influence of a number of environmental and system parameters, which are mainly referred to as "sustainability parameters." Not all incident solar irradiation can be converted into electricity, and the greater part accumulates as heat (Brahim and Jemni 2017) in the panel body, which in turn causes the electrical efficiency of the system to decline (Agyekum et al. 2021a; Chen et al. 2017). The photovoltaic thermal (PVT) combined systems have been put forth as capable of solving this problem, i.e., the accumulation of heat in the panel body and the low efficiency brought on thereby (Chandel and Agarwal 2017a), by the use of heat transfer fluids (HTFs) to maintain cell temperature at or close to the optimum operating temperature. As such, a typical PVT system is made up of solar cells combined with a solar thermal system that uses various HTFs. PVTs have a very wide application area (Daghigh et al. 2011) and have diversified according to fluid type, cover, and absorber design (Gelis et al. 2022), as shown in Fig. 1. With the simultaneous cooling of the PV system by the thermal system, the increase in the electrical resistance caused by increased temperature can be prevented to a great extent, and the maximum efficiency value of the PV cells at the optimum operating temperature can be maintained. A typical PVT-air system, shown in Fig. 2, offers an electrical efficiency of 8% and around 40% thermal efficiency (Solanki et al. 2009), and such systems are low-cost and easy to integrate (Tonui and Tripanagnostopoulos 2007). However, air has a low heat capacity and has limited use as a warm/hot medium. On the other hand, while PVT-water systems were shown to have provided an approximately 15% increase in the electrical efficiency (Gelis et al. 2022)
compared to conventional PV, the quest for developing new and high-performance HTFs has been gaining momentum (Sardarabadi et al. 2017). The main idea has always been reclaiming the waste heat and preventing efficiency drops in exchange for as little pumping power as possible, as well as spending less on the HTF and the relevant system components. In this regard, the new HTF should have a high specific heat capacity, be inexpensive and readily available, and require no great pumping power. As a result of numerous studies conducted so far, water-based special-purpose fluids have stood out as reasonably good alternatives. Thermal energy storage, as shown in Fig. 3, can be achieved by chemical or thermal means, which have several alternative applications that can be used to implement active or passive cooling (Browne et al. 2015b), or an effective combination thereof (Agyekum et al. 2021a), in PVT systems. As such, the heat storage density of an HTF can be increased by incorporating phase change materials (PCM), thereby achieving a better heat transfer rate as well as lower flow rates for a certain heat transfer rate (Salunkhe and Shembekar 2012). On the other hand, PCM suspended in HTF, which is called PCM slurry (PCM-S), was reported to store or carry a greater amount of thermal energy compared to PCM particles that are not in direct contact with the HTF (Salunkhe and Shembekar 2012). Although PCM suspended in HTF increases the viscosity of the fluid, hence the required pumping power, such an effect remains within acceptable limits (Al-Waeli et al. 2019b; Cao et al. 2019; Trivedi and Parameshwaran 2020). Nonetheless, PCM-Ss have a few issues that must be addressed. PCM particles can solidify in the channels of the heat exchangers and cause clogging. Also, the stability of the PCM slurry is not very good above the melting temperature of the PCM, and the small PCM droplets may coalesce with each other over time, eventually separating into fully dissociated layers from the CF. In order to overcome such issues, PCM was dispersed as small droplets and encapsulated. No such incorporation problems occur when slurry preparation is carried out using ePCM instead of PCM, thanks to the shell material preventing contact between PCM and HTF. ePCM slurries (ePCM-Ss) offer all the benefits of a CF-PCM mixture, except for marginal effects induced by the shell material. In short, ePCM-Ss are HTFs that are multiphase in at least one region of the cycle and are prepared to take advantage of the latent heat of phase change materials without any problems, thus making them more efficient HTFs than single-phase fluids (Cao et al. 2019). An ePCM-S is generally formed by dispersing micro- or nano-sized capsules, consisting of a polymeric shell with a paraffin PCM in the core, in CFs such as water or glycerol, resulting in a nanofluid. Over the last decade, there have been approximately 150 thousand studies on renewable energy, about ten percent of which involved the use of nanofluids in renewable energy systems (Sharma et al. 2022b). ePCM-S nanofluids were also developed to improve the heat transfer rate of the CF and have found several applications, such as heating, cooling, air conditioning, heat exchangers, and ventilation (Barreneche et al. 2014; Said et al. 2022). Their utilization in PVT systems, however, is relatively new, and little research has been conducted to date (Jia et al. 2020).
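The thermal-capacity advantage claimed for PCM-Ss and ePCM-Ss above can be made concrete with a back-of-the-envelope calculation under a simple effective-heat-capacity view; the property values below are typical literature figures for water and paraffin, used here only for illustration (shell mass and subcooling are neglected).

```python
# Heat absorbed per kg of plain water vs. a 20 wt% paraffin-ePCM slurry
# over a temperature rise that crosses the PCM melting point.
cp_water = 4.18    # kJ/(kg K)
cp_pcm   = 2.0     # kJ/(kg K), typical paraffin
L_pcm    = 200.0   # kJ/kg, typical paraffin latent heat of fusion
phi      = 0.20    # ePCM mass fraction in the slurry
dT       = 10.0    # K, temperature rise spanning the melting range

q_water  = cp_water * dT                                    # ~41.8 kJ/kg
q_slurry = (1 - phi) * cp_water * dT + phi * (cp_pcm * dT + L_pcm)
print(q_water, q_slurry, q_slurry / q_water)                # ~41.8, ~77.4, ~1.85x
```

Even at a modest 20 wt% loading, the latent-heat contribution nearly doubles the heat carried per unit mass over this interval, which is why the same duty can be met at lower flow rates.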
Fig. 3 (Choi and Choi 2022)
Advantages of Using PCM Materials PCMs help store energy as latent and sensible heat and defer thermal equilibrium due to the fact that a certain amount of energy (fusion energy, or latent heat of fusion) is transferred at a constant temperature (Sharma et al. 2009); they offer higher energy storage density (Yu et al. 2021; Zhang et al. 2010) and help maintain a relatively stable operating temperature (Chen et al. 2015). As seen in Fig. 4, the amount of heat stored within the unit mass/volume of a medium is notably higher in the case of latent heat storage (the case where a phase change takes place within that ΔT temperature range) compared to sensible heat storage. Such an increase in the amount of energy stored per unit volume can be as high as 5-14 folds (Sharma et al. 2009; Yu et al. 2018).
Fig. 4 Comparison of total heat storage between sensible and latent storage at a given ΔT interval (Agyekum et al. 2021b; Browne et al. 2015a, b; Reyna 2018)
Further, some properties, such as the thermal conductivity and storage capacity, could be improved with the addition of nanoparticles to the PCM (Tarish and Alwan 2017). Moving from solid towards gas, the amount of phase-change latent heat increases. In this respect, solid-liquid PCMs have latent heat higher than solid-solid PCMs (Alehosseini and Jafari 2019) but lower than solid-gas and liquid-gas PCMs. Despite their higher latent heats, however, large volume and pressure changes greatly limit the applicability of solid-gas and liquid-gas PCMs (Su et al. 2015), which makes solid-liquid PCMs more favorable. While latent heat storage systems (LHSS), which make use of PCM, involve different types of applications, there is a common algorithm to follow in order to develop a proper and fit-for-purpose LHSS (Reyna 2018), which can be seen in Fig. 5.
Fig. 5 The algorithm for the development of LHSS (Abhat 1983; Reyna 2018; Rousse et al. 2009; Sharma et al. 2009)
In a thermal system, such as a PVT system, the thermal energy may not be converted into useful work instantly and often needs to be stored and/or transferred. If the energy is stored in the form of sensible heat, a medium with a high specific heat capacity has to be used; it must be non-corrosive and non-toxic, and insulation is required during storage. However, owing to their ability to exchange heat at constant temperature and store energy in the form of latent heat, PCM materials offer higher energy storage density (Farid et al. 2004), and they provide important advantages, particularly in systems where an optimum working temperature must be maintained (Konuklu 2008; Sharma et al. 2015). PCMs are available with a wide range of fusion energies at melting temperatures ranging from −5 to 150 °C (Kenisarin and Mahkamov 2007). When compared to sensible heat storage, the use of PCMs can increase the amount of energy stored per unit volume by 5-14 folds (Sharma et al. 2009). In order for PCMs to serve their purpose, they have to possess some thermodynamic, kinetic, physical, and chemical properties (Biçer 2009), e.g.:
• They should have the highest possible latent heat of fusion, as this determines the maximum amount of heat that can be stored per unit volume.
• As the solid-liquid phase change takes place at a constant temperature, the phase change temperature must be within the optimum operating temperature range of the system to be used.
• The volumetric change between the phases should be very small, and the vapor pressure should be low, as the PCM will be contained in a closed system.
• The thermal conductivity and density should be high, and the phase changes should be stable over time.
• In terms of kinetics, the rate of nucleation and the rate of crystal growth must be large.
• They must be non-toxic and harmless to health, have a long economic life, and be inexpensive and easily accessible (Akçay 2006; Garg et al. 1985; Hale et al. 1971; Pasupathy et al. 2008).
Incorporation of PCM In taking advantage of PCM, one has to choose from different viable methods: direct incorporation, immersion in porous materials and shape stabilization, macro-encapsulation, and micro-encapsulation (Serale 2018; Zhou et al. 2012). However, the intended use of PCM is decisive in the selection of incorporation methods (Fig. 6) that can function effectively (Yu et al. 2021). The fact that PCMs are not always suitable for use in circulation circuits makes the use of a CF inevitable. As such, a new approach to the utilization of PCM has been proposed over the last 10-15 years. A PCM-S is typically a solution that has PCM dispersed within a CF. Despite numerous alternatives, water is preferred as the CF for several advantages, such as having high thermal conductivity and considerable specific heat capacity, being compatible with PCMs, and being easy to use, cheap, and safe (Jurkowska and Szczygieł 2016). PCM-Ss, as outlined in the literature, can be prepared in different types, which can be seen in Fig. 7. Ice slurries are typical PCM-Ss that are naturally encountered on earth, and consist of ice particles stratified or floating in water. In ice slurries, the CF and PCM are of the same substance, which becomes a pure substance once the PCM content is in the liquid phase. The PCM-emulsions, on the other hand, are a mixture of a PCM and a CF, homogenized through the incorporation of an emulsifying agent, and remain a slurry even in the liquid phase of the PCM. Encapsulation, where the PCM is wrapped in a shell material (Chandel and Agarwal 2017b) to avoid contact with other PCM particles and other stabilization issues, and shape-stabilization, where solidified polymers are used as supporting materials (Melone et al. 2012) that absorb liquid PCM (Serale 2018; Delgado et al. 2012b; Qiu et al. 2019) to prevent leakage (Qiu et al. 2019; Wu et al. 2020), are relatively new methods developed to prevent some drawbacks of micro-emulsions, and they offer great versatility in the application of PCMs in HTFs that circulate in more complex circuits. In the encapsulation approach, encapsulation efficiency, i.e., the ratio of PCM to shell material, and mechanical strength are central, whereas chemical compatibility and thermal stability are crucial in shape stabilization (Umair et al. 2019). The relationship between the effective viscosity of the fluid and the thermal dilatation of the PCM with micro- or nano-capsules is one of the parameters to be investigated (Dutil et al. 2011). And despite the fact that the motion of the solid-liquid boundary layer and the mixing of the two phases are not well known (Prakash et al. 1985), numerical studies have shown that PCM can exhibit single-phase behavior in microcapsules, which simplifies the solution of this problem with a reasonable error. As a result, regardless of the phase changes, the PCM-Ss act as a single-phase fluid and have constant hydrodynamic properties at the macro level.
This also allows the PCM-Ss to be pumped and used as an HTF in thermal circuits, offering some advantages over similar solutions, including but not limited to latent heat exploitation, higher thermal diffusivity, reduced mass flow rate needs, and a high heat transfer rate. Nevertheless, the PCM-Ss have some issues that need to be overcome in order to realize their full potential and benefit from their advantages. During phase-change heat transfer, the non-linearity of the process, the volumetric change, and the lack of precise knowledge of the heat transfer mechanism prevent the heat transfer from being determined exactly, and these are the main obstacles to a full evaluation of the performance of a thermal system with phase change (Regin et al. 2008). Encapsulation of PCMs provides performance improvement both by increasing the heat transfer surface area (surface area to volume ratio) (Farid et al. 2004) and by reducing the PCM reactivity, as well as by eliminating some other problems caused by volume change and low thermal conductivity (Jegadheeswaran and Pohekar 2009). Encapsulation of PCMs Encapsulation can roughly be defined as coating a core material (solid, liquid, gas, or even multiphase particles) with a shell (a film layer, generally of polymeric materials) (Chen et al. 2014b). The process can be named differently according to the size of the resulting capsules: macro-encapsulation (or simply encapsulation), micro-encapsulation, and nano-encapsulation. The first encapsulation process is believed to have been studied by the National Cash Register Company within the framework of their project to produce carbonless copy paper (Benita 2005). The shell of the capsules prevents the core material from interacting with the environment, which increases the stability of the material and prevents undesired exposure of the core to, or its interaction with, the environment. The encapsulation process has long been executed and implemented primarily in drug-related or medicinal applications in pharmaceutical, chemical, and biological engineering. Over the following years, however, the technology has become widespread and found application in a variety of fields, including but not limited to thermal, mechanical, and structural engineering. Encapsulation improves the thermal and mechanical properties of PCMs, increases the heat transfer surface area (Chandel and Agarwal 2017b, 586), thereby increasing the surface-to-volume ratio, and hence enhances the thermal capacity and efficiency of the CF significantly (Aslan 2014). In addition, the capsules allow the materials to be used as solid particles in their liquid state, with the shell compensating for the volume change during the phase change. In recent years, a number of studies on the preparation and properties of suspensions prepared by mixing microcapsules containing a PCM core with certain CFs (also known as ePCM slurry, ePCM-S) have been conducted, and significant results have already been recorded (Chen and Fang 2011, 4625). Material Selection The shell and core should not interact chemically (Karellas et al. 2018). Therefore, the material selection for each should be made taking into consideration the intended use and based on the material of the other. This process must be carried out meticulously, and special attention should be paid to the properties of the materials in order to ensure that both the shell and core materials can withstand the operating conditions (Yeşilyurt et al. 2019).
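Returning briefly to the surface-area point above: for spherical capsules the surface-to-volume ratio is A/V = 3/r, so a few lines suffice to show the orders of magnitude gained by moving from macro- to micro- to nano-encapsulation (the radii below are illustrative).

```python
# Surface-to-volume ratio of a spherical capsule: A/V = (4*pi*r^2)/(4/3*pi*r^3) = 3/r.
radii = {"macro (1 mm)": 1e-3, "micro (1 um)": 1e-6, "nano (100 nm)": 1e-7}
for name, r in radii.items():
    print(f"{name}: A/V = {3 / r:.1e} 1/m")
# macro ~3.0e3, micro ~3.0e6, nano ~3.0e7 1/m: a 1000x to 10000x gain in
# heat-exchange area per unit PCM volume compared to macro-encapsulation.
```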
In addition, it must be taken into account that a number of products harmful to the environment and human health are produced as a result of some encapsulation methods (Bayés-García et al. 2010, 1235). While the shell material functions as a container and is mostly regarded as important in terms of the mechanical strength of the capsule, it also has an important effect on the thermal performance, as the heat transfer between the core PCM and the CF takes place through the shell. Therefore, it is also important that the shell material have good thermal properties, which will also bring about a longer thermal life cycle (Salunkhe and Shembekar 2012). The main general properties sought in the shell material and the importance of such properties can be summarized as shown in Table 1. Among the several organic and inorganic materials that have properties suitable for use as shell materials, polymers are the most commonly used. Polystyrene, polymethylmethacrylate, Arabic gum, gelatin, amino plastics, gelatin-Arabic gum, urea formaldehyde resin, melamine formaldehyde resin, gelatin formaldehyde resin, and the like are also selected as shell materials (Su et al. 2017). The core material is also very important and should be carefully selected by considering a large number of parameters among organic, inorganic, or eutectic solid-liquid phase change materials. The breakdown of solid-liquid PCMs and their respective melting enthalpies and melting temperatures can be seen in Figs. 8 and 9, respectively. Organic materials have good chemical and thermal stability (Bruno et al. 2015). Paraffins, which are suitable and practical organic materials, are often preferred for the production of ePCM in the solid-liquid phase change. Especially for PV system cooling, paraffins are more applicable because of their thermal stability with regard to cycling (Atkin and Farid 2015). Inorganic PCMs, which can be classified into metals, salts, and salt hydrates, offer high thermal conductivity and high energy storage density, but they are not preferred as core materials because they undergo severe subcooling, phase separation, corrosion, and decomposition (Faraj 2021) during the phase change from liquid to solid (Chen and Fang 2011), and they lack thermal stability (Alehosseini and Jafari 2019). As a combination of organic and inorganic materials, eutectics enable the use of the superior properties of both material types. When the core material of a capsule is considered, in thermodynamic terms, it is desirable that the latent heat of melting per unit volume be as high as possible, since this maximizes the amount of heat that can be stored in a unit volume (Yu et al. 2018). In addition, since the solid-liquid phase change takes place at a constant temperature, this value should be equal to or very close to the optimum operating temperature of the PV module used in the system. Another property to seek is that the volumetric change between phases be very small and the vapor pressure be low, as the whole process takes place in a closed system, i.e., the capsule. It is also preferred that the thermal conductivity and density be high and the phase changes be stable and congruent over time. From a kinetic point of view, the nucleation rate and crystal growth rate should be large. The material should not be toxic, and its economic life should be long (Akçay 2006; Al-Mamoori 2017; Garg et al. 1985; Hale et al. 1971; Pasupathy et al. 2008).
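As a concrete illustration of these selection criteria, the sketch below ranks a few core PCM candidates by volumetric latent heat (the product of density and latent heat of fusion) and by the distance of their melting point from an assumed target operating temperature. The property values are rough, literature-style figures used only for illustration, not vetted data.

```r
# Toy screening of candidate core PCMs by volumetric latent heat (rho * h_fus)
# and by how close the melting point is to an assumed target PV operating
# temperature. Property values are rough illustrative figures only.
candidates <- data.frame(
  name   = c("paraffin (RT35-type)", "lauric acid", "palmitic acid"),
  T_melt = c(35, 43.5, 61),          # melting temperature, deg C
  h_fus  = c(160e3, 187e3, 203e3),   # latent heat of fusion, J/kg
  rho    = c(880, 880, 850)          # solid density, kg/m^3
)
T_target <- 35                        # assumed optimum module temperature, deg C

candidates$vol_latent_MJ_per_m3 <- candidates$rho * candidates$h_fus / 1e6
candidates$dT_from_target_K     <- abs(candidates$T_melt - T_target)

# Prefer a melting point near the target, then the highest volumetric latent heat
candidates[order(candidates$dT_from_target_K, -candidates$vol_latent_MJ_per_m3), ]
```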
Just like the shell materials, the core materials should have some basic characteristics, which can be listed as given in Table 2 (Yeşilyurt et al. 2019): they must maintain structural and thermal stability during the phase change, must show low reactivity to the CF and the core PCM, must retain their thermo-physical properties at the macro-, micro-, and nano-levels, plus offer high thermal conductivity and good heat transfer between the CF and the PCM. In parallel to the increasing interest in the utilization of LHSS, the proper selection of a PCM for a certain application can be supported by databases covering several properties of these materials and by software that takes different constraints into consideration (Barreneche et al. 2015b). Nevertheless, when a manual selection of PCM is to be done, the following table, which briefly summarizes the main advantages and disadvantages of organic, inorganic, and eutectic PCM, may help.

Encapsulation Methods

Encapsulation of materials can be accomplished as a combination of core and shell materials with different morphological structures by using different encapsulation methods. Types include single-core, multi-core, matrix, and multilayered (Jurkowska and Szczygieł 2016). Although there are quite a few encapsulation methods (Nadaroğlu et al. 2022), each has pros and cons that determine whether they are successful in providing the desired properties for any given application. In general, encapsulation methods are classified into two groups according to how the encapsulation mechanism takes place: chemical and physical. Figure 10 shows the classification of physical and chemical methods. PCMs are encapsulated in a shell to prevent leakage of the PCMs as well as to increase the thermal conductivity. Figure 11 shows different types of capsules: mononuclear, polynuclear, and matrix in terms of core/shell composition; single-layer or multilayer in terms of the coating lamination of shell structures; and regular or irregular in terms of the shape of the shell (Ghasemi et al. 2022; Jamekhorshid et al. 2014; Jurkowska and Szczygieł 2016). According to some researchers, however, the matrix composition of core and shell materials does not constitute a capsule but rather a sphere (Karthikeyan and Ramachandran 2014). While the morphology of the capsules may vary depending on several factors, such as the core material, shell material, encapsulation process, and repetition of the coating process, as well as other process parameters such as stirring rate, surfactant type, type of emulsion, temperature of the reaction medium, etc., the most common morphological structure of encapsulated PCM is the mononuclear type (Ghosh 2010; Jamekhorshid et al. 2014; Mishra 2015). The structure of a typical mononuclear core/shell ePCM and the phase change within the capsule shell by absorbing/releasing heat is shown in Fig. 12.

[Fig. 9: Melting temperatures and enthalpies of common solid-liquid PCM (Biçer 2009; Bruno et al. 2015; Ghasemi et al. 2022; Konuklu 2008; Mehling and Cabeza 2007; Yeşilyurt et al. 2019; Zondag et al. 2016)]

In order to obtain the capsule by coating the core material with the polymer shell material, the first step is to prepare an emulsion so that these two immiscible materials are dispersed into each other in such a way that the desired capsule size is assured. Depending on the method of encapsulation, in addition to shell and core materials, an emulsifier, an initiator, a crosslinking agent, a nucleating agent, and a surfactant may also be used.
Furthermore, other auxiliary materials such as NaOH, hydrochloric acid, triethanolamine, and acetic acid solutions may be needed as pH stabilizers in methods that include a polymerization process.

Issues with PCM/ePCM Slurries

Encapsulation serves as a means to overcome some common problems of PCM, such as subcooling, a low nucleation rate, and a low crystal growth rate, which mainly depend on the droplet size after nucleation. With the opportunity to produce smaller-sized capsules in parallel with developments in technology, in terms of both devices and methods, such problems can be partially eliminated. Studies have shown that the capsule size directly affects the crystallization temperature and that the subcooling temperature varies inversely with the capsule size in the range of 5-100 μm (Safari et al. 2017). With respect to solving this problem, Cao and Yang (2014) developed a new technique to suppress subcooling in ePCMs by optimizing the composition of the capsule shell. By virtue of the composite shell-and-core structure that results from encapsulation, and because the phase change during heat exchange takes place within the shell, both problems such as precipitation and separation that may occur in direct mixtures of PCMs and CF and the risk of leakage of the core can be prevented by taking advantage of different properties of shell materials. The capsule structure and the shell and core materials to be preferred in heat transfer applications depend on the amount of heat to be drawn from the system per unit time, the stable operating temperature of the system, and some other system parameters. However, one of the most important points is that the phase change temperature of the core material should be equal to or close to the optimum operating temperature of the system. It is equally important that the shell material not be damaged, broken, or torn during pumping (Yeşilyurt et al. 2019).

Sub-Cooling

Subcooling, a phenomenon that can be described as the ability of a liquid to cool down to a temperature lower than the fusion temperature without crystallization, is one of the main features of importance for PCMs. It represents the difference between the crystallization and fusion temperatures. Since the use of the latent heat of fusion, which is the main purpose of using PCMs, can be delayed or hindered by subcooling, the degree of subcooling of PCMs should be as low as possible. Therefore, it is necessary to reduce this degree in a PCM that exhibits a high degree of subcooling. The issue can be solved by adding a nucleating agent to the solution (Chen et al. 2014b).

Stability

The most important factor to be considered in the synthesis and use of ePCM is stability. Although the nature and type of physical properties required in an ePCM may vary based on the application area, ePCMs for all applications should be able to maintain the properties they possess or have acquired under the operating conditions in order to perform the expected function consistently. For example, ePCM slurries to be used as heat transfer fluids must remain stable both under mechanical and thermal loads and over long periods of time. Stability is an indicator that can be evaluated based on whether any changes occur in the properties of the ePCM or ePCM slurry, such as particle size or shape, thermophysical properties, and the viscosity of the slurry, under operating conditions or over time (Chen et al. 2014b).

[Fig. 12: The phase change of the core material during heat exchange (Ghasemi et al. 2022)]
Stability can be regarded in three aspects: physical stability (mechanical stability), structural stability, and thermal stability (Qiu et al. 2017).

Physical Stability

The physical stability of ePCM slurries, or of emulsions in general, is very important for heat transfer and thermal energy storage (Qiu et al. 2019). The physical stability of an emulsion is directly related to certain emulsion parameters, such as the surfactant's mass concentration, the surfactant's type, the pH value, and the density difference. While some of these parameters may be adjusted or regulated, some must be considered at the material selection stage. For example, the suspension pH value can be adjusted by adding citric acid and triethanolamine, which are traditional pH value regulators (Qiu et al. 2018). But as for the density difference between the particles and the CF, which is one of the dominant parameters influencing physical stability, considerations should be made in advance. An ePCM-S with stability problems being used in a thermal cycle will surely underperform or will require stirring to restore physical stability or to ensure homogeneous dispersion during operation in order to maintain normal performance (Alvarado et al. 2007; Wang et al. 2007; Yamagishi et al. 1999).

Creaming or Sedimentation

These two phenomena take place when the densities of the ePCM and the CF, i.e., of the dispersed material and the continuous phase, are different. Therefore, the smaller the density difference, the more stable the suspension (Qiu et al. 2018). When the density of the ePCM is lower than that of the CF, e.g., as in an oil-in-water emulsion, creaming may occur, whereas sedimentation is likely to occur when the density of the ePCM is greater than that of the CF, as in a water-in-oil emulsion (Qiu et al. 2019).

Flocculation

The flocculation phenomenon is described as the agglomeration of particles (Delgado et al. 2012b) within the solution due to affinity (Qiu et al. 2019). With smaller diameters, especially below 10 µm, capsules are more robust, but in this case the solution may be prone to flocculation, which can be prevented by adding an anionic surfactant (Ali 2017). Flocculation is a dispersion problem and indicates a non-homogeneous slurry (Yamagishi et al. 1999). When the volume concentration of the capsules gets higher, they come closer to each other; hence, the interaction between capsules also increases, resulting in the formation of larger agglomerates. Conversely, these agglomerates break down into smaller pieces under shear forces. Flocculation increases the viscosity and shear stress (Cao et al. 2019).

Coalescence

This type of instability involves the merging of two or more dispersed droplets into a new, bigger droplet during contact.

Ostwald Ripening

Ostwald ripening is defined as the inhomogeneity that takes place over time in solid or liquid phases due to the solubility difference of the dispersed phase (Ghasemi et al. 2022), showing up as small crystals or solution particles dissolving, followed by redeposition into larger crystals or solid particles (Qiu et al. 2018).

Phase Inversion

As the name suggests, this takes place when the continuous phase and dispersed phase convert to one another (Delgado et al. 2012b; Qiu et al. 2019).

[Fig. 13: Instabilities likely to be encountered in emulsions (Jurkowska and Szczygieł 2016; Kuroiwa et al. 2015; Qiu et al. 2017; Tadros 2004)]
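The interplay of density difference, capsule size, and CF viscosity behind creaming and sedimentation can be made tangible with the classical Stokes velocity of an isolated particle. The sketch below uses illustrative property values only (a 5 μm paraffin-cored capsule in water); a negative velocity means the capsule rises (creams).

```r
# Stokes creaming/settling velocity of a single small capsule in a quiescent
# CF: v = g * d^2 * (rho_p - rho_f) / (18 * mu). Negative v = the capsule
# rises (creams). Illustrative values: 5-um paraffin-cored capsule in water.
stokes_velocity <- function(d, rho_p, rho_f, mu, g = 9.81) {
  g * d^2 * (rho_p - rho_f) / (18 * mu)
}
stokes_velocity(d = 5e-6, rho_p = 900, rho_f = 998, mu = 1e-3)  # m/s, ~ -1.3e-6
```

The microscopic magnitude of this velocity for micron-scale capsules is one reason small, narrowly distributed capsule sizes are favored for physically stable slurries.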
Except for creaming, suppression of the other instabilities can mostly be achieved by the protection provided by the encapsulating shell. By taking advantage of different properties of the shell material, problems such as sedimentation and decomposition, which may occur in the mixture, can be eliminated, while leakage of the core into and mixing with the CF are also prevented. To prevent creaming, the use of a certain surfactant or a mixture thereof provides good emulsion stability, as seen in Fig. 14. PCM-Ss provide high thermal inertia conservation and higher thermal diffusivity while also reducing subcooling and phase segregation. Still, some problems such as leakage (Barreneche et al. 2015a) and precipitation are encountered, which limits their use in more advanced applications. At this point, the incorporation method comes in handy for the formation of PCM slurries, where a separation between the PCM and CF is also required (Serale 2018). One way to solve this problem is to obtain a kind of composite material by wrapping PCMs with different shell materials (Chandel and Agarwal 2017b, 586). Therefore, PCMs are widely used in microcapsules to prevent leakage and maintain the effectiveness of their thermo-physical properties when applied in slurry active systems (Barreneche et al. 2014).

Structural Stability

Another issue with ePCM-S is its structural stability, which relates to the potential rupture or breakage of capsules by the mechanical shear force induced by pumping stress (Qiu et al. 2017). In this regard, the selection of the circulation pump is also a factor to consider for the structural stability of capsules, and centrifugal pumps are best for the purpose since they can pump a slurry for a long period of time without imposing any damage on the capsules (Cao et al. 2019; Gschwander et al. 2005). Structural stability mainly depends on the shell material, the shell thickness, and the capsule diameter. Capsules with diameters up to 5 μm are strong enough to withstand 5000 pumping cycles without rupture (Yamagishi et al. 1996). Alvarado et al. (2007) confirmed this effect with several experiments on different ePCM particles. However, while capsules with smaller sizes can withstand the flow pressure, the smaller the capsules within the CF, the greater the viscosity of the slurry (Ali 2017). Cracking of the capsule shell or its rupture can also be caused by the volumetric expansion of the core PCM during the phase change. Therefore, in order to ensure the structural stability of the capsule, the PCM selection can be made paying due attention to the volume change between phases (e.g., selecting an inorganic PCM that features a low volume change, as outlined in Table 3). Yet, as it is nearly impossible to achieve all desired properties with any single PCM candidate, this may not always be the best solution. Kim and Cho (2002), for instance, mixed volatile cyclohexane with the PCM and then encapsulated the mixture with a polymer shell; during the phase change, the cyclohexane evaporated and allowed the shell to remain unaffected, leaving room for the volume expansion of the PCM. On the other hand, any modification of parameters that leads to a smaller and more uniform capsule size and a uniform shell thickness can be considered favorable for structural stability.
In experiments with n-eicosane and stearic acid microcapsules having shell wall thicknesses of 15% and 30% of the total capsule size, it was reported that thin-walled microcapsules cannot withstand thermal cycles above the melting point (Roy and Sengupta 1991). Thick-shell capsules, on the other hand, were found to be less damaged during pumping, but thicker shells hampered heat transfer from the shell to the core. Therefore, an optimal choice should be made between these two parameters (Cao et al. 2019). The strength of particles can be evaluated by using an atomic force microscope (Ghasemi et al. 2022) or by experimentation. To conclude, the type of pump used in circulation, the pump speed, the capsule diameter, the volume-to-weight ratio, a low volume expansion of the core or larger room for expansion in the shell, the shell material, and the shell thickness are among the main effective parameters (Qiu et al. 2017).

Thermal Stability

The thermal stability of an ePCM is directly related to the thermal stability of the PCM and of the shell material when encapsulated (Gong et al. 2009). Paraffin wax is the most readily available PCM type on the market with high thermal stability (Abdelrazik et al. 2020). The thermal stability of any PCM of choice, on the other hand, can be evaluated and assessed. Thermogravimetric analysis (TGA), which is basically a measurement of the weight change (gains and losses) as a function of temperature or time, provides information about the material's thermal stability and compositional analysis through the determination of loss on drying and phase transition temperatures (Ahuja and Scypinski 2001; Alkan et al. 2009; Allouche 2016). In general, however, a simultaneous SEM analysis and/or differential scanning calorimetry (DSC) analysis is carried out to visualize and identify the origin of the PCM degradation (Allouche 2016; Barreneche et al. 2015a). Fei et al. (2015) and Fu et al. (2017) experimentally examined the effect of encapsulation on thermal stability and confirmed a significant improvement. The protection provided by the shell material (Karthikeyan et al. 2014) was reported to differ depending on not only the material itself but also the structure of the shell. Huang et al. (2019) reported that the thermal stability of a network polyurethane shell was remarkably enhanced compared to a linear polyurea shell. In this regard, the modification of the shell composition has become common practice. The thermal stability of PCM microcapsules is crucial for practical applications (Al Shannaq and Farid 2015). Therefore, numerous studies have aimed at the examination and improvement of thermal stability. Salunkhe and Shembekar (2012) reviewed the effects of capsule size, shell thickness, shell material, and encapsulation geometry on the performance of thermal energy storage systems and reported that heat storage capacity and thermal stability strongly depend on the core-to-coating mass ratio. PCM stabilization can also be achieved through the development of 3D-structured supporting matrices that can also increase the latent energy storage capacity of composites, and nanomaterials can be used to fabricate organic PCM composites with increased thermal stability (Alehosseini and Jafari 2019). Another factor that improves thermal stability is the availability of expansion space in the microcapsule, which allows the PCM to expand freely without exerting stresses on the shell when the temperature rises.
For some encapsulation methods, process parameters such as the stirring rate and the emulsifier content were also reported to affect thermal stability (Al Shannaq and Farid 2015).

Hydrodynamic and Thermal Characteristics of ePCM Slurries

Density and Viscosity

The ePCM is a very fine granular powder (Fig. 15a). It can be added at different weight or volume ratios to a CF such as water, which has been determined to be the most ideal CF and is the most used liquid for ePCM slurries due to its easy availability, cheapness, high thermal conductivity, and high specific heat capacity (Cao et al. 2019). The resulting mixture is called an ePCM slurry (ePCM-S) (Fig. 15b). The fluid and flow properties of the ePCM-S are governed mainly by the densities of the core PCM and the shell material, the diameter of the capsule, and the concentration of capsules in the CF. These parameters affect the density and viscosity of the slurry. The higher the viscosity of the fluid, the greater the power required to circulate the fluid through the system. Therefore, the fluid's viscosity is preferred to be as low as possible. The density of the ePCM slurry is calculated as follows:

$$\rho_{slurry} = \rho_{cf}\,(1 - \varphi) + \rho_{capsule}\,\varphi \quad (1)$$

where $\varphi$ is the volume concentration of ePCM in the CF. As for the viscosity of the slurry, assuming that the suspension is hydrodynamically homogeneous, a theoretical formula proposed by Einstein to predict the viscosity of dilute suspensions of hard-shell spheres can be applied:

$$\mu_{eff} = \mu_{cf}\,(1 + k\varphi)$$

where k takes the value 2.5 for rigid spherical particles based on rigidity and Brownian motion (Vand 1945). However, when ePCM particles move relative to each other under the shearing motion of the fluid, they collide and roll over each other, the duration of which is directly proportional to the concentration of ePCM particles in the CF. Considering the fact that the viscosity of the slurry is greater when the ePCM particles are in contact with each other than when they are not, the equation reflecting the true viscosity can be expressed as follows:

$$\mu_{eff} = \mu_{cf}\,(1 - \varphi - q\varphi^{2})^{-k}$$

With k = 2.5 and q = 1.16, the viscosity values obtained from the above equation give very consistent results with the experimental data obtained with Ostwald viscometers. The relative viscosity increase of the ePCM slurry is almost negligible for concentrations below 5%. In order to test the accuracy of Vand's formulation, Wang et al. (2007) conducted experiments and reported an acceptable agreement (Fig. 16). The Vand (1945) correlation was also used by Goel et al. (1994) and Qiu et al. (2019) to calculate solution viscosities and was reported to be valid. Furthermore, Qiu et al. (2019) reported that the constant q with a value of 3.7 fits best for commercial ePCM, testing it in their study addressing the use of ePCM-S in solar systems. Thomas (1965) compared the values given by the Vand formulation with experimental results and reported that the degree of agreement is 97.5%, which gradually decreases to 60% for a concentration of 40% and to an unacceptable 8.7% when the concentration reaches 60%. As a result, a better-fit version of the formulation was proposed by Thomas (1965) in order to maintain high agreement rates even for greater concentrations:

$$\mu_{eff} = \mu_{cf}\,\left(1 + 2.5\varphi + 10.05\varphi^{2} + 0.00273\,e^{16.6\varphi}\right)$$

But the Vand formulation still remains valid, as the concentrations of ePCM slurries reported in the literature are generally below 25-30%.
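A small numerical sketch may make these correlations concrete. The R code below implements the density mixing rule and the Vand- and Thomas-type viscosity correlations in the forms reconstructed above; the property values (a water CF and a paraffin-cored capsule) are illustrative assumptions, and the run shows that the two viscosity estimates agree closely at a moderate concentration.

```r
# Minimal sketch of the bulk density and effective viscosity correlations
# given above. All property values are illustrative assumptions.

slurry_density <- function(rho_cf, rho_cap, phi) {
  rho_cf * (1 - phi) + rho_cap * phi          # Eq. (1): volume-weighted average
}

# Vand-type correlation, k = 2.5 and q = 1.16 (q = 3.7 reported for commercial ePCM)
vand_viscosity <- function(mu_cf, phi, k = 2.5, q = 1.16) {
  mu_cf * (1 - phi - q * phi^2)^(-k)
}

# Thomas (1965) correlation, the better fit at higher concentrations
thomas_viscosity <- function(mu_cf, phi) {
  mu_cf * (1 + 2.5 * phi + 10.05 * phi^2 + 0.00273 * exp(16.6 * phi))
}

phi <- 0.15                                             # 15 vol.% capsules (assumed)
slurry_density(rho_cf = 998, rho_cap = 900, phi = phi)  # water CF, paraffin capsule
vand_viscosity(mu_cf = 1.0e-3, phi = phi)               # Pa s, water at ~20 deg C
thomas_viscosity(mu_cf = 1.0e-3, phi = phi)             # agrees closely at 15 vol.%
```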
Specific Heat Capacity and Thermal Conductivity

Thermal properties of ePCM-Ss, such as the specific heat capacity ($c_p$) and thermal conductivity ($k$), depend on the thermal properties of the CF and the ePCM, as is the case for viscosity and the other hydrodynamic and physical properties. Thanks to the studies conducted so far, correlations and formulations are available in the literature that allow such properties to be predicted, designed for, or even calculated. The specific heat capacity of an ePCM changes with temperature, but the change is substantial depending on whether the temperature lies within the phase change temperature range of the PCM or not. As can be seen in Fig. 17, the specific heat capacity of the capsule ($c_{p,capsule}$) is considered equal to the specific heat capacity of the core PCM ($c_{p,PCM}$) when the temperature is outside the phase change temperature range; within the phase change temperature range, the latent heat of fusion is additionally accounted for as an effective specific heat distributed over the melting interval (Kuravi 2009). Further, irrespective of the phase change but considering the composite core-shell structure of the capsule, the $c_p$ of the capsule can be calculated as a mass-weighted average of the core and shell contributions (Goel et al. 1994; Guo et al. 2017; Qiu et al. 2019):

$$c_{p,capsule} = R\,c_{p,PCM} + (1 - R)\,c_{p,shell}$$

where R refers to the encapsulation ratio, i.e., how much of the capsule by weight consists of the core PCM and how much of it consists of the shell material. Based on the above, the bulk specific heat capacity of the slurry ($c_{p,slurry}$) can be calculated as follows (Guo et al. 2017; Languri et al. 2013):

$$c_{p,slurry} = \xi\,c_{p,capsule} + (1 - \xi)\,c_{p,cf}$$

where $\xi$ is the mass fraction of the capsules in the fluid. With a similar approach, the thermal conductivity of the ePCM-S can be calculated using Maxwell's relation as follows:

$$k_{slurry} = k_{cf}\,\frac{k_{capsule} + 2k_{cf} + 2\varphi\,(k_{capsule} - k_{cf})}{k_{capsule} + 2k_{cf} - \varphi\,(k_{capsule} - k_{cf})}$$

The thermal conductivity of the capsule itself can in turn be obtained from a core-shell composite-sphere model (Guo et al. 2017; Qiu et al. 2016), which can be re-arranged accordingly (Languri et al. 2013). Considering the fact that the thermal conductivity is also governed by the particle/fluid interaction, the effective thermal conductivity needs to be reframed as $k_{eff} = k_{slurry}\,(1 + f)$, where f is a constant that depends on the capsule volume fraction and the particle Péclet number, the Péclet number for spherical micro-/nanoparticles being

$$Pe_{p} = \frac{\gamma\,d^{2}}{\alpha}$$

where $\alpha$ is the thermal diffusivity of the CF, $\gamma$ is the shear rate, and d is the capsule diameter (Languri et al. 2013). Compared to water, the specific heat capacity of the slurry is lower (as can be seen in Fig. 18a); within the melting temperature range, however, the ePCM slurry has a significantly higher effective specific heat capacity, boosted by the latent heat of fusion and increasing in direct proportion to the ePCM concentration in the slurry (Fig. 18). On the other hand, the increase in the viscosity of the slurry is almost negligible in comparison. The pumping power required to draw the same amount of heat is much lower for ePCM-S than for pure water.

Hydrodynamic Performance of ePCM-S

The main problems of ePCM slurries are the high flotation rate and the limited operating temperature range. In addition, agglomeration of microcapsules can cause various problems, such as increased viscosity, clogged channels, or even pump failure. While agglomeration can be reduced by determining a reasonable microcapsule concentration or by using surfactants, using smaller capsules to prevent flotation and balancing the density of the capsules and the carrier medium may offer solutions (Cao et al. 2019, p. 180).
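The sketch below evaluates the mass-weighted specific heat rules and the Maxwell relation exactly as given above; the particular property values (a paraffin-like core, a polymer-like shell, water as the CF) are assumptions chosen only to show the magnitudes involved.

```r
# Minimal sketch of the mixture rules above; all property values are assumptions.

# Capsule specific heat from the encapsulation ratio R (core PCM mass fraction)
cp_capsule <- function(cp_pcm, cp_shell, R) R * cp_pcm + (1 - R) * cp_shell

# Bulk slurry specific heat; xi is the capsule mass fraction in the fluid
cp_slurry <- function(cp_cap, cp_cf, xi) xi * cp_cap + (1 - xi) * cp_cf

# Maxwell relation for the static effective thermal conductivity of the slurry
k_maxwell <- function(k_cf, k_cap, phi) {
  k_cf * (k_cap + 2 * k_cf + 2 * phi * (k_cap - k_cf)) /
         (k_cap + 2 * k_cf -     phi * (k_cap - k_cf))
}

cp_cap <- cp_capsule(cp_pcm = 2100, cp_shell = 1500, R = 0.8)  # J/(kg K)
cp_slurry(cp_cap = cp_cap, cp_cf = 4182, xi = 0.15)            # J/(kg K)
k_maxwell(k_cf = 0.6, k_cap = 0.25, phi = 0.15)                # W/(m K)
```

Note that these rules give only the sensible (outside-melting-range) properties; within the melting range, the latent heat contribution dominates the effective specific heat, as discussed above.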
Although there are partial increases in pressure drop in some cases, there are many theoretical and experimental studies showing that ePCM slurry increases convective heat transfer. Slurries with volumetric concentrations below 25% can be assumed to be Newtonian and analyzed as such. Under ideal conditions, the Nusselt number of the ePCM slurry is 1.5-4 times higher than that of single-phase flow. While the Stefan number, the volumetric concentration of microcapsules, the dimensionless subcooling degree, the dimensionless phase change temperature range, and the diameter of the microcapsules are found to be the dominant parameters that affect the heat transfer enhancement of the ePCM slurry, the shell of the microcapsule, the form of the PCM specific heat function, the ratio of specific heats, and the thermal conductivities have little effect on the heat transfer properties (Hu and Zhang 2002). ePCM-S has great potential as a thermal energy storage medium and an HTF simultaneously (Yang et al. 2019). Since an increase in the capsule concentration will increase the heat capacity and the energy storage density in the phase change temperature range, the required pumping power can be reduced by cycling at lower flow rates. However, in turn, the viscosity of ePCM-S may increase with the capsule concentration, increasing the pumping resistance (Cao et al. 2019). Therefore, the rheological properties of ePCM solutions should also be investigated. Many studies have evaluated ePCM solutions as Newtonian fluids, but many such systems are non-Newtonian and exhibit time-dependent behavior (Cao et al. 2019).

Thermal Performance of ePCM-S

ePCM-Ss have potential applications as an HTF in many systems, such as micro-channel heat exchangers, solar panels, and thermal power plants. Theoretical and experimental studies investigating the effects of different parameters have reported the Stefan number and the concentration of ePCM in the slurry as the most effective parameters on the heat transfer characteristics of ePCM-Ss in laminar flow systems. As revealed by numerical studies, the Nusselt number of ePCM-S, in comparison to a single-phase fluid, is 1.5-4 times greater (Salunkhe and Shembekar 2012). In various studies, different parameters were experimentally investigated with regard to their effects on the heat transfer performance in both laminar and turbulent flow conditions; these are briefly summarized in Table 4. PVT systems have also been included in the research on the usage areas of PCM and ePCMs. Although there are studies on systems in which PCMs are used directly, often referred to as PVT/PCM in the literature, PVT-nanofluid systems using ePCM slurries as the working fluid are the most prominent systems in terms of making use of the advantages offered by PCMs while eliminating their negative aspects. As seen in Table 4, all parameters affect heat transfer in laminar and turbulent flow conditions in the same direction. It is also noteworthy that most of the parameters studied are positively related to the improvement of heat transfer; increases in the inlet subcooling, the phase change temperature, and the Stefan number were found to be inversely related to heat transfer enhancement. Since turbulent eddies and slurries containing PCM capsules as small as the laminar sublayer thickness will exhibit single-phase flow characteristics, the heat transfer coefficient increases with the effective thermophysical properties of the fluid.
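To put the reported Nusselt enhancement into units a designer can use, the sketch below converts it into a convective heat transfer coefficient through h = Nu k / D, taking the constant-heat-flux laminar value Nu = 4.36 as the single-phase baseline; the tube diameter and conductivities are illustrative assumptions, not values from the cited studies.

```r
# Implication of the reported 1.5-4x Nusselt enhancement: the convective
# coefficient follows from h = Nu * k / D. Diameter and conductivities are
# illustrative assumptions.
h_coeff <- function(Nu, k, D) Nu * k / D

D <- 0.01                                        # tube diameter, m
h_coeff(Nu = 4.36, k = 0.60, D = D)              # laminar single-phase baseline (water)
h_coeff(Nu = 4.36 * c(1.5, 4), k = 0.62, D = D)  # with the reported 1.5-4x enhancement
```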
Slurry mixes containing phase change material have the potential to significantly increase the heat transfer coefficient, whether under laminar or turbulent flow conditions. As a matter of fact, an experimental study conducted with an aqueous mixture containing paraffin particles at temperature differences between 10 and 20 °C showed that the heat transfer coefficient could be increased three-fold (Qiu et al. 2017). It is well known that PCM solutions show physicochemical changes after many thermal cycles. However, the load effective in damaging the capsules and breaking the shells is also variable and a function of the operating temperature, so temperature is the primary consideration in the design and operation of PCM-S systems (Barreneche et al. 2014). In short, the heat transfer coefficients of ePCM-S do increase, and this increase raises the heat transfer capacity; the occurrence of a phase change, however, amplifies this improvement in heat transfer significantly. It was determined that melamine increased thermal stability and that the homogeneity of composites was maintained after melamine was added (Acar 2014). Su et al. (2006) used melamine formaldehyde (MF) to prevent PCMs from being affected by the environment and environmental materials and to ensure a long life in practice. According to the DSC results of micro-/nano-encapsulated phase change materials synthesized by emulsion polymerization using n-heptadecane as the core and polystyrene as the shell, the melting and freezing temperatures were found to be 21.48 and 21.37 °C, and the latent heats of melting/freezing were 136.89 and 134.67 J/g, respectively. After five thousand thermal cycles, the latent heat of melting decreased from 136.89 to 128.27 J/g due to damage to some capsules during pumping. Thermogravimetric analysis (TGA) results showed good thermal stability of the micro-/nano-PCM in the process (Sarı et al. 2014).

Use of PCMs in Photovoltaic Systems

More and more research has been carried out on PVT systems, developed to save on installation space and costs, for they have a higher overall yield than both PV systems and solar collectors. While most researchers recommend new models/designs, some have been conducting studies to examine the performance of existing models and configurations under different climatic and environmental conditions. However, it is also very important to screen and examine existing studies in order to identify the basic ideas and principles to be considered in future studies and to know the limits of research in this field (Al-Waeli et al. 2016). One important research topic aimed at improving the performance of PVT systems addresses the use of various cooling means, including but not limited to air, water, nano-fluids, PCM, and their combinations (Al-Waeli et al. 2019d, p. 178). The most common PVT systems are PVT-air systems, which are commercially and technologically advanced, but the electrical and thermal efficiencies that can be achieved with such systems are, at a maximum, 8% and 39%, respectively. Another drawback of PVT-air systems is that the application area of hot air is limited. PVT-liquid systems are also common, a technology with greater electrical and thermal efficiencies of 9.5% and 50%, respectively, depending on the flow rate and temperature of the HTF and the shape, size, and geometry of the flow channel.
Yet, the presence of a characteristic technical problem, the rise of the fluid temperature during operation, limits the possibility of improving these systems. Despite innumerable studies on PVT-liquid systems, some technical problems have still not been completely resolved, such as the working fluid temperature being higher than the optimum operating temperature of the system, insufficient heat absorption, fluid freezing in the system at night in cold climates, or fluid leakage out of the system. Using refrigerants as the HTF in PVT systems has long been considered; they were shown to be able to increase the solar energy utilization rate compared to PVT-water and PVT-air systems and to reach 10% and 65% electrical and thermal efficiency, respectively. However, due to fluid leaking from the system, the phase change, the uneven distribution of the fluid in the system, and the technical problems in providing pressure control under operating conditions, it seems that such systems may only be used integrated with heat pumps and may become popular only in the future. The use of nanofluids as HTFs has also become widespread since their thermal conductivity is higher than that of the CFs. Currently, nanofluids are widely used in different applications, and their use in heat exchangers and PVT systems is also increasing. However, nanofluids exhibit different characteristics in terms of performance and applicability. There is a lot of research aimed at finding the best HTF for use in PVT systems (Al-Waeli et al. 2019b, c), including novel hybrid nanofluids that comprise more than one nanomaterial dispersed in the CF (Sharma et al. 2022a). As another novel approach, studies on the use of PCM in the thermal management of PV systems, and hence the number of articles submitted to the literature, have been increasing regularly and gradually since 2002, 24 years after 1978, when the first study in this field was made. Figure 19 shows the total number of studies on using PCM in PV systems published each year as stacked bars indicating the particular field in which the studies were conducted. The interest in the thermal regulation of PV modules and in the use of PCM for their thermal management began many years before these two fields were combined. Over the last decade, studies on the thermal control and management of PV, CPV, and PVT systems by using PCM have increased and diversified rapidly (Browne et al. 2015b). In addition to studies on the utilization of PCM in PV and PVT systems becoming notable in the 2000s, they also diversified over the years to cover micro- and nano-encapsulation of PCM and the utilization of ePCM slurries, topping out at some two thousand papers by the year 2021, as seen in Fig. 20. In order to make a system thermally more efficient using PCM, the main requirement is to maximize the heat transfer between the PCM and the environment (Rady 2009). However, it is known that many PCMs, especially organic PCMs, exhibit low thermal conductivity. Therefore, the main problem on which studies in this field are focused is, for sure, increasing the thermal conductivity of PCMs.
In the literature, different techniques, such as the use of fins, the incorporation of metal matrices with high thermal conductivity into the PCM, the dispersion of micro- or nanoparticles with high thermal conductivity within the PCM, and micro- or nano-encapsulation, were reported to have positive effects on increasing the thermal conductivity of the PCM (Velraj et al. 1999). Al-Waeli et al. (2019c) examined the thermo-physical properties of three different types of nano-fluids comprising nano-SiC as an additive and cetyl trimethyl ammonium bromide as a surfactant and tried to find the best CF for use in PVT applications among water, a 35% ethylene glycol solution, and a 35% propylene glycol solution. They reported that the glycol solutions are more stable than water, although the thermal conductivities of the three nano-fluids are close to each other in the studied temperature range. Song et al. (2007) showed that the addition of silver nanoparticles strengthened the shell structure of microencapsulated bromo-hexadecane. In addition, the thermal and structural stability of PCM microcapsules combined with silver nanoparticles was found to be significantly higher than that of conventional PCM microcapsules. Zhang et al. (2004), in their studies on the phase change properties and thermal stability of ePCM obtained from a urea-melamine-formaldehyde shell and an n-octadecane core, found that the best performance was obtained at a ratio of 0.2:0.8:3 mol. It was reported that the thermal stability could be increased up to 163 °C, and it could be further increased to 200 °C by adding cyclohexane. Salaün et al. (2009) studied the effect of the formaldehyde/melamine (F/M) ratio on the mechanical properties of paraffin encapsulated with amino resin and found that a low F/M ratio causes a significant reduction in the structural strength of the microcapsule, although it offers a smooth capsule surface. They reported that the ePCM exhibited better mechanical properties at higher F/M ratios. In addition to thermal conductivity enhancement methods such as adding metal spheres/screens (Ettouney et al. 2004) or graphite (Sarı 2004) directly into the PCM, studies on creating a conductive layer on the shell structures of the encapsulated PCMs by various methods have also yielded positive results (Bellemare 2009). Al-Waeli et al. (2019d, p. 178) developed a mathematical model for a new nano-fluid/nano-PCM-based PVT system and tested it experimentally. The proposed mathematical model was reported to be satisfactorily compatible with the results of the experiment. The study revealed that the electrical and thermal efficiencies for the mathematical and experimental methods were 13.7% and 13.2%, and 72% and 71.3%, respectively. The maximum temperatures recorded in the glass, the PV cells, and the wax were 39.92, 38.8, and 36.5 °C, respectively.

[Fig. 19: Distribution of studies on the use of PCM for thermal stability of PV and PVT systems by year (Browne et al. 2015b)]

With a solar panel on top of a storage tank in which cylindrical containers with PCM were placed, only a marginal improvement has been observed compared to the conventional solar panel, and even a loss of performance has been reported in some cases (Talmatsky and Kribus 2008). Ibáñez et al.
(2006), who performed a similar study, both experimentally and numerically, with sodium acetate trihydrate PCM encapsulated within aluminum bottles, reported that the fraction of solar energy in the total energy need increased from 4 to 8% due to the inclusion of encapsulated PCM in the system. However, it was also noted that the addition of PCM has a critical value for performance improvement; after the critical threshold is exceeded, it will not increase the performance of the system but, on the contrary, decrease it. Another PCM application that has attracted great attention recently is taking advantage of their latent heat by mixing them, encapsulated within micro- or nano-scale capsules, into CFs, which offers improvements in circulation circuits as well as better electrical and thermal power obtained from PV/PVT systems. This approach is also effective in overcoming the technological constraints and handicaps associated with solar collectors (Baronetto et al. 2014). PVT systems using ePCM-S as the working fluid are almost a new technology that can be considered still in the early research phase. Within the scope of research on the sustainability of heating and cooling needs in environmentally friendly Energy-Plus houses, where the energy produced from renewable energy sources throughout the year is greater than the annual consumption, ePCM-Ss were proposed and used as the working fluid in PVT systems and were shown to be capable of improving the electrical and thermal efficiency of the system more than conventional PVT-liquid systems do (Al-Waeli et al. 2017a). ePCM-S-based PVT systems are generally connected to a second heat-pump system through a common heat exchanger (Ghasemi et al. 2022). The experimental system by Qiu et al. (2016), shown in Fig. 21, consists of a PVT module (4), an ePCM-S-refrigerant heat exchanger (evaporator) (3), a compressor (2), a refrigerant-water heat exchanger (condenser) (7), a water tank (1), an inverter (9), and other necessary accessories such as pumps (5), valves (6), an electrical resistance (8), and a controller (10). During operation, the PVT module absorbs the incident solar irradiation and converts a portion of it into electricity and the rest into heat. The ePCM-S circulating through the serpentine pipe attached to the back surface of the PV cells draws heat from the cell body, causing the PCM particles in the ePCM-S to melt. The ePCM-S then flows into the evaporator of the heat pump, where heat transfer takes place between the ePCM-S and the working fluid (R134a) of the heat pump cycle, causing the PCM particles in the ePCM-S to solidify and the refrigerant (R134a) of the heat pump cycle to evaporate. The temperature of the refrigerant increased approximately from 15 to 70 °C, and the heat was released into the water passing through the condenser of the heat pump cycle, thus supplying hot water for use in the building. The condensed refrigerant then passes through an expansion valve to complete the cycle and return to the evaporator. An electric pre-heater was used to adjust the temperature of the ePCM-S at the entrance of the PVT module in order to create a stable working regime.

ePCM-S Based PVT Systems

In general, all the thermodynamic properties of fluids used in heat extraction from thermal systems are important. The specific heat capacity and viscosity of the fluid are determinative in terms of the pumping power required to circulate the fluid in the system.
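To illustrate why the effective specific heat matters here, the back-of-the-envelope sketch below compares the mass flow rate needed to absorb a fixed heat duty with pure water and with an ePCM slurry whose capsules melt within the allowed temperature rise; all property values are illustrative assumptions, not data from the cited experiments.

```r
# Back-of-the-envelope comparison of the mass flow rate needed to absorb a
# fixed heat duty with pure water versus an ePCM slurry whose capsules melt
# within the allowed temperature rise. All values are illustrative assumptions.
Q     <- 1000     # heat to be drawn, W
dT    <- 5        # allowed temperature rise across the module, K
cp_w  <- 4182     # water, J/(kg K)
cp_sl <- 3900     # slurry sensible specific heat (assumed), J/(kg K)
xi    <- 0.15     # capsule mass fraction
R     <- 0.8      # encapsulation ratio (core PCM fraction of capsule mass)
h_fus <- 200e3    # PCM latent heat of fusion (assumed), J/kg

m_water  <- Q / (cp_w * dT)                    # sensible heat only
m_slurry <- Q / (cp_sl * dT + xi * R * h_fus)  # sensible + latent contribution

c(water_kg_s = m_water, slurry_kg_s = m_slurry, ratio = m_water / m_slurry)
```

With these assumed values, the latent heat contribution roughly halves the required mass flow rate, which is the mechanism behind the reduced pumping power discussed next.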
For air and water, the flow rate of the fluid that corresponds to the total heat required to be drawn from the system increases depending on the specific heat capacities (Delgado et al. 2012a; Jurkowska and Szczygieł 2016). Greater flow rates require more power (fan power or pump power) to circulate the fluid. As can be seen in Fig. 22, the pumping power required to draw a certain amount of heat is much higher for pure water than for encapsulated PCM slurries. ePCM-S helps increase the amount of energy stored per unit volume by 5-14 times (Sharma et al. 2009), as the slurries also store some of the energy they gain during the heat transfer as latent heat. As a result, the same amount of heat can be drawn with less fluid or at lower flow rates, i.e., with up to five-fold less pumping power. For example, in order to draw 1000 W of heat with pure water, we need almost five times the pumping power. Also, the relationship between the pump power and the heat drawn is parabolic for pure water, while it is almost linear for ePCM-S. With an ePCM-S made of eicosane and a mineral oil, heat transfer enhancements of up to 80% could be obtained with concentrations as low as 1%; however, concentrations above 5% decreased heat transfer due to particle clumping (Chen et al. 2014b). Viscosity and flow rate are the two determinant parameters for the required pumping power, i.e., the higher the viscosity of the fluid or the greater the flow rate, the greater the power required to circulate the fluid in the system, so it is always preferable that the viscosity of the fluid be low or the flow rate be small. Examining Figs. 22 and 23 together, the pressure drops that occur in the system against mean flow velocities for different concentrations of ePCM-Ss, in comparison with that of pure water, suggest that the pressure drops are at an acceptable level in return for the low pumping power provided. Erdoğan (2017) examined the thermal conductivity of ePCM-S comprising different weight concentrations of ePCM and reported that ePCM addition at any percentage resulted in a greater thermal conductivity. As can be seen in Fig. 24, a 0.5% ePCM content induced an almost 15% increase in the thermal conductivity of the fluid. Nevertheless, since the ePCM content also increases the viscosity of the fluid and hence the pumping power required to circulate the fluid in the system, the optimum percentage for the ePCM content was reported to be around 2-5% by weight, though it may still vary depending on the shell material and particle size. Liu et al. (2017) reported that they achieved an improvement in both the thermal and electrical efficiency of their PVT system by employing ePCM-S as the working fluid at approximately the same pumping power. They compared the thermal and electrical efficiencies of water and ePCM-S at different mass flow rates (Fig. 25a and b) and measured the pumping power required at those flow rates, concluding with the net efficiencies of the two PVT-liquid systems (Fig. 25c and d).
A significant heat storage capacity improvement was reported to have been achieved at v = 0.015 m/s and ṁ = 0.67 kg/s, proving the superiority of ePCM-S over water as a HTF. Nevertheless, both the initial setup costs and the operating costs of ePCM-S-based PVT are higher when compared to conventional PVT-liquid systems (Ali 2017). A number of PCMs have been tested in Pakistan in an outdoor C-PVT system. Lauric acid was the best-performing PCM, reducing the PV temperature by 22 °C, and the second best was palmitic acid with 19.5 °C. According to the results of the experiments, the optimal PCM selection should be made depending on the application, and it should be noted that a PCM that is optimal for one application may not be suitable for another (Browne et al. 2015b).

[Fig. 23: Pressure drop versus mean flow rate for a PVT-water system and a PVT-ePCM-S system compared (Yamagishi et al. 1999)]
[Fig. 24: Variation of thermal conductivity of ePCM slurry with temperature (Erdoğan 2017)]

In their outdoor experiments in Selangor, Malaysia, Al-Waeli et al. (2017a) used a paraffin PCM with a SiC nano-fluid circulation circuit to control the heat capacitance of the system, both to maintain electrical efficiency and to increase overall efficiency. At the highest insolation period (12:30-13:30), at a fluid flow rate of 0.17 kg/s, the cell temperature decreased by 30 °C. Thus, the proposed PVT-nano-PCM nanofluid system increased the open circuit voltage from 11-13 to 20-21 V, the output power from 61.1 to 120.7 W, and the electrical efficiency from 7.1 to 13.7%, whereas the thermal efficiency of the system was recorded to be 72%.

Discussion

It is no longer a matter of debate whether PV cooling is necessary, but which cooling method is the most effective is still a hot topic. There have been innumerable studies all around the world. The topic is so broad and deep that it is not even possible for a researcher or a group of researchers to address all cooling techniques in all aspects. Therefore, some researchers have had to make comprehensive reviews of previous studies in order to structure or plan their own research, each addressing different methods and aspects of PV cooling (Agyekum et al. 2021a; Ali 2020; Bahaidarah et al. 2013, 2016; Baloch et al. 2015; Chandel and Agarwal 2017a; Feng et al. 2021; Liu et al. 2017; Mojumder et al. 2016; Nižetić et al. 2017, 2018; Ren et al. 2018; Shahsavar et al. 2020; Shen et al. 2021; Shukla et al. 2017; Zhe et al. 2019). Being the most effective cooling method depends on several sub-factors, such as cost-effectiveness, applicability, the cost of the fluid/cooling medium, the cost recovery factor, the enhancement factor, etc. Therefore, any cooling method may or may not be appropriate for application in certain cases. For example, Chandel and Agarwal (2017a) reported that a PV-PCM system would be effective only in regions that receive high insolation throughout the year and where inter-seasonal climatic variation is small. Yet, this might still be subject to change. Once, PV power generation was only a lab-scale application and was deemed an option only for extraterrestrial power generation for satellites, but it has now become a viable option to resist, mitigate, and even replace fossil fuel-based power generation (Misha et al. 2019). Similarly, just two decades ago, ePCM-Ss were only lab-scale. Over the years, the technology has developed and enabled these special fluids to become viable and efficient options for heat transfer processes (Alvarado et al. 2007; Liu et al.
2021; Yuan et al. 2022). In a recent study, Trivedi and Parameshwaran (2020) showed that ePCM-Ss exhibited Newtonian fluid properties and were viable for thermal energy storage. Ghaziani et al. (2012) used porous media to further improve the heat transfer by preventing ePCM particles from drifting away from the heat transfer surfaces, where they would not take part in the heat exchange. The overall efficiency of PVT systems, therefore, depends not only on the improvement of the electrical efficiency of the PV module but also on the thermal efficiency of the integrated thermal system. With regard to the efficiency of PV cooling systems/mechanisms, it is therefore worth considering whether or not the heat drawn from the PV panel can be converted into useful heat (Misha et al. 2019). Despite numerous studies carried out on PVT systems over the past three to four decades, there have been only a few efficient PVT systems on the market (Sathe and Dhoble 2017).

Conclusions

Studies on renewable energy are important within the framework of sustainable development and clean energy strategies. PVT systems are becoming increasingly common among solar energy applications, which have the highest potential and the widest application area among renewable energy sources. Efforts to improve both the electrical and thermal efficiency of PVT systems continue, with different approaches to different system parameters. Although studies aimed at drawing more heat from the system with less pumping power by using ePCM-Ss have increased in number and diversity in recent years, many more dynamic simulations and experimental studies are needed in this field in order to more precisely establish real climatic conditions and operating parameters. ePCM-S systems are much more complex in many ways compared to conventional PVT-liquid systems. Numerous parameters, such as the use of various shell and core materials, the variety of production methods, the homogeneity of the resulting capsules, the use of additives, the core-to-shell ratios, and the mass fraction of ePCMs in the slurry, make it difficult to determine the properties of these fluids accurately and precisely. These parameters also significantly diversify and differentiate ePCM-Ss by affecting their suspension stability, rheological properties, and thermal properties. As a result, it becomes very difficult to compare the data and findings obtained from different studies. In recent years, PCMs have become an attractive research field due to their advantages. The use of ePCM-Ss in PVT systems requires an extensive and exhaustive study with quite a lot of background knowledge and interdisciplinary collaboration, as the proper selection of PCM materials and synthesis methods, as well as the correct concentration in the best CF, involves several aspects and expertise in a number of other fields. The findings of early studies and subsequent research revealed that the use of ePCM-S as the working fluid in PVT systems increased the thermal efficiency, electrical efficiency, and overall efficiency at almost the same pumping power. In some specific cases, the yields were reported to be better. However, efforts should be intensified to achieve improvements in the size and composition of shell and core materials or to develop new materials. Operational issues also still need to be fully addressed for the full implementation of ePCM-S to be successful.
In the present study, the need for and value of encapsulation, phase change materials, and their synthesis and characterization methods, as well as their advantages and the suppression of their disadvantages, were addressed in a comprehensive way, with special reference to their use as a HTF in PVT systems. This study therefore aims to constitute a fundamental guide, from the very beginning to the final implementation of ePCM-S as the working fluid in a PVT system, by addressing almost all effective parameters in terms of advantages, disadvantages, challenges, and opportunities. Future research on the implementation of encapsulated PCM in dilute solutions of water or other liquids as working fluids in thermal systems, either as a heat transfer fluid or as a thermal storage medium, is recommended to be aimed at designing and developing binary, ternary, or even quaternary ePCM slurries in order to enable the transfer of heat as latent heat over a wider temperature range. The narrow window of the phase change temperature range functions as a constraint on the flow rate, forcing it to be low so as to allow the PCM to melt within short exchange circuits; this, in turn, may cause problems in other parts of the cycle. In longer exchange circuits, on the other hand, the phase change would take place in a shorter portion of the exchange circuit, resulting in sensible heat storage in the rest of the circuit. Furthermore, future research should be aimed at the incorporation of single- or multi-walled carbon nanotubes, as well as other nanoparticles and even graphene, into the shell or core of the capsules in order to further enhance their properties.

Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper and that they have received no funding from any third parties.
Implementation of random forest algorithm with parallel computing in R

Random forest is a method for building models by combining decision trees generated from bootstrap samples and random features. A common problem when implementing random forest is the long processing time, because it uses a lot of data and builds many tree models to form the random forest on a single processor. This research proposes a random forest method with parallel computing, implemented in the R programming language. The cases used in this research are the Iris flower dataset, the wine quality dataset, and the diabetes diagnosis data of Pima Indian women. The results obtained from the entire study show that the computational time of random forest with parallel computing is shorter than that of a regular random forest using only a single processor.

Introduction

Random forest was first introduced by Breiman in 2001. His research showed the advantages of random forest: among others, it can produce lower error, gives good results in classification, can handle very large amounts of training data efficiently, and is an effective method for estimating missing data [1]. Previous research on random forest includes a study on web caching that compared classification accuracy using the CART, MARS, random forest, and TreeNet methods [2], research on the application of random forest methods in driver analysis [3], and research on ensemble methods for poverty classification in Jombang Regency, which found that random forest gives the best classification accuracy [4]. However, a common problem when implementing random forest is the long processing time when large amounts of data are used to build many tree models on a single processor. Therefore, a random forest design with parallel computing is proposed.

Parallel computing is the union of several computers or servers into a single unit that can work on a process concurrently. Parallel computing makes programs and processes run faster as more CPUs are used [5]. Parallel computing was once used to improve the performance of the Advanced Encryption Standard [6]. There are several studies on parallel computing, one of which is the use of parallel computing to improve computer performance [7].

The programming language R is a programming language and software environment for statistical computing, supported by the R Foundation for Statistical Computing [8]. According to [9], R is an integrated software suite providing facilities for data manipulation, calculation, and graphical display. This research aims to develop random forest for use with parallel computing in R. To achieve this objective, we utilize the "foreach" package, which supports the foreach looping construct [10]. We use this package because it makes it easy to repeat the same procedure across all cores. The use of the foreach package for parallel computing can be found in several works, such as [11, 12], and it has been implemented in the gradDescent package [13].

Implementation of Random Forest in R High Performance Computing

The R language is an implementation of the S programming language combined with lexical scoping semantics inspired by Scheme.
The first step in this research is importing the data. In this process the dataset is read into the R programming language. The following shows how to import data directly from the website; note that the computer must be connected to the internet.

library("httr")
a <- GET("https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data")
wine <- read.csv(textConnection(content(a)), header = FALSE)
colnames(wine) <- c(...)   # attribute names assigned here; the name list is truncated in the source

Figure 1 shows the process of retrieving the dataset from the online repository and storing it in CSV form, followed by the naming of the columns of each variable; because the retrieved dataset does not have attribute names, the columns would otherwise default to V1, V2, V3, and so on. Once the dataset is imported, the next step is to build the bootstrap function.

Figure 2. Code of the program for random data retrieval.

Figure 2 is the program code for retrieving data randomly from the available dataset; the example above retrieves data from the iris dataset. The variable "irisData" is used to store the randomly drawn data, "iris.tra" is the variable that contains the training data, "iris.tst" is the variable containing the test data, and "real.iris" is the variable that contains the original labels of the test data.

In the next step, random forest is run with parallel computing. Several packages are needed for this stage, i.e., the packages "foreach", "doParallel", and "entropy". The first step is to install the three packages, then import them as in Figure 3.

# Import package "foreach"
library(foreach)
# Import package "doParallel"
library(doParallel)
# Import package "entropy"
library(entropy)

Figure 3. Importing the R packages.

Next, the parallel random forest function shown in Figure 4 is built (a sketch of this structure is given after the experimental design below). Basically, this code contains several components: collecting the parameters, defining the number of cores, running the foreach loop over the number of trees, calling the decision tree procedure, and aggregating the final results.

Experimental Design

In order to conduct an efficient experiment, a suitable experimental design was created so that satisfactory trial results could be obtained. The experiments compare random forest without parallel computing against random forest with parallel computing using two, three, and four processors. The first experimental scenario computes random forest without parallel computing, using only one processor, on the Iris flower dataset, the wine quality dataset, and the diabetes data of Pima Indian women. The numTree parameter is set to 20 and 100 and numFeature is set to 2 and 4. The second experimental scenario tests the Iris, Wine, and Pima cases using the parallel random forest with two processors, with the same parameter settings of numTree 20 and 100 and numFeature 2 and 4. The third experimental scenario tests the same cases using the parallel random forest with three processors, again with numTree 20 and 100 and numFeature 2 and 4. The final experimental scenario tests the Iris, Wine, and Pima datasets using the parallel random forest with four processors, with the same parameter settings as in the previous scenarios.
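To make the component structure of the Figure 4 function concrete, the following is a minimal sketch of a foreach-based parallel random forest, relying on the packages imported in Figure 3. It is illustrative only, not the paper's exact code: buildTree() is a hypothetical helper that grows one decision tree from a bootstrap sample and a random feature subset.

parallelRandomForest <- function(data, numTree, numFeature, numCore) {
  cl <- makeCluster(numCore)        # start numCore worker processes
  registerDoParallel(cl)            # register the backend used by %dopar%
  forest <- foreach(i = seq_len(numTree), .export = "buildTree") %dopar% {
    boot  <- data[sample(nrow(data), replace = TRUE), ]   # bootstrap sample
    feats <- sample(ncol(data) - 1, numFeature)           # random feature subset (last column = class)
    buildTree(boot, feats)          # hypothetical decision-tree routine
  }
  stopCluster(cl)                   # release the workers
  forest                            # list of numTree fitted trees
}

Each iteration of the %dopar% loop is independent, which is what allows the trees to be grown on separate cores; the resulting forest is then combined by majority vote at prediction time.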
Each experimental scenario uses the Iris flower species dataset, consisting of 5 variables (4 input variables and 1 output variable) and 150 rows of data; the wine quality dataset, consisting of 14 variables (13 input variables and 1 output variable) and 178 rows of data; and the Pima Indian female diabetes dataset, consisting of 9 columns (8 input variables and 1 output variable) and 798 rows of data.

Results and Discussion

This section describes the experimental results of predicting iris species, wine quality, and the diabetes diagnosis of Pima Indian women using the conventional random forest method as well as the random forest method combined with parallel computing. The experimental results were obtained by running the test case studies defined in this study, namely the Iris flower dataset, the wine quality dataset, and the Pima Indian women diabetes dataset. The predictions for the three cases were made with the random forest method under parallel computing, with modifications to the number of processors used, the number of trees, and the number of features.

Table 1 shows the time obtained when running the random forest method with parallel computing to predict Iris flower species. Based on the data in the table, it can be seen that the more processors are used, the shorter the processing time becomes. However, a shorter time does not imply a smaller error. The fastest time is 1.368404 seconds when using two processors with numTree 20 and numFeature 2, but the resulting error is 10.0%. The smallest error is 3.3%, at numTree 100 and numFeature 2, using two, three, and four processors; the corresponding times are 4.294907 seconds, 3.351274 seconds, and 3.207866 seconds. The longest time is 6.964609 seconds when using three processors with numTree 100, and the resulting error is 6.6%.

Table 2 shows the computational results obtained when running the random forest method with parallel computing to predict the quality of the wine with different parameter values, which affect the processing time. The fastest time is obtained when computing with four processors with numTree 20 and numFeature 2; the resulting error is 0.000%. The errors resulting from the wine dataset predictions are relatively small overall, the highest being only 2.857%. Overall, the computing time gets shorter as more processors are used.

Table 3 shows the results obtained when running the random forest method with parallel computing to predict the quality of the wine with different parameter values. When numTree 100 and numFeature
Importance-Weighted Variational Inference Model Estimation for Offline Bayesian Model-Based Reinforcement Learning

This paper proposes a model estimation method in offline Bayesian model-based reinforcement learning (MBRL). Learning a Bayes-adaptive Markov decision process (BAMDP) model using standard variational inference often suffers from poor predictive performance due to covariate shift between offline data and future data distributions. To tackle this problem, this paper applies an importance-weighting technique for covariate shift to variational inference learning of a BAMDP model. Consequently, this paper uses a unified objective function to optimize both model and policy. The unified objective function can be seen as an importance-weighted variational objective function for model training. The unified objective function is also considered as the expected return for policy planning penalized by the model's error, which is a standard objective function in MBRL. This paper proposes an algorithm optimizing the unified objective function. The proposed algorithm performs better than algorithms using standard variational inference without importance-weighting. Numerical experiments demonstrate the effectiveness of the proposed algorithm.

I. INTRODUCTION

Reinforcement learning (RL) is a promising framework for autonomously learning a policy from interaction data [1]. Online model-free RL methods have succeeded in applications where the data can be obtained easily, such as games [2], [3]. However, such methods are often impractical for applications where data collection is expensive, such as robotics or healthcare [4], [5]. Data-efficiency is one of the fundamental issues in RL.

There are several approaches for increasing data-efficiency in RL. One is model-based reinforcement learning (MBRL). In MBRL, the agent explicitly learns an environment model and utilizes it to improve a policy [6], [7], [8]. Bayesian MBRL is a subfield of MBRL in which the agent explicitly takes uncertainty about an environment model into account [9], [10]. Based on the Bayes-optimal exploration/exploitation tradeoff in Bayesian MBRL, data-efficiency can be further improved. Offline RL is also a data-efficient RL approach [11]. In offline RL, the agent learns a policy from previously collected data. Meta-RL is another approach for data-efficient RL [12]. In meta-RL, the agent learns a policy from data collected from multiple similar environments, assuming that each environment is drawn from some distribution every episode. Combining these data-efficient RL approaches has also been investigated.

Motivated by increasing data-efficiency, this paper discusses a Bayesian MBRL approach for offline meta-RL. A standard model in Bayesian MBRL is a Bayes-adaptive Markov decision process (BAMDP) [9], [10]. A task distribution from which a task instance is drawn in meta-RL can be represented as a prior distribution over MDPs in a BAMDP. A BAMDP is also reasonable for offline RL, as its goal is offline optimization of possible trial and error under its environment model and prior distribution. For these reasons, a BAMDP is a promising model for offline meta-RL.
Conventional Bayesian MBRL methods assume that a BAMDP is given in advance, implying that the environment is accurately represented by the likelihood function and prior distribution specified in the BAMDP. This assumption is valid when using a flexible black-box model to infer from sufficient data from the current environment. However, this assumption is often difficult to satisfy when using a structured model with a low-dimensional latent task representation to infer from few data from the current environment. If an inaccurate model is used, Bayesian MBRL may not work in a real environment due to failing at belief update [13]. How to address a structured BAMDP remains an open question.

Recent meta-MBRL research has discussed learning latent variable models based on the variational inference framework to obtain latent task representations in meta-RL [14], [15], [16], [17], [18]. A typical approach is to optimize an evidence lower bound, which implicitly assumes that the data distribution does not change. Such an implicit assumption can be seen not only in meta-MBRL but also in MBRL in general, e.g., [8], [19], and [20]. However, in MBRL, the distribution of data previously collected to train a model differs from the distribution of data obtained in the future when applying a policy improved using the learned model. Such a situation is called covariate shift or distribution shift [11].

In the case of online MBRL, the effect of ignoring covariate shift is relatively mild. This is because the difference between the constantly updated data-collecting policy and the improved policy gradually becomes small in the online setting, in which the policy is gradually improved and converged. Indeed, most of the above-mentioned meta-MBRL methods suppose online learning settings. However, in the case of offline MBRL, the difference between the data-collecting policy, which is no longer updated, and the improved policy is significant, and thus the effect of ignoring covariate shift is also significant. Prior work [17] addresses another issue that arises in offline meta-MBRL, whereas the issue of covariate shift is out of its scope.
This paper discusses learning a BAMDP model considering covariate shift. This paper leverages the idea of learning an MDP model considering covariate shift [21]. The main idea of [21] is importance-weighted maximum likelihood estimation, weighted by the ratio of the distributions, to predict future data more accurately when applying an improved policy. The importance-weighted objective is also an estimate of the expected return in an MDP penalized by model error. The algorithm in [21] optimizes the importance-weighted objective with respect to both model and policy. This paper proposes to extend this idea from MDP model learning to BAMDP model learning. The outline of the discussion is similar to [21]. Firstly, this paper presents a unified objective function viewed as an importance-weighted variational objective function for training a model and as the expected return penalized by the model's error for planning a policy. Secondly, this paper proposes an algorithm to optimize it with respect to both model and policy.

This paper and [21] are among the decision-aware model learning approaches [22], [23]. Prior works [24], [25] are also similar approaches in that they consider importance-weighting with the distribution ratio. The difference is that this paper and [21] consider the data distribution in a simulation MDP model when applying a planned policy, not the data distribution in a real MDP as in other approaches. Using the data distribution in MDP model simulation has two advantages. Firstly, unlike in a real MDP, data obtained when applying a newly planned policy in a simulation MDP model are accessible to the agent, and the importance weight can be obtained in the standard framework of density ratio estimation [26]. Secondly, optimizing the importance-weighted variational objective with respect to the policy takes the same form as standard BAMDP planning, and the proposed algorithm can use an existing BAMDP planning algorithm as a policy planning subroutine.

Sect. II describes the notation of MDPs and BAMDPs. Sect. III explains the problem setting of offline meta-MBRL in this paper and presents an importance-weighted variational objective. Sect. IV proposes an algorithm to optimize the importance-weighted variational objective. Sect. V demonstrates the effectiveness of the proposed algorithm in numerical experiments. Sect. VI concludes this paper.

II. PRELIMINARY
A. MDP
This paper considers a discounted infinite-horizon MDP [27]. Let S be the state space. Let A be the action space. Let ρ(s) be the initial state distribution. Let P(s′|s, a) be the transition probability function. Let r(s, a) be the reward function. Let γ ∈ [0, 1) be the discount factor. Let π be a policy. The state and state-action distributions are $d^\pi_P(s) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t \Pr(s_t = s \mid \rho, \pi, P)$ and $d^\pi_P(s, a) = d^\pi_P(s)\,\pi(a \mid s)$. The expected return is $\eta^\pi_P = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid \rho, \pi, P\right]$.

A BAMDP is an augmented MDP whose augmented state is $(b_t, s_t)$, where $b_t$ is the agent's belief over MDPs at timestep t [9], [10]. For simplicity, this paper assumes that the reward function r is known. In that case, the agent's belief is over the transition probability function P. The prior distribution, i.e., the agent's belief at timestep t = 0, is $b_0(P)$. The likelihood function is $l(P; s_t, a_t, s_{t+1}) = P(s_{t+1} \mid s_t, a_t)$. The posterior distribution, i.e., the agent's belief at t ≥ 1, is updated using the Bayes rule,

$b_{t+1}(P) = \Pr(P \mid b_0, s_0, a_0, \cdots, s_t, a_t, s_{t+1}) = \Pr(P \mid b_t, s_t, a_t, s_{t+1}) \propto b_t(P)\, P(s_{t+1} \mid s_t, a_t).$
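To make this belief update concrete, the following toy R snippet (illustrative only, with made-up numbers) applies the Bayes rule above to a discrete prior over two candidate transition models:

# Toy belief update over two candidate transition models, illustrating
# b_{t+1}(P) proportional to b_t(P) * P(s_{t+1} | s_t, a_t)
b   <- c(P1 = 0.5, P2 = 0.5)    # prior belief over the two MDPs
lik <- c(P1 = 0.9, P2 = 0.3)    # each model's probability of the observed transition
b   <- b * lik / sum(b * lik)   # normalized posterior: P1 = 0.75, P2 = 0.25

Transitions that are more probable under a model shift belief mass toward that model; in the continuous case used in the paper, the same rule is applied to a parametric belief.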
The transition probability function in a BAMDP gives the probability of the next augmented state: the next state s′ is drawn from $\mathbb{E}_{P \sim b_t}[P(s' \mid s_t, a_t)]$, and the belief is updated deterministically by the Bayes rule above. By the assumption, the reward function in a BAMDP is r(b, s, a) = r(s, a). The expected return in a BAMDP is

$\mathbb{E}_{P \sim b_0}\left[\eta^\pi_P\right].$    (1)

A Bayes-optimal policy is a policy that maximizes (1). Since a BAMDP is an augmented MDP whose augmented state is (b, s), a Bayes-optimal policy is a function of (b, s). In principle, when given a BAMDP, i.e., when given the likelihood function l(P; s, a, s′) = P(s′|s, a) and the prior distribution b_0(P), a Bayes-optimal policy can be planned offline, as (1) can be computed offline [10].

III. PROBLEM SETTING AND OBJECTIVE FUNCTION
This paper assumes a meta-RL setting where a task represented by an MDP is drawn from a distribution. This paper considers optimizing the expected return averaged over MDPs as a reasonable criterion in meta-RL. For simplicity, this paper assumes that the state space S, action space A, initial state distribution ρ(s), and reward function r(s, a) are the same for all MDPs. In that case, the expected return averaged over MDPs is $\mathbb{E}_{P \sim b_0}[\eta^\pi_P]$, which is the same as (1). That is, policy optimization in meta-RL in this setting can be seen as policy optimization in a BAMDP whose likelihood function and prior distribution are specified by P and $b_0$ (hereinafter called ''the real BAMDP''). As described in Sect. I, this paper considers a setting where the real BAMDP is inaccessible and only offline data are given. Even in principle, a Bayes-optimal policy cannot be planned offline in this setting, as the real BAMDP is not given. Throughout, this paper discusses a model-based approach to optimize (1) in this setting.

This paper assumes that offline data are collected from M real MDPs sampled from $b_0$. Let $D^{ofl}_m = \{(s_{m,n}, a_{m,n}, s'_{m,n})\}_{n=1}^{N}$ be the offline data collected in the m-th real MDP, where $(s_{m,n}, a_{m,n}, s'_{m,n})$ is the n-th transition sample observed in the m-th real MDP. Let $D^{ofl} = \{D^{ofl}_m\}_{m=1}^{M}$ be the entire offline data. Let $P_m$ be the m-th real MDP's transition probability function. Hereinafter, for notational shorthand, this paper uses sa = (s, a), $sa_{m,n} = (s_{m,n}, a_{m,n})$, and $sas'_{m,n} = (s_{m,n}, a_{m,n}, s'_{m,n})$. Let $d^{ofl}_m(sa)$ be the underlying state-action distribution of $sa_{m,n}$.

To represent $P_m(s' \mid sa)$ and $P_m \sim b_0$, the agent uses a latent variable model denoted by $\hat{P}_{\theta,z}(s' \mid sa)$ and $z \sim \beta^0_\phi$, where θ is a model parameter vector shared between MDPs, z is a latent variable vector specifying one MDP, and $\beta^0_\phi$ is the prior distribution parameterized with φ. Hereinafter, this paper refers to the BAMDP whose likelihood function and prior distribution are specified by $\hat{P}_{\theta,z}$ and $\beta^0_\phi$ as ''the simulation BAMDP.'' Let $\beta^t_\phi$ be the agent's belief at timestep t. By the assumption, the reward function in the simulation BAMDP is $r(\beta_\phi, sa) = r(sa)$. In the MDP whose transition probability function is $\hat{P}_{\theta,z}$, let $\hat{\eta}^\pi_{\theta,z}$ be the expected return, let $\hat{d}^\pi_{\theta,z}(sa)$ be the state-action distribution, and let $\hat{D}^\pi_{\theta,z}$ be simulated data collected using policy π.

The model-based meta-RL setting in this paper is summarized as follows:
• the agent trains the simulation BAMDP parameters (θ, φ) using the offline data obtained in the real BAMDP;
• the agent uses the trained simulation BAMDP to plan a policy π to optimize the expected return (1) in the real BAMDP.

Below, this paper discusses how to train (θ, φ) and plan π. The first idea is to train (θ, φ) to optimize a standard latent variable model learning criterion and then plan π to optimize a standard MBRL criterion. This paper calls it ''two-stage optimization.'' The second idea is to iterate between training (θ, φ) and planning π to optimize a unified objective function. This paper calls it ''joint optimization.'' The former is a natural extension of existing methods, whereas the latter is what this paper proposes. Sections III-A and III-B describe the objective functions for these ideas, respectively. Sections IV-A and IV-B show the corresponding algorithms.
A. OBJECTIVE FUNCTION FOR TWO-STAGE OPTIMIZATION
1) FIRST STAGE: TRAINING (θ, φ)
The first stage is to train (θ, φ) based on variational inference for latent variable model learning. As a standard method, this paper uses the variational autoencoder (VAE) [28]. Given $D^{ofl}$, the log marginal likelihood function is

$\ln \Pr(D^{ofl} \mid \theta) = \sum_{m=1}^{M} \ln \int \Pr(D^{ofl}_m \mid \theta, z)\, p(z)\, dz,$    (2)

where p(z) is the prior distribution for VAE learning. Using Jensen's inequality, Equation (2) is bounded as

$\ln \Pr(D^{ofl} \mid \theta) \ge \sum_{m=1}^{M} \left( \mathbb{E}_{z \sim q_\phi(z \mid D^{ofl}_m)}\left[\ln \Pr(D^{ofl}_m \mid \theta, z)\right] - \mathrm{KL}\left(q_\phi(z \mid D^{ofl}_m) \,\|\, p(z)\right) \right),$    (3)

where $q_\phi(z \mid D^{ofl}_m)$ is a variational distribution parameterized with φ. Let (θ*, φ*) denote the parameters that maximize (3).

The initial belief in the simulation BAMDP is ideally the true latent variable distribution obtained after VAE learning. As a reasonable approximation, this paper uses $\beta^0(z) = \frac{1}{M}\sum_m q_{\phi^*}(z \mid D^{ofl}_m)$, which can be seen as a latent distribution learned from data and is called the average encoding distribution [29] or aggregated posterior [30].

2) SECOND STAGE: PLANNING π
The second stage is to plan π using the simulation BAMDP represented by $\hat{P}_{\theta^*,z}$ and $\beta^0_{\phi^*}$. The most naive idea is to optimize the expected return in the simulation BAMDP with (φ, θ) = (φ*, θ*). However, even in the case of an MDP, this naive idea often fails in offline MBRL [20]. An improved idea is to optimize a penalized expected return in an MDP whose penalized reward function is r(s, a) − λu(s, a), where u(s, a) is an estimate of the model's error and λ is a user-chosen penalty coefficient [20]. Similarly, this paper considers a penalized version of the expected return in the simulation BAMDP. Writing the initial belief explicitly, this penalized expected return with (φ, θ) = (φ*, θ*) is used as the second-stage objective function (5), where $u_{m,\theta,z}(sa)$ is an estimate of the model's error between $P_m(\cdot \mid sa)$ and $\hat{P}_{\theta,z}(\cdot \mid sa)$.

B. OBJECTIVE FUNCTION FOR JOINT OPTIMIZATION
In the joint optimization, this paper gives the agent's belief at timestep t = 0 in the form of $\beta^0(z) = \frac{1}{M}\sum_m q_\phi(z \mid D^{ofl}_m)$, as in the two-stage optimization, and approximates the expected return in the real BAMDP accordingly. The difference between the expected return in the simulation BAMDP and the approximate expected return in the real BAMDP is bounded as in (6), where ν is a constant; for the derivation, see the Appendix. A lower bound of the approximate expected return in the real BAMDP then follows: the first term is the expected return in the simulation BAMDP, and the second penalizes the policy evaluation error between the real and simulation BAMDPs.

Inspired by increasing the objective function by maximizing the lower bound, this paper defines a penalized objective function (7), where c ∈ [0, C] is a user-chosen penalty coefficient. The main idea of the joint optimization is to iteratively optimize (θ, φ) and π based on an estimate of (7). This paper uses the MM framework [31] to optimize (7). When updating from $(\theta_i, \phi_i, \pi_i)$, a surrogate function (8) is constructed; below, this paper omits the constant term.

1) ESTIMATED OBJECTIVE FUNCTION FOR TRAINING (θ, φ)
Equation (8) can be rewritten and estimated as (9), where $w^\pi_{m,\theta,z}(sa)$ is the importance weight and $\hat{\kappa}$ is an estimate of κ; how to estimate these quantities is described in Sect. IV-B.
Equation (9) can be interpreted as a kind of variational inference because (9) is similar to (3) in the following respects. Firstly, $w^\pi_{m,\theta,z}(sa)$ is an importance weight addressing the covariate shift between $d^{ofl}_m(sa)$ and $\hat{d}^\pi_{\theta,z}(sa)$. Secondly, $\ell_{m,n}(\theta, z; \hat{\kappa})$ is a utility function modified from the log-likelihood function. Thirdly, ν scales the KL divergence regularization term in the same manner as β-VAE [32]. Based on the interpretation of (9) as a kind of variational inference, this paper uses it to update (θ, φ). This paper calls it ''importance-weighted variational inference for BAMDP.''

2) ESTIMATED OBJECTIVE FUNCTION FOR PLANNING π
Equation (8) can also be rewritten so that the resulting estimated objective function (10) is a penalized version of the expected return in the simulation BAMDP.

a: COMPARISON TO TWO-STAGE OPTIMIZATION
The objective function for planning π is essentially the same for the joint optimization and the two-stage optimization, comparing (10) and (5). In the joint optimization, the objective function for training (θ, φ) is relevant to the one for planning π, as (9) and (10) are both estimates of (8). However, in the two-stage optimization, the objective function for training (θ, φ) is different from the one for planning π, comparing (3) and (5). In other words, for one objective, the joint optimization optimizes it with respect to both (θ, φ) and π, whereas the two-stage optimization does so with respect to only π. As a result, the joint optimization is better than the two-stage optimization in terms of optimizing one objective.

b: ADVANTAGE OF USING $\hat{d}^\pi_{\theta,z}$
It is also possible to consider importance-weighting with $d^\pi_m(sa)/d^{ofl}_m(sa)$, because another bound similar to (6) can be derived by replacing $\hat{d}^\pi_{\theta,z}$ in L(θ, φ; π) with $d^\pi_m$; see Sect. IV of [21]. However, in that case, the resulting variant of (10) does not have the same form as the objective function of a BAMDP planning problem. One advantage of using $\hat{d}^\pi_{\theta,z}$ is that (10) is a BAMDP planning objective function and can be optimized using an existing BAMDP planning algorithm. Another advantage is that, since the agent cannot access data sampled from $d^\pi_m(sa)$ in the real BAMDP but can generate data sampled from $\hat{d}^\pi_{\theta,z}(sa)$ in the simulation BAMDP, the importance weight can be obtained in the standard framework of density ratio estimation [26], which is a simpler setting.

IV. ALGORITHM
A. ALGORITHM FOR TWO-STAGE OPTIMIZATION
The main idea of the two-stage optimization is to train the BAMDP parameters (θ, φ) and subsequently plan policy π.

1) FIRST STAGE: TRAINING (θ, φ)
Line 2 in Algorithm 1 optimizes (3), which is model training.

2) SECOND STAGE: PLANNING π
Line 3 in Algorithm 1 optimizes (5), which is policy planning. Inspired by VariBAD [16], this paper approximately gives an augmented state in the BAMDP by a pair of a state and a variational approximation of the belief. To reduce computational effort, as the prior for the variational approximation of the belief, this paper uses a variational distribution that minimizes the KL divergence from $\beta^0(z) = \frac{1}{M}\sum_m q_{\phi^*}(z \mid D^{ofl}_m)$. As the likelihood function for the variational approximation of the belief, this paper uses $\hat{P}_{\theta,z}$, the decoder trained as in Line 2. This paper trains $u_{m,\theta,z}(sa)$ in (5) using input data $\{sa_{n,m}, z, \mu_{\phi,m}, \ln \sigma_{\phi,m}\}_{n,m}$ and corresponding output data.

B. ALGORITHM FOR JOINT OPTIMIZATION
The main idea of the joint optimization is to iterate between training (θ, φ) and planning π. Algorithm 2 shows the outline.
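Since the importance weights in this outline reduce to a density-ratio estimation subproblem, the following is a minimal R sketch of the standard classifier-based density-ratio trick (probabilistic classification of source vs. target samples). It is illustrative only and not the paper's neural-network estimator $\hat{w}^\pi_{m,\theta,z}$.

# Estimate w(x) = p_target(x) / p_source(x) by logistic regression:
# label source samples 0 and target samples 1, then convert the
# class-posterior odds back into a density ratio.
estimate_density_ratio <- function(x_source, x_target) {
  d   <- rbind(data.frame(x_source, y = 0), data.frame(x_target, y = 1))
  fit <- glm(y ~ ., family = binomial(), data = d)
  n_s <- nrow(x_source); n_t <- nrow(x_target)
  function(x_new) {
    p <- predict(fit, newdata = x_new, type = "response")  # P(target | x)
    p <- pmin(pmax(p, 1e-6), 1 - 1e-6)                     # numerical safety
    (p / (1 - p)) * (n_s / n_t)   # odds, corrected for sample-size imbalance
  }
}

Here the source samples play the role of the offline data and the target samples play the role of simulated rollouts; any probabilistic classifier can replace the logistic regression.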
1) TRAINING (θ, φ)
At the first iteration, where π remains at its initial value, importance-weighting depending on π is not reasonable. Line 4 in Algorithm 2 therefore optimizes (3), as in the two-stage optimization. At the subsequent iterations, Line 6 in Algorithm 2 optimizes (9). Below, this paper discusses how to execute Line 6 concretely.

In principle, $v^\pi_{\theta,z}(s)$ may be estimated by a meta-RL extension of LSDG [34]. However, in practice, estimating $v^\pi_{\theta,z}(s)$ is computationally unrealistic if θ is high-dimensional. Specifically, LSDG in a single-MDP RL setting requires estimating the same number of value functions as the dimension of the model parameters, and $v^\pi_{\theta,z}(s)$ additionally needs its meta-RL version. In the case of an MDP, the numerical experiments in [21] observe that importance-weighted model estimation ignoring this term can still perform better than unweighted model estimation. Assuming that this also holds for a BAMDP, this paper ignores $v^\pi_{\theta,z}(s)$ in the gradient-based optimization.

a: ESTIMATING κ
This paper estimates κ by an estimate $\hat{\kappa}$ computed from the data.

Estimating $w^\pi_{m,\theta,z}$ is meta-learning of density ratios, where the source datasets are $D^{ofl}_m$ and the target datasets are $\hat{D}^\pi_{\theta_j,z}$. This paper estimates $w^\pi_{m,\theta,z}$ using neural networks that take sa, z, and $\mu_{\phi,m}$ as input, denoted by $\hat{w}^\pi_{m,\theta,z}$. Since $\mu_{\phi,m}$ encodes data from the m-th MDP, it contains the information of the source distribution. Since z specifies a simulation MDP, it captures the characteristics of the target distribution. Adding latent representations of both source and target distributions to the input is inspired by [35]. Each iteration of Algorithm 2 ends by setting $(\theta_j, \phi_j, \pi_j) \leftarrow (\theta, \phi, \pi)$.

V. NUMERICAL EXPERIMENTS
A. POLICY EVALUATION
Firstly, to illustrate the effectiveness of importance-weighted variational inference for BAMDP, this paper discusses the problem of predicting the behavior of a given target policy. This problem can be seen as a policy evaluation problem, as the expected return is computed from the predicted behavior. This paper compares the behavior predictions of standard variational inference and importance-weighted variational inference when training BAMDP models expressed by the same NN architecture. This paper considers an inverted pendulum task, where state s is a pair of angle and angular velocity, and action a is the torque input. The environmental variation in meta-RL is that the viscosity coefficient of the equation of motion behind a real MDP changes every episode. The offline data are collected using a random policy in 100 sampled real MDPs. The target policy is a controller that swings up and stabilizes the pendulum at (0, 0) in the real MDP whose viscosity coefficient is zero. For more details, see the Appendix.

The outline of the variational inference is as follows. The agent considers a one-dimensional latent variable z. The agent represents each model by neural networks. For learning each model, the agent uses the data obtained from 80 real MDPs for training and the rest for validation. For regularizing importance-weighting, the agent uses α = 0.2. The number of iterations of Algorithm 3 is five. For more details, see the Appendix.
Fig. 1 illustrates the predicted behavior when using standard variational inference, i.e., optimizing (3). The 100 subplots correspond to the 100 sampled real MDPs. In each subplot, the horizontal and vertical axes stand for angle and angular velocity, respectively. The black lines show real future data when applying the target policy from initial state (π, 0) in each real MDP, which is the ground-truth behavior the agent wants to predict. The multiple black line patterns show that the target policy, planned for a zero viscosity coefficient, swings more weakly than expected as the viscosity coefficient increases, finally failing to swing up. The colored markers show simulated future data when applying the target policy from the same initial state in each simulation MDP, whose latent variable is the encoding of the offline data collected in the real MDP of the same subplot. That is, this is the prediction that the agent obtains using the trained model. Note that since the state transition model is estimated as a probabilistic model, there are variations in the predicted behavior, which are drawn in different colors. The red markers indicate (0, 0). The top 20 subplots and the bottom 80 subplots correspond to the real MDPs where the offline data for validation and training are collected, respectively. There is a big difference between the black lines and the colored markers, meaning that the simulation BAMDP trained using standard variational inference does not capture the behavior of the target policy.

Fig. 2 illustrates the predicted behavior when using importance-weighted variational inference for BAMDP. Note that the black lines, i.e., real future data, are the same as in Fig. 1. The difference between the black lines and the colored markers in Fig. 2 is small compared to Fig. 1. Thus, the simulation BAMDP trained using importance-weighted variational inference for BAMDP captures the behavior of the target policy more accurately than standard variational inference.

Fig. 3 shows the offline data colored based on the logarithm of the importance weights at the fifth iteration of Algorithm 3. This figure also shows the same black lines as Fig. 1 for reference. Roughly speaking, data points close to the black lines are colored brightly, i.e., assigned large importance weights. Such importance-weighting is effective for more accurately predicting the behavior of the target policy.

Fig. 4 illustrates the relationship between the real MDP parameter and the simulation MDP latent variable when using standard variational inference. The horizontal axis stands for the viscosity coefficient, which is the real MDP parameter and is inaccessible to the agent. The vertical axis indicates the one-dimensional latent variable mean of the approximate belief, which encodes the offline data collected in the same real MDP and is accessible to the agent. The orange and blue markers are the results of the real MDPs where the offline data for validation and training are collected, respectively. This figure also shows that the simulation BAMDP learned using standard variational inference is not very accurate.
Fig. 5 illustrates the relationship between the real MDP parameter and the simulation MDP latent variable when using importance-weighted variational inference for BAMDP. The magnitude relation of the one-dimensional latent variable accessible to the agent roughly captures the magnitude relation of the viscosity coefficient inaccessible to the agent. This figure also shows that the simulation BAMDP learned using importance-weighted variational inference for BAMDP is more accurate. Note that, for the few subplots that do not capture the ground-truth behaviors, the viscosity coefficient is close to the critical point where the target policy can no longer swing up.

B. POLICY OPTIMIZATION
Next, this paper discusses policy optimization experiments to demonstrate the effectiveness of the proposed algorithm. This paper presents the results of the inverted pendulum task described in Sect. V-A and of a cartpole swing-up task. For the cartpole task, the environmental variation in meta-RL is that the pole mass and the pole length of the equation of motion behind a real MDP change every episode. Similar to the inverted pendulum task, the offline data are collected using a random policy in 100 sampled real MDPs. For more details, see the Appendix.

The outline of the two-stage optimization and the joint optimization is as follows. The agent considers a one-dimensional latent variable in the inverted pendulum task and a two-dimensional one in the cartpole swing-up task. The agent uses a decoder with 48 hidden units in the inverted pendulum task and one with 64 hidden units in the cartpole swing-up task. The other settings are the same between the inverted pendulum and cartpole swing-up tasks. For regularizing importance-weighting, the agent uses α = 0.2. The number of iterations of Algorithm 3 is five. The number of iterations of Algorithm 2 is two. The agent uses SAC [37] as a policy planning subroutine to learn an augmented-state-dependent policy in the simulation BAMDP. For more details, see the Appendix.

Table 1 shows the results of the two-stage optimization and the joint optimization. Note that the two-stage optimization is an existing method, and the joint optimization is the proposed algorithm, as described in Sect. III. For each task, Table 1 reports the score averaged over five runs with different random seeds. For each run, this paper estimates the expected return by averaging the return over 100 sampled real MDPs. For both tasks, the joint optimization achieves better performance. Figs. 6 and 7 show the behaviors in the real BAMDP when planned using the two-stage optimization and the joint optimization, respectively. The policy planned using the two-stage optimization cannot stabilize the pendulum around (0, 0), as shown in Fig. 6, leading to the worse performance shown in Table 1. This is because the simulation BAMDP trained by the two-stage optimization cannot accurately represent transitions around (0, 0). The joint optimization trains the simulation BAMDP by assigning larger importance weights to data around (0, 0). As a result, the policy planned using the joint optimization can stabilize the pendulum around (0, 0), as shown in Fig. 7, resulting in the better performance shown in Table 1.
VI. CONCLUSION AND FUTURE DIRECTIONS
This paper discusses importance-weighted variational inference to train a BAMDP model in offline Bayesian MBRL. The proposed algorithm optimizes a unified objective function that is an importance-weighted variational objective function for training a model and a penalized expected return for planning a policy. In theory, since a method using standard variational inference without importance-weighting optimizes the objective function of interest only with respect to a policy, the proposed algorithm is better in terms of optimizing one objective function. In practice, numerical experiments demonstrate that the proposed algorithm can perform better.

Future directions to improve the proposed algorithm are as follows. Firstly, this paper considers the case where the number of real MDPs covered in the offline data, M, is not large. To address a large number of real MDPs, the average encoding distribution, $\beta^0(z) = \frac{1}{M}\sum_m q_{\phi^*}(z \mid D^{ofl}_m)$, needs to be approximated by a mixture of variational posteriors with pseudo-inputs [38] or a similar technique. Secondly, applying the method to large-scale tasks is an important challenge. One of the bottlenecks is density ratio estimation in high-dimensional settings, as this is itself a research topic [39], [40]; it is necessary to incorporate recent developments. Thirdly, improving variational inference of a BAMDP as a latent variable model is essential for both the unweighted and importance-weighted settings.

APPENDIX A
DERIVING POLICY EVALUATION ERROR BOUND
The policy evaluation error between the real and simulation MDPs is bounded as follows (see Sect. IV-A of [21]), where $\xi(\theta, \phi, z; \pi) = \mathbb{E}_{sa \sim \hat{d}^\pi_{\theta,z},\, s'}[\,\cdots\,]$.

APPENDIX B
NUMERICAL EXPERIMENT SETTINGS
The inverted pendulum task and the cartpole swing-up task are modifications of OpenAI Gym [41]. The modified parts are as follows. For the inverted pendulum task, the time discretization width is 0.1, the mass is 0.5, the viscosity coefficient is uniformly sampled from [0, 0.3] as task variation, the initial angle and angular velocity are uniformly sampled from [−0.75π, 0.75π] and [−5, 5], and the cost function is 1 − exp(−0.5 × angle²). For the cartpole swing-up task, the goal is changed from balancing to swing-up, the time discretization width is 0.05, the pole mass and length are uniformly sampled from [0.05, 0.3] and [0.4, 0.5] as task variation, and the initial angle is uniformly sampled.

The details of model training are as follows. For the encoder, $f_\phi$ and $[\mu_\phi, \log \sigma_\phi]$ are four-layer neural networks with ReLU activation and 32 hidden units. The decoder, $\hat{P}_{\theta,z}$, is a two-layer neural network with ReLU activation, with 48 hidden units in the inverted pendulum task and 64 hidden units in the cartpole swing-up task. The encoder and the decoder are trained using standard variational inference or importance-weighted variational inference for BAMDP. The importance-weight model, $\hat{w}^\pi_{m,\theta,z}$, is a four-layer neural network with tanh activation and 32 hidden units, learned using a logistic regression loss and α = 0.2. The penalty model, $\hat{u}_{m,\theta,z}$, is a four-layer neural network with tanh activation and 16 hidden units, learned using a regression loss.
The discount factor is γ = 0.99. The constant scaling the KL divergence regularization term is ν = 1. The penalty coefficient for importance-weighted variational inference for BAMDP is c = 0.1. The penalty coefficient for standard variational inference is λ = κ̂, to compare with importance-weighted variational inference for BAMDP under the same conditions.

FIGURE 1. Behaviors in real and simulation BAMDPs when using standard variational inference (policy evaluation).
FIGURE 2. Behaviors in real and simulation BAMDPs when using importance-weighted variational inference (policy evaluation).
FIGURE 4. Real MDP parameter and simulation MDP latent variable when using standard variational inference (policy evaluation).
FIGURE 5. Real MDP parameter and simulation MDP latent variable when using importance-weighted variational inference (policy evaluation).
FIGURE 6. Behaviors in real and simulation BAMDPs when planned using two-stage optimization (inverted pendulum policy optimization).
FIGURE 7. Behaviors in real and simulation BAMDPs when planned using joint optimization (inverted pendulum policy optimization).
Oviposition deterrent activity from the ethanolic extract of Pongamia pinnata, Coleus forskohlii, and Datura stramonium leaves against Aedes aegypti and Culex quinquefasciatus

Mosquitoes are responsible for the spread of more diseases than any other group of arthropods. Diseases such as malaria, filariasis, dengue hemorrhagic fever (DHF), and chikungunya are real threats to mankind. In the present study, ethanolic extracts of the leaves of Pongamia pinnata, Coleus forskohlii, and Datura stramonium were evaluated for oviposition deterrent activity against Aedes aegypti and Culex quinquefasciatus. In the oviposition deterrent tests, the ethanolic extracts of Pongamia pinnata, Coleus forskohlii, and Datura stramonium leaves reduced egg laying by 97.62%, 77.3%, and 100% against Aedes aegypti, and by 59.10%, 39.22%, and 82% against Culex quinquefasciatus, respectively, at the highest concentration (0.1%).

INTRODUCTION

Mosquitoes are vectors of some of the most dreadful diseases of mankind. Of all the insects that transmit diseases, mosquitoes represent the greatest menace [1]. While most people consider mosquitoes an annoyance, these tiny assassins have the lethal capacity to kill more than a million victims a year around the world [2]. The prevalence of mosquito-borne diseases is one of the world's most serious health problems [3]. One of the methods available for the control of mosquitoes is the use of insecticides. Chemical control using synthetic insecticides has been favored so far because of its speedy action and easy application [1]. However, synthetic insecticides are toxic and adversely affect the environment by contaminating soil, water, and air. Botanical pesticides are promising in that they are effective, environment-friendly, easily biodegradable, and also inexpensive [4].

The mosquito Aedes aegypti acts as a vector for an arbovirus responsible for yellow fever in Central and South America and in West Africa. It is also the vector of dengue hemorrhagic fever, which is endemic to South East Asia, the Pacific islands, and Africa [5]. Culex quinquefasciatus Say is the main vector of bancroftian filariasis. The global prevalence of lymphatic filariasis is 120 million people, and the population at risk is 1.3 billion. In India, there may be up to 31 million microfilaraemics and 23 million cases of symptomatic filariasis [6]. Urbanization and changed lifestyles mainly contribute to the proliferation of larval habitats, resulting in disease epidemics [7]. It is estimated that every year at least 500 million people in the world suffer from one or another tropical disease, including malaria, lymphatic filariasis, schistosomiasis, dengue, trypanosomiasis, and leishmaniasis. Of late, chikungunya, a serious mosquito-borne epidemic, has gained momentum in India. These diseases not only cause high levels of morbidity and mortality, but also inflict great economic loss and social disruption on developing countries such as India and China.

Mosquito populations can be reduced by disrupting oviposition [7]. To avoid the propensity for bioaccumulation and the induction of malignancy in non-target animals, safer and more congenial methods of vector control by natural and cheaper means, using plants as insecticides, became popular [8]. Plants are considered a rich source of bioactive chemicals, and they may be an alternative source of mosquito control agents [9]. The co-evolution of plants with insects has equipped them with a plethora of chemical defenses, which can be used against insects.
Since botanicals are less likely to cause ecological damage, a large number of plants have been screened for insecticidal activity against mosquitoes, and some of these have been found to possess promising effects [10]. The present study was an attempt to explore the oviposition deterrent activity of the ethanolic extracts of Pongamia pinnata, Coleus forskohlii, and Datura stramonium leaves against Aedes aegypti and Culex quinquefasciatus.

Collection of plants and extraction
Fully developed leaves of Pongamia pinnata, Coleus forskohlii, and Datura stramonium were collected, and voucher specimens were authenticated by Dr. Rajanna (Botanist), Department of Botany, G.K.V.K, Bangalore, India. The leaves were washed with tap water, shade dried, and powdered. The powdered plant material was loaded into a Soxhlet apparatus and extracted with ethanol. The solvent was removed from the extract in a vacuum evaporator to collect the crude extract. Standard stock solutions were prepared by dissolving the residues in ethanol. These solutions were used for the oviposition deterrent bioassay.

Oviposition deterrent bioassay
The oviposition deterrent test was performed using the method of Xue et al. [11] against Aedes aegypti and Culex quinquefasciatus. Fifteen gravid females (10 days old, 4 days after blood feeding) were transferred to each mosquito cage (45 × 38 × 38 cm) covered with a plastic screen, with a glass top and a muslin sleeve for access. A 10% sucrose solution was available at all times. Serial dilutions of the leaf extracts were made in ethanol. Enamel bowls containing 100 ml of rainwater were treated with leaf extract to obtain test solutions of 0.01, 0.025, 0.05, 0.075, and 0.1%. Two enamel bowls holding 100 ml of rainwater were placed in opposite corners of each cage, one treated with the test material and the other with a solvent control containing 1% ethanol. The positions of the bowls were alternated between the different replicates so as to nullify any effect of position on oviposition. Three replicates for each concentration were run, with cages placed side by side for each bioassay. All experiments were run at ambient temperature (27 ± 2 °C) with a relative humidity of 70−80%. After 24 h, the number of eggs laid in the treated and control bowls was recorded. The percent effective repellency for each leaf extract concentration was calculated using the following formula [12]:

ER% = ((NC − NT) / NC) × 100

where ER = percent effective repellency, NC = number of eggs in the control, and NT = number of eggs in the treatment.

RESULTS
The results of the oviposition deterrent activity of the Pongamia pinnata, Coleus forskohlii, and Datura stramonium ethanolic leaf extracts against Aedes aegypti and Culex quinquefasciatus are presented in Table 1. The data were recorded and the statistics were calculated and presented. From the above results, we can conclude that Datura stramonium has more efficient oviposition deterrence against Aedes aegypti and Culex quinquefasciatus than Pongamia pinnata and Coleus forskohlii.
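As a quick illustration of the ER% formula, the following R snippet computes the effective repellency for a pair of egg counts; the numbers below are hypothetical and are not taken from Table 1.

# Percent effective repellency from egg counts in control vs. treated bowls
effective_repellency <- function(nc, nt) {
  (nc - nt) / nc * 100
}

effective_repellency(nc = 420, nt = 10)   # ~97.62% for these hypothetical counts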
Evaluation of the Oculus Rift S tracking system in room scale virtual reality

In specific virtual reality applications that require high accuracy, it may be advisable to replace the built-in tracking system of the HMD with a third-party solution. The purpose of this research work is to evaluate the accuracy of the built-in tracking system of the Oculus Rift S Head Mounted Display (HMD) in room scale environments against a motion capture system. In particular, an experimental evaluation of the Oculus Rift S inside-out tracking technology was carried out, compared to the performance of an outside-in tracking method based on the OptiTrack motion capture system. In order to track the pose of the HMD using the motion capture system, the Oculus Rift S was instrumented with passive retro-reflective markers and calibrated. Experiments have been performed on a dataset of multiple paths, including simple motions as well as more complex paths. Each recorded path contained simultaneous changes in both the position and the orientation of the HMD. Our results indicate that in room-scale environments the average translation error for the Oculus Rift S tracking system is about 1.83 cm, and the average rotation error is about 0.77°, which is 2 orders of magnitude higher than the performance that can be achieved using a motion capture system.

Introduction

In many virtual reality applications that require high accuracy of Head Mounted Display (HMD) tracking, it may be advisable to replace the built-in tracking system of the HMD with a third-party solution (Debarba et al. 2018). For example, interaction with physical objects in industrial or clinical medicine tasks requires a highly accurate correspondence between the virtual environment and the real world. The goal of this work is to provide a quantitative comparison between the built-in tracking system of the Oculus Rift S HMD and the accuracy that can be attained by exploiting a motion capture system, which acts as ground truth. Indeed, motion capture systems work at high speed and achieve sub-millimeter accuracy (Merriaux et al. 2017).

The Oculus Rift S belongs to the second generation of consumer VR HMDs (since 2016). It is a tethered device that exploits the hardware of an external computer (CPU, graphics card, and RAM) to deliver high-quality virtual reality experiences. The Oculus Rift S does not require any external device for positional tracking. Instead, it features five cameras that enable inside-out tracking. In general, the two common approaches for HMD tracking are called outside-in and inside-out (Rolland et al. 1999). In outside-in systems, multiple fixed external cameras are used to track the pose (3D position and 3D orientation) of the HMD. In particular, the external cameras track a set of reference points located on the headset and on the controllers (if any). Usually, the set of reference points is a pattern (constellation) of IR LEDs or passive (retro-reflective) markers.
The pose of the HMD can be obtained in an absolute reference frame defined in a calibration step. Outside-in tracking systems are generally faster and more accurate than inside-out systems. Moreover, the localization accuracy of outside-in systems can be improved by adding more cameras. Other advantages of outside-in technologies are that they work even in the dark, that they can be used to track the HMD and the body of the user simultaneously (also including external rigid objects), and that hand controllers can be tracked even if the user holds them behind his/her back. The disadvantages of outside-in tracking systems are that the HMD must be instrumented with reference points, and that these systems are much more expensive.

Inside-out tracking systems use cameras placed on the HMD looking outward. An algorithm based on visual-inertial odometry determines in real time the position and the orientation of the HMD by observing low-level features of the surrounding environment. The pose of the HMD can be determined only relative to the initial headset configuration. Inside-out HMD tracking systems are easier to set up and offer reduced costs. In particular, calibration is straightforward, as there is no need to install fixed cameras with mounts or to instrument the environment with markers. The main disadvantage of inside-out technologies is that tracking is less accurate.

The main contribution of this paper, which was not considered in previous works, is the evaluation of the Oculus Rift S inside-out tracking technology in a room scale virtual reality setup, against an outside-in tracking system based on the OptiTrack motion capture. To this purpose, the Oculus Rift S was instrumented with passive markers and calibrated. A dataset of HMD movements of a user walking around the environment has been recorded. Each recorded path contains simultaneous changes in both the position and the orientation of the HMD. The dataset includes paths that vary from simple straight motions to more complex and longer random walks. Our results indicate that in room-scale environments the average translation error for the Oculus Rift S tracking system is about 1.83 cm, and the average rotation error is about 0.77°, which is 2 orders of magnitude higher than the performance that can be achieved using a motion capture system.

The paper is organized as follows. Section 2 reviews the state-of-the-art research on the evaluation of HMD tracking accuracy. Section 3 describes the method used in this study, including the experimental setup, the calibration and data acquisition techniques, the acquired dataset, and the evaluation metrics. Section 4 illustrates the experimental results, while Sect. 5 draws conclusions.

Related work

The closest work to ours is by Jost et al. (2021), where a quantitative evaluation of the Oculus Rift S was carried out in a controlled and small-scale environment using an industrial robot to move the HMD. Translation and rotation were tested separately. The results indicated a high accuracy for both translation (1.66 ± 0.74 mm) and rotation (0.34 ± 0.38°). The main differences to our work are that we consider more ample movements performed in a room-scale environment, and that the movements are more complex, i.e., they contain changes in both rotation and translation. Most previous works on the evaluation of HMD tracking accuracy focused on devices that belong to the first generation of consumer VR (since 2016), like the Oculus Rift (DK1, DK2, and CV1) and the HTC Vive.
The rotation accuracy of the Oculus Rift DK1 was evaluated by Xu et al. (2015), showing a good estimate of full-range motions in cervical spine mobility measurements. The validity of the Oculus Rift DK2 to assess postural changes during balance tasks was investigated by Marchetto and Wright (2019). It was shown that the HMD may be successfully used for assessing postural control without external posturography equipment. A user study was conducted by Chessa et al. (2019) to evaluate the perceptual quality of the Oculus Rift DK2 for immersive virtual reality. The device enabled a strong sensation of presence and did not provoke undesired effects such as cybersickness or fatigue in short tasks. A computer vision approach was presented by Chang et al. (2016), using a high-speed camera, to evaluate timing and accuracy of the Oculus Rift DK2. An evaluation of the HTC Vive HMD was performed by Niehorster et al. (2017) at static poses along a grid of lines drawn on the floor. An analysis of the spatial tracking performance of the HTC Vive HMD was conducted in small scale environments by Jost et al. (2019) using a motion capture system as ground truth, showing high accuracy. A similar analysis was carried out, in larger environments, by Ikbal et al. (2021) using an industrial robot as ground truth source. The results indicated an average error of about 3 mm and 0.5°. The HTC Vive lighthouse positioning system was evaluated by Greiff et al. (2019) for tracking micro unmanned aerial vehicles, showing sub-centimeter position accuracy. A simplified error model for the HTC Vive tracking system was proposed by Wu et al. (2020). The method can be adopted to predict in advance the magnitude of tracking errors in a given configuration of multiple lighthouses (transmitters) and receivers. A comparison between Oculus Rift HMDs and the HTC Vive was presented in different works. In Suznjevic et al. (2017) the HTC Vive and the Oculus Rift CV1 were compared in terms of ease of use, intuitiveness and quality of experience when performing pick and place tasks in virtual reality. In general, the HTC Vive was marginally better. In Borrego et al. (2018) the Oculus Rift CV1 and the HTC Vive were evaluated in terms of accuracy and jitter. Both devices showed good and similar performance at sitting height, while the HTC Vive presented worse accuracy and jitter at standing height, even though it must be recalled that the HTC Vive provides a working area twice as large as that of the Oculus Rift CV1. In Lubetzky et al. (2019) head tracking performance of the Oculus Rift CV1 was compared against the HTC Vive HMD during static and dynamic standing tasks in virtual environments. The results indicated excellent agreement between the two HMDs with respect to a motion capture system. A weaker agreement was observed for vertical displacement in a static task, and moderate agreement was observed for pitch and yaw displacement in a dynamic task. In Bauer et al. (2021) the performance of the HTC Vive Pro HMD was evaluated, showing a high reproducibility of a few millimeters. However, the HTC Vive Pro tracking system has issues when several lighthouses are used, and it has systematic effects like a tilted reference plane. Other studies involved the HTC Vive tracker (a small device that includes the same tracking technology as the Vive HMD) and its motion controllers. A hybrid tracking system was developed by Groves et al.
(2019) using the HTC Vive Pro controller, which enabled optical tracking of a surgical instrument with respect to the HMD, achieving sub-millimeter accuracy. The accuracy of the HTC Vive tracker was investigated by Borge et al. (2019), where the OptiTrack motion capture system served as reference. An accuracy ranging from sub-millimeter to millimeter was obtained. The accuracy of the Vive trackers for rehabilitation and medical tracking tasks was investigated by van der Veen et al. (2019), suggesting that the HTC Vive sensors can be used successfully for clinical analysis of human motions. The static accuracy of the HTC Vive tracker and motion controller was evaluated by Spitzley and Karduna (2019). The measured errors of both Vive sensors were below 0.4° and 3 mm. In Flueratoru et al. (2020) the HTC Vive tracker was adopted as ground truth system for UWB indoor localization, while in Lwowski et al. (2020) the HTC Vive tracker was employed for robot localization. An investigation of the HTC Vive tracking system for gait analysis was carried out by Guaitolini et al. (2021), indicating that the device can accurately monitor gait parameters. In Palma et al. (2021) an augmented reality system was proposed that allows users to interact with a 3D-printed copy of an artefact in a virtual environment using a physical replica (tracked by the HTC Vive tracker) as a tangible user interface. Approaches for six degrees of freedom human body pose estimation based on the HTC Vive lighthouse transmitters were presented in Caserman et al. (2019), and in Jansen et al. (2019) for automatic calibration. In Vox et al. (2021) a method for human body tracking was developed, based on the HTC Vive tracker and on an inverse kinematic model of the human body, and it was compared against a marker-based optical motion capture system, showing some inaccuracies.

Experimental setup

The experimental setup consists of a room of size 8.2 × 5.5 × 2.9 m, shown in Fig. 1. In order to perform the outside-in tracking of the HMD, an OptiTrack motion capture system was adopted with twelve Prime 13 cameras. This configuration allows an effective capture volume of about 5 × 3 × 2.5 m, with a precision of about 0.2 mm. The Prime 13 camera (shown in Fig. 2) is a high speed IR sensor (Gigabit Ethernet, 240 fps maximum frame rate) that provides sub-millimeter accuracy and has a range of about 12 m. The camera resolution is 1280 × 1024 (1.3 MP). The OptiTrack system provides on-camera image analysis for detection of marker location, size and roundness, which relieves the CPU from computation of low-level information. The experimental setup also comprises an Oculus Rift S HMD, instrumented with six passive retro-reflective markers as shown in Fig. 3. The six markers define a single rigid body and are tracked with six degrees of freedom by the OptiTrack system. The Oculus Rift S is a tethered HMD, with a 5-meter cable (with DisplayPort and USB 3.0 connections). A desktop computer running Unity 3D and Motive (the optical motion capture software by OptiTrack) was adopted for data recording and to generate the virtual reality environment. Hardware and software specifications are provided in Table 1.

Data acquisition and processing

Multiple reference frames are defined in the proposed setup, as illustrated in Fig. 4. The fixed world reference frame W of the OptiTrack motion capture system (also shown in Fig. 1) is located on the floor of the room. Reference frame W is known after a one-time calibration phase of the OptiTrack system.
Reference frame K(t) is attached to the HMD rigid body and it is tracked by the OptiTrack software (Motive). The position and the orientation of reference frame K(t) with respect to the HMD rigid body are constant over time, and they depend on the configuration of the markers on the headset. Reference frame O is the world reference frame of the Oculus Rift S inside-out tracking system. In general, reference frames W and O are different; moreover, the origin of reference frame O may change for each recorded path, as it depends on the initial configuration of the HMD. Reference frame U(t) is attached to the HMD rigid body and it is tracked by the Oculus Rift S tracking system (Fig. 4 shows the main reference frames used for data acquisition, calibration and evaluation; axes x, y and z are displayed using red, green and blue arrows, respectively). In particular, reference frame U is located at the midpoint of the user's eyes, with forward (Z-axis) and down (Y-axis) vectors. Data acquisition and processing was carried out by using a custom Unity 3D script, according to the workflow displayed in Fig. 5. A dataset of HMD paths was recorded by a single user walking around in the room scale environment while wearing the headset. The Unity script, which operates at 60 frames per second, records at each frame t (Unity 3D recorder block in Fig. 5) the two synchronized poses of the headset, i.e., the transformations O_U M(t) and W_K M(t), where A_B M denotes the 4 × 4 transformation matrix of reference frame B with respect to reference frame A.

Extrinsic calibration

This section describes the extrinsic calibration procedures that are required to evaluate the tracking accuracy of the Oculus Rift S HMD. Since the transformations O_U M(t) and W_K M(t) track two different reference frames on the HMD, a one-time calibration procedure is required to obtain K_U M, i.e., the fixed 4 × 4 transformation matrix of reference frame U(t) with respect to K(t), as described in Sect. 3.3.1.

Extrinsic calibration between reference frames K and U

As frames K(t) and U(t) are related by a constant transformation K_U M, K_U M can be estimated by applying an extrinsic calibration algorithm given multiple synchronized samples of O_U M(t) and W_K M(t) taken at different poses of the headset. To this purpose a specific calibration path of the HMD was recorded that consists mainly of (in-place) rotational movements around multiple axes, as these movements are known to be the most effective for this type of calibration. A set of sampled data O_U M_c(t), W_K M_c(t) was then extracted from the calibration path, where the subscript c stands for "calibration". As shown in Fig. 4, the reference frames are related as follows:

W_K M(t) · K_U M = W_O M · O_U M(t).   (1)

By using (1) for two frames, t and (t − 1), an equation of the form A X = X B is obtained, where A and B are the relative motions of frame K (expressed in W) and of frame U (expressed in O) between the two instants, and X = K_U M; these equations are solved for X given multiple pairs (A_i, B_i) by using the standard formulation by Horaud and Dornaika (1995). To ensure a sufficiently large change in rotation between two consecutive samples, the poses W_K M_c(t_i) and O_U M_c(t_i) used to build the pairs (A_i, B_i) are sampled from the calibration path whenever the rotation becomes larger than 5°. That is, t_i is the lowest t such that

∠( (W_K M_c(t_{i−1}))^{−1} · W_K M_c(t) ) > 5°,

where, given a transformation matrix T, the operator ∠(T) denotes the rotation angle of the axis-angle representation of the rotation matrix of T. (A numerical sketch of this sampling and of the A X = X B solve is given at the end of this section.)

Extrinsic calibration between reference frames O and W

The transformation matrix W_O M of reference frame O with respect to W cannot be determined in advance for all recorded paths used for the experimental evaluation, as the initial configuration of reference frame O may potentially change for each recorded path.
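To make the 5° sampling rule and the A X = X B formulation of Sect. 3.3.1 concrete, the following is a minimal Python sketch. It is not the authors' implementation: variable names are illustrative, the input is assumed to be two synchronized lists WK and OU of 4 × 4 pose matrices, and a Park-Martin-style closed-form solve is used in place of the Horaud-Dornaika formulation cited above.

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def rot_angle(T):
    # The operator angle(T): rotation angle (rad) of the axis-angle
    # representation of the rotation part of the 4x4 transform T.
    return np.linalg.norm(Rot.from_matrix(T[:3, :3]).as_rotvec())

def sample_relative_motions(WK, OU, thresh_deg=5.0):
    # Keep a new sample t_i whenever the headset has rotated by more than
    # thresh_deg since the last kept sample, then build the relative
    # motions A_i (frame K in W) and B_i (frame U in O).
    thresh = np.deg2rad(thresh_deg)
    idx = [0]
    for t in range(1, len(WK)):
        if rot_angle(np.linalg.inv(WK[idx[-1]]) @ WK[t]) > thresh:
            idx.append(t)
    A = [np.linalg.inv(WK[i]) @ WK[j] for i, j in zip(idx, idx[1:])]
    B = [np.linalg.inv(OU[i]) @ OU[j] for i, j in zip(idx, idx[1:])]
    return A, B

def solve_AX_XB(A, B):
    # Rotation part: R_A R_X = R_X R_B implies that R_X maps the rotation
    # axis of each B_i onto that of A_i, which is a Wahba/Kabsch problem.
    alpha = np.stack([Rot.from_matrix(a[:3, :3]).as_rotvec() for a in A])
    beta = np.stack([Rot.from_matrix(b[:3, :3]).as_rotvec() for b in B])
    U, _, Vt = np.linalg.svd(beta.T @ alpha)
    Rx = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    # Translation part: stack (R_Ai - I) t_X = Rx t_Bi - t_Ai, least squares.
    M = np.vstack([a[:3, :3] - np.eye(3) for a in A])
    v = np.concatenate([Rx @ b[:3, 3] - a[:3, 3] for a, b in zip(A, B)])
    tx = np.linalg.lstsq(M, v, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X  # estimate of the fixed transform K_U M
```

Here rot_angle plays the role of the operator ∠(T), and sample_relative_motions reproduces the rule that a new sample is kept once the rotation since the previous sample exceeds 5°.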
In this work two different approaches are compared to calibrate the transformation between reference frames W and O for each single path. The two calibration methods are based on the alignment of the paths W_U M(t) and O_U M(t). The first approach is named Single State (SS) alignment, while the second approach is named Multiple States (MS) alignment, as in Zhang and Scaramuzza (2018). The Single State alignment method exploits only the configuration of the HMD reference frame at the beginning of the path, i.e., when the tracking drift is not present. Given the initial pose W_U M(t₀) of the headset as measured by the motion capture system, and the initial transformation O_U M(t₀) reported by the Oculus Rift S tracking system, the alignment is obtained as

W_O M_ss = W_U M(t₀) · (O_U M(t₀))^{−1}.   (2)

The Multiple States alignment instead exploits all the recorded states of the path, aligning the two trajectories in a least-squares sense.

Dataset

The experimental evaluation was conducted on a custom dataset containing a set of recorded HMD paths of a user walking around the environment (Fig. 6 shows an image of the user wearing the HMD while recording the dataset). For the dataset acquisition the user wore the HMD, which displayed a 3D virtual reconstruction of the room (Fig. 7). The user was free to rotate his head around during the experiments. Therefore, each recorded path of the dataset contains simultaneous changes in both position and orientation of the HMD. The dataset contains a total of 85 paths, organized in five subsets (Line, Circle, Eight, Random and Dynamic paths, as discussed in the experimental results). It must be noticed that the OptiTrack system may lose tracking of the HMD for a few frames in certain conditions, for example when the user walks close to the corners of the room or when the HMD is occluded. In these cases invalid measurements were discarded and excluded from the evaluation (Path cleanup block in Fig. 5).

Evaluation

This section describes the evaluation metrics that have been used to assess the tracking accuracy. Data analysis was performed by computing both translation and rotation errors. The absolute rotation error dR(t) for each sample at time t was computed as the rotation angle of the axis-angle representation of the relative rotation between the ground-truth orientation and the aligned Oculus orientation; the absolute translation error dT(t) is the Euclidean distance between the corresponding positions. (A numerical sketch of these computations follows the results below.)

Experimental results

The translation error dT(t) and the rotation error dR(t), averaged over each subset of paths and over the complete dataset, are reported in Tables 2 and 3, respectively. Tables 2 and 3 also report the standard deviation and the maximum error. Data are also illustrated in Figs. 8 and 9. The average error computed on the whole dataset is about 1.83 cm and 0.77° (SS alignment method), and 1.12 cm and 0.66° (MS alignment method). The lowest error was obtained for the Line paths, due to their simple shape. Conversely, the more complex paths in the Random subset have an average error which is significantly higher than for all other path types. The average error of the Circle and Eight paths, which have an intermediate complexity, lies between the average error of the Line paths and that of the Random paths. The Eight paths have a slightly lower error than the Circle paths, possibly due to the longer average duration of Circle paths compared to Eight paths (116 s and 109 s, as reported in Sect. 3.4). The error of the Dynamic paths is slightly higher than the error for the Circle paths. Therefore, it can be observed that the Oculus Rift S native tracking system is rather robust to dynamic environments. Example paths from the dataset, tracked by the Oculus Rift S and by the motion capture system, are shown in Figs. 10, 11, 12 and 13. Enlarged views of some example paths are displayed in Figs. 14 and 15. As expected, the Oculus Rift S path obtained through MS alignment is closer to the ground truth OptiTrack path than the SS-aligned path.
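The Single State alignment of Eq. (2) and the per-frame errors dT(t) and dR(t) of the evaluation section can be sketched as follows (illustrative Python with hypothetical variable names; the mocap and Oculus paths are lists of 4 × 4 matrices):

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def single_state_alignment(WU0, OU0):
    # W_O M from the first frame only: W_O M = W_U M(t0) @ inv(O_U M(t0)).
    return WU0 @ np.linalg.inv(OU0)

def pose_errors(WU_mocap, OU_oculus, WO):
    # dT(t): Euclidean distance between positions; dR(t): angle of the
    # axis-angle representation of the relative rotation, in degrees.
    dT, dR = [], []
    for Mgt, Mest_O in zip(WU_mocap, OU_oculus):
        Mest = WO @ Mest_O                       # Oculus pose expressed in W
        dT.append(np.linalg.norm(Mgt[:3, 3] - Mest[:3, 3]))
        rel = Mgt[:3, :3].T @ Mest[:3, :3]
        dR.append(np.degrees(np.linalg.norm(Rot.from_matrix(rel).as_rotvec())))
    return np.asarray(dT), np.asarray(dR)
```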
The translation and rotation errors over time for the Circle path in Fig. 11 and the Random path in Fig. 13 are shown in Figs. 16 and 17, respectively. In the Circle path, the average translation error is 1.55 cm for the Single State alignment method. The translation error obtained by the Single State alignment approach increases at the beginning of the path, when the user moves away from the starting position, and it decreases near the end of the path, when the user comes back to the initial position, thus suggesting a non-negligible error in the estimated rotation component of W_O M_ss. Conversely, the translation error obtained by the Multiple States alignment method is rather constant, about 1.08 cm on average, thus suggesting that the Multiple States alignment method provides a better calibration of the reference frames. In the Circle path, the average rotation error is about 0.52° for the Single State alignment, and 0.37° for the Multiple States alignment. In the Random path, the average translation error is about 2.90 cm, and the average rotation error is 2.3° (with MS alignment), which are significantly larger than in the Circle path. The larger errors in the Random path are due to the more complex shape of the path, which includes frequent changes in motion direction and speed. Repeatability of the calibration between reference frames K and U (Sect. 3.3.1) has been assessed by rerunning the calibration procedure on 20 different calibration paths of the headset. The results indicate that the standard deviation of the translation is about 0.24 cm, whereas the standard deviation of the rotation angle in the axis-angle representation is about 0.44°.

Conclusions

This work investigated the tracking accuracy of the Oculus Rift S HMD in room scale environments. The built-in tracking algorithm of the Oculus Rift S was compared to the performance that can be achieved by using an OptiTrack motion capture system. The results show that, in room-scale environments, the translation and rotation accuracy of the built-in HMD tracking system is about 1.83 cm and 0.77° on average. Therefore, it may be concluded that in most virtual reality applications the inside-out tracking system of the Oculus Rift S is more than adequate; however, for specific virtual reality tasks requiring high quality tracking it may be advisable to replace the built-in tracking system of the Oculus Rift S with a third-party solution. Moreover, it can be observed that the proposed method to evaluate the accuracy of the Oculus Rift S tracking system is general and can be applied to other HMDs. Future work will investigate more robust tracking algorithms by combining data from the motion capture equipment and from the HMD built-in tracking system.

Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.
2022-02-27T16:07:21.218Z
2022-02-25T00:00:00.000
{ "year": 2022, "sha1": "c376eef64729522af994c35478477df58333711f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10055-022-00637-3.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "e0db006477ecc41ce09e5b90db0ab31ba18ead80", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science" ] }
126226342
pes2o/s2orc
v3-fos-license
Neutron Energy Spectra and Yields from the 7Li(p,n) Reaction for Nuclear Astrophysics

Neutrons produced by the 7Li(p,n)7Be reaction close to threshold are widely used to measure the cross section of s-process nucleosynthesis reactions. While experiments have been performed so far with Van de Graaff accelerators, the use of RF accelerators with higher intensities is planned to enable investigations on radioactive isotopes. In parallel, high-power Li targets for the production of high-intensity neutrons at stellar energies are being developed at Goethe University (Frankfurt, Germany) and SARAF (Soreq NRC, Israel). However, such setups pose severe challenges for the measurement of the proton beam intensity or the neutron fluence. In order to develop appropriate methods, we studied in detail the neutron energy distribution and intensity produced by the thick-target 7Li(p,n)7Be reaction and compared them to state-of-the-art simulation codes. Measurements were performed with the bunched and chopped proton beam at the Van de Graaff facility of the Institute for Reference Materials and Measurements (IRMM) using the time-of-flight (TOF) technique with thin (1/8") and thick (1") detectors. The importance of detailed simulations of the detector structure and geometry for the conversion of TOF to a neutron energy is stressed. The measured neutron spectra are consistent with those previously reported and agree well with Monte Carlo simulations that include experimentally determined 7Li(p,n) cross sections, two-body kinematics and proton energy loss in the Li target.

Introduction

Nucleosynthesis of heavy elements (A > 60) in stars involves neutron-capture processes [1,2]. Abundances of these elements are determined by the stellar rates of the slow (s-) process and rapid (r-) process and the half-lives of the relevant nuclides. Hence neutron capture cross sections on stable and unstable nuclides in the stellar regimes of energy are essential quantities in nuclear astrophysics. In the absence of an appropriately intense neutron source, these values have not yet been measured for a large number of nuclides at or near important branching points of the s-process. The neutrons produced by the 7Li(p,n)7Be reaction for an incident proton energy around 30 keV above the reaction threshold (1880.4 keV) on a thick Li target (thick enough that the proton energy is reduced to below the threshold energy while still in the Li) are emitted in a cone of ∼120° angular opening, with an energy distribution close to that of a Maxwellian flux with kT ≈ 25 keV, which is close to the temperature of some of the s-process sites [3]. Intense 7Li(p,n)7Be neutron sources are being developed for the measurement of such cross sections at the Soreq Applied Research Accelerator Facility (SARAF) and at Goethe University Frankfurt, based on high-intensity RF accelerators.

SARAF

The Soreq Applied Research Accelerator Facility (SARAF) [4] is based on a continuous wave (CW), proton/deuteron RF superconducting linear accelerator capable of delivering currents up to 2 mA. Phase I of SARAF (see figure 1) will produce proton and deuteron beams with energies up to ∼4 and 5 MeV, respectively.

FRANZ

The Frankfurt neutron source at the Stern-Gerlach-Zentrum (FRANZ) [5] is currently under construction at the Institute for Applied Physics at the Goethe University in Frankfurt (see figure 2). It will be capable of delivering a proton beam with a current up to 20 mA in CW mode, resulting in beam powers of ∼40 kW, with energies up to 2 MeV.
There will be several target stations, for activation and time-of-flight measurements. The combination of SARAF and LiLiT (the Liquid-Lithium Target) will produce an intense quasi-stellar neutron source peaked at E_n ≈ 25–30 keV with a neutron flux of about 10^10–10^11 neutrons per second (a factor 10–100 larger than presently available).

The FRANZ target

The current design of the FRANZ setup is based on a solid Li target (with a Cu or Ag backing) cooled by water flowing in two water channels (figure 3, right). A large-diameter cooling channel is expected to take a major part of the heat from the copper target assembly. A small second channel, which cools the target backing directly, is an attempt to move as close to the heat source as possible while trying to keep neutron distribution aberrations minimal.

7Li(p,n) Simulated Neutron Spectra

A simulation code, SimLiT [6], which uses experimentally determined 7Li(p,n) cross sections, two-body kinematics and the proton energy loss in the Li target, was developed to calculate neutron spectra, intensities and angular distributions. In the LiLiT setup there is a significant amount of material surrounding the target, which may affect the neutron energy spectrum and intensity. We therefore need to simulate this environment for actual experiments. Detailed simulations using the code SimLiT as the neutron source and GEANT4 for neutron transport, with a particular emphasis on the detector response, were carried out (see figures 4 and 5).

7Li(p,n) Experimental Neutron Spectra

A series of experiments was conducted at the IRMM Van de Graaff accelerator. The 1912 keV proton beam, with an energy spread of ∼1.5 or 15 keV, irradiated a LiF target. We used 6Li-glass detectors (1/8" and 1" thick) to measure the neutron time-of-flight (TOF) in the relevant angular range. If we denote the neutron flight path by L and the neutron TOF by t, the nominal neutron energy is given by E_n = (1/2) m_n (L/t)^2 (a numerical illustration is given at the end of this section). However, the effect of the neutron detector thickness is important for a reliable extraction of the neutron energy spectrum. We present below experimental spectra (figure 4) obtained with each detector, compared with the detailed simulations done with the SimLiT and GEANT4 codes. The simulations reproduce the TOF spectra correctly. In figure 5, we show that a simple approach taking the mean geometrical distance between target and detector results in consistent extracted energy spectra for the two detector thicknesses, in reasonable agreement also with the neutron spectrum emitted by the lithium target as calculated by the code SimLiT. Figure 6 shows the experimental spectrum integrated over all angles compared with the simulation, as reported in [8]: for the thick (1") detector, good agreement is observed between the experimentally measured spectrum (black), the simulated (SimLiT+GEANT4) spectrum (red) and the spectrum calculated directly by SimLiT (blue); the horizontal bars represent uncertainties in the neutron energy determination, and typical statistical counting errors are shown.
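As a quick numerical check of the nominal TOF-to-energy conversion E_n = (1/2) m_n (L/t)^2, here is a small non-relativistic sketch in Python (the flight path and TOF values are illustrative, not those of the IRMM setup):

```python
# Nominal neutron energy from time of flight, E_n = 1/2 m_n (L/t)^2.
M_N_C2_KEV = 939565.4      # neutron rest energy m_n c^2 in keV
C_M_PER_NS = 0.299792458   # speed of light in m/ns

def neutron_energy_kev(L_m, tof_ns):
    beta = (L_m / tof_ns) / C_M_PER_NS   # v/c, valid in the keV regime
    return 0.5 * M_N_C2_KEV * beta**2

# Example: over a 1 m flight path, a TOF of ~417 ns corresponds to ~30 keV.
print(neutron_energy_kev(1.0, 417.0))   # -> approximately 30.1
```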
2019-04-22T13:03:29.281Z
2016-01-05T00:00:00.000
{ "year": 2016, "sha1": "e2aeb753f2ca1740aed87eeae47a6237c359fd14", "oa_license": "CCBY", "oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/665/1/012027/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f92644aeaacfcb8bea0f337780a68abd520b022a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119160024
pes2o/s2orc
v3-fos-license
Existence results for incompressible magnetoelasticity

We investigate a variational theory for magnetoelastic solids under the incompressibility constraint. The state of the system is described by deformation and magnetization. While the former is classically related to the reference configuration, magnetization is defined in the deformed configuration instead. We discuss the existence of energy minimizers without relying on higher-order deformation gradient terms. Then, by introducing a suitable positively 1-homogeneous dissipation, a quasistatic evolution model is proposed and analyzed within the frame of energetic solvability.

Introduction

Magnetoelasticity describes the mechanical behavior of solids under magnetic effects. The magnetoelastic coupling is caused by rotations of small magnetic domains from their original random orientation in the absence of a magnetic field. The orientation of these small domains by the imposition of the magnetic field induces a deformation of the specimen. As the intensity of the magnetic field is increased, more and more magnetic domains orientate themselves so that their principal axes of anisotropy are collinear with the magnetic field in each region, and finally saturation is reached. We refer to e.g. [6,11,13,16] for a discussion of the foundations of magnetoelasticity. The mathematical modeling of magnetoelasticity is a vibrant area of research, triggered by the interest in so-called multifunctional materials. Among these one has to mention rare-earth alloys such as TerFeNOL and GalFeNOL, as well as ferromagnetic shape-memory alloys such as Ni2MnGa, NiMnInCo, NiFeGaCo, FePt and FePd, among others. All these materials exhibit so-called giant magnetostrictive behavior, as reversible strains as large as 10% can be activated by the imposition of relatively moderate magnetic fields. This strong magnetoelastic coupling makes them relevant in a wealth of innovative applications, including sensors and actuators. Following the modeling approach of James & Kinderlehrer [17], the state of a magnetostrictive material is described by its deformation y : Ω → R³ from the reference configuration Ω ⊂ R³ and by its magnetization m : Ω_y → R³, which is defined on the deformed configuration Ω_y := y(Ω) instead. This discrepancy, often neglected by restricting to small-deformation regimes, is particularly motivated here by the possibly large deformations that magnetostrictive materials can experience. We shall here be concerned with the total energy E defined as

E(y, m) := ∫_Ω W(∇y(x), m(y(x))) dx + α ∫_{Ω_y} |∇m(z)|² dz + (µ₀/2) ∫_{R³} |∇u_m(z)|² dz.   (1)

Here, W stands for the elastic energy density, the second term is the so-called exchange energy, and α is related to the typical size of the ferromagnetic texture. The last term represents the magnetostatic energy, µ₀ is the permeability of void, and u_m is the magnetostatic potential generated by m. In particular, u_m is a solution to the Maxwell equation

∆u_m = div(χ_{Ω_y} m) in R³,   (2)

where χ_{Ω_y} is the characteristic function of the deformed configuration Ω_y. We shall consider E under the a.e. constraints det ∇y = 1, |m| = 1, which correspond to incompressibility and magnetic saturation (here properly rescaled). Note that incompressibility is reputed to be a plausible assumption in a vast majority of applications [13]. The aim of this paper is twofold. At first, we concentrate on the static problem. By assuming that W is polyconvex and p-coercive in ∇y for p > 3, we check that E admits a minimizer.
This result is to be compared with the discussion in Rybka & Luskin [27], where weaker growth assumptions on W are made but a second-order deformation gradient term is included. On the contrary, no higher-order gradient is considered here, and we make full use of the incompressibility constraint. In this direction, we shall mention also the PhD thesis by Liakhova [18], where the dimension reduction problem to thin films under the a-priori constraint 0 < α < det ∇y < β is considered. This perspective has been numerically investigated by Liakhova, Luskin, & Zhang [19,20]. More recently, the incompressible case has been addressed by a penalization method from the slightly compressible case by Bielsky & Gambin [3], still by including a second-order deformation gradient term. We also mention the two-dimensional analysis by DeSimone & Dolzmann [12], where no gradients are considered and the existence of a zero-energy state is checked by means of convex integration techniques. Our discussion of the static problem is reported in Section 2. Finally, let us point out that a closely related static model for nematic elastomers was recently analyzed by Barchiesi & DeSimone in [2].

A second focus of the paper is that of proposing a quasi-static evolution extension of the static model. This is done by employing a dissipation distance between magnetoelastic states which combines magnetic changes with the actual deformation of the specimen. Note that the rate-independence of this evolution seems well motivated for a fairly wide range of frequencies of external magnetic fields. We also ensure that the elastic deformation is one-to-one at least inside the reference configuration, allowing for possible frictionless self-contact on the boundary. Let us mention that some models of rate-independent magnetostrictive effects were developed in [4,5] in the framework of magnetic shape-memory alloys and in [25,26] for bulk ferromagnets. We tackle the problem of ensuring the existence of quasi-static evolutions within the frame of energetic solvability of rate-independent problems à la Mielke [23,24]. We restrict ourselves to the isothermal situation. In particular, we assume that the process is sufficiently slow and/or the body thin in at least one direction, so that the released heat can be considered to be immediately transferred to the environment. By relying on the classical energetic-solution technology [21] we prove that the implicit incremental time discretization of the problem admits a time-continuous quasi-static evolution limit. Details are given in Section 3.

Energy

Let the reference configuration Ω ⊂ R³ be a bounded Lipschitz domain. Let us assume from the very beginning p > 3 and consider deformations y ∈ W^{1,p}(Ω; R³) ⊂ C(Ω̄; R³), where the bar denotes set closure. We impose homogeneous boundary conditions by prescribing that y = 0 on Γ₀ ⊂ ∂Ω, where Γ₀ has positive surface measure. Magnetization, representing the density of magnetic spin moments, is assumed to be defined on the open set Ω_y := y(Ω) \ y(∂Ω) and to have fixed norm 1 (note that our problem is isothermal), namely, m : Ω_y → S². The incompressibility constraint reads det ∇y = 1 almost everywhere in Ω. In particular, this entails invertibility of y through the Ciarlet–Nečas condition [9], which in our situation reads |Ω_y| = |Ω|. Indeed, we have that

∫_Ω det ∇y(x) dx = |Ω| = |Ω_y|.

We shall define the set of admissible deformations

Y := { y ∈ W^{1,p}(Ω; R³) : det ∇y = 1 a.e. in Ω, |Ω_y| = |Ω|, y = 0 on Γ₀ },

while admissible magnetizations are maps m ∈ W^{1,2}(Ω_y; S²). Note that, as p > 3, the set Y is sequentially closed with respect to the weak topology of W^{1,p}(Ω; R³).
This indeed follows from the sequential continuity of the map y ↦ det ∇y from W^{1,p}(Ω; R³) to L^{p/3}(Ω) (both equipped with the weak convergence), the weak closedness of the Ciarlet–Nečas condition [8,9], and from the compactness properties of the trace operator. For the sake of brevity, we shall also define the set Q as

Q := { (y, m) : y ∈ Y, m ∈ W^{1,2}(Ω_y; S²) }.

Moreover, we say that {(y_k, m_k)}_{k∈N} Q-converges to (y, m) ∈ Q as k → ∞ if the following three conditions hold:

(4a) y_k ⇀ y in W^{1,p}(Ω; R³);
(4b) χ_{Ω_{y_k}} m_k → χ_{Ω_y} m in L²(R³; R³);
(4c) χ_{Ω_{y_k}} ∇m_k ⇀ χ_{Ω_y} ∇m in L²(R³; R^{3×3}).

By following an argument from [27, Lemma 3.5], here simplified by the incompressibility assumption, we can show that Q-bounded sequences are Q-sequentially precompact.

Proposition 2.1. Every Q-bounded sequence admits a Q-converging subsequence.

Proof. Let (y_k, m_k) be Q-bounded. The compactness in the y-component, i.e. (4a), follows from the weak closedness of Y. Assume (without relabeling the subsequence) that y_k ⇀ y in W^{1,p}(Ω; R³) and fix ε > 0. We denote by Ω_y^ε the set Ω_y^ε := {z ∈ Ω_y : dist(z, ∂Ω_y) > ε}. As p > 3 we have that W^{1,p}(Ω; R³) ↪ C(Ω̄; R³) compactly. This in particular entails that Ω_y^ε ⊂ Ω_{y_k} for k sufficiently large. Hence, we infer that the norms ‖m_k‖_{W^{1,2}(Ω_y^ε; R³)} are equibounded. Taking into account that |m_k| = 1, we get (again for a non-relabeled subsequence) that m_k ⇀ m in W^{1,2}(Ω_y^ε; R³). Here the extracted subsequence and its limit m could depend on ε. On the other hand, as {Ω_y^ε}_{ε>0} exhausts Ω_y, we have that m is defined almost everywhere in Ω_y. By following the argument in [27, Lemma 3.5] we exploit the decomposition

χ_{Ω_{y_k}} m_k − χ_{Ω_y} m = (χ_{Ω_{y_k}} − χ_{Ω_y^ε}) m_k + χ_{Ω_y^ε} (m_k − m) + (χ_{Ω_y^ε} − χ_{Ω_y}) m.   (5)

We now check that the above right-hand side goes to 0 as k → ∞ and ε → 0. As to the first term, since the closure of Ω_y is compact, we have that for any ε > 0 there exists an open set O_ε such that O_ε ⊃ Ω_y and |O_ε \ Ω_y| < ε. The uniform convergence y_k → y yields that Ω_{y_k} ⊂ O_ε for k sufficiently large. Therefore, |O_ε \ Ω_y^ε| can be made arbitrarily small if ε is taken small enough, and the first term in the right-hand side of (5) converges to 0 as k → ∞ and ε → 0. The second term in the right-hand side of (5) goes to 0 as k → ∞, since m_k → m strongly in L²(Ω_y^ε; R³). As |m| = 1 almost everywhere, the third term in the right-hand side of (5) is bounded by ‖χ_{Ω_y} − χ_{Ω_y^ε}‖_{L²(R³)}, which goes to 0 as ε → 0. This shows the convergence (4b). A similar argument can then be used to show that χ_{Ω_{y_k}} ∇m_k ⇀ χ_{Ω_y} ∇m in L²(R³; R^{3×3}), namely convergence (4c).

Remark 2.2. Notice that the proof of the strong convergence of {χ_{Ω_{y_k}} m_k} still holds if we replace Ω by some arbitrary measurable subset ω ⊂ Ω. Keeping in mind that det ∇y_k = det ∇y = 1 almost everywhere in Ω, for all k ∈ N, and that all mappings y_k and y are invertible, we calculate, for any measurable ω ⊂ Ω,

∫_ω m_k(y_k(x)) dx = ∫_{y_k(ω)} m_k(z) dz → ∫_{y(ω)} m(z) dz = ∫_ω m(y(x)) dx.

This shows m_k ∘ y_k ⇀ m ∘ y in L²(Ω; R³). As the L² norms converge as well, we get strong convergence in L²(Ω; R³). Eventually, as m_k takes values in S², one has that m_k ∘ y_k → m ∘ y in L^r(Ω; R³) for all r < ∞ as well.

The following result is an immediate consequence of the linearity of the Maxwell equation (2).

Lemma 2.3. Let χ_{Ω_{y_k}} m_k → χ_{Ω_y} m in L²(R³; R³) and let u_{m_k} ∈ W^{1,2}(R³) be the solution of (2) corresponding to χ_{Ω_{y_k}} m_k. Then u_{m_k} ⇀ u_m in W^{1,2}(R³), where u_m is the solution of (2) corresponding to χ_{Ω_y} m.

Let us finally list here our assumptions on the elastic energy density W. We ask that

(6a) W(F, m) ≥ c|F|^p − C for some c, C > 0 (coercivity);
(6b) W(RF, Rm) = W(F, m) for all R ∈ SO(3);
(6c) W(F, m) = W(F, −m);
(6d) W(F, m) = Ŵ(F, cof F, m), where (F, H) ↦ Ŵ(F, H, m) is convex for every m ∈ S².

In particular, we assume material frame indifference (6b) and invariance under magnetic parity (6c). Recall that for invertible F ∈ R^{3×3} the cofactor is defined as cof F := (det F) F^{−⊤}. In the present incompressible case det F = 1, we simply have cof F = F^{−⊤}.
Eventually, assumption (6d) corresponds to the polyconvexity of the function W(·, m) [1]. Assumptions (6) will be considered in all of the following, without explicit mention.

Theorem 2.4 (Existence of minimizers). The energy E is lower semicontinuous and coercive with respect to Q-convergence. In particular, it attains a minimum on Q.

Proof. Owing to the coercivity assumption (6a), one immediately gets that the sublevels of E are Q-bounded, hence Q-sequentially compact due to Proposition 2.1. The magnetoelastic term in E is weakly lower semicontinuous because of the assumptions (6) on W, see [1,14]. The exchange energy term in E is quadratic, hence weakly lower semicontinuous. The magnetostatic term is weakly lower semicontinuous by Lemma 2.3. The existence of a minimizer follows from the direct method, e.g. [10].

For the sake of notational simplicity, in all of this section no external forcing acting on the system was considered. It is however worth mentioning explicitly that the analysis extends immediately to the case of the linear perturbation of the energy E given by including the term

−∫_{Ω_y} h · m dz − ∫_Ω f · y dx − ∫_{Γ_t} g · y dS.   (7)

The first term is the so-called Zeeman energy and h ∈ L¹(Ω_y; R³) represents an external magnetic field. Moreover, f ∈ L^q(Ω; R³) is a body force, and g ∈ L^q(Γ_t; R³) is a traction acting on Γ_t, where Γ_t ⊂ ∂Ω is relatively open, ∂Γ₀ = ∂Γ_t (these last two boundaries taken in ∂Ω), and 1/p + 1/q = 1. Eventually, we could replace the homogeneous Dirichlet boundary condition y = 0 on Γ₀ with some suitable non-homogeneous condition without difficulties.

Evolution

Let us now turn to the analysis of quasi-static evolutions driven by E. In order to do so, one has to discuss dissipative effects as well. Indeed, under usual loading regimes, magnetically hard materials experience dissipation. On the other hand, the dissipation mechanism in ferromagnets can be influenced by impurities in the material without substantially affecting the stored energy. This allows us to consider energy storage and dissipation as independent mechanisms. Our, to some extent simplified, standpoint is that the amount of dissipated energy within the phase transformation from one pole to the other can be described by a single, phenomenologically given number δ > 0 (of dimension J/m³ = Pa) depending on the coercive force H_c [7]. Being interested in quasistatic, rate-independent processes, we follow [22,23,24] and define the so-called dissipation distance between two states q₁ := (y₁, m₁) ∈ Q and q₂ := (y₂, m₂) ∈ Q by introducing D : Q × Q → [0, +∞) as

D(q₁, q₂) := δ ∫_Ω |m₁(y₁(x)) − m₂(y₂(x))| dx.

Here, the rationale is that although the system dissipates via magnetic reorientation only, the elastic deformation also contributes to dissipation, as m lives in the deformed configuration. Assume, for simplicity, that the evolution of the specimen during a process time interval [0, T] is driven by the time-dependent loadings h(t), f(t) and g(t) of (7), so that we can write a (time-dependent) energy functional

E(t, q) := E(q) − ∫_{Ω_y} h(t) · m dz − ∫_Ω f(t) · y dx − ∫_{Γ_t} g(t) · y dS.

Our aim is to find an energetic solution corresponding to the energy and dissipation functionals (E, D) [23,24], that is, an everywhere defined mapping q : [0, T] → Q such that

∀ t ∈ [0, T], ∀ q̂ ∈ Q : E(t, q(t)) ≤ E(t, q̂) + D(q(t), q̂),   (8a)
∀ t ∈ [0, T] : E(t, q(t)) + Var(D, q; 0, t) = E(0, q(0)) + ∫₀ᵗ ∂_t E(θ, q(θ)) dθ,   (8b)

where we have used the notation

Var(D, q; s, t) := sup Σ_{j=1}^{J} D(q(t_j), q(t_{j−1})),

the supremum being taken over all partitions of [s, t] of the form {s = t₀ < t₁ < ... < t_{J−1} < t_J = t}. Condition (8a) is usually referred to as the (global) stability of the state q at time t.
For the sake of convenience, we shall call stable (at time t) a state fulfilling (8a) and denote by S(t) ⊂ Q the set of stable states. The scalar relation (8b) expresses the conservation of energy instead. We shall now state the existence result.

Theorem 3.1 (Existence of energetic solutions). Let q₀ ∈ S(0). Then there exists an energetic solution q : [0, T] → Q with q(0) = q₀.

Sketch of the proof. This argument follows the by now classical scheme for the existence of energetic solutions. As such, we record here only some comments, referring for instance to [15,21] for the details. Starting from the stable initial condition q₀ ∈ S(0), we (semi)discretize the problem in time by means of a partition 0 = t₀ < t₁ < ... < t_N = T of [0, T] such that the diameter max_i (t_i − t_{i−1}) → 0 as N → ∞. This gives us a sequence q^N_k such that q^N_0 := q₀ and q^N_k, 1 ≤ k ≤ N, is a solution to the following minimization problem:

minimize E(t_k, q) + D(q, q^N_{k−1}) over q ∈ Q.   (9)

The existence of a solution to (9) follows from Theorem 2.4 combined with the lower semicontinuity of D. In particular, Remark 2.2 implies that the dissipation term in (9) is continuous with respect to the weak convergence in Q. We now record that minimality and the triangle inequality entail that the obtained solutions are stable, i.e., q^N_k ∈ S(t_k) for all k = 0, ..., N. Let us define the right-continuous piecewise-constant interpolant q^N : [0, T] → Q as q^N(t) := q^N_k for t ∈ [t_k, t_{k+1}) and q^N(T) := q^N_N. Following [21] we can establish, for all N ∈ N, the a-priori estimates

‖y^N‖_{L^∞((0,T); W^{1,p}(Ω;R³))} ≤ C,   (10a)
‖χ_{Ω_{y^N}} ∇m^N‖_{L^∞((0,T); L²(R³;R^{3×3}))} ≤ C,   (10b)
‖χ_{Ω_{y^N}} m^N‖_{L^∞((0,T); L^∞(R³;R³))} ≤ C,   (10c)
‖m^N ∘ y^N‖_{BV(0,T; L¹(Ω;R³))} ≤ C.   (10d)

These a-priori estimates, together with a suitably generalized version of Helly's selection principle [24, Cor. 2.8], entail that, for some not relabeled subsequence, we have q^N → q pointwise in [0, T] with respect to the weak topology of Q. This convergence suffices in order to prove that the limit trajectory is indeed stable, namely q(t) ∈ S(t) for all t ∈ [0, T]. Indeed, this follows from the lower semicontinuity of E and the continuity of D. Moreover, by exploiting minimality we readily get that

E(t_k, q^N_k) + D(q^N_k, q^N_{k−1}) ≤ E(t_k, q^N_{k−1}) = E(t_{k−1}, q^N_{k−1}) + ∫_{t_{k−1}}^{t_k} ∂_t E(θ, q^N_{k−1}) dθ.

Taking the sum of the latter over k, we readily check that the one-sided inequality in relation (8b) holds for t = T. The converse energy inequality (and hence (8b) for all t ∈ [0, T]) follows from the stability q(t) ∈ S(t) of the limit trajectory by [21, Prop. 5.6]. Note that the previous existence result can be adapted to the case of time-dependent non-homogeneous Dirichlet boundary conditions by following the corresponding argument developed in [15].
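To fix ideas, the incremental scheme (9) can be summarized by the following schematic Python snippet. This is a purely illustrative, finite-dimensional caricature: in the actual proof each minimization step is posed in the infinite-dimensional state space Q, and its solvability is provided by Theorem 2.4.

```python
def incremental_evolution(E, D, q0, times, minimize):
    # Implicit time discretization (9): at each t_k minimize
    #   q -> E(t_k, q) + D(q, q_prev),
    # starting from the previously computed state. The callable `minimize`
    # abstracts the solver and returns an (approximate) minimizer.
    traj = [q0]
    for t_k in times[1:]:
        q_prev = traj[-1]
        traj.append(minimize(lambda q: E(t_k, q) + D(q, q_prev), q_prev))
    return traj
```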
2013-11-29T09:10:44.000Z
2013-11-16T00:00:00.000
{ "year": 2013, "sha1": "d83ae261b1efa826c78c85dad1670cce3f45d8f2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3934/dcds.2015.35.2615", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "d83ae261b1efa826c78c85dad1670cce3f45d8f2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
84037133
pes2o/s2orc
v3-fos-license
Prescription of enzyme-containing products in South Africa

Enzymes are traded in five categories, namely medical (intervention), diagnostic (detection and quantification), molecular biology, biofuel and industrial. Therapeutic enzymes have been investigated for different uses, for example for the treatment of genetic disorders, blood clotting disorders, cancer and infectious diseases, and for burn debridement. No studies on the prescription of enzyme-containing products in South Africa could be found. Enzymes are classified in the Monthly Index of Medical Specialities under digestants, enzymes and fibrinolytics. The primary aim of this study was to investigate the prescription patterns and cost of enzyme-containing products in South Africa. A private health-care medicines claims database for 2010 and 2011 of approximately 4.5 million records was analysed retrospectively. Enzyme-containing products constituted a small percentage of medical insurance claims (only 0.02% of approximately 4.5 million claims for products and procedures), yet they were relatively expensive. A total of 906 products was prescribed at a cost of almost ZAR2 million over the 2 years. Hyaluronidase was the most frequently prescribed (60.04%), followed by pancreatin-containing products (34.66%). Pancreatin (lipase/protease/amylase) is primarily used in the management of pancreatic exocrine insufficiency. The average cost per hyaluronidase prescription paid by the medical insurance schemes was ZAR280. Other enzyme-containing products prescribed were imiglucerase, alteplase and tenecteplase. Imiglucerase was overall the most expensive. Alteplase, tenecteplase and streptokinase are antithrombotic enzymes that are used in the treatment of acute myocardial infarction or ischaemic stroke. Streptokinase, regarded as the most affordable antithrombotic enzyme, was not prescribed during the period under study. With the growing opportunities for enzymes in therapeutics, the use of enzyme-containing products, which are comparatively expensive, requires cost-effectiveness studies.

Introduction

Enzymes are natural proteins that catalyse chemical reactions, converting a specific set of reactants (substrates) into specific products. Enzymes are highly specific and have several applications in different industries, such as the paper, starch, leather, pharmaceutical, baking, beer-brewing, detergent and wine-making industries.1 Enzymes are traded in five distinct categories, namely medical (intervention), diagnostic (detection and quantification), molecular biology, biofuel and industrial. Based on their application, enzymes can be categorised into two major categories:1 industrial enzymes and medical enzymes. The differences between industrial and medical enzymes are given in Table 1.

Table 1: Differences between industrial and medical enzymes1
Industrial enzymes | Medical enzymes
Produced in large quantities | Produced in small quantities
Partially purified, but at an optimum | Extensively purified
Economic concerns are very important | Excellent functionality
Used as catalysts; hence, functionally, industrial enzymes are catalytic | Used to treat various diseases; hence, functionally, medical enzymes are therapeutic
Source of industrial enzymes is microbial and recombinant | Source of medical enzymes is mainly human or animal, and recombinant

The market for enzymes in medicine is growing. The global market for industrial enzymes was valued at USD2.9 billion in 2008 and reached about USD3.1 billion in 2009.1
In contrast to this, the global market for medical enzymes was estimated at USD6 billion in 2010, growing to an estimated USD7.2 billion in 2015.2 Therapeutic enzymes are the biggest segment in terms of revenue generated.2 This sector was valued at USD5.3 billion in 2010 and is expected to increase to USD6.3 billion in 2015.2 The variety of enzymes and their potential therapeutic applications are considerable.3 Some examples of enzymes which have realised the potential to become important therapeutic agents are asparaginase, hyaluronidase, ribonuclease, streptokinase and urokinase.3 Enzymes as medicines (therapeutic enzymes) have two important features that distinguish them from all other types of medicines.4 Firstly, enzymes often bind and act on their targets with great affinity and specificity, and, secondly, enzymes are catalytic and convert multiple target molecules to the desired products.4 These two features make enzymes specific and potent medicines that can accomplish therapeutic biochemistry in the body that small molecules cannot. Enzymes can often be used for treatments complementary to those by small molecules, without one necessarily being better than the other. Each has its own best application. These characteristics have resulted in the development of many enzyme-containing medicines for a wide range of disorders. For the past 50 years, therapeutic enzymes have been investigated for the treatment of genetic disorders, blood clotting disorders, cancer and infectious diseases, as well as for burn debridement, amongst others, and have been registered as 'orphan drugs' or 'therapeutic interventions'.4 The field has developed rapidly, and, in 1987, the US Food and Drug Administration approved the first recombinant enzyme drug, alteplase, which is a human tissue plasminogen activator.4 Even though products classified under 'Enzymes' in the Monthly Index of Medical Specialities (MIMS)5 are reimbursed by medical aid insurance schemes in South Africa, no drug utilisation studies could be found in the literature on the prescription, usage patterns and cost of these products, despite the fact that they are relatively expensive and deemed of national importance. MIMS5 lists most of the pharmaceutical products in the South African market, especially those regularly prescribed, and is regarded as a standard reference source of available medicines in South Africa. Enzyme-containing products are included in MIMS Category 27.0.0 (Enzymes), but are also referred to in MIMS Categories 8.3 (Fibrinolytics) and 12.1 (Digestants).5 Only six enzyme active ingredients are listed in these categories: hyaluronidase, imiglucerase and pancreatin, and three antithrombotic enzymes (alteplase, tenecteplase and streptokinase).5,6 This list is limited, even though there are more types of enzymes used in therapeutics in other parts of the world, either as registered pharmaceuticals or as orphan drugs.4,6,7 All these products are for parenteral administration, except the pancreatin-containing products. Hyaluronidase (hyaluronoglucosaminidase, EC 3.2.1.35) is a glycosidase hydrolysing the 1,4-linkages between N-acetyl-β-D-glucosamine and D-glucuronate residues in hyaluronate.8 The enzyme is extracted from ovine or bovine testes, as the protein is present on the posterior head and the acrosomal membrane of mammalian sperm.9 Recently, recombinant forms of hyaluronidase (produced by the combining of material from more than one origin), such as rHuPH20, have been introduced onto the market.
9,10 Hyaluronidase modifies the permeability of connective tissue through the hydrolysis of hyaluronic acid, which temporarily decreases the viscosity of the cellular cement and promotes diffusion of injected fluids or of localised transudates, thus facilitating their absorption. It is used as an adjunct to increase the absorption and dispersion of other injected drugs,10 for hypodermoclysis, for improved resorption of subcutaneously administered radiocontrast media in urography, for the effective decrease of injected depots of hyaluronic acid in aesthetic surgery,11 and as an adjunct in subcutaneous urography for improving resorption of radiopaque agents. It is used off-label for the treatment of vitreous haemorrhage and diabetic retinopathy. Sodium hyaluronate 10 mg/mL is included in the Standard Treatment Guidelines and Essential Medicines List for South Africa (Hospital Level Adults).12 It is used as an ocular peri-operative pharmaceutical product and is classified under 'Surgical and diagnostic products' (Section 18.8).12 It is therefore used in the public health sector in hospitals, but no data on the total number of prescriptions could be found. Imiglucerase is a recombinant DNA-produced analogue of human β-glucocerebrosidase4 (EC 3.2.1.45). The enzyme hydrolyses the beta-glycosidic links in glucocerebroside, which is an intermediate in lipid metabolism. A mutation in the glucocerebrosidase gene leads to the disorder known as Gaucher's disease (a lysosomal storage disease), which occurs in the absence of glucocerebrosidase activity. The use of glucocerebrosidase in enzyme replacement therapy is the first of its kind using an exogenous enzyme targeting its natural site of activity in the body.4 Pancreatin is an extract from ovine pancreas and contains lipases (pancreatic triacylglycerol lipase, EC 3.1.1.3), α-amylase (EC 3.2.1.1), proteases (trypsin, EC 3.4.21.4) and chymotrypsin (EC 3.4.21.1) in varying proportions. Pancreatin is used to treat pancreatic insufficiencies (both prescription and over-the-counter), as well as in the treatment of fat malabsorption in HIV patients and pancreatic insufficiency in cystic fibrosis patients (where the lipases are from recombinant maize4). The antithrombotic enzymes are tissue plasminogen activators that are used to remove blockages in blood vessels in acute ischaemic strokes, myocardial infarctions and pulmonary oedemas. Alteplase and tenecteplase (EC 3.4.21.68) are serine proteases of human origin that cleave plasminogen to plasmin, which is the enzyme responsible for clot breakdown. Streptokinase (EC 3.4.24.29), produced by various strains of streptococci, is able to bind and activate plasminogen in a non-proteolytic manner to break down fibrin clots.
13 This figure equates to 3.5 million insured members and their 4.6 million dependants. The remainder of the South African population, that is 39.9 million people, is dependent on the government's medical services. Data covered 2010 and 2011 and included medication, procedures and devices (a total of 2 126 264 records for 2010 at an amount claimed of ZAR173 812 440.86, and 2 298 312 records for 2011 at an amount claimed of ZAR169 127 258.13). Each medication record contained information on the age and gender of the patient, with a unique number to identify each patient, the date of the prescription, detailed information on the dispensed drug (name, package size, formulation, strength and quantity), the price and various reimbursement variables. MIMS5 was used to identify and classify the medicines. All records for 'Enzymes' (MIMS Category 27.0.0 Enzymes (8.3; 12.1)) were extracted, as well as records in Categories 8.3 (Fibrinolytics) and 12.1 (Digestants).5 Microsoft Access® and Excel® were used to analyse the data. Basic descriptive statistics were calculated. The cost indicated is the amount that was paid by the respective medical aid insurance schemes and may differ from the single exit price (SEP)14 that is used in South Africa, as not all medical aid insurance schemes cover the full costs of these products and co-payments may have to be made by patients. At the time of the study (at the juncture between 2010 and 2011), EUR1.00 was equal to ZAR9.38, USD1.00 was equal to ZAR7.64 and GBP1.00 was equal to ZAR11.48. Limitations of the study were that no clinical information or diagnoses were available in the database, and that only data of patients served by the private health-care sector in South Africa were included. Also, only products containing enzymes that were prescribed during 2010 and 2011 are discussed in the results section, although more trade name products have since become available on the South African market. Permission to conduct this study was obtained from the Research Ethics Committee (Human) of the Nelson Mandela Metropolitan University (ethics clearance number: H08-HEA-PHA-005).

Results and discussion

Enzyme-containing products constituted only 0.02% of the approximately 4.5 million claims for products and procedures paid for by the medical insurance schemes interrogated in this study, and 0.57% of the cost. Of the enzyme-containing products available for prescription in South Africa, only five were prescribed and submitted for reimbursement by the medical insurance company whose database was interrogated. The different enzyme-containing product classes that were dispensed and paid for are given in Figure 1. Most products were prescribed in MIMS Category 27.0.0, accounting for 62.69% of the number of prescriptions for enzymes and 80.99% of the total amount claimed for enzymes over the 2 years. About half of the products (52.21%) were dispensed by private hospitals. Hyaluronidase was the most frequently prescribed (60.04% of all enzyme products), followed by pancreatin-containing products (34.66%). Hyaluronidase is classified in the Anatomical Therapeutic Chemical / Defined Daily Dose (ATC/DDD) Index as an enzyme under 'Blood and Blood Forming Organs (Other Haematological Agents)'.
6,15 It is used off-label for the treatment of vitreous haemorrhage and diabetic retinopathy. It is not possible to speculate on the reason for its use. Recently, it was reported that administering recombinant human hyaluronidase (rHuPH20) with meal-time insulin injections could help improve blood sugar control in people with type-1 diabetes (the combination led to smaller rises in glucose levels than treatment with insulin lispro alone).16 Imiglucerase was overall the most expensive (an average cost of ZAR58 103.26 for the 200 units/5 mL vials and ZAR62 470.48 for the 400 units/5 mL vials prescribed, and a total cost of ZAR697 239.17 for the 200 units/5 mL vial and ZAR749 645.73 for the 400 units/5 mL vial). Imiglucerase, used in the treatment of Gaucher's disease, is also listed in MIMS Category 26.0.0 (Biologicals)5 and in ATC Group A16AB02.6,15 Prescriptions were dispensed for alteplase at an average cost of ZAR6572.60 per prescription. Alteplase is used as fibrinolytic therapy in acute myocardial infarction within 6 h of symptom onset, as thrombolytic treatment in patients with acute massive pulmonary embolism and haemodynamic instability, and as thrombolytic treatment of acute ischaemic stroke, initiated within 3 h after the onset of stroke symptoms and after the exclusion of intracranial haemorrhage.5 Tenecteplase, of which 14 injections were dispensed, is also used as thrombolytic therapy in acute myocardial infarction, as soon as possible after symptom onset but within 6–9 h of symptom onset. The average amount claimed per injection was ZAR11 905.72 for 8000 units and ZAR13 091.24 for 10 000 units. The pancreatin-containing products are available from pharmacies and are indicated as supplementation for pancreatic exocrine insufficiency caused by chronic pancreatitis, cystic fibrosis or partial pancreatectomy.4 The formulation with dimethicone is used for abdominal distention due to cumulative gas and foam, in hepatic and biliary dysfunction, and in post-operative flatulence and pre-gastrointestinal radiologic examination.5 These products were mostly dispensed by pharmacies and in private hospitals and were relatively inexpensive. In the database being interrogated, no prescriptions were encountered for streptokinase, which is indicated for severe myocardial infarction and is regarded as the most affordable antithrombotic enzyme.6 In the SEP file of January 2012,14 streptokinase was indicated as 'not approved', and it was not prescribed during the period under study.

Conclusion and recommendations

No studies could be found in the literature on the prescription patterns of enzyme-containing products in South Africa. Therefore, the aim of this study was to investigate these patterns as well as the cost of enzyme-containing products in South Africa using a private medical insurance scheme database. A limitation of this study was the absence of diagnoses in the database, which did not allow for the determination of the reason for the use of the various enzyme-containing products.
Considering the increased emphasis on therapeutic enzymes and the growing global market for enzymes, it is noteworthy that medical enzymes constituted only 0.02% of reimbursements from the medical claims database. Whilst it may be difficult to speculate on the underlying reasons, both cost and familiarity with enzymes may play a role in the prescription patterns found. Medicinal products containing enzymes are relatively expensive and warrant further studies into their cost-effectiveness. Only one trade name product was prescribed for each enzyme-containing product in this study (although some trade names had more than one dosage strength or pack size). It will be interesting to monitor how prescription patterns and cost will be affected when more trade name products are introduced. In the absence of other drug utilisation studies with which to compare the results, this study can be regarded as a baseline study, and further studies are recommended.14

The average cost per product paid by the medical aids was ZAR280.54 (ZAR279.22 in 2010 and ZAR281.86 in 2011). The SEP (unit price) on 12 January 2012 for hyaluronidase was ZAR283.40 (effective from 22 May 2010).14

Figure 1: Number of products and amount claimed (in ZAR) of the different classes of enzyme-containing products as a percentage of the total number and the total cost of all enzyme-containing products (n = 906).

Table 2: Number of enzyme active ingredients and total amount claimed for each enzyme active ingredient. Single exit price (SEP)14 as on 12 January 2012.
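The descriptive analysis described in the methods was performed by the authors in Microsoft Access and Excel. A minimal sketch of the same aggregation in modern tooling is given below; the file name and column names are illustrative assumptions, not fields documented in the paper.

```python
# Hedged sketch (ours) of the descriptive claims analysis described in the methods;
# the CSV export and its column names are assumptions, not documented fields.
import pandas as pd

claims = pd.read_csv("claims_2010_2011.csv")  # hypothetical export of the claims records
enzyme_cats = {"27.0.0", "8.3", "12.1"}       # MIMS categories extracted in the study
enzymes = claims[claims["mims_category"].isin(enzyme_cats)]

summary = (enzymes.groupby(["year", "active_ingredient"])
           .agg(n_records=("record_id", "count"),
                total_claimed_zar=("amount_claimed", "sum"),
                mean_claimed_zar=("amount_claimed", "mean")))

# Currency conversion at the study-period rates quoted in the text
ZAR_PER_EUR, ZAR_PER_USD, ZAR_PER_GBP = 9.38, 7.64, 11.48
summary["total_claimed_eur"] = summary["total_claimed_zar"] / ZAR_PER_EUR
print(summary)
```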
2017-10-19T17:02:17.148Z
2014-09-01T00:00:00.000
{ "year": 2014, "sha1": "8e0ef79c925cfd2441773ad9a1d5c4b29afd4a78", "oa_license": "CCBY", "oa_url": "https://sajs.co.za/article/download/3976/5695", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "8e0ef79c925cfd2441773ad9a1d5c4b29afd4a78", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257766490
pes2o/s2orc
v3-fos-license
The convergence rate of vanishing viscosity approximations for mean field games

Motivated by numerical challenges in first-order mean field games (MFGs) and the weak noise theory for the Kardar-Parisi-Zhang equation, we consider the problem of vanishing viscosity approximations for MFGs. We provide the first results on the convergence rate to the vanishing viscosity limit in mean field games, with a focus on the dimension dependence of the rate exponent. Two cases are studied: MFGs with a local coupling and those with a nonlocal, regularizing coupling. In the former case, we use a duality approach, and our results suggest that there may be a phase transition in the dimension dependence of vanishing viscosity approximations in terms of the growth of the Hamiltonian and the local coupling. In the latter case, we rely on the regularity analysis of the solution and derive a faster rate compared to MFGs with a local coupling. A list of open problems is presented.

Introduction

Mean Field Games (MFGs) are a mathematical framework used to model and analyze strategic interactions in a large population, which was independently developed by Lasry and Lions [43,44,45], and by Caines, Huang and Malhamé [38]. In MFGs, each individual makes decisions based on her own objective as well as the behavior of the entire population, represented by a mean field which describes the probability distribution of the collective state of the system. MFGs are widely used to model complex systems in economics [18,40] and engineering [23,37,38]. The standard form of MFGs is given by the system of partial differential equations (PDEs) (1.1), posed in $\Omega := (0,T)\times\mathbb{T}^d$ (a hedged reconstruction of the system is given at the end of this passage), where $\mathbb{T}^d := \mathbb{R}^d/\mathbb{Z}^d$ is the $d$-dimensional torus, $T > 0$ and $\nu \ge 0$. The Hamiltonian $H(x,p)$ is a convex function with respect to the second variable $p$. MFGs are used to describe Nash equilibria in differential games with a continuum of players, where $m^\nu(t,x)$ is the density of players at time $t$ and at position $x$. The variable $u^\nu$ is the value of a typical player's optimal control problem, so it is a solution to some Hamilton-Jacobi equation. As $u^\nu$ is the optimal value (that the player can possibly achieve), the optimal strategy is $-D_pH(x, Du^\nu)$. If the density of the population flows in the direction preferred by the optimal strategy, the game has a Nash equilibrium. This amounts to solving (1.1). When $\nu > 0$, the system is of second order. The system obtained by sending $\nu$ to zero is called the vanishing viscosity limit, which is of first order. In this paper, we study the convergence rate of second-order MFGs to the vanishing viscosity limit as $\nu \to 0^+$. A special focus is on the dependence of the rate on the dimension $d$, and hence on whether, or under what circumstances, it induces the curse of dimensionality or the lack thereof. For simplicity, we assume that the terminal data $\bar u$ is independent of the density $m^\nu$. We distinguish two cases for the coupling (or the running cost) $f$:

• When $f(x,m)$ depends on the pointwise value of the density $m(t,x)$, the coupling is referred to as local.
• When $f(x,m)$ depends on the entire distribution of $m(t,\cdot)$ and is uniformly smooth for all distributions, it is referred to as a regularizing, nonlocal coupling.

We assume that $f$ is increasing in $m$ in the local case, and satisfies the Lasry-Lions monotonicity condition in the nonlocal case, so that the uniqueness of solutions is guaranteed.
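The display carrying system (1.1) did not survive extraction. Based on the surrounding description (a backward Hamilton-Jacobi equation for the value $u^\nu$, a forward equation for the density $m^\nu$ with drift $-D_pH(x, Du^\nu)$, viscosity $\nu$, and data $\bar m$, $\bar u$), a standard reconstruction consistent with the MFG literature reads as follows; the sign and data conventions are our assumption.

```latex
% Hedged reconstruction of system (1.1); sign and data conventions are assumptions
% consistent with the standard MFG literature, not the verbatim source display.
\begin{equation}
\left\{
\begin{aligned}
 -\partial_t u^{\nu} - \nu \Delta u^{\nu} + H(x, Du^{\nu}) &= f(x, m^{\nu})
   && \text{in } \Omega := (0,T)\times\mathbb{T}^d,\\
 \partial_t m^{\nu} - \nu \Delta m^{\nu}
   - \operatorname{div}\!\bigl(m^{\nu} D_pH(x, Du^{\nu})\bigr) &= 0
   && \text{in } \Omega,\\
 m^{\nu}(0,\cdot) = \bar m, \qquad u^{\nu}(T,\cdot) &= \bar u
   && \text{in } \mathbb{T}^d.
\end{aligned}
\right.
\tag{1.1}
\end{equation}
```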
It is also worth mentioning that the convergence rate of vanishing viscosity approximations of Hamilton-Jacobi equations was studied in [21,24,58,59], and the optimal rate is $\nu^{1/2}$. The same problem was considered for hyperbolic systems in [3,5,6], and for Fokker-Planck equations with $C^1$ nonlocal drifts in [26,27,62]. Before delving into the problem, we digress a bit to explain the motivations to study the convergence of vanishing viscosity in MFGs.

(1) In recent years, there has been a growing interest in modeling autonomous vehicles' control and their macroscopic traffic flow by first-order MFGs ($\nu = 0$) [35,36,41]. As pointed out in [35], numerical methods converge slowly, or even fail to converge, for first-order MFGs. This is not surprising, as iterative algorithms may be ill-posed due to irregular coefficients in the transport equation. On the other hand, it is known [7,8,9] that the policy iteration algorithm converges exponentially fast for second-order MFGs ($\nu > 0$). So a reasonable idea is to approximate first-order MFGs by second-order MFGs, and a quantitative rate of convergence of second-order MFGs to the vanishing viscosity limit provides the approximation error. Moreover, the convergence of the policy iteration algorithm is exponential in $\nu^{-1}$, which yields a tradeoff between bias and algorithm efficiency.

(2) The Kardar-Parisi-Zhang (KPZ) universality class describes the limiting behavior of a collection of random growth models, and the underlying continuum object is the KPZ equation [20,57]. Due to its nonlinear nature, the KPZ equation is hardly accessible, except for some initial conditions which lead to integrability. Recently, there has been a line of work on large deviations of the stochastic heat equation (SHE), and hence of the $(1+1)$-dimensional KPZ equation via the Cole-Hopf transform, under the weak noise theory [42,50]. Rigorous treatments have been developed in [28,47,48,61]. Under narrow wedge initial condition, the "most probable" KPZ path conditioned to be $\lambda$ at time $T$ is given by $h(t,x) = \log Z[\rho_m](t,x)$, where $Z[\rho]$, given $\rho = \rho(t,x)$, solves the PDE
$$\partial_t Z = \tfrac12 \partial_{xx} Z + \rho Z \quad \text{for } (t,x)\in(0,T]\times\mathbb{R}, \qquad Z(0,\cdot) = \delta_0,$$
and $\rho_m$ solves the variational problem
$$\inf\Bigl\{ \tfrac12\|\rho\|_{L^2}^2 \,:\, Z[\rho](T,0) = e^{\lambda} \Bigr\}.$$
Of particular interest is the lower-tail limit as $\lambda := -\nu^{-1} \to -\infty$ (so $\nu \to 0^+$). Setting the scaling
$$Z^{\nu}[\rho](t,x) := Z[\nu^{-1}\rho(\cdot,\, \nu^{1/2}\cdot)](t,\, \nu^{-1/2}x), \qquad \rho^{\nu}(t,x) := \nu\,\rho_m(t,\, \nu^{-1/2}x),$$
and $h^{\nu}(t,x) := \nu^{-1}\log Z^{\nu}[\rho^{\nu}](t,x)$, [48,61] showed that $h^{\nu}$ converges locally uniformly to a limiting shape $h^*$ as $\nu \to 0^+$ (by integrability of $h^*$), but with no rate. Curiously, $(h^{\nu}, \rho^{\nu})$ solves a system of PDEs of the form (1.1), with a suitable choice of the initial-terminal conditions (obtained by a Riemann-Hilbert approach). Taking $H(x,p) = \frac12 p^2$ and $f(x,m) = m$ (a local coupling), the equations (1.1) specialize to (1.2); a hedged reconstruction follows below. Thus, our result on the convergence of second-order MFGs to the vanishing viscosity limit stipulates how the lower-tail limit of the most probable KPZ path in $(1+1)$ dimensions is obtained, and gives a quantitative rate in the large deviation limit thereof. Of course, the Cole-Hopf transform from the SHE to the KPZ equation and the weak noise theory are only valid in dimension $d = 1$. There is no obvious theory for dimension $d \ge 2$ (see [12,19,49] for recent development), so it is not clear how large deviations of the KPZ equation in dimension $(d+1)$ with $d \ge 2$ are connected to MFGs. Nevertheless, our results for MFGs hold for general dimensions. Now we turn back to MFGs.
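The display for (1.2) is likewise missing. Specialising the reconstruction of (1.1) above to $H(x,p)=\frac12 p^2$, $f(x,m)=m$ and $d=1$ gives the following hedged form; the exact constants and the initial-terminal conditions in the source may differ.

```latex
% Hedged reconstruction of (1.2): the KPZ specialisation of (1.1) in d = 1;
% constants and boundary data are assumptions.
\begin{equation}
\left\{
\begin{aligned}
 -\partial_t u^{\nu} - \nu\,\partial_{xx} u^{\nu} + \tfrac12 |\partial_x u^{\nu}|^{2} &= m^{\nu},\\
 \partial_t m^{\nu} - \nu\,\partial_{xx} m^{\nu} - \partial_x\bigl(m^{\nu}\,\partial_x u^{\nu}\bigr) &= 0 .
\end{aligned}
\right.
\tag{1.2}
\end{equation}
```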
The well-posedness of (1.1) has been thoroughly studied in the case of nonlocal, monotone and regularizing couplings, for both second-order MFGs ($\nu > 0$) and first-order MFGs ($\nu = 0$) [43,44,45], and for time-dependent MFGs [25]. When the function $f(x,m)$ depends locally on the value of $m$ and $\nu > 0$, the well-posedness problem has been addressed by [29,30] for classical solutions, and by [53,54] for weak solutions. When $\nu = 0$, in general, one cannot expect the existence of classical solutions. [14] exploited the variational method and showed that first-order MFGs can be viewed as an optimality condition for two convex problems. This approach can also be used for quadratic-Hamiltonian MFGs in the whole domain [52]. Recently, allowing the terminal condition $\bar u$ to depend on the density $m^\nu(T,\cdot)$, [51] obtained the weak solution as the limit of a sequence of classical solutions to strictly elliptic problems.

Here is an overview of our main results on the convergence rate of vanishing viscosity approximations for MFGs. For both local and nonlocal couplings, it is known that as $\nu \to 0^+$, the solutions $(u^\nu, m^\nu)$ converge to $(u,m)$, where $(u,m)$ are solutions to first-order MFGs [1,13,16]. However, the convergence rate is not well understood. This paper provides the first quantitative rate of vanishing viscosity approximations for MFGs.

(a) Local coupling. When the coupling $f$ is local, we apply a duality approach which relies on the fact that equation (1.1) is the optimality condition for two convex optimization problems. This approach was used in [56], and can be traced back to [4]. To obtain a convergence rate of solutions as $\nu \to 0^+$, the main assumption is the coercivity of the Hamiltonian and the coupling, as specified in the conditions (H5-1) and (H5-2). Similar assumptions were made in [32,56] to get a Sobolev estimate for $m$. With these assumptions, we prove that $(u^\nu, m^\nu)$ converges in some Sobolev norm at a polynomial rate as $\nu \to 0^+$ (see Theorem 4.3). To illustrate, take $H(x,p) = |p|^r$ and $f(x,m) = m^{q-1}$ for some $q, r > 1$ (while our results hold for more general Hamiltonians and local couplings). Let $\beta$ be the exponent defined in (4.12) below. Our result then gives the convergence rates (1.3a)-(1.3b); in particular, we have
$$\int_{\Omega} (m^{\nu} - m)^2 \, dx\, dt \;\lesssim\; \nu^{\frac{2}{q(1+\beta)}} \quad \text{for } q \ge 2$$
(see Remark 4.2). Note that if $\frac1q + \frac1r \le 1$, then $\beta = 1$ and the rate in (1.3b) is $\nu^{1/2}$, which is independent of the dimension $d$; if $\frac1q + \frac1r > 1$, then the rate becomes $\nu^{c/d}$ for some $c > 0$, which decays slowly as the dimension $d$ gets large. So there is no curse of dimensionality if the growth of $H$ and $f$ is sufficiently large (i.e. $\frac1q + \frac1r \le 1$), while the convergence may suffer from the curse of dimensionality otherwise. However, we do not know whether the rate $\nu^{\frac{1}{1+\beta}}$ is tight; if it is, or if the rate is $\nu^{\kappa(d)}$ with any $\kappa$ decreasing to $0$ as $d \to \infty$ for $\frac1q + \frac1r > 1$, then this implies a phase transition in the dimension dependence of vanishing viscosity approximations for MFGs at $\frac1q + \frac1r = 1$.

When the Hamiltonian is quadratic ($q = 2$), we prove a stronger convergence result for $u^\nu$, stated as (1.4) in Theorem 5.3. This result cannot be directly deduced from (1.3b), as the weighted Poincaré inequality may not be applicable; instead, we use the higher regularity of $m^\nu$ (see [32]) and the equations. For $d = 1$, the condition in (1.4) requires $r > \frac52$, which fails to cover the KPZ Hamiltonian with $r = 2$. To address this, we further need $d \le 3$ to get uniform boundedness of $u^\nu$ in the KPZ setting with $q = r = 2$, as presented in Theorem 5.4.
We also mention that if the terminal data $\bar u = \bar u(x, m^\nu(T,x))$ depends on $m^\nu$, the convergence rate of vanishing viscosity for MFGs remains open; in this case, we no longer have the optimality condition characterization.

(b) Nonlocal and regularizing coupling. When the coupling is nonlocal and regularizing, we adopt a different approach. Instead of using the optimization structure, we rely on the duality of the two equations and on some regularity properties of the solutions, both uniform and non-uniform with respect to $\nu$. Our main tool is the uniform semi-concavity of the solution $u^\nu$, which results from the superlinear growth of the Hamiltonian and classical Hamilton-Jacobi theory [11]. This property allows us to establish a uniform bound on the $W^{1,2}$ norm of $\nu^{1/2} m^\nu$ for all $\nu > 0$. With these findings, we first prove (1.3b) with $r = 2$ and $\beta = 1$, together with the estimate (1.5) (see Theorem 6.2). This method, while requiring less restrictive assumptions, leads to a faster convergence rate compared to those with a local coupling. Assuming that (1.5) implies pointwise convergence of $f(x, m^\nu)$ to $f(x,m)$ as $\nu \to 0^+$, we obtain pointwise convergence of $u^\nu$ with rate $\nu^{1/4}$ (see Theorem 7.2). Under a weaker condition (H4"), the estimate (1.5) implies that $f(x,m^\nu)$ converges to $f(x,m)$ in $L^1(\Omega)$ with a rate $\nu^{1/4}$. Then we use both the dual equation method (see [46]) and the viscosity solution method (see [21]) to prove that the HJ equation of $u$ is stable under both $L^1$-perturbation of coefficients and vanishing viscosity. Indeed, we obtain an estimate valid for all $t \le T$ (see Theorem 7.3).

We also mention that in the literature, $\bar m$ is often assumed to be strictly positive; see [15,16,51,55] for superlinear Hamiltonians and [31] for linear Hamiltonians. The condition is removed under extra requirements on $H$ and $f$ [14,33], or for first-order MFGs of kinetic type (with continuous initial measure) [34]. Relying on [15,16], we show that this condition can be dropped without further restrictions (see Theorem 3.2).

The remainder of the paper is organized as follows. In Section 2, we provide background on MFGs with a local coupling. In Section 3, we prove the well-posedness of MFGs with non-negative data. In Sections 4 and 5, we study the convergence rate of vanishing viscosity for MFGs with a local coupling. In Sections 6 and 7, we consider the convergence rate of vanishing viscosity for MFGs with a nonlocal and regularizing coupling. Finally, a list of open problems is presented in Section 8.

Assumptions and Preliminaries for Local Coupling

We first discuss the assumptions for the case of a local coupling $f$. The assumptions are made so that (1.1) is well-posed for all $\nu \ge 0$, and most of them can be found in [15,16]. We assume that there exists $C_0 \ge 1$ such that:

(H1) (Conditions on the coupling) $f : \mathbb{T}^d \times [0,\infty) \to \mathbb{R}$ is continuous in both variables, strictly increasing with respect to the second variable, and there exists $q > 1$ such that
$$\frac{1}{C_0}\, m^{q-1} - C_0 \;\le\; f(x,m) \;\le\; C_0\, m^{q-1} + C_0$$
for all $m \ge 0$ and $x \in \mathbb{T}^d$. Moreover, we impose a standard normalization condition on $f$.

(H2) (Conditions on the Hamiltonian) The Fenchel conjugate $H^*(x,\cdot)$ of $H(x,\cdot)$ for each $x \in \mathbb{T}^d$ is defined as $H^*(x,\xi) := \sup_{p\in\mathbb{R}^d} (\langle \xi, p\rangle - H(x,p))$. Then $H^*$ is continuous and satisfies, for some $C_0 \ge 1$ (without loss of generality, let us still use $C_0$),
$$\frac{1}{C_0}\,|\xi|^{r'} - C_0 \;\le\; H^*(x,\xi) \;\le\; C_0\, |\xi|^{r'} + C_0, \qquad r' := \frac{r}{r-1},\ r > 1.$$
Later we also write $q' := \frac{q}{q-1}$ as the conjugate of $q$.

3. Let $F$ be defined as $F(x,m) := \int_0^m f(x,s)\,ds$ for $m \ge 0$, and $F(x,m) = +\infty$ if $m < 0$. Then (H1) yields, for some $C_0 \ge 1$, two-sided growth bounds of order $m^q$ for $F(x,m)$. We define $F^*(x,\cdot)$ to be the Fenchel conjugate of $F(x,\cdot)$. Then $F^*(x,\alpha)$ is strictly convex in $\alpha > 0$, $F^*(x,\alpha) = 0$ for $\alpha \le 0$, and, for some $C_0 \ge 1$, $F^*(x,\alpha)$ satisfies two-sided growth bounds of order $\alpha^{q'}$ for $\alpha \ge 0$.

4. Unlike [15,16], we do not need to assume $\bar m > 0$. That assumption was used in [15,16] to show the existence of a solution for the optimization problem (2.6).
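As a concrete check of (H2) (this worked example is ours, not from the source), the Fenchel conjugate of the model Hamiltonian $H(x,p) = |p|^r$ from the introduction can be computed in closed form, and it indeed has the growth exponent $r'$:

```latex
% Worked example (ours): maximising <xi, p> - |p|^r over p gives
% |p| = (|xi|/r)^{1/(r-1)}, hence
H^{*}(x,\xi) \;=\; \sup_{p\in\mathbb{R}^d}\bigl(\langle\xi,p\rangle - |p|^{r}\bigr)
\;=\; (r-1)\, r^{-\frac{r}{r-1}}\, |\xi|^{r'}, \qquad r' = \frac{r}{r-1}.
% For r = 2 this reads H^*(x, xi) = |xi|^2 / 4, which satisfies the two-sided
% bound in (H2) with any C_0 >= 4.
```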
One can consider equations with more general second-order terms, in which the Laplacian is replaced by an operator with coefficients $A = A(x)$, where $A$ is assumed to be a Lipschitz continuous map taking values in the set of symmetric, uniformly positive definite matrices. But for this, one needs to further assume $r \ge q'$; see [16].

2.1. Optimization problems. We discuss two optimal control problems which are in duality, and we refer to [14,16]. For any $\nu \ge 0$, the first problem is
$$\inf_{(m,w)\in K_{1,\nu}} \int_{\Omega} \Bigl[\, m\, H^{*}\!\Bigl(x, -\frac{w}{m}\Bigr) + F(x,m) \Bigr] dx\,dt + \int_{\mathbb{T}^d} \bar u(x)\, m(T,x)\, dx, \tag{2.4}$$
with $\Omega = (0,T)\times\mathbb{T}^d$, where $K_{1,\nu}$ is the set of pairs $(m,w)$ such that the continuity equation
$$\partial_t m - \nu\Delta m + \operatorname{div} w = 0 \ \text{ in } \Omega, \qquad m(0,\cdot) = \bar m,$$
holds in the sense of distributions. When $m = 0$, we use the usual convention that the integrand $m\, H^{*}(x, -w/m)$ equals $0$ if $w = 0$ and $+\infty$ otherwise.

Now we discuss the second minimization problem. Recall $q, r > 1$ from (H1)-(H2), and let $q', r'$ be their conjugates, respectively. Set $\gamma$ as in (2.5); $\gamma > 0$ can be an arbitrarily large constant when $q' = 1 + \frac{d}{r}$. Then we let $K_{2,\nu}$ be the set of $(u,\alpha) \in L^{\gamma}(\Omega)\times L^{q'}(\Omega)$ such that $Du \in L^{r}(\Omega)$ and the following holds in the sense of distributions:
$$-\partial_t u - \nu\Delta u + H(x, Du) \le \alpha \ \text{ in } \Omega, \qquad u(T,\cdot) \le \bar u.$$
The precise meaning of the inequality is given in [16, Section 3]. The second optimization problem (also called the relaxed problem in [14,16]) is
$$\inf_{(u,\alpha)\in K_{2,\nu}} \int_{\Omega} F^{*}(x,\alpha)\, dx\,dt - \int_{\mathbb{T}^d} u(0,x)\, \bar m(x)\, dx. \tag{2.6}$$

It turns out that for each $\nu \ge 0$, the optimization problems (2.4) and (2.6) are in duality. The first equality below is proved in [14,16] by the Fenchel-Rockafellar theorem; the second equality is due to the fact that one can always replace $\alpha$ by $\max\{\alpha, 0\}$, since $F^{*}(x,\alpha) = 0$ for $\alpha \le 0$. Here we remark that $\bar m > 0$ is not needed in the proofs. Moreover, the minimum of the first term is achieved by a unique pair in $K_{1,\nu}$. This last statement holds because the set $K_{1,\nu}$ is convex and the functions $F(x,\cdot)$ and $H^{*}(x,\cdot)$ are strictly convex for each $x$.

Well-posedness for non-negative data

In this section, we show well-posedness of (1.1) without assuming $\bar m$ to be strictly positive. First we recall the notion of weak solutions in [16]. Let $\nu \ge 0$ and let $\gamma$ be from (2.5). A pair $(u^{\nu}, m^{\nu})$ is a weak solution if: (i) suitable integrability conditions hold; and (ii) the following (in)equalities hold in the sense of distributions:
$$-\partial_t u^{\nu} - \nu\Delta u^{\nu} + H(x, Du^{\nu}) \le f(x, m^{\nu}), \quad \text{with } u^{\nu}(T,\cdot) \le \bar u,$$
together with the continuity equation for $m^{\nu}$ with drift $-D_pH(x, Du^{\nu})$. Here the last term is well-defined due to Lemma 5.1 [16].

The theorem is proved in [16] with an extra assumption that $\bar m$ is strictly positive. The assumption is only used in their Proposition 5.4 to show the existence of a solution to the optimization problem (2.6). We use a uniform lower bound of $u^{\nu}$ and a slightly different test function to remove the constraint. We state the result below; its statement is well-defined due to Lemma 5.1 [16] ($u^{\nu}$ has a "trace" in a weak sense).

Proof. The proof follows largely the one of Proposition 5.4 [16].

Step 1. From the first paragraph of their argument, there exists a minimizing sequence $(u^{n}_{\nu}, \alpha^{n}_{\nu}) \in K_{2,\nu}$ for problem (2.6) such that $u^{n}_{\nu}, \alpha^{n}_{\nu}$ are continuous, $\alpha^{n}_{\nu} \ge 0$, and $u^{n}_{\nu}$ is a viscosity solution to (3.2), the Hamilton-Jacobi equation with right-hand side $\alpha^{n}_{\nu}$. Since $H$ is convex, this equality also holds in the sense of distributions [39]. Moreover, since $\alpha^{n}_{\nu} \ge 0$, the comparison principle gives a lower bound for $u^{n}_{\nu}$, and it is clear that the right-hand side of this bound is independent of $\nu$.

Step 2. Next, we show some bounds for $\alpha^{n}_{\nu}$ and $u^{n}_{\nu}$ that are uniform in $n$ and $\nu$. We integrate (3.2) against $\bar m + 1$ on $\Omega$ (instead of $\bar m$, which was used in [16]) to obtain the identity (3.4). By (H2) and (H3) ($\bar m, \bar u \in C^{1}$), there exists $C \ge 1$, depending only on (H3) and $C_0$, such that the data terms are controlled, for some $C$ depending only on $\|\bar u\|_{\infty}$, $\|H(\cdot,0)\|_{\infty}$ and $T$. Next, using that $r > 1$ and $q' > 1$, we get that for any $\varepsilon > 0$ there is $C_{\varepsilon}$ satisfying the corresponding Young-type inequality. Applying these in (3.4) and taking $\varepsilon$ sufficiently small, we obtain the bound (3.3) for some $C$ independent of $n$ and $\nu \in [0,1]$; this follows from (2.3) and the definition of $F^{*}$.
Step 3. Since $(u^{n}_{\nu}, \alpha^{n}_{\nu})$ is a minimizing sequence, for all $n$ sufficiently large, and with $\bar\alpha_{\nu} := -\partial_t \bar u - \nu\Delta\bar u + H(x, D\bar u)$, we obtain a bound by the value of the competitor $(\bar u, \bar\alpha_{\nu})$, which is finite. So there exists $C > 0$, independent of $n$ and $\nu \in [0,1]$, such that the corresponding quantities are bounded for all large $n$. With the uniform bound, one can take a weak limit of $(u^{n}_{\nu}, \alpha^{n}_{\nu})$ along a subsequence as $n \to \infty$. Following Step 2 and Step 3 in Proposition 5.4 [16], we obtain that the weak limit $(u_{\nu}, \alpha_{\nu})$ is a minimizer of the optimization problem (2.6). Finally, (3.1) follows from (3.5).

The qualitative vanishing viscosity limit is studied in [16, Theorem 6.5]. We state it below.

Vanishing Viscosity Limit - Local Coupling

To study the convergence rate of the vanishing viscosity limit of mean field games, we further need the following regularity and coercivity assumptions.

(H4) (Regularity condition) There exist $C_0, C_1, C_2 \ge 0$ such that the regularity bounds (4.1) hold for all $x, y \in \mathbb{T}^d$ and $m \ge 0$, and for any $p \in \mathbb{R}^d$ and $\sigma \in (0,1]$. The condition (4.1) on $H$ is stronger than the one on $F$, but it is still satisfied by a large class of Hamiltonians. For example, if $H(x,p) = h_1(x)|p|^{r} + h_2(x)$ with two Lipschitz continuous functions $h_1, h_2 : \mathbb{T}^d \to \mathbb{R}$ and $h_1 \ge 0$, then (4.1) holds with $C_1 := \|Dh_1\|_{\infty}/(r-1)$ and a corresponding choice of $C_2$. It then follows from (H4) that (4.4) holds for all $x, y \in \mathbb{T}^d$ and $\alpha \ge 0$. It is not hard to see that $F^{*}$ is increasing in $\alpha$; indeed, for any $\sigma \ge 0$, this can be checked by comparing the defining suprema. Moreover, by (4.4), for any $\sigma \ge 0$ one can estimate $F^{*}(x, \alpha+\sigma)$ in terms of $m_{\alpha+\sigma}$, where $m_{\alpha+\sigma}$ is a point at which the supremum defining $F^{*}(x, \alpha+\sigma)$ is attained. We only prove (4.3) with this selection of functions, in the appendix; the proof of (4.2) is almost the same. Finally, let us comment that in the above examples the value of $c_0$ is optimal. Indeed, if $\tau \equiv 1$ and $q = q' = 2$, it is easy to see that $c_0 \le \frac12$. The optimality can also be seen in the proof.

The following lemma is a consequence of the coercivity condition; the proof is identical to that of Lemma 3.2 [56], and we skip it. Then, if (H5-1) holds, there exists $C > 0$ such that for all $\nu \in [0,1]$ the Sobolev convergence estimate of Theorem 4.3 holds, with rate $\nu^{\frac{1}{2(1+\beta)}}$.

Proof. We use the optimization structure of (1.1) described in Theorem 3.2.

Step 1. By Theorem 2.1 and the proof of Proposition 3.3, we can take a minimizing sequence $(u^{n}, \alpha^{n}) \in K_{2,0}$ such that $\alpha^{n} \ge 0$,
$$-\partial_t u^{n} + H(x, Du^{n}) = \alpha^{n}, \qquad u^{n}(T,\cdot) = \bar u \tag{4.13}$$
in the sense of distributions, and $A(u^{n}, \alpha^{n}) \le A(u,\alpha) + 1/n$. Due to (3.1), there exists $C > 0$ such that for all $n$ sufficiently large we have
$$\|Du^{n}\|_{L^{r}(\Omega)} + \|\alpha^{n}\|_{L^{q'}(\Omega)} \le C. \tag{4.14}$$

Step 2. The goal is to show the required approximation property. Below we construct $(u^{n}_{\varepsilon}, \alpha^{n}_{\varepsilon})$ from $(u^{n}, \alpha^{n})$ that can be used as a candidate for the optimization problem (2.6).

Convergence of classical solutions - Local Coupling

In this section, we prove the convergence of $u^{\nu}$ under some extra conditions (see (H6)-(H8) below). We will assume that $(u^{\nu}, m^{\nu})$ with $\nu > 0$ are classical solutions. Indeed, [17] showed that the solutions are smooth when the Hamiltonian is purely quadratic; we also refer readers to [30,29] for results about classical solutions. We will present two results for quadratic Hamiltonians (so $q = 2$). The first one is for $r > 2$, and there we use the conditions (H6)-(H7) below (by (H6) we mean (H6-1)-(H6-3)); the second one is for $r = 2$ and $d \le 3$, and there we further need (H8). Here $I_d$ denotes the identity matrix.

Lemma 5.1. Assume (H1) and (H6). Then (H5-2) holds with $J_2(x,m) = m$ and $J^{*}_2(x,\alpha)$ defined such that $f(x, J^{*}_2(x,\alpha)) = \alpha$. If we further assume $H_{pp}(x,p) \ge cI_d$ for some $c > 0$ uniformly in $\mathbb{T}^d\times\mathbb{R}^d$, then (H5-1) holds with $J_1(x,p) = p$ and $J^{*}_1(x,\xi)$ defined such that $H_p(x, J^{*}_1(x,\xi)) = \xi$. Here $J^{*}_2$ is well-defined since $f(x,\alpha)$ is strictly increasing in $\alpha$.
Similarly, $J^{*}_1$ is well-defined since $H(x,\cdot)$ is $C^{2}$ and uniformly strictly convex in $p$, so that the map $H_p(x,\cdot) : \mathbb{R}^d \to \mathbb{R}^d$ is a bijection.

Proof. Let us fix $x \in \mathbb{T}^d$. In view of Remark 4.1.2, it suffices to consider $m, \alpha > 0$. Take $m_{\alpha}$ such that $f(x, m_{\alpha}) = \alpha$. The definition of $F^{*}$ yields the identity (5.2). Since $F$ is strictly convex by (5.1), and $F_m(x,m) = f(x,m)$, we can compare the two sides; then, using $f(x, m_{\alpha}) = \alpha$, we get the first claim from (5.2). The proof of the second claim is identical.

In the next lemma, we collect and prove some $\nu$-uniform estimates. In (5.4), when $\eta = 1 + \frac{d}{r}$, $\delta > 0$ can be an arbitrarily large constant, and in this case the constant $C$ depends also on the choice of $\delta$. If we further assume $\inf_{\mathbb{T}^d\times\mathbb{R}^d} H_{pp}(x,p) \ge cI_d$ for some $c > 0$, then the corresponding estimate also holds under (H6)-(H7). In [32], the authors only considered the case $\nu = 0$ and assumed (H5-1) with $J_1, J^{*}_1$ independent of $x$; however, this assumption is not needed to obtain only (5.3), and it is not hard to see that their argument generalizes to all $\nu \in [0,1]$ with a constant independent of $\nu$. It is also not hard to see that when $\eta = 1 + \frac{d}{r}$, $\delta > 0$ can be arbitrary, with $C$ depending also on $\delta$. The last claim follows from [32] and the second claim of Lemma 5.1.

Now we are ready to prove the main theorem of the section, in which $\beta$ is given by (4.12).

Proof. The proof is split into four steps.

Step 4. Putting these estimates into (5.9) yields the desired inequality. By Theorem 3.4, $m^{\nu'}$ converges strongly to $m$ in $L^{2}(\Omega)$ as $\nu' \to 0$. Thus, as $u^{\nu} - u^{\nu'}$ is uniformly bounded in $L^{4}(\Omega)$ by Lemma 5.2 and $\delta \ge 4$, we can pass to the limit inferior. Since $u^{\nu'}$ is uniformly bounded in $L^{\delta}(\Omega)$ and $u^{\nu'}$ converges weakly to $u$ in $L^{r}(\Omega)$ along a subsequence of $\nu' \to 0$ by Theorem 3.4, we actually have that $u^{\nu} - u^{\nu'}$ converges weakly to $u^{\nu} - u$ in $L^{\delta}(\Omega)$ along a subsequence of $\nu' \to 0$. Then we show that the functional $v \mapsto \int_{\Omega} v^{2} m \, dx\,dt$ is continuous on $L^{\delta}(\Omega)$. Let $\eta' = \frac{\eta}{\eta-1}$ be the conjugate of $\eta$. Note that for any $v_1, v_2 \in L^{\delta}(\Omega)$, by Lemma 5.2 and by (5.3), the difference of the two functional values is controlled as in (5.10), and the right-hand side of (5.10) can be made arbitrarily small when $v_1$ and $v_2$ are close in $L^{r}(\Omega)$. Thus $v \mapsto \int_{\Omega} v^{2} m\, dx\,dt$ is continuous on $L^{r}(\Omega)$, and it is straightforward that it is also convex. Hence we obtain the claimed lower semicontinuity, which finishes the proof.

Theorem 5.3 fails to cover the KPZ setting (1.2) with $r = 2$ that we are interested in. Below, we prove a convergence result for $d \le 3$ and $q = r = 2$. The statement differs slightly from the previous theorem in that we use the weight $m^{\nu}$ instead of $m$.

Proof. We break the proof into three steps.

Step 1. Since $d \le 3$, Lemma 5.2 yields that $u^{\nu}$ is uniformly bounded in $\Omega$ for all $\nu \in [0,1]$. For $0 < \nu' < \nu < 1$, let $U := u^{\nu} - u^{\nu'}$; then the equation satisfied by $U$ holds in the classical sense. We multiply this equality by $-2Um^{\nu}$, and use $U^{2}$ as a test function against the equation of $m^{\nu}$ in (1.1). Adding the two up and integrating from $t$ to $T$, for some $t \in [0,T)$, yields an energy identity. Since $U(T,\cdot) \equiv 0$, this simplifies to (5.11).

Step 2. We estimate each term on the right-hand side of (5.11). In view of the last claim of Lemma 5.2, by Hölder's inequality and the uniform boundedness of $u, u^{\nu}$, we can bound the first term. By Proposition 3.3 and the boundedness of $U$, the next term is controlled; here $\|m^{\nu}\|_{\infty}$ is finite (depending on $\nu$) because $(u^{\nu}, m^{\nu})$ is a classical solution. Arguing as in the proof of Theorem 5.3, we bound the remaining mixed terms. Finally, condition (H8) and the uniform boundedness of $U$ again yield the last bound, where in the last inequality we applied Theorem 4.3 and Lemma 5.1.
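As a worked illustration of Lemma 5.1 (this example is ours, not from the source): for the model data of the introduction, the implicitly defined maps $J^{*}_2$ and $J^{*}_1$ resolve explicitly.

```latex
% Worked example (ours): with f(x,m) = m^{q-1} and H(x,p) = |p|^2, the implicit
% definitions in Lemma 5.1 give closed forms:
f\bigl(x, J_2^{*}(x,\alpha)\bigr) = \alpha \;\Longrightarrow\; J_2^{*}(x,\alpha) = \alpha^{\frac{1}{q-1}},
\qquad
H_p\bigl(x, J_1^{*}(x,\xi)\bigr) = \xi \;\Longrightarrow\; J_1^{*}(x,\xi) = \frac{\xi}{2},
% consistent with H_{pp} = 2 I_d >= c I_d.
```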
Vanishing Viscosity Limit - Nonlocal Coupling

In this section, we discuss mean field games with cost functions that are nonlocal and regularizing on the set of probability measures. We allow the terminal data of $u^\nu$ to depend on $m^\nu(T,\cdot)$: we consider the system (6.1), which is (1.1) with terminal condition $u^\nu(T,x) = \bar u(x, m^\nu(T,\cdot))$.

Let $\mathcal P$ be the set of Borel probability measures $\mu$ on $\mathbb{T}^d$. The set $\mathcal P$ can be endowed with the well-known Kantorovich-Rubinstein distance (or 1-Wasserstein distance): for any $\mu_1, \mu_2 \in \mathcal P$,
$$d_1(\mu_1, \mu_2) := \inf_{\pi \in \Pi(\mu_1,\mu_2)} \int_{\mathbb{T}^d\times\mathbb{T}^d} |x - y|\, d\pi(x,y),$$
where $\Pi(\mu_1,\mu_2)$ denotes the set of all Borel probability measures on $\mathbb{T}^d\times\mathbb{T}^d$ that have $\mu_1$ as first marginal and $\mu_2$ as second marginal. We make the following assumptions (see [13,45]). There exists $C > 0$ such that the following holds.

(H1') (Regularizing condition) $f : \mathbb{T}^d \times \mathcal P \to \mathbb{R}$ satisfies a uniform $C^1$ bound in $x$ for any $\mu \in \mathcal P$, and a Lipschitz bound with respect to $d_1$ in the second variable for any $x \in \mathbb{T}^d$ and $\mu_1, \mu_2 \in \mathcal P$. Moreover, $f$ is strictly monotone in the second variable in the sense that for any $\mu_1, \mu_2 \in \mathcal P$ with $\mu_1 \ne \mu_2$,
$$\int_{\mathbb{T}^d} \bigl(f(x,\mu_1) - f(x,\mu_2)\bigr)\, d(\mu_1 - \mu_2)(x) \;>\; 0.$$
If $\mu \in \mathcal P$ has a density function $m$, we write $f(x,m) := f(x,\mu)$.

(H2') (Conditions on the Hamiltonian) For any $R > 0$, there exists $C_R > 0$ bounding the relevant derivatives of $H$ for any $(x,p) \in \mathbb{T}^d\times\mathbb{R}^d$ with $|p| \le R$. Moreover, $H$ is convex in the second variable in the sense that there exists a nonnegative function $c_1 : \mathbb{T}^d \to [0,\infty)$ quantifying the convexity inequality for all $p, p' \in \mathbb{R}^d$ and $x \in \mathbb{T}^d$.

(H3') (Conditions on the initial and terminal data) $\bar m : \mathbb{T}^d \to \mathbb{R}$ is a $C^1$ non-negative density function. $\bar u : \mathbb{T}^d \times \mathcal P \to \mathbb{R}$ satisfies a uniform $C^1$ bound in $x$ for any $\mu \in \mathcal P$, and a Lipschitz bound with respect to $d_1$ in the second variable for any $x \in \mathbb{T}^d$ and $\mu_1, \mu_2 \in \mathcal P$. Moreover, we have the monotonicity condition: for any $m_1, m_2 \in \mathcal P$,
$$\int_{\mathbb{T}^d} \bigl(\bar u(x, m_1) - \bar u(x, m_2)\bigr)\, d(m_1 - m_2)(x) \;\ge\; 0.$$
If $\mu \in \mathcal P$ has a density function $m$, we write $\bar u(x,m) := \bar u(x,\mu)$.

It is known (see [13,45]) that under the assumptions (H1')-(H3'), there exists a unique classical solution to (6.1) when $\nu > 0$. When $\nu = 0$, (6.1) is still well-posed [13,45]: the first equation in (6.1) is satisfied in the viscosity sense, and the second equation holds in the weak sense. It can be shown that $m^\nu(t,\cdot)$ is continuous in time with respect to the Kantorovich-Rubinstein distance, and so $f(x,m)$ is continuous in time. The typical example of $f$ (and of $\bar u$) is
$$f(x,\mu) := g\bigl(x, (\phi * \mu)(x)\bigr),$$
where $\phi : \mathbb{T}^d \to \mathbb{R}$ is a smooth even kernel and $g : \mathbb{T}^d\times\mathbb{R} \to \mathbb{R}$ is a smooth function that is strictly increasing in the second variable. Indeed, it is direct to verify the assumptions (see also [10, Example 4]).

The following regularity results are consequences of (H1')-(H3').

Lemma 6.1. There exists $C > 0$ such that for all $\nu \ge 0$ we have
$$m^\nu,\ |u^\nu|,\ |Du^\nu| \;\le\; C \ \text{ in } \Omega, \tag{6.2}$$
and the uniform semi-concavity bound (6.3), $D^2 u^\nu \le C I_d$ in $\Omega$.

Proof. The comparison principle yields that $u^\nu$ is uniformly bounded for all $\nu \ge 0$. The proof of the Lipschitz regularity of $u^\nu$ follows from [2] (we also refer readers to [11] and [60]). Semi-concavity of $u^\nu$ is given in Theorem 5.3.6 [11] (as $u^\nu$ is uniformly Lipschitz continuous, by modifying $H(x,p)$ for large $p$ one can assume that all second derivatives of $H$ are uniformly finite). Finally, from the results in [13, Section 4.2], it follows that $m^\nu$ is uniformly bounded.

Now we prove the first main theorem of the section.

Proof. The proof consists of four steps.

Convergence of $u^\nu$ - Nonlocal Coupling

In this section, we proceed to show convergence results for $u^\nu$ as $\nu \to 0$. First, let us assume a strong condition (H4') on $f$ and $\bar u$, under which we prove pointwise convergence of $u^\nu$ to $u$; later, we also consider a weaker condition (H4"). If $f$ is of the convolution form above, where $\phi$ is a smooth even function on $\mathbb{T}^d$ and $g$ is $C^1$ on $\mathbb{T}^d\times\mathbb{R}$, then the condition reduces to a pointwise estimate. Indeed, suppose $m_1, m_2$ are two probability density functions on $\mathbb{T}^d$. Since $|\phi * m_i| \le \|\phi\|_\infty$, we have a bound on the arguments of $g$; a corresponding bound holds on the other side. So there exists $C > 0$, independent of $m_1, m_2$, such that the required inequality holds for all $x \in \mathbb{T}^d$. We will need the following lemma.
Lemma 7.1. Suppose the relevant Hamilton-Jacobi equations hold in the sense of viscosity, and $v(0,\cdot) = v_\varepsilon(0,\cdot)$ is a $C^2$ function. Then there exists $C > 0$ such that a stability estimate holds for all $\varepsilon > 0$. Though the proof follows the classical viscosity solution approach (see e.g. [21] for the case when $H = H(p)$ and $g \equiv 0$), for readers' convenience we provide it in the appendix.

Note that $u$ satisfies its equation in the sense of distributions, and $u$ is also a viscosity solution by [39]. Moreover, by [13, Lemma 4.14] and (H1'), $f(x,m)$ is Lipschitz continuous in time. It then follows from Lemma 7.1 that the desired pointwise convergence holds, which finishes the proof.

Now we consider a weaker condition:

(H4") There exists $C > 0$ such that for any $\varepsilon > 0$, if $\mu_1, \mu_2 \in \mathcal P$ satisfy the corresponding closeness hypothesis, then $f(\cdot,\mu_1)$ and $f(\cdot,\mu_2)$ satisfy the associated $L^1$-type bound; and the same holds if we replace $f$ by $\bar u$.

Since, under (H4"), it is only known from Theorem 6.2 that $f(x, m(t,\cdot))$ and $f(x, m^\nu(t,\cdot))$ are close on average, we do not expect pointwise strong convergence of $u^\nu$ to $u$. We will apply a dual equation method to prove an $L^\infty_t L^1_x$ convergence (see also [46]).

Theorem 7.3. Under the assumptions of Theorem 6.2, assume (H4"). Then there exists $C > 0$ such that for all $\nu \in (0,1]$ we have
$$\sup_{t\in[0,T]} \|u^\nu(t,\cdot) - u(t,\cdot)\|_{L^1(\mathbb{T}^d)} \;\le\; C\nu^{1/4}.$$

Proof. For $\nu \in (0,1]$, let $w^\nu$ be the unique solution to the first-order equation with right-hand side $f(x, m^\nu)$ and terminal data $\bar u(x, m(T,\cdot))$. By Lemma 6.1, there exists $C > 0$ such that for all $\nu \in (0,1]$,
$$|Du|,\ |Du^\nu|,\ |w^\nu| \le C \quad \text{and} \quad D^2u^\nu,\ D^2w^\nu \le CI_d \ \text{ in } \Omega. \tag{7.1}$$
We will compare $u^\nu$ with $w^\nu$, and then $w^\nu$ with $u$.

Step 1. Consider $W := w^\nu - u^\nu$, which then satisfies the equation (7.2). Recall the available estimates: by (H4") and Theorem 6.2, the right-hand side of (7.2) is controlled.

Step 2. For any fixed $t_1 \in [0,T)$, we consider the dual equation of (7.2) in $[t_1, T]\times\mathbb{T}^d$, where $\psi_0$ is a smooth function on $\mathbb{T}^d$. By the divergence theorem, we obtain an integration-by-parts identity; in view of (7.3) and (7.4), integrating this equality over the time interval $[t_1, T]$ yields (7.5).

Step 3. Now we estimate $\|\psi\|_{L^\infty([t_1,T]\times\mathbb{T}^d)}$ in terms of $\|\psi_0\|_{L^\infty(\mathbb{T}^d)}$. It follows from the equation of $\psi$ that, for any even number $n \ge 2$ and any $t \in [t_1,T]$, the $L^n$ norm of $\psi(t)$ is controlled. This yields that if $\nabla\cdot G(s,x)$ is uniformly bounded from above for all $\nu$, then by Gronwall's inequality $\|\psi(t)\|_{L^n}$ is bounded by a constant multiple of $\|\psi_0\|_{L^n}$, and passing $n \to \infty$ yields the corresponding $L^\infty$ bound. Applying this in (7.5), we get, for some $C > 0$, a bound of order $\nu^{1/4}\|\psi_0\|_{L^\infty}$ that holds for all smooth $\psi_0$, which implies that
$$\|w^\nu(t_1,\cdot) - u^\nu(t_1,\cdot)\|_{L^1(\mathbb{T}^d)} \le C\nu^{1/4} \quad \text{for any } t_1 \in [0,T). \tag{7.6}$$

Step 4. To finish the proof of (7.6), we now show that $\nabla\cdot G(t,x)$ is uniformly bounded from above. Indeed, by (7.1) and (H2'), there exists $C > 0$, independent of $\nu$ and $s \in [0,1]$, such that $\nabla\cdot G \le C$.

Step 5. Due to (7.6), to conclude the theorem it suffices to show that $w^\nu$ and $u$ are close in the same norm. This is a consequence of Lemma 7.1 (see also the proof of Theorem 7.2).

Open problems

In this section, we collect a few open problems related to the convergence of vanishing viscosity approximations for MFGs and its connection to the KPZ equation.

(1) In Theorem 4.3, we proved a rate of $\nu^{\frac{1}{2(1+\beta)}}$ for the convergence of $(m^\nu, Du^\nu)$ in some Sobolev norm, assuming that $H(\cdot,p)$ grows as $|p|^r$ and $f(\cdot,m)$ grows as $m^{q-1}$. We know that if $\frac1q+\frac1r \le 1$, this rate is $\nu^{1/4}$, which is independent of the dimension $d$, while if $\frac1q+\frac1r > 1$, the exponent satisfies $\frac{1}{2(1+\beta)} \asymp d^{-1}$. Is the rate $\nu^{\frac{1}{2(1+\beta)}}$ tight, or does the threshold $\frac1q+\frac1r = 1$ induce a phase transition in the dimension dependence of the rate exponent of vanishing viscosity approximations for MFGs?

(2) For the $(1+1)$-dimensional KPZ equation, it is known that under narrow wedge initial condition, $u^\nu$ converges locally uniformly but without a rate. (a) Does the convergence still hold in a stronger sense, e.g. locally uniformly for $u^\nu$, and do we get the same convergence rate?
(b) Can we relax the conditions $q = 2$, (5.6) and $r = 2$ in these theorems? In Theorem 5.4, we proved the convergence of $u^\nu$ in the $L^2$ norm weighted by $m^\nu$. Can we prove the same rate in the $L^2$ norm weighted by $m$ (and, further, locally uniformly)?

(3) For the $(1+1)$-dimensional KPZ equation, Theorem 4.3 implies that $\rho^\nu$ converges in the $L^2$ norm with rate $\nu^{1/4}$; Theorem 5.4 shows that $h^\nu$ converges in a weighted $L^2$ norm with rate $\nu^{1/8}$. This relies on the Cole-Hopf transform from the SHE to the KPZ equation, which underlies the weak noise theory. Is there a higher-dimensional weak noise theory connecting large deviations of the KPZ equation to vanishing viscosity for MFGs?

(4) There are also a few directions in which to extend this work from a PDE perspective. For instance: (a) Is it possible to generalize our results to non-compact domains (e.g. $\mathbb{R}^d$)? (b) In the local coupling setting, can we allow the terminal data $u(T,\cdot)$ to depend on $m(T,\cdot)$ pointwise? (c) What is the convergence rate of vanishing viscosity approximations for kinetic MFGs (see [34])?

We hope that our convergence analysis of vanishing viscosity approximations for MFGs may trigger further research connecting the KPZ equation and MFGs in dimension $d \ge 2$, and on the dimension effect as $d \to \infty$.
2023-03-28T01:22:28.463Z
2023-03-25T00:00:00.000
{ "year": 2023, "sha1": "257f32e1731c76cb7a84bb78ea69588f55f35d50", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "069c6ef97557a2725e5302dc9f6f368db784c4c0", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
201317410
pes2o/s2orc
v3-fos-license
Assessment of the ecological quality of soft-bottom benthic communities in the Syrian coast, Eastern Mediterranean

The present study aimed to evaluate several benthic biological indices for assessing the ecological quality status of marine environments in Syria. Samples were obtained from four different areas of the Syrian coast (Al-Bassit, Banias, Tartous and Al-Hamidia) on a monthly scale from March 2007 to December 2007. Results showed that the percentage of benthic species increased significantly at the stations with a healthy ecological status. The cluster analysis and the MDS showed that the studied stations are subjected to environmental disturbance. Some species were found to be more sensitive (high values of the Hurlbert Index) than others, such as Rhinoclavis kochi, which belongs to the Gastropoda, and Notomastus latericeus, which belongs to the Polychaeta. The ecological quality of the Syrian coast was assessed using three biotic indices (H′, AMBI and BQI). The Shannon-Wiener index H′ ranged from 2.88 to 5.39; thus, the ecological status varied between moderate and high. The values of the AZTI Marine Biotic Index (AMBI) and the Benthic Quality Index (BQI) classified the ecological state between high and good (slight disturbance).

Introduction

The benthos (flora and fauna) is an important component of marine ecosystems. On the one hand, it is a main element in the nutrient cycle and detrital decomposition, and a food source for higher trophic levels. On the other hand, macrobenthic species are considered sensitive indicators of changes in the marine environment caused by natural or anthropogenic disturbances. The effects of these disturbances comprise changes in diversity, biomass and abundance of stress-tolerant or sensitive benthic species, and in the structure of the benthic community (Warwick and Clarke, 1994; Kaiser et al., 2000; Grall and Chauvaud, 2002).

Recently, the benthic system has been widely used in marine monitoring and assessment of ecological quality (Simboura and Zenetos, 2002; Quintino et al., 2006; Rosenberg et al., 2004; Reiss and Kroncke, 2005; Albayrak et al., 2006, 2010). The analysis of changes in benthic communities, using univariate measurements such as species abundance or richness (Quintino et al., 2006) or multivariate statistical approaches which distinguish patterns in species composition, has become an important tool in the assessment and monitoring of the biological effects of marine pollution (Clarke and Warwick, 2001). Species sensitivity/tolerance values are frequently used in various indices for assessing marine environmental quality. A low sensitivity value means that the species has been found largely in species-poor environments. A high sensitivity value, on the other hand, means that the species occurs in high-diversity communities and has a high competitive ability; such species are seldom found in species-poor and disturbed environments (Fleischer et al., 2007).
Biotic indices are increasingly being used in quality status assessments and management (Borja et al., 2003a; Muniz et al., 2005). These include the Shannon-Wiener index (H′) (Borja and Muxika, 2007), AZTI's Marine Biotic Index AMBI (Borja et al., 2000) and the Benthic Quality Index BQI (Rosenberg et al., 2004). BQI was designed to assess environmental quality according to the Water Framework Directive (WFD). Tolerance scores, abundance and species diversity factors are used in its determination. The main objective of this index is to attribute tolerance scores to the benthic fauna in order to determine their sensitivity to disturbance (Fleischer et al., 2007). Borja et al. (2000) proposed the adoption of AMBI, using macrobenthic organisms as bio-indicators. This index is based essentially upon the distribution of five ecological groups of soft-bottom macrofauna (Grall and Glémarec, 1997), defined in relation to their sensitivity to an increasing stress gradient. Syrian marine coastal ecosystems, as a sector of the eastern Mediterranean, are affected by several anthropogenic activities such as overfishing along the coast, industrial effluents (especially in Banias city), domestic sewage disposal, tourism and harbours. Few previous studies have estimated the biodiversity of the zoobenthos of Syria (Kucheruk et al., 1998; Saker et al., 2002; Ammar, 2004, 2010; Ibrahim et al., 2010; Ammar et al., 2011; Ammar and Arabia, 2014). It is important to note that, until now, there has been no study of species-specific sensitivity/tolerance measures.

The objectives of this study were: (i) to provide species sensitivity lists based on the datasets available; (ii) to compare different univariate and multimetric indices used for quality assessment purposes with regard to their variability; and (iii) to assess the environmental quality status in the sub-littoral zone from the fluctuations in the variability of the benthic fauna composition.

Study area and Sampling Method: Samples were collected from the sub-littoral zone at depths ranging between 10 and 50 m, at four sites along the Syrian coast (Table 1): Al-Bassit (in the north of the Syrian coast), Banias, Tartous and Al-Hamidia (in the south) (Fig. 1).

The structure of the bottom differs by station: muddy sand in Al-Bassit (A), where depth ranged between 40 and 50 m; coarse sand in Banias (B), where depth ranged between 20 and 25 m; muddy sand in Tartous (C), where depth ranged between 10 and 17 m; and mixed (gravel with sand and mud) in Al-Hamidia (D), where depth ranged between 17 and 25 m. In addition, Al-Bassit is a main fishing and tourism area in the north of Syria; the Banias coast is subject to different human activities such as fishing, oil and thermal pollution and sewage inputs; and Tartous is a fishing basin and trade harbour. The longitudinal distance over which samples were obtained was ≈180 km. All samples were taken with a Van Veen grab (1/40 m²); five replicate samples were collected on each cruise, between March and December 2007 (Table 1). In all, 120 samples were obtained. All samples were immediately sieved through a 0.5 mm mesh screen and preserved in 5 % formalin. After sorting, macrobenthos were identified to the lowest taxonomic level possible, and all species were counted and weighed for each sample (Arabia, 2011).
Statistical Analyses: For all the analyses, the mean abundance over every five replicate samples was considered, and species abundance data were square-root transformed. We used the Primer 7 statistical software to assess spatial variation of benthic communities between sites and temporal variation within sites. Furthermore, different techniques were performed: (i) analysis of similarity (ANOSIM) was used to detect differences in benthic community structure between the different ecological status categories of the seawater; (ii) a cluster analysis was used to find natural groupings of samples and how the groups themselves form clusters at lower levels of similarity; and (iii) non-metric multidimensional scaling (MDS) with the Bray-Curtis similarity index was used to analyse variations in community composition in relation to the trophic status of the seawater.

Shannon-Wiener Index: The Shannon-Wiener index is a measure of the species diversity of a particular site, most commonly used in benthic ecology (Labrune et al., 2005). This index incorporates species richness as well as equitability (Kroencke and Reiss, 2010). In this study, the Shannon diversity was calculated using the logarithm to base 2. It is dependent on sample size and was calculated according to the formula

H′ = −Σᵢ Pᵢ log₂(Pᵢ),

where Pᵢ is the proportion of the iᵗʰ species in the sample. The minimum value for H′ is 0 and is obtained when one species is present. Simboura and Zenetos (2002) proposed a classification scheme for soft-bottom benthic communities based on H′ values in response to the five quality status classes of the WFD.

Hurlbert Index: We calculated the expected number of species (ES) according to Hurlbert's (1971) formula, as implemented in the computer software Primer 7 (Warwick and Clark, 1994):

ES₅₀ = Σᵢ [1 − C(N − Nᵢ, 50) / C(N, 50)],

where C(·, ·) denotes the binomial coefficient, N is the total number of individuals in a given sample and Nᵢ is the number of individuals of the iᵗʰ species in the same sample, calculated for all samples with more than 50 individuals. This ES50 value is defined as the species-specific sensitivity/tolerance measure.

The validity of the index rests on the individuals of each species being randomly distributed, which is not always the case. Low ES50 values are expected from samples in which mostly tolerant species are abundant and, therefore, from disturbed habitats. High values of ES50 come from samples with sensitive species and indicate a healthy environment (Fleischer et al., 2007).

BQI: The Benthic Quality Index (BQI) is a biotic index which was designed to assess environmental quality. Since the original version of BQI is known to be sampling-effort dependent (e.g. an increase in sampling effort results in a higher probability of obtaining rare species), the adjusted calculation was applied (Fleischer et al., 2007; Fleischer and Zettler, 2009). The index was expressed as

BQI = [Σᵢ (Aᵢ / totA) × ES50ᵢ] × log₁₀(S + 1),

where Aᵢ is the abundance of individuals of species i at the considered station; totA is the sum (at the considered station) of the abundances of individuals of all species for which it is possible to calculate the Hurlbert Index (ES50); and S is the species richness at the considered station. The BQI normally varies between 0 (bad ecological quality) and 20 (high ecological quality), with a total of five classification stages. Higher BQI values are associated with lower pollution levels. The approach proposed by Rosenberg et al. (2004) follows the assumption that the most tolerant species are likely to be associated with the lowest biodiversity and lower ES50 values, therefore attaining lower sensitivity estimates.
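A minimal computational sketch of the three indices defined above is given below. The sketch is ours: the input format (per-sample species counts) and the example numbers are illustrative, and real species-level ES50 scores would come from a table such as Table 2.

```python
# Minimal sketch (ours) of Shannon H', Hurlbert ES50 and the adjusted BQI;
# counts and scores below are illustrative placeholders.
from math import comb, log2, log10

def shannon_h(abundances):
    """Shannon-Wiener diversity H' with logarithm to base 2."""
    n = sum(abundances)
    return -sum((a / n) * log2(a / n) for a in abundances if a > 0)

def hurlbert_es(abundances, size=50):
    """Hurlbert's expected number of species in a random draw of `size`
    individuals; only defined for samples with at least `size` individuals."""
    n = sum(abundances)
    if n < size:
        raise ValueError(f"sample has fewer than {size} individuals")
    return sum(1 - comb(n - a, size) / comb(n, size) for a in abundances)

def bqi(abundances, es50_scores):
    """Adjusted BQI (after Rosenberg et al., 2004): abundance-weighted mean of
    species ES50 tolerance scores, scaled by log10(S + 1); a score may be
    None for species without a sensitivity value."""
    pairs = [(a, s) for a, s in zip(abundances, es50_scores) if s is not None]
    tot_a = sum(a for a, _ in pairs)
    weighted = sum((a / tot_a) * s for a, s in pairs)
    richness = sum(1 for a in abundances if a > 0)
    return weighted * log10(richness + 1)

counts = [120, 45, 30, 8, 2]            # hypothetical per-species counts
scores = [13.1, 10.1, 12.9, None, 9.7]  # hypothetical ES50 sensitivity values
print(shannon_h(counts), hurlbert_es(counts), bqi(counts, scores))
```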
AMBI: In the AZTI Marine Biotic Index (AMBI), benthic species are assigned to five ecological groups, ranging from sensitive species (group I) to species highly tolerant to stress (group V). A biotic coefficient can be calculated based upon the percentage of each ecological group within each sample (Reiss and Kroncke, 2005; Fleischer et al., 2007):

AMBI = [0 × %EG(I) + 1.5 × %EG(II) + 3 × %EG(III) + 4.5 × %EG(IV) + 6 × %EG(V)] / 100,

where %EG gives the percentage of the total numerical abundance in the sample for each of the five ecological groups considered. In general, AMBI identifies five ecological groups, from the most sensitive species (ecological group I) to the most opportunistic/tolerant species (ecological group V). The AMBI normally varies between 0 (unpolluted) and 6 (heavily polluted), being 7 when the sediment is azoic (Borja et al., 2000, 2003b), with a total of five classification stages. For the calculation of the AMBI index, we used the AMBI program available on the web page http://www.azti.es, where a list that includes >2700 benthic species and their assignment to the ecological groups is presented; a computational sketch follows at the end of this subsection.

Short-term description: A total of 241 zoobenthic species belonging to 13 macrotaxa were encountered. Of these, 98 species (41 %) belonged to the Gastropoda, followed by the Bivalvia with 58 species (28 %), while 47 species (18 %) belonged to the Polychaeta and 17 (3 %) to the Crustacea. Other benthic groups, such as Echinodermata, Bryozoa, Scaphopoda, Clitellata, Nematoda, Sipunculida and Anthozoa, represented 3 % (Fig. 2). On the other hand, the top five most abundant benthic organisms in Syrian waters, considering all sampling sites, belonged to the Mollusca. Cerithium scabridum was by far the most abundant benthic organism (20.98 %), followed by Bittium tarentinum (14.33 %), Bittium arenarium (7.6 %), Alvania cimex (4.86 %) and Axinulus croulinensis (2.8 %). The dominance of two gastropod species, Cerithium scabridum as a non-indigenous species and Bittium tarentinum as a native one, points to competition for habitat at the two sites A and D, while at C and D Cerithium scabridum was absolutely dominant, which may be due to a specific and suitable tolerance of environmental conditions such as increases in temperature and salinity.
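Complementing the sketch given after the BQI definition, the AMBI coefficient defined at the start of this subsection can be computed directly from the five group percentages. Group assignments per species come from the AZTI list (http://www.azti.es); the counts below are illustrative.

```python
# Sketch (ours) of the AMBI biotic coefficient from ecological-group counts;
# the input counts are illustrative placeholders.
def ambi(group_abundances):
    """group_abundances: individual counts for ecological groups I..V, in order."""
    total = sum(group_abundances)
    if total == 0:
        return 7.0  # azoic sediment, by convention
    pct = [100.0 * a / total for a in group_abundances]
    weights = [0.0, 1.5, 3.0, 4.5, 6.0]
    return sum(w * p for w, p in zip(weights, pct)) / 100.0

print(ambi([55, 20, 15, 7, 3]))  # a sample dominated by sensitive (group I) species
```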
In general, the Al-Bassit (A) site was characterized by a relatively high abundance of benthic organisms (Fig. 3), with 202 species and a mean abundance of 12468 ind./m². It was followed by the Al-Hamidia (D) site with 144 species and 5173 ind./m², and by Banias (B) and Tartous (C) with 121 species (2408 ind./m²) and 78 species (1760 ind./m²), respectively. The number of species and the abundance at each site reveal that the muddy and gravelly bottoms at the Al-Bassit and Al-Hamidia sites are richer in zoobenthos species than the coarse sandy bottoms at the other two sites. The changes in the benthic communities reflected the natural seasonal variation in physicochemical characteristics and the anthropogenic temporal variability occurring at the four sites. Severe disturbances caused by oil pollution and an increase in surface water temperature (+5 °C) occurred near the terminal of the electro-thermal station in Banias (B).

The ANOSIM analyses indicated that the values of global R varied between R = 0.594 (p < 0.01) for sites (A, B) and R = 1 (p < 0.01) for sites (C, D). The value of R (0.873) for the global test of sites A, B, C and D establishes that there are statistically significant differences in benthic macrofaunal community composition between these sites. Pairwise values of R were 0.757, 0.856, 0.887 and 0.994 for the B v D, A v D, B v C and A v C comparisons, contrasted with a much lower value of R (0.594) for A v B; this implies that the two sites C and D differ from the other sites.

The cluster dendrogram (Fig. 4) showed that site D has a community composition across its replicates different from the groups (B, A) or (B, C); note that it is less clear whether there is any statistical evidence of a distinction between A and B. The MDS plot at 40 % similarity (Fig. 5) was based on the macrobenthic species abundance data at each station. It can be seen that all six replicates from sites C and D, where the bottoms of both sites are similar, are quite different in community composition from both sites A and B.
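A minimal sketch of the multivariate workflow described in the methods (square-root transform, Bray-Curtis dissimilarity, non-metric MDS) is given below. The sketch is ours, and the simulated abundance matrix stands in for the real samples-by-species data.

```python
# Sketch (ours) of the square-root transform / Bray-Curtis / nMDS workflow;
# the simulated abundance matrix is an illustrative placeholder.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

abund = np.random.default_rng(1).poisson(5.0, size=(24, 241))  # samples x species
dissim = squareform(pdist(np.sqrt(abund), metric="braycurtis"))
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           random_state=0)
coords = nmds.fit_transform(dissim)  # 2-D ordination, as in the MDS plot above
```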
Shannon-Wiener Index: The Shannon-Wiener index ranged from 2.88 to 5.39 (Fig. 6). Thus, the corresponding ecological status according to Simboura and Zenetos (2002) varied between moderate and high. At station Al-Bassit (A), the Shannon index varied between 3.67 and 4.70, which corresponds to an ecological status from moderate in March to high in July. At station Banias (B), the Shannon index ranged from 2.88 in March to 4.39 in July; thus, the ecological status varied between moderate and high. At the Tartous (C) and Al-Hamidia (D) sites, the variability was less pronounced, and the ecological status was good to high at both sites.

Hurlbert Index: ES50 values were calculated for 155 species, and some of these are listed in Table 2. The span between the lowest- and highest-ranked species was from 0 to 19.476. The lowest values were calculated for Gammarus sp., Donacilla cornea and Mitrella vatovai. The highest values were recorded for Rhinoclavis kochi (19.476), Turboella dolium (17.258) and Notomastus latericeus (17.127). Examples of ES50 values of some common species in the area are Cerithium scabridum (13.119), Bittium tarentinum (10.139), B. arenarium (12.883), Alvania cimex (13.520) and Axinulus croulinensis (9.662). Thus, for these species, the values are rather similar and above the mean value of 4.945. It is important to indicate that this is the first study of species-specific sensitivity/tolerance measures in the area; this ES50 list therefore represents a starting point for further development for Syrian waters. In addition, testing the index with other datasets in this study area may reveal the importance of pursuing the development of species sensitivity lists for Syrian waters.

BQI: Mean BQI values were calculated for 24 sampling occasions along the Syrian coast. The calculated BQI values varied between 15.74 and 21.04 (Fig. 7). In general, the BQI values were similar, with minor temporal changes at the same station, and indicate a good and high environmental status at all sites. The highest values of BQI (21.04, 20.9) were obtained for site (A) during October and March. This may be due to the reproduction of many benthic groups, especially crustaceans and polychaetes. The lowest value was obtained for Banias (B), where the site was subject to intensive pressure. On the other hand, the highest obtained BQI values coincided with the most abundant benthic organisms (C. scabridum and B. tarentinum), which reflects the fact that the benthic macrofauna is an excellent ecosystem component mirroring the ecological status of the marine environment; it has therefore become a standard component of marine environmental monitoring (Bilyard, 1987).

The change in community structure induced by changes in environmental status was related to the increase in the number of benthic species at Al-Bassit (A); this site was characterized by the relatively highest abundance of benthic organisms in the study area, where recruitment of the zoobenthos occurs in spring and summer.

AMBI: The results of the AMBI index show a high and good ecological status at all sites, and the status was rather constant over the whole study period. Thus, the corresponding ecological status remained stable as well, with 100 % constancy (the proportion of samples having the same ecological status) at station B, 84 % at stations A and C, and 67 % at station D. Nevertheless, the proportion of the ecological groups at each station changed slightly during the study period (Fig. 8).

On the other hand, the values of AMBI indicate a slight disturbance of the benthic communities at most sampling stations, and the ecological state ranges between high and good (i.e. slight disturbance) from one sampling site to another and from one month to another, whereas the proportion of the ecological groups at each station changed slightly between months (Fig. 9). At Al-Bassit (A), the percentage of group I species (disturbance-sensitive) was more than 50 % during the study period; such species are usually found in unpolluted environments (Muxika et al., 2005). The abundance of group II species (disturbance-indifferent) and group III species (disturbance-tolerant) was highest at Banias (B) and Tartous (C). The increase in disturbance-tolerant species and the decrease in sensitive ones at these two sites reflect the level of stress, while at the Al-Hamidia (D) site the dominance of disturbance-sensitive species indicates unpolluted conditions.
The two multimetric indices (AMBI, BQI) were chosen since they are based on two different approaches to ecological grouping. The composition of ecological groups, in the case of AMBI, changed temporally for each community; these changes were mainly due to changes in the abundance of the dominant species. Thus, the three indices Shannon-Wiener, BQI and AMBI are effective at the study sites.

Conclusion

In conclusion, the present study showed that: (i) the temporal and spatial variability in the abundance, diversity and community structure of the zoobenthos was mainly caused by different environmental conditions; (ii) the multimetric indices perform satisfactorily in assessing the ecological quality status and seem to be less influenced by the seasonal variability of the macrofauna; (iii) the Shannon-Wiener index placed the ecological status between moderate and high; and (iv) the BQI and AMBI produced values indicating a good and high environmental status at all sites.

Thus, in any future plan for redevelopment of the marine ecological system in Syria, it is essential for decision makers to rely on benthic indices to assess the ecological quality status. In addition, it is also important to continue the development and improvement of measurements of ecosystem components and to ensure that these measurements are representative of the state of the ecosystem on the Syrian coast.

Figure 1. Map of the study area indicating the station locations.
Figure 2. Percentage of zoobenthos taxa in the survey areas of the Syrian coast.
Figure 3. Number of individuals (ind./m²) of zoobenthos in the survey areas of the Syrian coast.
Figure 4. Cluster dendrogram of the zoobenthos showing reference sites; computed for the six replicates from each of the four sites (Al-Bassit A, Banias B, Tartous C and Al-Hamidia D).
Figure 5. Non-metric multidimensional scaling (MDS) ordination in two dimensions; computed for the six replicates of the zoobenthos from each of the four sites (Al-Bassit A, Banias B, Tartous C and Al-Hamidia D), based on the Bray-Curtis similarity metric of the benthic fauna.
Figure 6. The Shannon index of the zoobenthos for the six replicates from each of the four sites (Al-Bassit A, Banias B, Tartous C and Al-Hamidia D).
Figure 8. Mean AMBI of the zoobenthos for the three stations of the Syrian coast during the study period.
Figure 9. Relative abundance of ecological groups of the zoobenthos at the Syrian coast according to the AMBI: (I) disturbance-sensitive, (II) disturbance-indifferent, (III) disturbance-tolerant, (IV) second-order opportunistic and (V) first-order opportunistic.
Table 1. Dates of sampling during March to December 2007 and some features of the sites on the Syrian coast.
Table 2. Hurlbert Index (ES50) values for some species of zoobenthos on the Syrian coast.
Interaction between genetics and inulin affects host metabolism in rainbow trout fed a sustainable all plant-based diet

Abstract

Inulin affects nutrition and metabolism in many animals. Although inulin is widely used in the diet of teleosts, its mechanism of action is unknown. Here, we investigated the effect of inulin (2 %) on the intestinal microbiome and metabolism in rainbow trout (Oncorhynchus mykiss) selected for growth and survival when fed a 100 % plant-based diet (suave) and in a control line (temoin). Metabolic responses to the two factors (line and inulin) in the liver, intestine, muscle and adipose tissue were tissue-specific, with line and the interaction between the two factors influencing overall expression in the liver. In the intestine, both inulin and line influenced the expression of metabolic genes, whereas in muscle only line did. The microbiota of the mucus and the digestive contents differed significantly, with genera from Proteobacteria being more abundant in the mucus and genera from Firmicutes and Planctomycetes being more abundant in the contents. An effect of inulin, and of the interaction between the two factors, on the microbiome was evident in the contents. The significant taxa of the control and inulin-fed groups differed greatly, with Streptococcus and Weissella being significantly abundant in the inulin-fed group. There was a general trend towards higher levels of all SCFA in the temoin group, with propionic acid levels being significantly higher. An operational taxonomic unit (OTU) belonging to the Ruminococcaceae was significantly abundant in suave. The tissue-specific correlations between OTU and gene expression may indicate a link between the microbiome and metabolism. Together, these results suggest that line and inulin impact gene expression in a tissue-specific manner, possibly driven by specific OTU enriched in the inulin-fed groups and in suave.

Introduction

The rainbow trout is a cold-water carnivorous species with high commercial value. In a commercial setting, the diet of this species consists of a high proportion of protein and oil from marine fish (1). The supply of these traditional fish feed ingredients is economically and ecologically unsustainable due to high fishing pressure, which directly leads to the collapse of wild fish stocks (1). As an alternative, a feed consisting exclusively of plant-based ingredients has been developed and extensively tested to meet all the nutritional requirements of rainbow trout (2-4). This next-generation sustainable diet formula is a major step towards achieving the Sustainable Development Goals (SDG 14) set by the UN. However, these total plant-based diets are known to cause severe metabolic abnormalities, such as glucose intolerance and high visceral fat, when fed to rainbow trout (5,6). To overcome this limitation, a genetic line (hereafter 'line') of fish that grow and survive better when fed a 100 % plant-based diet was developed through selective breeding (four generations) (3). This line of fish is able to digest and metabolise the total plant-based diet better than naïve fish and maintains a growth profile similar to that of fish fed conventional ingredients (3). However, the metabolic adaptations, the intestinal microbiome and the interactions between diet, intestinal microbiome and host metabolism remain uncharacterised for the selected line (suave). On the other hand, prebiotics such as inulin are potential modulators of various metabolic and immune processes in different animals, including fish (7).
Inulin is known to affect energy metabolism, regulation of inflammation and immune homoeostasis in the intestine via its microbially derived metabolites (SCFA) (8,9). Systemically, these dietary fibre-derived metabolites are transported to the liver via the portal vein and are involved in the pathways of fatty acid synthesis, oxidation and fat storage (10). In addition, SCFA are known to affect glucose uptake in muscle and adipose tissue (11). The members of the intestinal microbiome involved in the breakdown and utilisation of inulin in humans and livestock are well documented (12). The microbial species encoding inulin-degrading functions may vary, but the mechanism of inulin degradation and utilisation by the microbiota, as well as receptor-mediated uptake and utilisation of SCFA by the host, are conserved in humans and other mammals (13,14). The effect of inulin on growth, disease resistance, immune parameters, digestive enzymes and metabolism has been demonstrated in teleosts with a wide range of dietary habits (15-18). In rainbow trout, several studies have demonstrated the beneficial effects of inulin on growth and immune status (19-21). Recently, the involvement of inulin in metabolic processes in teleosts has also been demonstrated. Studies show that inulin can attenuate the negative metabolic syndrome caused by high-carbohydrate feeding in tilapia (22) and alter the expression of genes involved in various metabolic pathways in rainbow trout (4). Taken together, these results suggest that the mechanism of action of inulin in teleosts may be similar to that in mammals. Given the overwhelming interest in exploiting these beneficial aspects of the interaction between inulin and the gut microbiome for better health and nutritional management in teleosts, a more detailed investigation of this aspect is warranted.

Therefore, in this study, we investigated the effects of inulin and line on the metabolism (via microbially derived SCFA) and microbiome of rainbow trout fed a 100 % plant-based basal diet. To this end, two lines (temoin and suave) of rainbow trout (mean weight: 128·6 ± 8·4 g) were fed a 100 % plant-based basal diet, with or without 2 % inulin, for a period of 120 days. In addition to growth and plasma parameters, host metabolic responses were measured by examining gene expression in various organs, including liver, intestine, muscle and adipose tissue. Since the established link between inulin and the host is the microbially derived metabolites such as SCFA, we investigated the changes in the microbiome as well as the changes in the SCFA content of the intestine. Changes in microvillar length are also reported, as inulin is known to affect these epithelial structures.

Materials and methods

Diet and experimental set-up

The feeding trial was conducted at the PEIMA fish breeding and rearing facility (INRAE, Sizun, France). Two genetic lines of rainbow trout (hereafter referred to as line) were used for the feeding trial, namely temoin (the INRAE synthetic strain; a domesticated strain maintained at the PEIMA facility with a large number of spawners and without artificial selection, in order to maintain genetic variability) and suave (a line selected from the synthetic strain, obtained after four generations of selective breeding based on the ability to survive and grow when fed a 100 % plant-based diet). A two-factorial design was used, with line and inulin intake as the factors.
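As an illustration of how responses in such a 2 × 2 factorial design can be partitioned into main effects and their interaction (the approach used in the statistical analysis below), here is a minimal Python sketch with simulated response values. Neither the data nor the use of Python/statsmodels comes from the study itself, which used R.

```python
# Hedged sketch: analysing a 2 x 2 factorial design (line x inulin) with a
# two-way ANOVA, mirroring the statistical approach described later in the
# Methods. Response values are simulated, not the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = [{"line": line, "inulin": inulin, "response": rng.normal(10.0, 1.0)}
        for line in ("temoin", "suave")
        for inulin in ("0 %", "2 %")
        for _ in range(12)]            # n = 12 fish per group
df = pd.DataFrame(rows)

# 'C(line) * C(inulin)' fits both main effects and their interaction.
model = smf.ols("response ~ C(line) * C(inulin)", data=df).fit()
print(anova_lm(model, typ=2))          # P < 0.05 taken as significant
```

The interaction term is what captures the line-specific responses to inulin that recur throughout the results below.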
A total plant-based diet (containing only plant ingredients and vegetable oils supplemented with free amino acids), with (2 %) or without (0 %) inulin, was prepared at the feed manufacturing facility (INRAE Donzacq). The dosage of inulin was decided based on our previous study (4). The diets were isoproteic (about 45 % crude protein), isolipidic (about 22 % crude fat) and isoenergetic (about 24 kJ/g DM) and were prepared to meet the nutritional requirements of rainbow trout (23). The composition of the ingredients and the proximate composition of the diet are given in Table 1 and Supplementary Table 1, respectively. Fifty juvenile rainbow trout at a stocking density of about 3·5 kg/m³ were introduced into each of the 1800-litre fibreglass circular tanks. The average initial weight of the fish in each group is given in Table 2. During subsequent growth, the number of fish was reduced once by random elimination to keep the density below 15 kg/m³ in each tank. At the same time, early maturing males (at 1 year) were discarded. There was a total of four groups: TVO (temoin, 0 % inulin), TVI (temoin, 2 % inulin), SVO (suave, 0 % inulin) and SVI (suave, 2 % inulin). Each group was allocated three tanks. Fish were reared under standard conditions during the experimental period, that is, a water oxygen level of 9 mg/l, temperature between 6·0 and 18°C, pH of 6·5, water flow rate of 0·7 litre/s and natural photoperiod. The fish were fed by automatic feeder five times a day for 16 weeks. The total weight of fish in each tank was measured every 3 weeks to assess growth parameters. The amount of feed given was recorded daily to calculate the consumption index. Total feed consumption increased steadily with growth, and the feeding rate was adjusted accordingly.

Sampling

At the end of the feeding experiment, we randomly sampled twelve fish per group (four per tank). The fish were first anaesthetised with tricaine methanesulfonate (MS-222; 50 mg/l) and then euthanised with a higher dose of MS-222 (100 mg/l). The fish were weighed, and blood samples were then collected using heparinised syringes and tubes and centrifuged at 3000 g for 10 min to isolate the plasma. Plasma samples were stored at −20°C until analysis of plasma parameters. The liver was dissected and weighed. Muscle samples were also taken. The viscera were then dissected out, and the adipose tissue was removed. The mid-intestine was cut open, and the digestive contents were separated from the mucus. The mucus samples were obtained by scraping with a glass slide. The contents and mucus were kept in separate tubes for microbiome analysis. Part of the mid-intestinal tissue was also collected for gene expression analysis. From a separate group of fish (two per tank, six per group), intestinal content samples were collected for SCFA analysis. All samples (except those for electron microscopy) were frozen in liquid N2 and then stored at −80°C. Intestinal tissue (mid-intestine) was stored in 4 % formaldehyde for electron microscopy.

Diet and whole-body proximate composition

The same protocol was used to analyse the nutrient composition of the diet and the whole body. The nutrient composition of the feed was determined on fresh samples, while that of the whole body was derived from freeze-dried samples. The moisture content of the samples was measured by drying the samples at 105°C for 24 h and subtracting the weight of the post-dried samples from that of the pre-dried samples.
The ash content of the samples was measured by combusting the samples at 550°C for 16 h and subtracting the weight of the post-combusted samples from that of the pre-combusted samples. The energy content of the samples was measured using an adiabatic bomb calorimeter (IKA). Total lipids were measured by petroleum ether extraction using the Soxtherm system (Gerhardt analytical systems). Crude protein content was measured by the Kjeldahl method using the Kjeltec™ 8400 system (FOSS) after acid extraction.

Measurement of plasma biochemical parameters

Plasma parameters were measured using commercial kits in combination with a microplate reader. The following biochemical plasma parameters were measured: glucose (Glucose RTU, bioMérieux) (24), TAG (PAP 150, bioMérieux) (25), cholesterol (Cholesterol RTU, bioMérieux) (26) and free fatty acids (NEFA C Kit, Wako Chemicals) (27). Total free amino acids were quantified according to the method of Moore (28), with glycine as the standard.

Microbiome analysis

DNA extraction. DNA from the intestinal contents and mucus samples was extracted using the QIAamp fast DNA stool kit (Qiagen) according to the manufacturer's instructions. Some modifications were made to the protocol to achieve a better yield from difficult-to-lyse bacterial cells (29). The purity and integrity of the extracted DNA were assessed using a NanoDrop 2000c (Thermo) and an agarose gel, respectively. The sequencing libraries were prepared according to the standard protocol recommended by Illumina® (Illumina) and as described elsewhere (4,30). Briefly, the V3 and V4 regions of the bacterial 16S rRNA gene were amplified using the recommended set of primers (31) linked to the Illumina® adaptor overhangs. The final primer pairs were as follows: forward 5'-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCCTACGGGNGGCWGCAG-3' and reverse 5'-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGACTACHVGGGTATCTAATCC-3'. The preparation of the library involved two stages of PCR. In the first stage, the PCR mix (25 μl) contained 12·5 μl of KAPA HiFi Master Mix (Roche) together with 5 μl each of forward and reverse primers (1 μM each) and 2·5 μl of DNA (about 100 ng). Reactions were performed in duplicate. Thermocycling conditions included pre-incubation for 3 min at 95°C, followed by thirty cycles of denaturation at 95°C for 30 s, annealing at 55°C for 30 s and extension at 72°C for 30 s. A final extension was performed at 72°C for 5 min. After PCR, duplicate reactions belonging to one sample were pooled, and an aliquot was run on an agarose gel to confirm a positive reaction (about 550 bp). A positive control with mock bacterial DNA (ZymoBIOMICS Microbial Community DNA Standard, Zymo Research, Irvine) and a negative control with water were also included. The negative PCR control showed no visible band on the agarose gel after PCR. The PCR products were transported to La Plateforme Génome Transcriptome de Bordeaux (PGTB) for the second-stage PCR. In the second stage, index PCR was performed to add sample-specific barcodes to the PCR products using the Nextera XT index kit according to the manufacturer's protocol (Illumina). The PCR set-up was the same as in stage 1 except that only eight cycles were used. PCR products were purified using AMPure XP beads (Beckman Coulter) and quantified using the KAPA library quantification kit for Illumina platforms (Roche) according to the manufacturer's instructions.
The sequencing libraries were pooled equimolarly (4 nM) and sequenced using a 250-bp paired-end sequencing kit v2 (Illumina).

Sequence data analysis. The paired-end sequencing data were analysed using the UPARSE pipeline (32) as described elsewhere (33). The paired-end sequences were merged, and the primer binding sites were removed. Sequences were then quality-filtered using the maximum expected error strategy (threshold = 1) (34). The sequences from different samples were merged after being uniquely labelled, the dataset was dereplicated and the singletons were removed. Operational taxonomic unit (OTU) clustering was performed at 97 % similarity. The raw reads were mapped to the OTU to create the OTU abundance table. Taxonomies were assigned using the SINTAX algorithm (35). Assignments with a confidence value < 0·8 were filtered out. A phylogenetic tree in Newick format was created. The OTU table, taxonomy table and phylogenetic tree were exported to the phyloseq package for downstream analysis (36).

SCFA measurement

The SCFA measurements were performed as previously described (4). Briefly, frozen intestinal content samples (1 g) were placed in glass bottles filled with clean zero air supplied by an F-DGS zero-air generator (Evry). These bottles were connected to the heated inlet line (100°C) of the SIFT-MS instrument via the sample inlet. To compensate for the dispersion in the bottle during SIFT-MS sampling, a Tedlar bag (Zefon International Inc.) filled with dry and clean zero air was connected to the bottle inlet. The closed bottle was incubated at 60 ± 2°C for 2 h before SIFT-MS analysis. Full-scan mass spectra were recorded for each positive precursor ion (H₃O⁺, O₂•⁺ and NO⁺) in an m/z range from 15 to 250 with an integration time of 60 s. Quantification was performed using the NO⁺ precursor ion as described before (37,38).

Electron microscopy

Electron microscopic examinations were carried out at the Bordeaux Imaging Centre (University of Bordeaux), a core facility of the French network 'France BioImaging'. The processing and sectioning of the samples have been described in detail previously (4). Mid-intestinal tissue samples were fixed with 2·5 % (v/v) glutaraldehyde in 0·1 M phosphate buffer (pH = 7·4) for 2 h before being stored at 4°C. Samples were then washed in phosphate buffer, fixed in 1 % (v/v) osmium tetroxide in 0·1 M phosphate buffer for 2 h in the dark at room temperature, and then washed. The samples were then dehydrated and embedded in epoxy resin using an automated microwave tissue processor for electron microscopy (Leica EM AMW; Leica Microsystems). After polymerisation, the samples were cut with a diamond knife (Diatome) on an ultramicrotome (EM UCT, Leica Microsystems). After localisation of the regions of interest, ultrathin sections (70 nm) were picked up on copper grids and subsequently stained with UranyLess and lead citrate. The grids were examined with a transmission electron microscope (H7650, Hitachi) at 80 kV.

Gene expression analysis

RNA from liver, intestine and muscle was extracted using the TRIzol reagent method (Invitrogen) according to the manufacturer's protocol. RNA from the adipose tissue was extracted using the RNeasy lipid tissue mini kit according to the manufacturer's instructions (Qiagen). The quality and quantity of the RNA were checked on a 1 % agarose gel and a NanoDrop (Thermo Scientific), respectively. Two μg of RNA were converted to cDNA using SuperScript III reverse transcriptase and random hexamers (Invitrogen).
After reverse transcription, the cDNA was diluted 50-fold before being used in RT-qPCR. RT-qPCR was performed on a LightCycler® 384 system (Roche Diagnostics) according to the protocol described previously (including primer details) (4). Data normalisation was performed using geNorm (39). The reference genes eef1a and rna18s were used to calculate the normalisation factor in the liver and intestine, while a combination of actb and gapdh was used in muscle, and actb and eef1a in adipose tissue.

Table 2. Growth parameters of rainbow trout (temoin and suave) fed with the control diet (O) and the diet supplemented with 2 % inulin (I). The data are presented as the mean ± SD, n 3 tanks, except for the final body weight (n 12) and hepatosomatic index (n 12). Means between the groups were compared using a two-way ANOVA. A P-value < 0·05 is considered significant and is presented in bold.
* Feed efficiency = wet weight gain (g)/feed intake (g).
† Hepatosomatic index = 100 × (liver weight/total body weight).
‡ Specific growth rate = 100 × (Ln(final body weight, g) − Ln(initial body weight, g))/d.
§ Weight gain = final weight − initial weight (g).

Statistical analysis

Feeding efficiency, bulk fish weight and specific growth rate were measured per tank (n 3 per group). For SCFA and electron microscopy, two fish were sampled per tank (n 6 per group). All other parameters were measured on four individual fish per tank (n 12 per group). All statistical analyses were performed using R software (version 3.6.3) (40) in combination with phyloseq (36). The heat map was created with Heatmapper (41). All data subjected to either one-way or two-way ANOVA were tested for normal distribution and homogeneity of variances. If the data did not meet any of these assumptions, a non-parametric Kruskal-Wallis test or a non-parametric equivalent of the factorial ANOVA (aligned rank transform ANOVA) was used. α-diversity (observed OTU and Shannon index), β-diversity (Bray-Curtis dissimilarity index) and OTU compositions were calculated using the phyloseq package. The effects of the two factors (inulin and line) on α-diversity were calculated separately for the two sample types (i.e. contents and mucus) using two-way ANOVA. For the β-diversity measures, the homogeneity of sample dispersions between groups was checked using the betadisper function of the vegan package (42). Permutational multivariate analysis of variance (PERMANOVA) was used to test the significance of the Bray-Curtis distance between samples belonging to different sample types (content and mucus) and to analyse the effect of the factors (inulin or line) on β-diversity in contents and mucus. The effect of the two factors (inulin and line) on the expression levels of each gene was analysed using two-way ANOVA. A similar approach was used to test the effect of the experimental factors on microvilli length and SCFA levels. Based on the relative expression of the genes, a Bray-Curtis distance matrix was calculated and a group-wise comparison was performed using two-way PERMANOVA after checking the homogeneity of dispersions. LEfSe (linear discriminant analysis effect size) was performed to identify the significant features (OTU) belonging to specific groups (43). The P-value and linear discriminant analysis (LDA) score thresholds were set at 0·05 and 4, respectively.
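To make the β-diversity test concrete, the sketch below re-implements the core of a one-factor PERMANOVA on Bray-Curtis distances in plain numpy: a distance-based pseudo-F statistic is computed and its null distribution is generated by permuting group labels. This is an illustrative re-implementation with simulated counts; the study itself used the phyloseq and vegan packages in R, not this code.

```python
# Illustrative numpy sketch of a PERMANOVA on Bray-Curtis distances
# (simulated OTU counts; not the authors' code, which used R).
import numpy as np

def bray_curtis(x, y):
    # Bray-Curtis dissimilarity between two count vectors
    return np.abs(x - y).sum() / (x + y).sum()

def pseudo_f(dist, groups):
    # Distance-based pseudo-F statistic (Anderson, 2001)
    n = len(groups)
    labels = np.unique(groups)
    ss_total = (dist ** 2)[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in labels:
        idx = np.flatnonzero(groups == g)
        sub = dist[np.ix_(idx, idx)]
        ss_within += (sub ** 2)[np.triu_indices(len(idx), 1)].sum() / len(idx)
    ss_between = ss_total - ss_within
    return (ss_between / (len(labels) - 1)) / (ss_within / (n - len(labels)))

rng = np.random.default_rng(1)
otus = rng.poisson(5.0, size=(24, 50)).astype(float)   # 24 samples x 50 OTUs
groups = np.array(["content"] * 12 + ["mucus"] * 12)
dist = np.array([[bray_curtis(a, b) for b in otus] for a in otus])

f_obs = pseudo_f(dist, groups)
perm_f = [pseudo_f(dist, rng.permutation(groups)) for _ in range(999)]
p_value = (1 + sum(f >= f_obs for f in perm_f)) / (1 + len(perm_f))
print(f"pseudo-F = {f_obs:.2f}, P = {p_value:.3f}")
```

Because the test permutes whole samples, it makes no distributional assumptions about the OTU counts themselves, which is why it suits compositional microbiome data.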
Regularised canonical correlation analysis was performed using the mixOmics package to examine the correlations between relative gene expression in different tissues and OTU abundances in content and mucus samples (44). Zero-inflated OTU data were transformed using the centred log-ratio transformation (clr) before analysis. Correlations with a value above 0·45 are shown.

Results

Diet and whole-body composition, growth and plasma parameters

In the present study, the composition of the various macronutrients did not differ between the control and experimental diets (P > 0·05; online Supplementary Table 1). We also did not detect any effect of either factor (line and inulin), or of the interaction between them, on the proximate composition of the whole body (P > 0·05; online Supplementary Table 2). The final bulk body weight and weight gain were significantly higher (10·77 %) in suave (P < 0·05, Table 2). Similarly, there was a significant effect of line on plasma glucose (P = 0·0012) and TAG (P = 0·0026; Table 3).

Expression of different genes involved in metabolism

We investigated the expression of several genes involved in amino acid metabolism, energy metabolism, fatty acid oxidation, fatty acid conversion, gluconeogenesis, glucose transport, glycolysis, lipogenesis and SCFA uptake in different tissues.

Liver. The hepatic gene expression profile is shown in Fig. 1(a). The overall (group-wise) expression was significantly affected by line (PERMANOVA, P = 0·016) and by the interaction between line and inulin (PERMANOVA, P = 0·030). The effects of the factors and their interaction (P < 0·05) on the expression of genes in different pathways are given below. Interaction: the expression of genes in amino acid metabolism (asat1), energy metabolism (cox4 and cs), gluconeogenesis (g6pcb2a) and lipogenesis (acly and g6pdh) was up-regulated in inulin-fed groups only in suave. On the contrary, the expression of a fatty acid oxidation gene (cpt1b) was down-regulated in suave when fed inulin.

Intestine. The intestinal gene expression profile is shown in Fig. 1(b). The group-wise expression was significantly affected by both factors, line (PERMANOVA, P = 0·001) and inulin (PERMANOVA, P = 0·001). The effects of the factors and their interaction (P < 0·05) on the expression of genes in different pathways are given below. Interaction: no significant interaction effect between inulin and line was found.

Muscle. The gene expression profile of muscle is shown in Fig. 2(a). The overall (group-wise) expression pattern was significantly affected by line (PERMANOVA, P = 0·004). The effects of the factors and their interaction (P < 0·05) on the expression of genes in different pathways are given below. Inulin: the expression of alat2 (amino acid metabolism), cpt1a (fatty acid oxidation) and g6pdh (lipogenesis) was up-regulated in inulin-fed groups. On the other hand, the expression of a gene in the glycolytic pathway (hk1) was down-regulated in the inulin-fed groups. Interaction: the levels of cpt1a and cpt1b (fatty acid oxidation) were higher in the inulin-fed group only in temoin. On the other hand, fas (lipogenesis) was up-regulated in the inulin-fed suave.

Adipose. The gene expression profile of adipose tissue is shown in Fig. 2(b). The group-wise expression profile was not affected by any of the factors. The effects of the factors and their interaction (P < 0·05) on the expression of genes in different pathways are given below.
Line: the expression of gdh2 (amino acid metabolism) was lower in suave, while the expression of cox2 (energy metabolism) and fbp1b1 (gluconeogenesis) was up-regulated in suave. Inulin: the expression of g6pcb1a (gluconeogenesis) was down-regulated when fed inulin.

Intestinal microbial diversity and composition

Effect of sample type on the microbial diversity and composition. We analysed the microbial diversity and composition in the two sample types, namely mucus and digestive contents (contents), separately. There was a significant effect of sample type on microbial diversity and composition. The α-diversity of the contents was significantly higher than that of the mucus (P = 0·0002). We found no significant effect of line or inulin on the α-diversity measures (observed OTU and Shannon index) in the mucus or digestive contents (P > 0·05) (Fig. 3(a)).

PERMANOVA of Bray-Curtis distances between samples showed a significant effect of sample type (P = 0·0001), with the mucus and contents samples forming separate clusters. The individual samples are plotted in a two-dimensional space using NMDS (Fig. 3(b)). The top twenty OTU in terms of total abundance (after removal of the genera Mycoplasma and Streptophyta) are shown in Fig. 3(c). These OTU include Bacillus, Janthinobacterium, Lactobacillus, Moraxella, Pseudomonas, Ralstonia, Singulisphaera, Sphingomonas, Streptococcus, Weissella and others. The differentially abundant OTU between the mucus and the contents were identified using LEfSe (Fig. 3(d)). The relative abundance of Firmicutes was significantly higher in the contents than in the mucus. Within this phylum, the families Lactobacillaceae (genus: Lactobacillus), Leuconostocaceae (genus: Weissella) and Streptococcaceae were the most important representatives. The phylum Planctomycetes was also a significant feature in the contents, comprising one significant OTU in the genus Singulisphaera. On the other hand, Proteobacteria was the most abundant phylum in the mucus samples. Within this phylum, Alphaproteobacteria (genus: Sphingomonas), Betaproteobacteria and Gammaproteobacteria (genus: Pseudomonas) were found in significantly higher amounts compared with the contents. In the class Betaproteobacteria, there were two significantly abundant families, Burkholderiaceae (two OTU belonging to the genus Ralstonia) and Oxalobacteraceae (genus: Janthinobacterium).

Effect of inulin and line on the microbial β-diversity and composition. Two-way PERMANOVA revealed a significant effect of inulin, and of the interaction between inulin and line, on the β-diversity of the content samples (P: I = 0·026; P: I × L = 0·025) (Fig. 4(a)). In contrast, mucus samples showed no such responses (P > 0·05) (Fig. 4(b)). LEfSe analysis to identify the differentially abundant groups between the two dietary conditions revealed thirteen features, nine of which belonged to the control group and four to the inulin-fed group. Two phyla were significantly abundant in the control group, Proteobacteria and Actinobacteria. Among the Proteobacteria, two families, namely Enterobacteriaceae and Pseudomonadaceae (genus: Pseudomonas), were significantly abundant (Fig. 4(c)). Significant OTU in the inulin-fed group included Weissella and Streptococcus (Fig. 4(c)).
Comparison between the two lines revealed four features, three of which belonged to temoin (genera: Pseudomonas and Brevundimonas) and one to suave (family: Ruminococcaceae) (Fig. 4(d)).

Correlation between operational taxonomic unit abundance and gene expression

Liver v. content OTU: the OTU belonging to Lactobacillus, Weissella and Ruminococcaceae showed a strong negative correlation with the genes involved in glycolysis, gluconeogenesis and fatty acid oxidation. Moraxella and Bacillus showed a negative correlation with genes involved in lipogenesis, amino acid metabolism and energy metabolism (Fig. 5(a)). In contrast to the OTU belonging to Lactobacillus, Bacillus showed a positive correlation with some genes involved in glycolysis and fatty acid oxidation (Fig. 5(a)).

Intestine v. content OTU: the OTU belonging to Pseudomonas was positively correlated with genes involved in many pathways, notably glycolysis, amino acid metabolism and energy metabolism. On the other hand, Streptococcus was negatively correlated with several pathways, most notably glucose transport, lipogenesis and amino acid metabolism (Fig. 5(b)). An OTU belonging to the Planctomycetaceae was negatively correlated with all genes, especially those involved in glycolysis (pfklb), amino acid metabolism (asat1 and asat2) and energy metabolism (cox4, cs, atp5a and sdhb) (Fig. 5(b)).

Intestine v. mucus OTU: Ralstonia, Janthinobacterium, Pseudomonas and Sphingomonas were negatively correlated with genes involved in energy metabolism, amino acid metabolism, lipogenesis, glycolysis, fatty acid oxidation, fatty acid conversion and glucose transport (Fig. 5(c)). There was a positive correlation between these OTU and one of the fatty acid oxidation genes (cpt1d) (Fig. 5(c)).

SCFA levels and microvilli length

The SCFA acetic acid, butyric acid, caproic acid, propionic acid and valeric acid were measured in the intestinal contents (Fig. 6(a)). There was a significant effect of line on the level of propionic acid (P = 0·034). In general, the levels of all SCFA were higher in temoin. Within the temoin line, the levels of all SCFA except butyric acid were generally higher in the inulin-fed group. We observed a significant effect of both inulin and line on microvilli length (Fig. 6(b)). Microvilli were significantly longer in suave (P = 1·47e-15). On the other hand, they were significantly shorter in fish fed inulin (P = 2·22e-16).

Discussion

In the last decade, much research has been done on the diet-microbiome-host (metabolism) axis in mammals. One of the main focuses has been the benefits of fibre-utilising bacteria and their metabolites (SCFA). In humans and livestock, a direct link has been established between dietary fibres and various metabolic processes in the liver, skeletal muscle, intestine and adipose tissue (45). Although there is great interest in harnessing the beneficial effects of prebiotic-derived microbial metabolites to improve aquatic animal health and metabolism, the prebiotic-microbiome-host axis is not well understood. Prebiotics such as inulin have been used in the diets of teleosts for decades, although little is known about whether inulin has the same effect on host metabolism (via SCFA) in teleosts as it does in mammals. In addition, there is a lack of knowledge about the effects of genotype on inulin degradation and utilisation and about the bacterial groups that respond to inulin in teleosts.
To address this, in the present study we investigated the metabolic effects of feeding inulin (2 %) for 16 weeks to two different lines (temoin and suave) of rainbow trout.

Growth and plasma parameters

Final body weight was significantly higher in fish selected for better utilisation of the 100 % plant-based diet (suave), as previously observed (3). The lower growth rate of naïve lines of rainbow trout fed an all-plant diet has been attributed to a combination of factors, including lower feed intake and feed efficiency (3). However, in the present study, feed acceptance and feed efficiency did not differ significantly between the two lines. It should be noted that there was a significant weight difference between the two lines before the start of the experiment, and this difference remained throughout the experiment. It was important to keep the age of the two lines the same at the beginning of the experiment, so the weight difference had to be accepted (3,46). Moreover, the significant difference between the lines in weight gain also underscores the fact that these two lines naturally grow at different rates (46). Inulin had no effect on growth parameters. This is interesting because inulin is known to positively affect the growth of many teleosts, including rainbow trout (21,47). The source of inulin, the genetic background of the fish used in the experiment and differences in the basal diet are plausible reasons for the discrepancies (48). In addition, the species-specific microbiota and the intra-species differences in intestinal microbial communities (observed in trout) could also lead to such discrepancies (4,49). We found a significant difference in plasma glucose and TAG levels between the two lines. It is likely that glucose uptake is not as efficient in suave because of its adaptation to a plant-based diet, which is generally rich in dietary fibre. Glucose uptake from a high-fibre diet is relatively slow compared with a low-fibre diet (50). On the other hand, the higher TAG in temoin could be due to increased lipolysis, because carnitine palmitoyltransferase 1 was generally more highly expressed in the liver and muscle of the temoin line.

Effect on the hepatic metabolism

In the liver, total group-wise gene expression was affected by line. Most of the tested genes in the different metabolic pathways showed lower expression in suave compared with temoin. Interestingly, selection of the fish on a plant-based diet resulted in decreased glycolysis. A higher plasma glucose level in suave also suggests a lower availability of glucose for hepatic glycolysis. Amino acid catabolism is one of the major metabolic pathways in rainbow trout, providing substrates necessary for energy metabolism (51). In the present study, the expression of genes responsible for amino acid catabolism was also lower in suave. In addition, genes involved in gluconeogenesis were less expressed in suave. Taken together, these results could possibly indicate the use of prebiotic-derived substrates (SCFA), instead of amino acids and glucose, for energy metabolism in suave. The involvement of SCFA (acetate), instead of glucose, in the production of acetyl-CoA, which is required for the tricarboxylic acid cycle, has been reported previously (52). Indeed, some genes (cox4 and cs) involved in energy metabolism were affected by the interaction between line and inulin (higher expression only in inulin-fed suave), further supporting this assumption.
Overall hepatic gene expression was not significantly affected by inulin, although two genes of the fatty acid oxidation pathway showed high expression in the inulin-fed groups. Inulin (via the action of various SCFA) has been shown to increase fatty acid oxidation in humans and other animals (53). These metabolic changes in the liver are known to be due to the activity of acetate and propionate, because butyrate is generally preferentially taken up by intestinal cells (45,54,55). Although we did not measure the amounts of the various SCFA in plasma or liver, the amount of SCFA in the intestine was generally higher in the inulin-fed groups, suggesting a possible relationship between inulin and fatty acid oxidation. Moreover, two genes of lipogenic metabolism were induced to a higher extent in the inulin-fed fish only in suave. This suave-specific induction contradicts the anti-lipogenic effect of inulin in mammals and needs further investigation (45,55).

Metabolic changes in the intestine

Regarding the collective expression of all tested genes in the intestine, we observed a significant effect of line and inulin. In particular, there was a strong decrease in the expression of several genes involved in amino acid catabolism and energy metabolism in suave. Rainbow trout is a carnivorous teleost, and the efficient use of amino acids, compared with glucose and fatty acids, to meet energy requirements is already well established in this species (56). The lower expression of energy metabolism genes in suave suggests that selection on a purely plant-based diet results in changes in the mechanisms of energy homoeostasis due to reduced amino acid degradation and glycolysis. This observation contradicts what has been documented in mammals. SCFA are known to positively affect intestinal energy metabolism by entering the β-oxidation pathway, leading to the production of acetyl-CoA, which is used in energy metabolism (45). Moreover, most of the metabolic effects in the intestine are mediated by butyrate, and intestinal butyrate levels were relatively low compared with other SCFA, suggesting that the dynamics of SCFA production and utilisation in the trout intestine may be different from those in mammals and need further investigation.

Metabolic changes in the muscle

The overall gene expression in muscle was significantly affected by line, and expression was significantly lower in suave. The major groups of genes that were down-regulated in suave include those of amino acid catabolism, energy metabolism and glucose transport. Suave has been reported to gain 35·3 % more weight within one generation when fed a plant-based diet. Also, in the present study, weight gain was higher (10 %) in suave than in temoin, and these advantages in weight gain may be due to the sensory, morphological and metabolic changes that the selected line undergoes (46,57). The expression pattern in the present study may be indicative of the metabolic changes undergone by this line. In contrast, the control line (temoin) fed a plant-based diet appears to metabolise the diet poorly. It is likely that energy metabolism in this group is subject to regulatory mechanisms involving molecules from fatty acid oxidation and amino acid catabolism (58). In addition, the higher expression of genes for fatty acid oxidation and the lower lipogenesis when fed inulin may again indicate the inability of the temoin group to efficiently utilise inulin. Inulin had no significant effect on gene expression in muscle.
An essential role of prebiotics and their derivatives (SCFA) in skeletal muscle metabolism has been demonstrated in humans, through increased uptake and oxidation of fatty acids and decreased lipogenesis (54). In addition, an increase in glucose uptake and retention of nitrogen (protein metabolism) has been suggested (54). In the present study, the effect of inulin on the expression of genes for fatty acid oxidation, lipogenesis and glycolysis was not evident. This counterintuitive finding may be due to the fact that muscle is not the primary site of action of SCFA in the carnivorous rainbow trout. Moreover, in addition to the direct effects of SCFA, metabolites released after hepatic assimilation of SCFA are known to affect metabolic processes in muscle (54).

Metabolic changes in adipose tissue

There were no drastic group-wise changes in the expression of metabolic genes in adipose tissue. It should be noted that most of the genes that showed a change responded to the interaction effect of line and inulin. As observed in the liver, lipogenic genes were up-regulated only in the inulin-fed group of suave. The effect of SCFA on lipogenic pathways in adipose tissue is uncertain, as some studies show a lipogenic effect of SCFA, whereas others show the opposite (59-61). However, regardless of the effect on lipogenesis, higher levels of fatty acid oxidation and energy metabolism are consistently observed (62,63). These observations are very similar to those of the present study, in which genes were up-regulated in both the lipogenic pathway and fatty acid oxidation. It is likely that the higher fatty acid oxidation is in turn related to the higher energy metabolism observed in the same group of fish. This relationship between fatty acid oxidation and energy metabolism has been documented previously (63). As for adipose tissue, only suave appears to have adapted to utilise inulin, as has been described in mammals, which may be indicative of the metabolic changes experienced by suave as a result of selection.

Microbial mediation of prebiotic digestion

The effect of prebiotics (or dietary fibre) is mainly mediated by the intestinal microbiome via the production of metabolites such as SCFA. In mammals, these microbial processes are carried out by different groups of microbes belonging to the phyla Bacteroidetes and Firmicutes (64). The intestinal microbiome of rainbow trout is dominated by Mycoplasma (4,65), which is abundant in mucus/epithelial samples (66). Therefore, we separated the mucus and digestive content samples in the present study. As expected, Mycoplasma was a common group in the mucus samples, while the contents were abundant in Streptophyta (most likely of dietary origin), as the diet was entirely plant-based (online Supplementary Fig. 1). β-diversity indicated significantly different clustering of the samples depending on the sample type (either mucus or contents). Interestingly, these clusters persisted despite the removal of the Mycoplasma and Streptophyta OTU, with a high abundance of Ralstonia, Pseudomonas, Janthinobacterium and Sphingomonas (all belonging to Proteobacteria) in the mucus, while members belonging to Firmicutes (Lactobacillus and Streptococcaceae) and Planctomycetes (Singulisphaera) were a significant feature in the contents, indicating the adaptation of the microbes to the specific microenvironment (either mucus or contents).
Together with these results, the significantly lower α-diversity in the mucus compared with the contents suggests that the contents may offer a nutrient-rich (high-fibre) niche and harbour a higher diversity of bacterial populations than the mucus. In support of this idea, a prevalence of Firmicutes in high-fibre diets has already been described in many animals (64,67). Although Planctomycetes is not a group commonly known from the intestinal microbiome of animals, it is known to contain an enzyme repertoire required for the degradation of polysaccharides (68), suggesting a potential role in dietary fibre utilisation. We also analysed the effects of inulin and line on the β-diversity of the mucosal and content microbiomes separately. There was no effect of line on the β-diversity of the microbiome, but an OTU belonging to the Ruminococcaceae was a significant feature in suave, indicating the adaptability of this OTU to the intestinal environment of suave. A relationship between genotype and microbial population has been described in many species, including teleosts (69-71). In addition, several species of Ruminococcus have been described as among the most efficient fibre-degrading groups in ruminants (67). The significant effect of inulin was observed only in the contents, further supporting the hypothesis that fibre-degrading bacteria are more abundant in the contents and that they respond more readily to dietary inulin than the mucosal microbiome. This type of differentiation between the mucosal and content microbiomes in their response to dietary components has been observed previously in Atlantic salmon (72).

Correlation between the gene expression and microbial abundance

There were negative correlations between several genes (hepatic glycolysis and gluconeogenesis) and the OTU belonging to Lactobacillus. On the other hand, a positive correlation of these pathways with the Bacillus OTU was observed. A similar finding has been previously reported in rainbow trout (4). It remains to be investigated whether the abundance of Lactobacillus and Bacillus is related to the high plasma glucose levels observed in suave. Interestingly, the most abundant bacterial groups in the contents (OTU among the Firmicutes) were not correlated with the expression of genes in the intestine. The OTU that were predominant in the mucosa were negatively correlated with the genes down-regulated in the intestine (energy metabolism and amino acid catabolism) of suave fed with inulin. This counterintuitive down-regulation of energy metabolism genes in the intestine could be due to the uptake of SCFA by the mucosal microbiota, resulting in reduced availability of these metabolites to epithelial cells (73).

SCFA in the intestine

It was evident that the levels of all the SCFA were in general higher in the intestinal contents of temoin (especially in the inulin-fed group). This difference was significant in the case of propionic acid. It has previously been shown that SCFA are absorbed by intestinal cells quite rapidly after their release by the intestinal microbiome (45). Presumably, the rate of SCFA absorption in the intestine of suave is higher owing to its adaptation, through selection, to a plant-based diet (which generally gives rise to high SCFA levels). Measuring SCFA levels simultaneously in multiple organs, such as the intestine, liver and blood, in future studies would be more insightful. SCFA receptor expression levels were not drastically modulated in any of the tissues studied.
This underscores the need for a detailed study of the mechanisms involved in SCFA production, uptake by host cells and metabolism in the various organs of teleosts. Moreover, in the present study, no direct relationship could be established between the content of SCFA and their biological effect, because the measurement of SCFA in the intestine is complicated by (1) complex molecular cross-feeding mechanisms, which in turn depend on the microbial composition (56), and (2) the fact that different SCFA have different sensitivities to different SCFA receptors and are therefore preferentially utilised by different organs (74). The length of the intestinal microvilli was greater in suave, apparently an adaptation of the selected line for better nutrient absorption. This adaptation has been noted previously in several animals (75). The relationship between the greater microvillus length and the higher weight gain in suave needs further investigation. However, we observed a decrease in microvillus length in the groups fed inulin. Although the reason for this decrease is not clear, a similar effect was observed in another carnivorous teleost, the gilthead seabream (76).

Conclusions and future perspectives

In summary, feeding 2 % inulin to different lines of rainbow trout has a strong, tissue-dependent effect on the expression of several metabolic genes. In the liver, the expression of several metabolic pathways is influenced by line and by the interaction between line and inulin, while in the intestine both inulin and line influence the expression of metabolic genes. The overall expression in muscle was also influenced by line. In the present study, microbial communities differed drastically between mucus and contents, and line-specific and inulin-specific abundance profiles were observed only in the contents. The high abundance of specific genera among the Proteobacteria (in mucus) and Firmicutes (in contents) indicates their metabolic adaptation to the specific intestinal microenvironment. The high abundance of OTU among the Firmicutes in fish fed 2 % inulin may indicate their ability to degrade inulin, and the genomes of these groups need to be studied further. The association of an OTU belonging to the Ruminococcaceae with the selected line (suave) is interesting, and its involvement in the better utilisation of plant ingredients in suave needs further investigation. The correlation between several members of the Firmicutes and Proteobacteria and the expression of genes involved in different metabolic pathways in both the liver and the intestine (especially gluconeogenesis in the liver, and amino acid metabolism and energy metabolism in the intestine) could be indicative of the diet-microbiome-host axis and should be a focus of future research.
The impact of TMS and PNS frequencies on MEP potentiation in PAS with a high-frequency peripheral component

Abstract

Paired associative stimulation (PAS) combines transcranial magnetic stimulation (TMS) and peripheral nerve stimulation (PNS) to induce plastic changes in the corticospinal tract. PAS employing single 0.2-Hz TMS pulses synchronized with the first pulse of 50-100 Hz PNS trains potentiates motor-evoked potentials (MEPs) in a stable manner in healthy participants and enhances voluntary motor output in spinal cord injury (SCI) patients. We further investigated the impact of the settings of this PAS variant on MEP potentiation in healthy subjects. In experiment 1, we compared 0.2-Hz vs 0.4-Hz PAS. In experiment 2, PNS frequencies of 100 Hz, 200 Hz, and 400 Hz were compared. In experiment 3, we added a second TMS pulse. When compared with 0.4-Hz PAS, 0.2-Hz PAS was significantly more effective after 30 minutes (p = 0.05) and 60 minutes (p = 0.014). MEP potentiation by PAS with 100-Hz and 200-Hz PNS did not differ. PAS with 400-Hz PNS was less effective than PAS with 100-Hz (p = 0.023) and 200-Hz (p = 0.013) PNS. Adding an extra TMS pulse rendered PAS strongly inhibitory. These negative findings demonstrate that the 0.2-Hz PAS with 100-Hz PNS previously used in clinical studies is optimal, and that the modifications employed here do not enhance its efficacy.

Introduction

Several studies have recently demonstrated the potential therapeutic applications of non-invasive brain stimulation, including transcranial magnetic stimulation (TMS) [1,2]. Non-invasive depolarization of neuronal membranes by TMS initiates action potentials and enables targeting and modulation of the activity of cortical neuronal ensembles. Activation or suppression of neuronal activity provides therapeutic opportunities for numerous neurological conditions [1]. Paired associative stimulation (PAS) combines TMS of the primary motor cortex (M1) with electrical peripheral nerve stimulation (PNS) of the contralateral extremities. The potential of PAS as a therapeutic tool has been studied in stroke [3], neurodegenerative disorder [4] and spinal cord injury [5] patients, among others [6]. Long-term potentiation (LTP) is a cellular mechanism that induces a long-lasting increase of synaptic efficacy and neuroplasticity [7]. LTP occurs as a consequence of simultaneous activity of pre- and postsynaptic cells [8]. N-methyl-D-aspartate (NMDA) channel-dependent LTP provides an attractive cellular model of learning and memory and may play an essential role in developing functional neural networks [9,10]. The aim of PAS is to create conditions that can contribute to the induction of LTP in vivo. If the timing between the two stimuli (inter-stimulus interval, ISI) is appropriate, PNS signals that ascend via the sensory volley to M1 coincide with the TMS-induced neural impulses from M1. This coincidence can transiently increase corticospinal excitability [11,12]. In spinal PAS, antidromic and orthodromic signals are timed to occur simultaneously at the spinal cord level. This repeated pairing of signals is thought to induce an LTP-like effect at the corticospinal-motoneuronal synapses [13-15]. TMS and PAS protocols can engage corticospinal plasticity and are under investigation as a tool to enhance motor function after spinal cord injury (SCI), which is rarely complete [5].
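Since the outcome of PAS hinges on this spike-timing relationship, the following sketch of a textbook exponential spike-timing-dependent plasticity window may help illustrate the point: pre-before-post pairings potentiate, while the reverse order depresses. The rule and all parameter values are standard modelling assumptions (after Bi and Poo), not taken from this paper.

```python
# Textbook exponential STDP window (a standard model, not this paper's data),
# illustrating why PAS outcomes hinge on the ISI: pre-before-post spike pairs
# potentiate, post-before-pre pairs depress.
import math

def stdp_dw(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0):
    """Weight change for a pre/post spike pair separated by dt = t_post - t_pre."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation (LTP)
    return -a_minus * math.exp(dt_ms / tau_ms)       # depression (LTD)

for dt in (-40, -10, 0, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.2f}")
```

The sharp sign reversal around zero lag is exactly why small ISI errors can flip a facilitatory protocol into an inhibitory one, a concern that motivates the wide-ISI-tolerant protocol discussed next.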
We have shown in several case reports and series that PAS with a high-frequency peripheral component (0.2-Hz TMS paired with 100-Hz PNS) enhances the motor output of paretic or paralytic muscles in patients with chronic incomplete SCI [16-19]. At the moment, this is the only PAS protocol variant that has produced clinically meaningful and long-lasting improvements in patients with SCI. Previous studies have applied PAS to spinal cord injury patients as single sessions only [13-15]. In stroke patients, conventional PAS applied for 4 weeks improved some neurophysiological and functional measures [20]. The potential of PAS to increase or decrease excitability strongly depends on the interval between the TMS and PNS pulses [12,21]. Therefore, precise timing between the two stimuli is crucial. Conventional PAS (single-pulse TMS combined with single-pulse or 10-Hz PNS) protocols employ either a fixed ISI (across participants) or individually determined ISIs [6,21]. The variable outcomes of conventional PAS reflect its dependence on multiple technical and individual factors such as time of day, pre-PAS activity, and subject characteristics [6]. Patients with SCI may have longer neuronal conduction times in both orthodromic and antidromic pathways, and these conduction times may also change during rehabilitation and with time since injury. Therefore, finding the precise ISI and the most optimal parameters of a PAS protocol can be particularly challenging. Employing a 0.2-Hz PAS protocol with high-intensity TMS (100% of the stimulator output) and high-frequency peripheral stimulation reliably leads to robust motor-evoked potential (MEP) potentiation at a wide range of ISIs, plausibly due to an increase in collision events between TMS- and PNS-induced neuronal impulse volleys [16,17]. PAS with a 100-Hz PNS component appears to be the most effective [16,22,23]. Further development of PAS variants with a high-frequency peripheral component is of clinical interest. We searched for an increase of the excitatory effect and a decrease of the time required for PAS by modifying the previously employed "standard" protocol (0.2-Hz single-pulse TMS, 240 stimuli in 20 minutes on the M1, paired with 100-Hz PNS to the right tibial nerve). We wanted to achieve the same or higher MEP potentiation in a more time-efficient manner and compared the potentiation induced by several modified versions with the effects of the standard protocol. The rationale of this study was to test whether increasing the frequency of either PAS itself or the PNS component of PAS, or doubling the number of TMS pulses, would enhance the efficacy, the feasibility, or both, of the protocol that we have used in clinical studies. The aim of all experiments was either to show the superiority of the new PAS modifications or to conclude that the current version of PAS with a high-frequency peripheral component (currently under investigation for clinical use) is the optimal choice for PAS. Increasing the PAS frequency would reduce the duration of the PAS protocol and render it more feasible for clinical use, and it might also increase its efficacy. Since PAS aims at the coincidence of ascending and descending volleys at the spinal cord level, we also hypothesized that increasing the frequency of the PNS component, or increasing the number of TMS pulses, could further increase PAS efficacy by enhancing the number of coinciding volleys. A single high-intensity TMS pulse produces several descending volleys: a D-wave and four I-waves at a frequency of approximately 500-660 Hz [24].
We have previously shown that increasing the frequency of the PNS component from 50 Hz to 100 Hz enhances the efficacy of PAS.

Materials and methods

Transcranial magnetic stimulation

TMS pulses were generated with an eXimia magnetic stimulator employing a figure-of-eight coil (Nexstim Ltd., Helsinki, Finland). We applied MRI-guided TMS navigation (Navigated Brain Stimulation 4.3 [NBS 4.3], Nexstim Ltd., Helsinki, Finland) based on 3D models of the individual 3T T1-weighted MRI images. In a prospective series of patients, comparison of the preoperative and intraoperative localization of the hand motor cortex yielded distances of 4-14 mm between nTMS and direct cortical stimulation [26-29]. Navigation guarantees the accurate localisation of M1 and the precise repetition of stimulation of the same cortical location with exactly the same coil positioning and orientation, securing the same induced electric field throughout the whole session and between different experimental sessions. The TMS coil was positioned over the left M1 to activate the "hotspot" of the right abductor hallucis muscle. During the mapping, we systematically recorded MEPs from the whole motor representation area of the distal lower limb. We defined the hotspot as the site where TMS pulses produced the maximal and most consistent MEPs from the right abductor hallucis muscle and induced a plantar flexion. MEPs were recorded and analysed with an EMG device integrated in the eXimia stimulator. The resting motor threshold (RMT) of the contralateral abductor hallucis muscle was defined as the minimum TMS intensity required to evoke an MEP of >50 μV in at least 5 of 10 trials over the hotspot. During PAS, an intensity of 100% of maximum stimulator output (MSO) was used to mimic the conditions of studies where this protocol was applied to SCI patients [16-19]. The MEP measurements were performed at 120% of the individual RMT. The individual RMTs of the participants are presented in Table 1. RMTs in the three experiments did not differ significantly (p = 0.114 by Kruskal-Wallis test). MEP latency was calculated from an average of 10 MEPs elicited at an interval of 3.3 s at 120% RMT. The average of the MEP latencies was used to calculate the ISI between the TMS and PNS pulses (F − MEPaverage, i.e. the shortest F-latency minus the average MEP latency) [30].

Electrical peripheral nerve stimulation

PNS was delivered using a Dantec Keypoint electroneuromyography device (Natus Medical Inc., Pleasanton, CA, USA). The tibial nerve was stimulated with two surface electrodes (Neuroline 720, AMBU A/S, Ballerup, Denmark) positioned at the medial side of the ankle, between the medial malleolus and the Achilles tendon. Before stimulation, EMLA cream (lidocaine 2.5% and prilocaine 2.5%) was applied locally at the stimulation site for 16 participants to reduce the sensations produced by PNS. Although all participants were offered EMLA, only 16 chose to use it. EMLA penetrates 3-5 mm into the skin [31] and thus does not affect the conductivity of the tibial nerve. The same surface electrodes were employed for the F-response recording. The recording electrode was placed over the belly of the abductor hallucis muscle and the reference electrode on the medial side of the hallux. Ten F-responses were recorded with single 0.2-ms stimuli at supramaximal intensity. From these responses, the one with the shortest F-latency was selected and used for the ISI calculation (F − MEPaverage). Square-wave pulses of 1 ms were applied to identify the individual minimum intensity evoking the F-response. This intensity was used for PNS in PAS.
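As a concrete illustration of the F − MEPaverage rule just described, here is a minimal Python sketch. The latency values are illustrative, not measured data, and the sign convention (which stimulus is delivered first) follows the cited method [30].

```python
# Hedged sketch of the ISI rule ("F - MEP_average"): the ISI applied between
# the TMS pulse and the first PNS pulse is the shortest of ten F-response
# latencies minus the mean of ten MEP latencies. Sign convention per ref. [30].
# All latency values below are illustrative, not measured data.
def isi_ms(f_latencies, mep_latencies):
    return min(f_latencies) - sum(mep_latencies) / len(mep_latencies)

f_lat = [52.1, 53.4, 52.8, 54.0, 52.5, 53.1, 52.9, 53.7, 52.3, 53.0]    # ms
mep_lat = [43.8, 44.2, 44.0, 43.9, 44.5, 44.1, 43.7, 44.3, 44.0, 44.2]  # ms
print(f"ISI = {isi_ms(f_lat, mep_lat):.1f} ms")  # min(F) = 52.1, mean(MEP) = 44.07 -> 8.0 ms
```

Recomputing this interval per participant accommodates individual differences in conduction time, which, as noted above, is particularly important in SCI patients.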
PNS intensities for each participant are presented in Table 1. PNS intensities in the three experiments did not differ significantly (p = 0.196 by Kruskal-Wallis test). Trains of six 1-ms square-wave pulses were delivered at 100-400 Hz.

Paired associative stimulation

PNS and TMS were triggered by Presentation software (Neurobehavioral Systems Inc., Albany, NY, USA) to ensure their precise timing. Each TMS pulse was paired with a PNS train. ISIs between the TMS pulse and the first pulse of the PNS train were calculated with the (F − MEP_average) formula as described previously [30]. To mimic the conditions of studies where this protocol was applied to SCI patients [16][17][18][19], all participants were asked to imagine plantar flexion of the right foot during the PAS session.

Experimental design

Experiment 1 (Fig 1A) compared the 20-min 0.2-Hz TMS protocol with the 10-min 0.4-Hz protocol on MEP potentiation at 0, 30, and 60 min after PAS. A total of 240 single TMS pulses were delivered in both protocols (once every 5 s or 2.5 s, respectively). Nine healthy participants were recruited (6 females, age range 22-42 years, mean age 32 years). Each participant had a PAS session on two different days separated by at least 7 days. The two protocols were applied in a random order.

Experiment 2 (Fig 1B) compared 0.2-Hz PAS with 100-Hz, 200-Hz, and 400-Hz PNS components on MEP potentiation at 0, 30, and 60 min after PAS. Ten healthy participants were recruited (5 females, age range 22-46 years, mean age 37 years). Each participant had a PAS session on three different days separated by at least 7 days. The three protocols were applied in a random order.

In Experiment 3 (Fig 1C), we added a second TMS pulse 50 ms after the first one, pairing the first and the second TMS pulses with the first and sixth PNS pulses, respectively, at the level of the spinal cord. Both pulses were given at 96% of MSO due to safety limitations of the TMS device. We examined whether the increase in the number of orthodromic volleys could further enhance MEP potentiation. Five healthy participants (three females, age range 30-39 years, mean age 34 years) were enrolled. Each participant underwent one session of PAS.

In all experiments, MEP amplitude changes were assessed from an average of 30 MEPs elicited with TMS delivered to the hotspot of the right abductor hallucis muscle once every 3.3 s at 120% of RMT. Assessments were conducted immediately prior to the PAS session, immediately post-session (0 min), 30 min post-session, and 60 min post-session. MEP potentiation was calculated as the percent ratio of the average post-PAS MEP amplitude normalized to the pre-PAS MEP amplitude. EMG was recorded continuously, and the 200 ms preceding each MEP was analysed to detect muscle preactivation; MEPs with preactivation were excluded from the analysis.

Statistical analysis

Statistical analysis was performed using SPSS 25.0. An average of 30 MEPs was calculated at each timepoint post-PAS and compared with the averaged amplitude of the 30 MEPs measured before the PAS session; percent post-PAS/pre-PAS ratios were defined. Data were assessed with the Wilcoxon signed-rank test and with the Friedman test for multiple comparisons.

Joint analysis of 0.2-Hz PAS with 100-Hz PNS (the "standard" protocol) from Experiments 1 and 2 (Fig 4, n = 19 measurements) showed a significant MEP potentiation at all timepoints. According to the Friedman test, MEP potentiation was significantly different between timepoints (p < 0.0001).
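The sketch below illustrates the analysis pipeline described above (percent post/pre MEP ratios, a Friedman test across timepoints, and per-timepoint Wilcoxon signed-rank tests) using SciPy as a stand-in for SPSS; the amplitude arrays are hypothetical, not study data.

```python
# Illustration of the MEP potentiation analysis: percent post/pre ratios,
# Friedman test across timepoints, Wilcoxon signed-rank tests per timepoint.
# SciPy stands in for SPSS here; the amplitude arrays are hypothetical.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
n_subjects = 10
pre = rng.normal(100, 15, n_subjects)             # mean of 30 pre-PAS MEP amplitudes (uV)
post = {                                          # means at 0, 30, and 60 min post-PAS
    "0 min": pre * rng.normal(1.9, 0.3, n_subjects),
    "30 min": pre * rng.normal(1.8, 0.3, n_subjects),
    "60 min": pre * rng.normal(1.5, 0.2, n_subjects),
}

ratios = {t: 100 * amp / pre for t, amp in post.items()}  # percent of pre-PAS

# Omnibus comparison across timepoints, with pre-PAS expressed as 100%
stat, p = friedmanchisquare(np.full(n_subjects, 100.0), *ratios.values())
print(f"Friedman: chi2 = {stat:.2f}, p = {p:.4f}")

# Post-hoc: each timepoint against the pre-PAS baseline of 100%
for t, r in ratios.items():
    w, p = wilcoxon(r - 100.0)
    print(f"{t}: mean {r.mean():.0f}% of pre-PAS, Wilcoxon p = {p:.4f}")
```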
Significant differences were found in post-hoc analysis with Wilcoxon signed-rank tests between pre-PAS and all other timepoints (0 min, p = 0.001, 195±25%; 30 min, p < 0.0001, 183±19%; 60 min, p = 0.002, 147±10%), consistent with our previous results [23]. Experiment 3 examined whether the MEP potentiation of the 0.2-Hz protocol can be further increased by adding another TMS pulse. We found a clear inhibitory effect when this second pulse was added.

Discussion

The aim of this study was to investigate whether any of the PAS variants studied here is superior to the previously applied, clinically beneficial protocol [16][17][18][19]. Negative results were obtained: none of the new PAS protocols presented here was superior, and several were inferior, so the clinical protocol remains the most efficient of those tested. In Experiment 1, significantly weaker MEPs at 30 min and 60 min were detected after applying the protocol that was shorter than the clinical one. In Experiment 2, a significant overall difference was detected among the PAS protocols; specifically, the difference lay between the 100-Hz and 400-Hz protocols, with the 400-Hz protocol eliciting significantly weaker MEPs than the 100-Hz protocol. In Experiment 3, a clear and significant MEP inhibition was found when a second TMS pulse was added. Therefore, the version of PAS with a high-frequency peripheral component currently applied in clinical studies is the most efficient protocol.

The temporal relationship between the activations of pre- and postsynaptic neurons appears to dictate the extent and polarity of plastic changes, known as spike timing-dependent plasticity (STDP) [32]. LTP may also depend on firing rate [7] or on a combination of firing rate, spike timing, and cooperativity among the inputs [33]. The situation in vivo is substantially more complex than in cellular models, as complex patterns of neural activity of the motor cortex are involved. This leads to variable outcomes of conventional PAS protocols, highlighting their strong dependence on external conditions [6]. In designing PAS protocols that are feasible for neurological rehabilitation, clinical challenges must be considered. Changes in signal conduction time in both orthodromic and antidromic pathways [34] and measurement inaccuracies in MEPs due to muscle spasticity are expected in patients with SCI. Considering all these factors, optimising the PAS protocol for SCI patients is very challenging.

We have previously compared the 0.2-Hz PAS protocol with 50-Hz PNS, employed in studies involving patients with incomplete chronic SCI [16][17], against a protocol involving 100-Hz PNS, which was shown to be more effective [23]. Here, we wanted to investigate whether similar efficacy can be obtained in a shorter time, or improved, by further modification of PNS. In Experiment 1 we halved the duration of the PAS session by increasing the PAS frequency to 0.4 Hz. This modification induced significantly weaker MEPs at 30 and 60 min after PAS than the 0.2-Hz protocol. The results of Experiment 1 might be due to the impact of stimulation duration, of its frequency, or of both. Some studies did apply facilitating PAS with a duration of 10 min or shorter to induce LTP at the cortical [35] and spinal level [13]; however, the efficacy of those protocols was not examined at 30 and 60 min after the PAS.
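As a toy illustration of the STDP principle referenced above, the sketch below implements the textbook exponential STDP window, in which the sign of the synaptic change depends on whether the presynaptic spike precedes or follows the postsynaptic one. This is a generic didactic model, not one fitted to PAS data; all constants are illustrative.

```python
# Textbook exponential STDP window: potentiation when the presynaptic spike
# precedes the postsynaptic spike (dt > 0), depression otherwise.
# Constants are illustrative, not fitted to any PAS data.
import math

def stdp_weight_change(dt_ms, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """dt_ms = t_post - t_pre. Returns the relative synaptic weight change."""
    if dt_ms > 0:    # pre before post -> long-term potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:  # post before pre -> long-term depression
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {stdp_weight_change(dt):+.3f}")
```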
In in vitro experiments, high-frequency stimulation induces an activity-dependent release of brain-derived neurotrophic factor (BDNF), known to play a crucial role in LTP induction [36][37][38]. In a study where vagus nerve stimulation (VNS) was paired with an auditory stimulus to induce recovery-promoting plasticity in the auditory cortex [39], shortening the interval between VNS tone-pairing events reduced the plastic response and led to loss of the therapeutic effect of the stimulation. The authors concluded that longer intervals between VNS tone-pairing events generate more plasticity and better recovery because the structural changes that underlie these improvements require many seconds to minutes to develop [39]. The possible role of activity-dependent plasticity-inducing molecules in the PAS effect might explain why a sufficiently low PAS frequency is required. A frequency that is too high might deplete relevant components of the neurotrophin release machinery, such as vesicles and calcium stores, not allowing sufficient time for the plastic response to occur, and therefore rendering particularly the long-term plasticity less effective.

Experiment 2 demonstrated that although 100-Hz PNS is more efficient than 50- and 25-Hz PNS [23], a further increase in PNS frequency does not provide additional efficacy. Thus, bringing the PNS frequency closer to the frequency of the I-waves [24] does not produce stronger potentiation; the exact coincidence of each PNS pulse with each TMS-induced volley does not appear to be the strongest determining factor for MEP potentiation. Rather, the specific pattern of PNS appears to be important, although PNS by itself does not produce MEP potentiation [23]. Consistent with the result of Experiment 1, the highest frequency is not the most efficient. Activity-dependent release of neurotrophic factors such as BDNF from the peripheral motoneuronal pool is known to occur at 50-100 Hz [36][37][38]. Frequencies higher than 100 Hz might not be as effective owing to depletion of relevant components of the neurotrophin release machinery, as mentioned above.

During selection of the TMS parameters in Experiment 3, we aimed at a precise pairing of the second TMS pulse with one of the pulses of the PNS train. In addition, we aimed to apply the same or a similar stimulation intensity to that used in the 0.2-Hz protocol (100% MSO) to ensure the comparability of the results. The 20-Hz TMS was selected to achieve as high a TMS intensity as feasible: the safety guidelines of our TMS device require a reduction of intensity as the applied frequency increases. Employing this frequency enabled a maximum intensity of 96% of MSO and a precise pairing with the sixth stimulus of the PNS train. The inhibitory effect found in Experiment 3 most probably reflects long-interval intracortical inhibition (LICI), which occurs when paired-pulse TMS is employed with ISIs between 50 and 200 ms and is generally considered to be mediated by cortical GABAb receptors [40]. The cortical LICI effect most probably induces a metaplastic change [41] in the motor pathways, preventing the facilitatory effects of PAS. Pharmacological studies have previously suggested that the GABAb receptor agonist baclofen decreases PAS-induced LTP-like plasticity in the human motor cortex [42]. GABAb inhibitory postsynaptic potentials may explain why we observed a negative impact on the MEP amplitudes in this study.
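The timing constraint behind Experiment 3 can be checked with simple arithmetic: with a 100-Hz PNS train, pulses fall every 10 ms, so a second TMS pulse delivered 50 ms after the first lines up with the sixth PNS pulse. The sketch below verifies this; it is a back-of-the-envelope check, not the triggering code used in the study.

```python
# Back-of-the-envelope check of the Experiment 3 pairing: a 100-Hz train of six
# PNS pulses spans 0-50 ms, so a second TMS pulse at +50 ms coincides with the
# sixth PNS pulse (all offsets relative to the first TMS/PNS pairing).
pns_frequency_hz = 100
n_pns_pulses = 6
tms_offsets_ms = [0, 50]          # first TMS pulse, second TMS pulse 50 ms later

period_ms = 1000 / pns_frequency_hz
pns_offsets_ms = [i * period_ms for i in range(n_pns_pulses)]
print("PNS pulse offsets (ms):", pns_offsets_ms)   # [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]

for tms in tms_offsets_ms:
    matches = [i + 1 for i, p in enumerate(pns_offsets_ms) if p == tms]
    print(f"TMS pulse at +{tms} ms pairs with PNS pulse #{matches[0]}")
```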
However, the increase in the number of orthodromic volleys by applying several TMS pulses might contribute to a more effective PAS protocol when delivered at other intervals and needs further examination.

A limitation of this study is that no significant difference was detected between the 100-Hz and 200-Hz protocols in Experiment 2. It is evident from Fig 3 that the 200-Hz protocol is not superior to the 100-Hz protocol; however, it is not clear whether the 200-Hz PNS is significantly inferior to the 100-Hz PNS, and a larger sample size would be required to answer this question. As our aim was to find protocols superior to PAS with a 100-Hz PNS component, this question is not clinically urgent. Moreover, we investigated only one modification of PAS frequency and two modifications of PNS frequency. Although our clinical test protocol of PAS with high-frequency PNS is the most efficient protocol among the options studied here, it remains open whether this protocol can nevertheless be further optimized. Further studies are needed that reveal the exact mechanisms of action of the protocols and include more protocol variants with modifications of TMS, PNS, and PAS frequencies and intensities.

These results, combined with clinical studies [16][17][18][19], suggest that 0.2-Hz PAS with 100-Hz PNS can be applied in patients with incomplete SCI to improve their motor function. More studies are needed to optimize the timing and duration of the treatment and patient selection. Current data [16][17][18][19] suggest that a longer stimulation time, earlier initiation of treatment, and milder injury may be associated with better outcomes. More research is needed to confirm these hypotheses and to further optimize the applied PAS protocol.

Conclusions

None of the modified paired associative stimulation protocols that we examined in this study provided a stronger long-term MEP potentiation than the one we have applied previously [16][17][18][19]. Our findings indicate that 0.2-Hz PAS employing 240 single TMS stimuli on M1 paired with 100-Hz PNS is the most effective PAS protocol employing high-frequency PNS.
Controlling invasive alien species Vachellia nilotica with triclopyr herbicide in Baluran National Park

Vachellia nilotica (Acacia nilotica), an invasive alien species (IAS), was introduced to Baluran National Park from the Bogor Botanical Gardens in 1969. The purpose was to serve as a firebreak preventing fires from jumping from the savanna to the teak forest plantation. However, the growth of V. nilotica unexpectedly became uncontrollable, and the plant invaded 6000 ha of savanna. The rapid growth of this weed has killed the grass in the savanna, leading to a decline in the banteng population in Baluran National Park from 325 in 1998 to 22 in 2011. Since the 1980s, research on V. nilotica control has been carried out by various universities and research institutions in Indonesia, but no effective and efficient control method has yet been found. This study aimed to investigate the efficacy of a herbicide with the active ingredient triclopyr, applied by stump brushing, for controlling V. nilotica. Ten triclopyr concentrations, in diesel oil or water solutions, were tested. The results showed that a 1% triclopyr concentration in diesel oil controlled 100% of V. nilotica trees, whereas water solutions controlled only 50%.

Introduction

Vachellia nilotica (L.) P. J. H. Hurter & Mabb. (synonym Acacia nilotica (L.) Willd. ex Del.), commonly known as babul or kikar, is an Arabic gum-producing plant that has long been known worldwide as a multipurpose plant [1][2][3]. The plant is native to dry areas of Africa, West Asia, India, Myanmar, and Sri Lanka [4]. V. nilotica was first introduced from the Calcutta Botanical Gardens in India to the Bogor Botanical Gardens in Indonesia in 1850 to produce gum; however, in Bogor the plant yielded very little gum [5]. In 1969, V. nilotica was introduced to Baluran National Park as a firebreak to prevent fires from jumping from the savanna to the bordering Perum Perhutani (government-owned) teak forest. V. nilotica was also introduced to West Bali National Park and South Sulawesi, yet its growth and development have not been invasive in those two areas [6][7]. In Baluran National Park, by contrast, the growth and development of V. nilotica became so invasive that the plant turned into an invasive alien weed, invading 6000 ha of the park's total 12,000 ha of savanna. Its rapid growth and spread in Baluran National Park are driven by the plant's biological characteristics: resistance to fire and drought and rapid seed dispersal. Ripe pods that fall in the dry period are eaten by the park's mammals, such as wild buffaloes, banteng, and deer, and pass through their digestive tracts without the seeds losing viability: 100 g of wild buffalo feces contain 45 ± 26 V. nilotica seeds, banteng feces 62 ± 42 seeds, and deer feces 11 ± 9 seeds [8]. In addition to dispersal by mammals, rainwater run-off may carry V. nilotica seeds over considerable distances. The invasion of V. nilotica in the savanna left very little grass, reducing the feed carrying capacity for mammals and ultimately decreasing the banteng (Bos javanicus) population: 325 animals remained in 1998, but only 22 thirteen years later, in 2011 [9]. In its expansion in the field, V. nilotica associates with beneficial soil microbes, Rhizobium sp. and arbuscular mycorrhizal fungi (AMF), which accelerate its growth [3,10,11].
Rhizobium is a bacterium that fixes nitrogen from the air, which the host plant uses to accelerate its growth. AMF form a mutualistic symbiosis with the host plant, helping it absorb nutrients, especially P but also N, K, Ca, and Mg, and increasing plant growth [3,[12][13][14][15]. On forest land overgrown by V. nilotica, little vegetation grows on the forest floor and around the V. nilotica plants because of allelopathy. Studies have shown that this allelopathy inhibits the germination and growth of many species, such as corn, peanut, wheat, and green beans [10,16,17], as well as Trigonella foenum-graecum L. [18]. V. nilotica is one of Australia's worst invasive alien species (IAS) owing to its invasive character, potential distribution, and damaging economic and environmental effects; it invades an area of 6.6 million ha in the arid and semi-arid zone of Queensland [19]. Control of V. nilotica in Baluran National Park has been attempted since the 1980s, including physical methods such as logging, uprooting, and burning; mechanical methods such as bulldozing; and chemical methods such as herbicides [20]. However, none of these controls has proved effective and efficient in Baluran National Park. The purpose of this study was to determine the efficacy of a herbicide with the active ingredient triclopyr for controlling the invasive alien plant V. nilotica in Baluran National Park.

Place and time of research

The study was conducted from May 2011 to October 2012 in Kramat, about 2 km east of the Bekol section office and 12 km from the Baluran National Park office in Batangan, Banyuputih Sub-District, Situbondo.

Materials and tools

The materials used in this study were a herbicide with the active ingredient triclopyr (commercial name Garlon 670 EC, equivalent to 480 g triclopyr L-1) [24], diesel oil, water, and nine-year-old V. nilotica trees with diameters of 9-12 cm. The equipment used consisted of a 2-L bucket, a 2" paintbrush, a tape measure, and a chain saw.

Stump brushing procedure

Nine-year-old V. nilotica trees with diameters of 9-12 cm were cut 10 cm above the ground, a height that eases the movement of the chain saw when cutting the trunks. The stems and twigs of the felled trees were cut into 1.5-m pieces, which were collected at the edge of the research plot. The top surface and bark of the V. nilotica stumps were smeared (stump brushing) with 60 ml of triclopyr in diesel oil solution at a concentration of 0 g triclopyr L-1 diesel oil; this treatment was replicated 20 times. The same procedure was then repeated with concentrations of 0.96, 4.8, 32.4, 60, 120, and 240 g triclopyr L-1 diesel oil (Table 1). In addition, triclopyr dissolved in water was tested at concentrations of 0, 0.96, and 4.8 g triclopyr L-1 water. The concentrations of the active ingredient triclopyr and the formulations used in this study are presented in Table 1.

The parameters observed

The parameters observed were the percentage of tree mortality and the percentage of resprouting V. nilotica trees over six months.

Research design and data analysis

The research design was a completely randomized design with ten treatments and 20 replications per treatment (Table 1).
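Since the Garlon 670 EC formulation contains 480 g triclopyr L-1, the active-ingredient concentrations in Table 1 follow directly from the volume fraction of the commercial product in the solvent. The sketch below reproduces that conversion; it is a simple arithmetic check, not software used in the study.

```python
# Converting a volume fraction of Garlon 670 EC (480 g triclopyr per litre of
# formulation) into grams of active ingredient per litre of spray solution.
# Simple arithmetic check of the Table 1 concentrations.
FORMULATION_G_PER_L = 480.0  # g triclopyr per litre of Garlon 670 EC

def active_ingredient_g_per_l(percent_v_v):
    """g triclopyr per litre of solution for a given % v/v of the formulation."""
    return FORMULATION_G_PER_L * percent_v_v / 100.0

for pct in (0.2, 1, 6.75, 12.5, 25, 50):
    print(f"{pct:>5}% v/v  ->  {active_ingredient_g_per_l(pct):6.2f} g triclopyr L-1")
# 0.2% -> 0.96, 1% -> 4.80, 6.75% -> 32.40, 12.5% -> 60.00, 25% -> 120.00, 50% -> 240.00
```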
Data were analyzed with the statistical software JMP Start Statistics 14, and data that showed significant differences were further tested with the Duncan test.

Planting grass

At the end of the 6th month (after the efficacy observations ended), shoots of the surviving V. nilotica plants were cut, and the stumps were smeared with a 1% concentration of triclopyr in diesel oil solution. The grass growing in the study area was then sprayed with Roundup herbicide at a concentration of 5 ml/L of water; the existing grasses were killed so that a single grass species, Dichanthium caricosum, could grow without competition from other grasses. At the beginning of the rainy season, the area was planted with D. caricosum grass at a spacing of 1 x 1 m, using vegetative material measuring 20 x 20 cm with a soil thickness of 10 cm.

Results and Discussion

Stump brushing with triclopyr at concentrations of 1, 6.75, 12.5, 25, and 50% in diesel oil killed 100% of the V. nilotica trees, a significant difference from the control, after six months of treatment (Table 1). Triclopyr in water solution was less effective, with plant mortality below 50% (Table 1). The 1% triclopyr concentration dissolved in diesel oil killed significantly more sample plants (100%) than the same concentration in water (50%) (Table 1). This was most likely because diesel oil acts as a carrier (solvent) that allows the herbicide to penetrate the plant bark. In the stump brushing method, the treated area includes both the cut surface and the stump bark, so the smeared (meristematic) surface is wider than the cut surface alone; as in the stem brushing technique, efficacy is greatly influenced by the treated surface area and the plant diameter [25]. Besides the stump brushing described above, two other techniques are often used for chemical control of woody plants in forestry: stem injection (herbicide injected into the tree trunk) and stem brushing, in which herbicide is applied to the basal part of the tree [26][27]. Chemical control with herbicides is a practical control option, especially for large areas with limited labor [27]. Herbicide with the active ingredient triclopyr is absorbed through the bark and the cut stem surface and is translocated throughout the plant tissue, accumulating in the meristematic growth areas [28,29]. Triclopyr can also be absorbed by plant leaves and roots. Because this herbicide is systemic, the active ingredient is translocated throughout the plant tissue and kills the plant by disrupting the auxin hormone [29]. One thing to consider when controlling V. nilotica by stump application is that the herbicide solution should be distributed evenly over the cambium of the cut stump surface and over the stump bark: in the control treatment, as well as in the other treatments in which buds still grew, shoots emerged precisely on the upper stump bark and on the bark between the soil surface and the cutting surface (10 cm). When choosing between diesel oil and water as the carrier for triclopyr, the interval between cutting and brushing must also be considered: a diesel oil solution can be applied immediately after cutting the stems or several days later, whereas a water solution can only be applied shortly after cutting [20]. Stump brushing combines physical and chemical control and generally achieves a 95-100% success rate.
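With 20 stumps per treatment and a binary outcome (dead or resprouting), treatment mortality can be summarized and compared against the control with standard count-based tests. The sketch below shows one way to do this in Python; Fisher's exact test is used here as a stand-in for the JMP/Duncan analysis reported above, and the counts are illustrative, not the study data.

```python
# Summarizing stump mortality (20 replicates per treatment) and comparing each
# treatment with the control using Fisher's exact test. This is a stand-in for
# the JMP/Duncan analysis used in the study; the counts below are illustrative.
from scipy.stats import fisher_exact

dead_counts = {            # stumps dead out of 20; illustrative values
    "control (0%)": 0,
    "1% in diesel": 20,
    "6.75% in diesel": 20,
    "1% in water": 10,
}
N = 20
control_dead = dead_counts["control (0%)"]

for treatment, dead in dead_counts.items():
    if treatment.startswith("control"):
        continue
    table = [[dead, N - dead], [control_dead, N - control_dead]]
    odds, p = fisher_exact(table)
    print(f"{treatment}: mortality {100 * dead / N:.0f}%, "
          f"Fisher vs control p = {p:.4g}")
```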
The cut-and-brush method's efficiency is independent of season and humidity, and it requires only a small amount of herbicide per tree. The main problem is that it requires large amounts of labor and diesel oil. One solution is to modify the stump brushing tools into a more efficient control tool by combining cutting and brushing into a single piece of equipment, so that the tool sprays herbicide immediately after cutting. Diesel oil is another significant input cost, so using waste (used) diesel oil should reduce this cost [25]. Triclopyr is a systemic and selective herbicide used to control woody and herbaceous broadleaf plants along roads and in forests, savannas, and parks [19,29]. Its selectivity is why triclopyr is often applied in savanna areas: it does not kill the grasses, the main vegetation there [19,29]. Triclopyr is thought to have only low toxicity to birds and mammals [29] and would not be present in animal feed in quantities that could have acute or chronic effects [30]. The ester and amine components of triclopyr are degraded by sunlight, metabolism, and microbial hydrolysis, and the acid and amine formulations bind tightly to the soil, so the two compounds are not mobile. Based on the observations of the author and the Baluran National Park rangers, an advantage of stump brushing over stem brushing is that grass can grow back rapidly because there is no shade; in addition, mammals can run freely without the risk of crashing into trees when disturbed. At the end of the observation period, six months after the brushing treatment, the surviving V. nilotica trees were cut and their stumps smeared with a 1% concentration of triclopyr in diesel oil to kill all remaining trees. After that, Roundup was sprayed over the sites to kill all the grasses, so that the planted lamuran putih grass (D. caricosum) could grow and develop without competition from other grasses. At the beginning of the rainy season, lamuran putih grass was planted as sod blocks of 20 x 20 cm with a thickness of 10 cm at a spacing of 1 x 1 m. By the 8th month after planting, the study site was covered with lamuran putih grass (Figure 1).

Figure 1. Growth of 6-month-old D. caricosum grass in Kramat, Baluran National Park.

Conclusion

Stump brushing application of triclopyr herbicide at concentrations of 1, 6.75, 12.5, 25, and 50% in diesel oil solvent can kill 100% of V. nilotica, while triclopyr herbicide in water solvent is less effective in controlling V. nilotica.
PROTOCOL FOR LIVER TRANSPLANTATION IN UNRESECTABLE COLORECTAL METASTASIS

ABSTRACT - BACKGROUND: Colorectal cancer (CRC) is the third most common neoplasm, and half of the patients with CRC develop liver metastasis. The best prognostic factor for colorectal liver metastasis (CRLM) is the possibility of performing a resection with free margins; however, most cases remain unresectable. The justification for performing liver transplantation (LT) in patients with CRLM is that total hepatectomy increases the number of resectable patients. AIM: The aim of this study was to provide a Brazilian protocol for LT in patients with unresectable CRLM. METHOD: The protocol was developed by two Brazilian institutions that perform a large volume of resections and LTs, based on the study carried out at the University of Oslo. The elaboration of the protocol was conducted in four stages. RESULT: A protocol proposal for this disease is presented, which needs to be validated for clinical use. CONCLUSION: The development of an LT protocol for unresectable CRLM aims to standardize the treatment and to enable a better evaluation of surgical results.

Perspective: This protocol aims to standardize the operating procedures for liver transplantation in patients with unresectable colorectal metastasis and to enable a better assessment of surgical results, disease-free survival, and overall survival. The regulation of this protocol is currently in progress in the National Transplant System (SNT - Sistema Nacional de Transplantes) of the Brazilian Ministry of Health.

Central Message: Liver transplantation in patients with unresectable colorectal metastasis achieves good results when a careful preoperative selection is carried out. This protocol aims to standardize the operating procedures for liver transplantation in patients with unresectable colorectal metastasis.
INTRODUCTION

Colorectal cancer (CRC) is the third most common type of cancer in both genders. At the time of diagnosis, nearly 25% of patients have metastasis, and the liver is the most affected organ (involved in 80% of cases); it is estimated that half of the patients with CRC will develop liver metastasis at some point in the course of the disease 13,14. Currently, the treatment of metastatic CRC (stage IV) is based on a multidisciplinary and multimodal approach 13,18. The possibility of performing a resection with free margins is the best prognostic factor for colorectal liver metastasis (CRLM) 14. In this scenario, hepatectomy has become the main treatment of CRLM, with an overall survival rate of 30-55% in 5 years and 20-25% in 10 years 1,7,14,18. Several strategies have been used to expand the possibility of resection and to ensure adequate liver remnants, such as parenchyma-preserving techniques, portal vein embolization, two-stage liver resection (LR), and ALPPS (Associating Liver Partition and Portal Vein Ligation for Staged Hepatectomy). Even with these strategies, most patients with CRLM remain functionally or anatomically unresectable 20.

The justification for performing liver transplantation (LT) in patients with CRLM is that total hepatectomy increases the number of resectable patients. However, LT in patients with CRLM was considered an absolute contraindication before 1995, owing to the unacceptable results obtained at the time. The first experience was reported by the European Liver Transplant Registry (ELTR), with survival rates of 62% in 1 year and 18% in 5 years 5. It is worth mentioning that both the perioperative results of LT and the chemotherapy drugs available for the treatment of CRC in the late 1980s and early 1990s explain these negative results. The poor results, combined with organ scarcity, led to the discontinuation of LT for CRLM 8,12. Based on the data currently available, the International Liver Transplant Society (ILTS) recommends performing LT for CRLM only within a specific protocol 11. Therefore, the aim of this study was to present a protocol proposal to guide the clinical use of LT in CRLM. This protocol needs to be validated in future studies.

METHODS

This protocol was developed by two high-volume centers of LT and LR in Brazil: the University Hospital of the Medical School of the University of São Paulo (HCFMUSP) and Hospital Adventista Silvestre/Hospital São Lucas. The elaboration of the protocol was conducted in four stages. In the first stage, a search of the literature was performed to obtain the main studies published to date on LT for CRLM. In the second stage, an outline of the protocol was designed by the first two authors and the last author, based on the SECA trials from the University of Oslo 4,9. In the third stage, 10 experts elaborated the final version of the protocol, adapted to the Brazilian reality. The fourth stage consisted of submitting the protocol for approval to the National Transplant System (SNT - Sistema Nacional de Transplantes) of the Brazilian Ministry of Health. Brazilian centers were selected for inclusion in the multicentric research project, and a total of 30 patients underwent transplantation according to the criteria of this protocol and were referred to these centers by the SNT. Preoperative, intraoperative, and postoperative data were prospectively recorded on the REDCap platform 10.

The following pretransplantation data were analyzed: age, gender, body mass index (BMI), clinical performance, comorbidities, laboratory examinations, staging examinations, size and number of tumors, previous chemotherapy, response to chemotherapy, anatomopathological analysis of the primary tumor, time between the diagnosis of CRC and LT, and type of LT (deceased donor or living donor). The number of patients referred for LT evaluation was also assessed, as well as the number of patients who effectively met the criteria and were included for LT and those who were excluded before LT (for not meeting the criteria). After the LT, disease-free survival and overall survival rates at 1, 3, and 5 years, the immunosuppression protocol, rejection episodes, and the need for retransplantation were analyzed. Figure 1 shows the LT protocol for CRLM proposed in this study by the authors; its content is summarized below. Figure 2 shows the SNT document to be filled in to request a special situation for CRLM.

LT centers: specialized centers selected by the SNT.
LT standardization: performed based on a protocol under SNT supervision in order to evaluate outcomes.
LT approval: approval in a multidisciplinary meeting at the local institution, with the mandatory presence of a radiologist, clinical oncologist, hepatopancreatobiliary surgeon, and transplantation surgeon.
LT notification: all cases must be referred to the SNT for evaluation and final approval.

LT INDICATION
Patient selection: synchronic or metachronic CRLM, restricted to the liver, unresectable, with a time interval superior to 12 or 24 months between the diagnosis of the primary tumor and the date of listing for transplantation.
Official authorization: after SNT approval, patients will be included in the list for LT with a special MELD score (MELD 30).
LDLT: follows the same inclusion and exclusion criteria as DDLT; coverage by the Brazilian Ministry of Health only if the established criteria are met.

Oslo criteria (to be fulfilled within 90 days before LT):
- CEA level <80 ng/ml.
- The largest hepatic lesion must be <5.5 cm.
- Response to chemotherapy.
- Time interval superior to 12 or 24 months between the diagnosis of the primary tumor and the date of listing for transplantation.

INCLUSION AND EXCLUSION CRITERIA
Inclusion criteria (all the described criteria must be met for inclusion):
- Histologically confirmed adenocarcinoma of the colon or rectum.
- Standard surgical procedure with adequate primary tumor resection margins, including circumferential resection margins (CRM) of at least ≥2 mm for patients with rectal cancer.
- Synchronic or metachronic CRLM, unresectable, restricted to the liver, not eligible for curative liver resection after compliance with the other described items.
- Previous treatment with first-line chemotherapy.
- Before starting first-line chemotherapy, no lesion should be >10 cm and the total number of lesions must be ≤20. If there are >20 nodules, the biggest lesion should have a maximum size of 5 cm.
- Response to chemotherapy (at least 10% response according to the RECIST criteria until third-line chemotherapy). Patients should be accepted for transplantation only if there is no progression of the disease while undergoing chemotherapy.
- Absence of signs of extrahepatic metastatic disease or local recurrence of the primary tumor according to CT or MRI (chest, abdomen, and pelvis) within 4 weeks before the multidisciplinary meeting.
- Absence of signs of extrahepatic metastatic disease or local recurrence of the primary tumor according to full-body PET-CT within 3 months before the multidisciplinary meeting.
- Normal colonoscopy (no sign of local recurrence) in the 12 months before transplantation.
- Age ≥18 years.
- Good clinical performance (ECOG 0 or 1).
- Satisfactory blood examinations: Hb >10 g/dl; bilirubin <2× the upper limit of normality; AST and ALT <5× the upper limit of normality; creatinine <1.25× the upper limit of normality.

Exclusion criteria (meeting one criterion is enough for exclusion):
- Weight loss >10% in the past 6 months.
- Previous extrahepatic metastatic disease or local recurrence of the primary tumor.
- Patients who did not receive standard preoperative, perioperative, or postoperative treatment for the primary CRC.
- Palliative resection of the primary CRC (compromised margins and/or <12 lymph nodes assessed in the surgical specimen).
- BMI >35.
- Other malignant diseases in the past 5 years (except skin and cervical neoplasms, which will be analyzed by the pathologist).
- Hypersensitivity to mammalian target of rapamycin (mTOR) inhibitors (everolimus and/or sirolimus).
- Pregnant or breastfeeding women.
- Recurrence of liver metastases after LT for CRLM.

Clinical follow-up: postoperative follow-up with outpatient appointments, CT (chest, abdomen, and pelvis), and CEA levels every 3 months in the first year; from the second year onward, the same follow-up every 6 months.
Immunosuppression: performed according to the institutional protocol of each center. It is mandatory to use an mTOR inhibitor (everolimus or sirolimus) 1 month after the LT, or as soon as possible after this period, depending on postoperative complications.
Rejection treatment: performed according to the institutional protocol of each center.
Retransplantation for recurrent CRLM: not allowed.
Retransplantation in other situations: allowed for primary graft dysfunction, hepatic artery thrombosis, and chronic rejection.

Figure 2 restates the key criteria (adequate primary tumor resection margins, including CRM ≥2 mm for rectal cancer; synchronic or metachronic CRLM, unresectable, restricted to the liver, not eligible for curative resection; before first-line chemotherapy, no lesion >10 cm and ≤20 lesions in total or, if there are >20 nodules, a largest lesion of at most 5 cm) and requires the medical and examination reports to be attached. A schematic check of these eligibility thresholds is sketched below.
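To illustrate how the Oslo criteria and the main quantitative inclusion limits above combine into an eligibility decision, the sketch below encodes them as simple predicate checks. It is a didactic illustration of the published thresholds, not a clinical decision tool; the field names are hypothetical.

```python
# Didactic sketch: encoding the Oslo criteria and key quantitative inclusion
# limits from the protocol as predicate checks. Not a clinical decision tool;
# the candidate fields are hypothetical names chosen for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    cea_ng_ml: float            # carcinoembryonic antigen level
    largest_lesion_cm: float    # largest hepatic lesion diameter
    responded_to_chemo: bool    # response per RECIST, no progression on chemo
    months_since_diagnosis: float
    n_lesions_pre_chemo: int    # lesion count before first-line chemotherapy
    largest_pre_chemo_cm: float

def meets_oslo_criteria(c: Candidate, min_interval_months: float = 12) -> bool:
    return (c.cea_ng_ml < 80
            and c.largest_lesion_cm < 5.5
            and c.responded_to_chemo
            and c.months_since_diagnosis > min_interval_months)

def meets_lesion_burden_limits(c: Candidate) -> bool:
    # <=20 lesions with none >10 cm; if >20 nodules, largest must be <=5 cm.
    if c.n_lesions_pre_chemo <= 20:
        return c.largest_pre_chemo_cm <= 10
    return c.largest_pre_chemo_cm <= 5

patient = Candidate(cea_ng_ml=35, largest_lesion_cm=4.2, responded_to_chemo=True,
                    months_since_diagnosis=18, n_lesions_pre_chemo=12,
                    largest_pre_chemo_cm=6.0)
print("Oslo criteria met:", meets_oslo_criteria(patient))
print("Lesion burden acceptable:", meets_lesion_burden_limits(patient))
```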
DISCUSSION

In the past two decades, survival rates after LT have improved by 20-30%, imaging examinations have improved, and immunosuppressants with antineoplastic action (mTOR inhibitors) have been introduced 15,17. This technical progress, combined with the peculiar transplantation scenario in Norway, which has more organ donors than recipients on the list, provided the ideal setting for performing LT in CRLM. In the SECA I study, conducted from 2006 to 2011 at the University of Oslo, 21 patients underwent LT for CRLM; the main inclusion criteria at the time were good clinical performance (ECOG 0 or 1), complete resection of the primary tumor, and a minimum of 6 weeks of chemotherapy. The authors obtained an overall survival rate of 60% in 5 years and identified four clinical variables associated with a worse prognosis (the Oslo criteria): tumor diameter >5.5 cm, CEA >80 ng/ml, an interval between resection and LT of <2 years, and progression of the disease during chemotherapy 9.

The same group from Oslo continued the investigation of LT for CRLM in the SECA II study. From 2012 to 2016, 15 patients were transplanted under restrictive criteria in order to obtain results similar to those of other indications for LT. Several criteria were required for performing LT, such as the Oslo criteria, non-resectability by partial hepatectomy, and radiological tumor response after chemotherapy. The authors obtained overall survival rates of 100% in 1 year, 83% in 3 years, and 83% in 5 years. Disease-free survival was 53% in 1 year, 44% in 2 years, and 35% in 3 years. The main site of recurrence was pulmonary, and most recurrences were fit for resection; therefore, the high recurrence rates had less influence on patient survival 4.

Both of the aforementioned Norwegian studies are of major importance to "transplant oncology," a term used to describe LT as a treatment option for hepatobiliopancreatic neoplasms. Recently, multiple centers in Europe and in the United States have started to perform LT for CRLM 11,19. Fernandes et al. were pioneers in performing the first living-donor LT for CRLM in Latin America, in agreement with the Oslo criteria 6.

The exclusion from transplantation of cases with right colon tumors and/or positive BRAF is a topic still under discussion. Mutation-positive BRAF is considered a risk factor and is associated with worse outcomes after transplantation. Tumors of the right colon also have a worse prognosis, precisely because of their higher frequency of positive BRAF 16. The clinical trials still in progress are heterogeneous regarding these items, and we therefore chose to retain them in our protocol until further studies are available. In Norway (NCT01479608, NCT02215889, and NCT03494946) and Germany (NCT03488953), the ongoing studies do not adopt these exclusion criteria, while in France (NCT02597348), Canada (NCT02864485), and Italy (NCT03803436), positive BRAF is an exclusion criterion 3,11. The regulation of this protocol is in progress in the SNT for validation in the Brazilian national territory 2.

CONCLUSION

An LT protocol for unresectable colorectal metastasis was created to standardize the treatment and to enable a better evaluation not only of surgical results but also of disease-free survival and overall survival in patients with CRLM.
“Never Trust the Skin”: A Rationale for Using Polydioxanone Internal Support Matrix to Minimize Scarring in Primary Mastopexy-Augmentation—An Observational Study

Abstract

Background: The process of scar formation is complex and multi-factorial. Basic plastic surgery tenets focus on tension-free techniques to optimize aesthetic outcomes and minimize scarring.

Objectives: Prophylactic use of a polydioxanone (PDO) internal support matrix in cosmetic mastopexy-augmentation to decrease scar burden has never before been described.

Methods: A high-volume (n = 41) single-surgeon mastopexy-augmentation experience (S.S.K.) followed scar quality in consecutive cases from June 2020 to July 2021. A minimum of 6 months of postoperative evaluation was required to assess scar quality. Fitzpatrick scores were also evaluated and compared. All surgeries in this study were performed in the dual plane using silicone gel implants, a superior or superomedial dermal pedicle blood supply, and a wise-pattern or vertical scar. Scar quality was evaluated by photography and scored according to an internally developed scar quality scale.

Results: There have been no cases of hypertrophic or keloid scarring. All patients receiving mastopexy-augmentation with prophylactic PDO mesh have a favorable appearance with fine-line scars, and the mean scar quality scale score across the cohort was 4.341/5. The mean Fitzpatrick scale score across the cohort was 2.97, and, among the patients who scored a 5 on the scar quality index, the mean Fitzpatrick scale score was 3.545.

Conclusions: Prophylactic use of a PDO internal support matrix in silicone gel mastopexy-augmentation offers further protection against poor scarring in patients across the Fitzpatrick scale, with varying degrees of skin quality, and across medium- to high-volume implant augmentations. Patients who received PDO prophylaxis demonstrated a better-than-average scar appearance.

Level of Evidence: 4

ptosis remains unimproved or their breast aesthetics are worsened in appearance by an implant. These patients either remain unhappy with their breast augmentation or end up receiving a separate mastopexy down the line.

The process of scar formation is complex and multifactorial. Though much of scar healing and appearance depends on a patient's genetics, connective tissue health, and compliance with postoperative instructions (such as avoiding ultraviolet exposure and consistent use of scar treatment when prescribed), surgeon technique is undoubtedly crucial to an aesthetic result. 1 Basic plastic surgery tenets focus on tension-free techniques to optimize aesthetic outcomes and minimize scarring. This has commonly come to mean utilizing multilayered suture closure and thoughtful selection of implant size. In patients presenting with thin, poorly toned skin or inadequate soft tissue stores, more support may be necessary. Additional soft tissue reinforcement with internal support matrices (hereinafter referred to as "mesh") was hypothesized to decrease the pressure of the implant volume on the mastopexy flaps, which could contribute to better scar formation. Mesh has been used in breast surgery for reinforcement of soft tissue and is available in a variety of materials. 2,3 DuraSorb (SIA, Chicago, IL) polydioxanone (PDO) mesh (a synthetic absorbable polymer similar to polydioxanone [PDS] suture) was cleared by the FDA in 2018 for soft tissue support.
4 Its early tissue integration and absorption profile were hypothesized to make it an excellent option for decreasing tension on the mastopexy incision during the crucial first months of wound healing. The prophylactic use of a PDO internal support matrix in cosmetic mastopexy-augmentation to decrease scar burden has never before been described.

METHODS

A retrospective cohort analysis was conducted using data collected from 41 consecutive primary mastopexy-augmentation surgeries performed between September 2020 and July 2021 utilizing bilateral smooth silicone gel breast implants plus PDO internal support matrix. A minimum of 3 months of postoperative evaluation was required to assess scar quality. The surgeries were performed by the senior author (S.S.K.) in Newport Beach, California. All mastopexy-augmentations in this study were performed in the dual plane, using a superior or superomedial dermal pedicle blood supply and a wise-pattern or vertical mastopexy scar. Patients with silicone gel breast implants of all sizes with a smooth, round shell were included in this study, ranging from 350 to 700 cc. The patients were similarly healthy; all were female, ranging from 19 to 64 years of age, with an average age of 33.6 years. Written consent was provided, by which the patients agreed to the use and analysis of their data. Three-layered suture closure was used in all cases (fascial layer to cover the implant, deep dermis, and a subcuticular layer). Drains were never used. Patients consented to retrospective and prospective review of their case data at the time of their preoperative appointment.

While the dual-plane pockets were being developed, the monofilament mesh was removed from its sterile packaging and soaked in a triple antibiotic irrigation solution consisting of 50,000 units of bacitracin, 1 g of cefazolin, 80 mg of gentamicin, 1 liter of normal saline, and 1 liter of povidone-iodine solution. After dissection, the mesh was removed from the solution and cut in half. Each half was oriented such that the smooth surface faced the patient's breast implant and the rough surface faced the breast tissue. The mesh was then contoured to the confines of the breast implant pocket and inset to the periosteum of the rib and the Scarpa fascia using 2-0 Vicryl (Ethicon, Raritan, NJ) sutures in an interrupted fashion along its inferior edge, going from medial to lateral along the inframammary fold border. In a pure vertical mastopexy-augmentation, the breast implants were inserted first using an introduction sleeve, and the mesh was then inset as described above, with its smooth surface against the breast implant and its rough surface toward the breast tissue. The superior border of the mesh covered at least the lower half of the breast implant and did not need any sutures for suspension. 2-0 Vicryl sutures in running simple and locking fashion were used to reapproximate the fascia of the breast gland over the implant and mesh. In cases where a wise-pattern mastopexy-augmentation was performed, the implants were placed and the breast fascia was closed to obtain total implant coverage. Then the mastopexy was carried out, flaps were elevated, and the mesh was sewn outside the breast implant pocket, with its most inferior border still securing the inframammary fold.
While the mesh remained oriented with its smooth surface toward the breast implant and its rough surface toward the mastopexy flaps, the main difference was that its superior border and any dead space were quilted with light 2-0 Vicryl interrupted sutures tacked down to the underlying soft tissue, taking care not to go too deep and inadvertently puncture the underlying implant. If the patient's soft tissue was very thin, the mesh was sometimes placed inside the breast implant pocket, similar to what is done in a vertical mastopexy, in order to have more tissue coverage over the mesh. Otherwise, the senior author performed a routine mastopexy with tailor-tacking, marking, release of staples, de-epithelialization of the intended blood-supplying pedicle, excision of excess skin and subcutaneous fat, debulking of excess lower pole breast volume, and 3-layered suture closure. Patients were monitored with the typical in-person follow-up schedule of 1 week, 1 month, 3 months, and 6 months post-surgery, with yearly follow-up appointments on each subsequent anniversary. Patient photographs were reviewed at a minimum of 6 months of follow-up and beyond to assess final scar quality. Patients were given a Fitzpatrick scale score gauged by their natural skin tone, without sun exposure/artificial tanning, and their natural hair color 5 (Table 1). Scar quality/appearance was evaluated by photography and then scored by an independent observer on an internally developed Likert-type scale (Figure 1).

RESULTS

All patients who received primary mastopexy-augmentation with prophylactic placement of PDO internal support matrix had a favorable result with fine-line scars. There have been no cases of hypertrophic or keloid scarring.

Wise-pattern scar appearance scores in primary mastopexy-augmentations with prophylactic polydioxanone mesh placement (n = 40). Final scar scores were coded by an independent observer at uniform follow-up post-surgery.

Figure 1. The 5-category inventory used to scale wise-pattern mastopexy scar appearance across the study cohort, displaying examples of the prototypical scar for each score, the categorical criteria, and data on the example photographs shown. This reference was used to create a calibrated scale against which the cohort with mesh placement could be compared to gauge scar appearance.

The average follow-up time was 8.4 months. The mean scar quality scale score across the cohort was 4.341 (Good-Excellent). No patients scored below 3 on the scar quality index (Table 2). The mean Fitzpatrick scale score across the cohort was 2.97 (Figure 2). Among the patients who scored a 5 on the scar quality index, the mean Fitzpatrick scale score was 3.545 (Figure 3).
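The cohort summaries above (mean scar score, mean Fitzpatrick score, and the Fitzpatrick mean within the top-scoring subgroup) are straightforward to compute from per-patient scores. The sketch below shows the computation on made-up example data; the lists are illustrative, not the study records.

```python
# Computing the cohort summaries reported above from per-patient scores:
# mean scar quality score, mean Fitzpatrick score, and the mean Fitzpatrick
# score among patients with a scar score of 5. Example data are made up.
scar_scores = [5, 4, 5, 3, 4, 5, 4, 4, 5, 3, 4, 5]   # 1-5 Likert-type scale
fitzpatrick = [4, 2, 3, 1, 3, 4, 2, 3, 5, 2, 3, 4]   # Fitzpatrick I-VI coded 1-6

mean_scar = sum(scar_scores) / len(scar_scores)
mean_fitz = sum(fitzpatrick) / len(fitzpatrick)
top_fitz = [f for s, f in zip(scar_scores, fitzpatrick) if s == 5]

print(f"Mean scar quality score: {mean_scar:.3f}")
print(f"Mean Fitzpatrick score:  {mean_fitz:.2f}")
print(f"Mean Fitzpatrick among scar-score-5 patients: "
      f"{sum(top_fitz) / len(top_fitz):.3f} (n = {len(top_fitz)})")
```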
DISCUSSION

Scar formation is governed by a multitude of factors and is often a major concern for patients undergoing elective aesthetic surgery. As such, scar management has become a multi-billion-dollar industry in the United States, 6 yet we lack a conclusive mechanism of fibrotic hypertrophic scar formation and a standardized protocol for plastic surgeons to reduce and manage scarring in patients of all ethnicities. Current research on aberrant wound healing points to an exaggerated immune response, epithelial abnormalities, and the individual's connective tissue tensile strength as the main contributors to poor scarring results. 7 Scarring is known to be at least partially driven by genetics; Asian skin has a tendency toward hyperpigmentation with injury. 8 Studies on melanocyte proliferation have also shown darker-skinned patients (such as Black, Hispanic, and some Asian populations) to be more susceptible to keloid scarring than Caucasians. 9 Given this wide variation in scarring patterns, the authors sought to investigate a prophylactic method that would have a protective effect in all demographics and provide a more predictable healing trajectory for mastopexy patients.

Plastic surgeons have long focused on using tension-free techniques to minimize scarring and wound dehiscence. The mastopexy incision often poses a challenge to proper wound healing, as the tight closure over an implant after removal of excess skin in the vertical and/or horizontal dimension creates an incision under considerable tension. This strain sometimes manifests as superficial dehiscences or pinhole openings of the wound at the T-junction, which typically close on their own without complication but can leave a worsened scar appearance in their place. Given these trends, we hypothesized that lowering the internal tension force on the mastopexy scar would result in improved scar appearance during healing and beyond. By providing additional soft tissue support, the mesh was predicted to act as scaffolding that reduces pressure on the incision closure, lowering the risk of the scar stretching or widening and also promoting connective tissue health. These combined benefits of the mesh would create an optimal environment for the best scarring outcome; the long-term appearance of the mastopexy scar would thus largely be determined by the internal mechanisms of tissue repair during the first crucial weeks of healing. The authors recognize that, in most cases, natural physiology dictates that scars tend to improve in appearance over time, usually over 1 to 2 years. At 6 months post-surgery they tend to be more noticeable, which is why many of the patients in this study were assessed for scar quality at less than 1 year post-surgery. In our findings, we discovered that scars with mesh at 6 months often appeared similar to scars we had seen without mesh at longer-term stages of healing.

A careful rationale underlies our original interest in mesh as a prophylactic measure in cosmetic breast cases, an application that has been virtually unheard of. Mesh exists in a variety of biological and synthetic options on the market and has been utilized for years in breast surgery, but until recently it was used almost exclusively in reconstructive rather than aesthetic cases. 10 Biological meshes, or acellular dermal matrices (ADMs), are derived from human cadaveric, porcine, or bovine dermis that has been aseptically processed to remove cells and preserve an extracellular matrix scaffolding. 11 These meshes are effective, yet they are costly and carry additional risks with implantation, including bacterial infection, seroma, and other postoperative complications. 12 Synthetic meshes were created as a more cost-effective and inert option for reinforcement and also exist in a variety of materials and absorption profiles (Figure 4). 12-15
Synthetic mesh options range from permanent, non-absorbable matrices 13 such as titanium-coated (TiLoop, FM Medical, Carlsbad, CA) or gel-coated (C-QUR, Atrium Medical, Merrimack, NH) polypropylene, to slow-absorbing meshes made of polyethylene or of filaments with mixed absorption profiles 14 (TIGR, Novus Scientific, Uppsala, SWE, and Proflex Omnia, Clovis, CA), to fully absorbable synthetic matrices like the DuraSorb polydioxanone matrix used in this study. 15 Synthetic meshes all cost less than ADMs but still vary widely in cost by region and manufacturer. Of the above categories, synthetic fast-absorbing matrices tend to be the least expensive option but are still underutilized in cosmetic cases. Mesh is still viewed by most aesthetic plastic surgeons as a product to be used in "bail-out" revision cases rather than as a tool in their armamentarium for providing durable, stable long-term results to cosmetic patients. Thus, mesh use in primary augmentations and primary mastopexy-augmentations is relatively novel and not yet described in the literature.

When designing the patient selection criteria for this study cohort, in order to best assess mesh's potential in a prophylactic application, the authors considered a variety of factors before deciding to use DuraSorb mesh in all primary mastopexy-augmentation patients during the study time frame. Though we originally considered using mesh only in patients receiving high-volume implants (≥450 cc) with their mastopexy, mesh use was ultimately extended to all mastopexy-augmentation patients of S.S.K. This allowed us to fully consider the protective benefit, if any, of mesh against poor mastopexy scarring regardless of the selected implant size (which is governed by both patient anatomy and individual taste). Limiting mesh use to high-volume implant cases alone might have unduly limited the study's demographic. Additionally, there are few data to suggest that the size (in cc) of the implant used in mastopexy-augmentation has a significant impact on scar appearance, which appears to be more dependent on surgeon technique, tension-free closure, and incision aftercare. Tight closure of a mastopexy-augmentation with any volume of implant underneath adds tension to the wound that is not present in a mastopexy without implants. Further research is needed before concluding that large implants are a risk factor for unfavorable mastopexy scarring. We therefore used mesh in all mastopexy-augmentation patients regardless of the size of the smooth round implant they were receiving. The average implant volume used across the cohort was 516.9 cc. Another reason for the broad cohort selection was the diverse patient population served by the practice and by Southern California. Considering the aforementioned genetic and phenotypic associations with poor scarring outcomes, Black, Hispanic, and Asian patients were hypothesized to be at higher risk of scarring than Caucasian patients, and we wished to capture as many of these high-risk patients as possible within the study set in hopes of improving their aesthetic outcomes. S.S.K. serves a heterogeneous patient population in Orange County, California: many patients express concern over prominent mastopexy scars at their consultations, citing a variety of factors including revealing clothing trends, a subjective importance placed on the breast's appearance both in and out of clothing, and a general Californian cultural focus on "looking good."
Therefore, all primary mastopexy-augmentation patients were deemed candidates for this prophylactic intervention and were counseled extensively during their 1-hour consultation appointments on their options. Patients were educated on the PDO mesh to be used, its novel nature in this application of breast surgery, its costs and benefits, and its hypothesized protective effect against poor scarring results that was being studied. All patients who were given the option to receive mesh implantation as part of their surgery opted to receive it, with no patients asking to receive mastopexy-augmentation alone after counseling. Ultimately, it was our observation that prophylactic placement of PDO mesh in the lower pole of the breast pocket provided good to excellent protective results against scarring throughout the cohort. It is our judgment that this added soft tissue support was effective at taking pressure off the Wise-pattern incisions over the first 3 months of its absorption profile, providing tension-free healing that allowed for a better-than-average scar appearance early in the healing process. These results also held up well across the Fitzpatrick scale, indicating that this prophylactic measure does not only benefit Caucasian and light-skinned patients, who might have scarred well regardless, but also, and even more so, patients with high Fitzpatrick scores. For example, one of the darkest-complexioned patients (Fitzpatrick VI) in the study cohort presented with bilateral striae to the breasts after pregnancy and overall poor skin quality that we noted as a risk factor for mastopexy scarring before surgery. This patient's results scored a 5-Excellent on the Likert scale measure. We believe that the mesh's soft tissue reinforcement was the major contributing factor in the optimal aesthetic results of this patient, along with those of others with naturally darker skin tones. This was a small study looking at the impact of mesh on a subset of patients at our practice (Figures 5-10). The limitations of this study were the use of a non-validated scale for scar assessment (the study scar scale was internally developed to help guide patient care but was never published, so it is non-validated in the scientific literature), the evaluation of the complete vertical scar only with the patient in an erect position with their hands down, and the use of a single independent observer. Mesh appeared to have a positive effect and had no negative effect on scar quality or healing. Many of the patients in this cohort received aggressive mastopexies with implants above 500 mL and did well. The additional soft tissue reinforcement provided by mesh allowed us to feel more confident using larger-volume implants in patients receiving a mastopexy, allowing us to meet patients' aesthetic expectations. While this study only examined primary mastopexy-augmentations, the authors were also utilizing PDO mesh in breast augmentations 16 and breast revisional cases and saw excellent results and improved scarring in females who had already undergone previous mastopexy-augmentations with different surgeons. While outside the scope of this paper, further cohort or case studies may be warranted to examine the use of mesh in these patient groups as well, to improve the quality of scarring and the durability of results. CONCLUSIONS This prospective cohort study found that the prophylactic use of a PDO internal support matrix in silicone gel mastopexy-augmentation offers further protection against poor scarring.
The protective effects of mesh were seen in patients of fair to dark skin tones, with varying degrees of skin quality, in patients with and without children, and in patients receiving lower- and higher-volume silicone gel implants. We therefore conclude that the prophylactic placement of PDO mesh is a safe, cost-effective, and multi-beneficial technique in primary mastopexy-augmentations for patients desiring optimal scarring, durable pocket control, and excellent long-term aesthetic results. Disclosures Dr Kelishadi has been an SIA (Chicago, IL) shareholder since June 2021. The remaining author declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.
2022-05-22T15:07:44.800Z
2022-05-19T00:00:00.000
{ "year": 2022, "sha1": "35493457af3e459b24ee0796b76a1a84a381bda1", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/asjopenforum/advance-article-pdf/doi/10.1093/asjof/ojac048/43772218/ojac048.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2a4dfbe2c52ec4774fae0c2469f44d758859bc19", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
237229968
pes2o/s2orc
v3-fos-license
Rapid Detection and Identification of Respiratory Viruses by Direct Immunofluorescence The use of fluorescein-conjugated antiserum against respiratory syncytial (RS) and parainfluenza 1 and 3 viruses was compared with conventional techniques in the rapid detection of virus in tissue cultures inoculated with pharyngeal specimens known to contain these viruses. Twenty-three specimens were tested: 9 RS, 8 parainfluenza 1, and 6 parainfluenza 3. The fluorescent-antibody technique (FA) detected virus in 52% of the tissue cultures in 24 hr, and, by 72 hr, 22 of the 23 cultures were FA-positive whereas only 5 were positive by conventional techniques. Additionally, conjugated antisera were prepared against herpes simplex, influenza A2, and adenovirus type 5. All conjugates stained only the homologous virus and were 100- to 10,000-fold more sensitive than conventional techniques in detecting descending dilutions of virus inocula by 24 hr. With the procedures described, several antisera could be conjugated and ready for use within 24 hr. Serum fractionation was by ammonium sulfate precipitation, and with the procedure outlined virtually complete recovery of the globulin fraction and elimination of all of the albumin were accomplished. Despite the report by Liu (15) in 1956 that influenza infection can be directly demonstrated by immunofluorescence, the fluorescent-antibody technique (FA) has not been widely applied to infections with other respiratory viruses even though it has great potential for rapid and inexpensive diagnosis. One of the most likely reasons for this lack of application is the absence of generally accepted standard procedures for its use. A wide array of methods has been reported for serum fractionation (1,4,13), fluorescein isothiocyanate (FITC) labeling (3,18,21), and purification of the fluorescein-protein conjugates (8,24). Many of the techniques described are fairly complex and time-consuming and are therefore not readily adaptable to a diagnostic laboratory. The purpose of the study reported here was to develop a rapid and relatively simple means of producing high-quality fluorescein-labeled antiserum by selecting and modifying, when necessary, those procedures which best fulfilled these criteria, and then to test the usefulness of these labeled antisera in the rapid detection of viruses in clinical specimens. A practical ammonium sulfate precipitation technique was developed which yields nearly quantitative recovery of albumin-free globulins and, when combined with the rapid conjugation method of Spendlove (21), permits simultaneous preparation of several sera within 24 hr. The sensitivity of such sera in the detection of inocula containing descending dilutions of each of six common respiratory viruses was compared with conventional techniques. Finally, the rapid FA diagnosis of respiratory syncytial (RS) virus infection previously reported by others (9,17) was confirmed, and the technique was extended to parainfluenza types 1 and 3. MATERIALS AND METHODS Antigen preparation. The antigens against which antiserum was prepared were low-passage (no greater than four) strains of RS, parainfluenza types 1 and 3, herpes simplex, and adenovirus type 5. The details of the tissue culture procedures of this laboratory have been described recently (5). Briefly, the tissue cultures (primary rhesus monkey kidney for the parainfluenza viruses, HEp-2 for the others) were grown in 32-oz (ca.
900 ml) prescription bottles by using 50 ml of Eagle's minimal essential medium (MEM) in Earle's salt solution supplemented to 10% with bovine serum. Twenty-four hours before inoculation, the tissue cultures were washed three times with Hanks' balanced salt solution (BSS) and maintained with 45 ml of a mixture of 50% MEM and 50% medium 199 in BSS. For the HEp-2 cells, this medium was supplemented to 3% with agamma calf serum. The tissue cultures were inoculated with sufficient virus suspension (3 to 5 ml) to produce, within 7 days, 75 to 100% cell destruction or a hemagglutinin titer of at least 1:128. Then the medium was removed and centrifuged (International centrifuge type SB, size 1) at 2,500 rev/min for 30 min, and the supernatant fluid was used for animal inoculation. To increase the harvest of adenovirus T5, the tissue culture cells were disrupted by freezing and thawing three times before centrifugation. Influenza A2/Hong Kong/68 was grown in 11-day-old embryonated eggs and harvested at 48 hr. Antisera production. All antisera were prepared in female goats. With the exception of influenza A2 and herpes simplex viruses, the goats were initially inoculated subcutaneously with 5.0 ml of antigen mixed with 5.0 ml of Freund's complete adjuvant and intravenously with 10.0 ml of undiluted antigen. Intravenous inoculations were continued at weekly intervals until a neutralization antibody titer of at least 1:90 was obtained, usually in 3 to 4 weeks. The herpes simplex antiserum was produced in a similar manner but using 2.0 ml of antigen for the initial subcutaneous injection and 2.0 ml for all intravenous injections. Influenza A2 antisera were prepared by intravenous inoculation only, using 5 ml of antigen diluted 1:10 in buffered BSS at weekly intervals for 21 days. Separation and purification of serum globulins. A 542-g amount of (NH4)2SO4 was dissolved to 1 liter in glass-distilled water and cooled to 4 C, and the pH was adjusted to 6.2 to 6.5 with 1 N NaOH. Just before use, the ammonium sulfate solution was warmed in a 56 C water bath until the precipitated crystals dissolved and then was cooled to room temperature. A volume of ammonium sulfate equal to that of the serum being precipitated was added dropwise at room temperature under constant slow stirring. The mixture was placed at 4 C, stirred for an additional 2.5 hr, and then sedimented at 10,000 rev/min for 30 min at 4 C in a Spinco centrifuge using a number 30 head. The supernatant was removed, and the precipitate was redissolved in sufficient glass-distilled water (4 C, adjusted to pH 7.3 with 1 N NaOH) to bring it to the original serum volume. Two additional precipitations were performed as above, except that on the second and third precipitations the ammonium sulfate solution was used at 4 C rather than room temperature and the precipitate was sedimented immediately after the addition of ammonium sulfate. The final precipitate was dissolved in a volume of distilled water approximately one-third the original serum volume. Ten to 15 ml of the globulin solution was placed in a dialysis bag previously boiled for 10 min and dialyzed overnight in 3 liters of 0.85% NaCl (pH 7.3) at 4 C with constant agitation by a magnetic stirrer. Kaufman and Cherry (12) have shown that dialysis for 4 hr, in similar proportions of globulin solution to saline, eliminates interference with biuret determinations or fluorescein labeling by ammonium sulfate contamination.
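For orientation, the equal-volume addition described above corresponds to the classic half-saturation globulin cut, under the assumption (ours, not stated in the paper) that the 542 g/liter stock is essentially saturated at room temperature:

$$S_{\text{final}} \;=\; S_{\text{stock}}\,\frac{V_{\text{AS}}}{V_{\text{AS}}+V_{\text{serum}}} \;\approx\; 100\% \times \frac{1}{1+1} \;=\; 50\%,$$

a concentration at which the globulins precipitate while albumin remains in solution, consistent with the albumin-free fractions reported below.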
To be certain that removal of (NH4)2SO4 was complete, in several instances the saline was changed after 18 hr and dialysis was continued for an additional 6 hr. At 2 and 6 hr, 0.5 ml of the dialysate was added to a similar volume of a saturated solution of barium chloride. In all instances, this test was negative, indicating the removal of significant (NH4)2SO4 contamination. Conjugation with FITC. With minor modifications, the conjugation procedure followed was that described by Spendlove (21): FITC solution was added to produce a ratio of 20 mg of FITC per g of protein as measured by the biuret method. After addition of fluorescein, the pH was immediately raised to 9.5 with 0.04 N NaOH and conjugation was continued at room temperature for 30 min. Unreacted FITC was removed by passage through G-50 medium-grain Sephadex at 4 C. Conjugation and passage through Sephadex usually resulted in a three- to fivefold dilution of the original serum volume. On the whole serum, after fractionation and after conjugation, neutralizing antibody titer (5, 6), total protein concentration, electrophoretic pattern, and fluorescein to protein ratio (F:P), in micrograms per milligram, were determined. Electrophoresis was done on a Beckman Microzone system (model R101) with the Analytrol modified to take Microzone cellulose acetate strips. Total protein and F:P of the conjugates were determined by methods described by Wells et al. (23). Tissue culture preparation. HEp-2 and primary monkey kidney cells were grown on cover slips in Leighton tubes. The cover slips were fixed in acetone at room temperature for 10 min after washing twice with 0.01 M phosphate-buffered saline (pH 7.6; PBS) and air drying. After fixation, the cover slips could be stained immediately or stored for several months at -20 C. Staining. For staining, 0.1 ml of conjugate was applied to the cover slip, incubated for 45 min at 37 C in a moist chamber, and then washed in PBS for 10 min. The cover slips were air-dried and mounted on slides with fluid composed of nine parts glycerol and one part 0.067 M phosphate buffer, pH 8.5. Nonspecific staining (NSS) was eliminated by simultaneously determining the optimal dilution of the conjugate and the concentration of rhodamine (reference 20; FA rhodamine counterstain, Difco) which gave the brightest specific staining without NSS. This was done by making four sets of serial twofold dilutions of conjugate with PBS, to which an equal volume of rhodamine, prepared with PBS and 10% fetal calf serum, was added at 2, 4, 6, or 8% concentrations. Each dilution was then tested against known homologous antigen. This procedure did not eliminate NSS sufficiently in the parainfluenza 1 and 3, RS, and herpes simplex conjugates. Absorption with tissue culture cells of the type on which they were to be used, in the ratio of 2 to 4 ml of conjugate to 1 ml of packed cells for 1 hr at 4 C, successfully removed all remaining NSS. The specificity of each conjugate was tested against a number of heterologous antigens and homologous viruses previously identified by conventional techniques. Positive controls and uninfected cover slip tissue cultures of the same batch used for growing the antigen were included in every test.
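The labeling ratio and counterstain titration above are simple bookkeeping, and a short sketch may make the checkerboard explicit. The protein amount and helper names are illustrative assumptions; the only values taken from the text are the 20 mg/g labeling target, the twofold dilution series, and the 2, 4, 6, and 8% rhodamine stocks (both halved on equal-volume mixing).

```python
# A minimal sketch of the conjugation/titration arithmetic described above.
# The example protein amount and the number of dilutions are assumptions.

def fitc_mass_mg(protein_g, ratio_mg_per_g=20.0):
    """FITC required for the 20 mg FITC per g of protein labeling target."""
    return protein_g * ratio_mg_per_g

def titration_grid(n_dilutions=4):
    """Checkerboard of conjugate dilutions against rhodamine concentrations.

    Adding an equal volume of rhodamine stock doubles the conjugate
    dilution and halves the rhodamine concentration in the final mix.
    """
    rows = []
    for i in range(1, n_dilutions + 1):
        for rho_stock_pct in (2, 4, 6, 8):
            rows.append({
                "conjugate_final": f"1:{2 ** i * 2}",
                "rhodamine_final_pct": rho_stock_pct / 2,
            })
    return rows

print(f"FITC for 0.5 g protein: {fitc_mass_mg(0.5):.0f} mg")  # 10 mg
for row in titration_grid()[:4]:
    print(row)
```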
RESULTS Each of the six fluorescein-labeled antisera was monitored throughout preparation for the amount and types of protein in each fraction, preservation of antibody activity, and the extent of fluorescein-protein binding (Table 1). The ammonium sulfate precipitation procedure described above resulted in complete recovery of the globulin fraction and elimination of all albumin as determined by electrophoresis. It should be noted that, in the last four conjugates in Table 1, the gamma globulin concentrations and neutralizing antibody titers in the (NH4)2SO4 fraction were slightly higher than those present in the original serum. This resulted from concentration of the globulin solution during the fractionation procedure. The conjugation method used produced quite consistent fluorescein labeling of all antisera, with a narrow range of F:P between 11.5 and 14.5 µg/mg. Although attempts were made to produce sera with high levels of antibody, the optimal staining dilutions in the last line of Table 1 did not correlate well with the height of the neutralizing antibody titers. Conjugate dilution in conjunction with rhodamine counterstain and, where necessary, tissue culture cell absorption successfully eliminated all nonspecific staining, and no cross-reaction was seen with any of the heterologous antigens tested. However, in evaluating the parainfluenza 3 conjugate, one batch of uninfected monkey kidney controls showed definite specific staining characteristic of a myxovirus. This conjugate was subsequently tested against cells infected with parainfluenza 2 and simian virus 5, and neither antigen was stained. The identity of this contaminating agent was not definitely determined. The influenza A2 antiserum was prepared with the Hong Kong strain but contained antibody against the soluble antigen and gave equally bright specific staining with all type A strains used, which included A2/Taiwan/1/64 and A2/Japan strains. A further conjugate may also contain antibody to soluble antigen, but we have not as yet determined its capacity for heterotypic staining. To compare the sensitivity of the FA technique with the conventional detection methods of reading cytopathic effect or, in the case of the myxoviruses, testing hemagglutination, tissue cultures were infected with 0.1 ml of descending 10-fold dilutions of laboratory-grown virus inocula (Table 2). The FA technique showed markedly increased sensitivity, detecting virus on day 1 at dilutions 100 to 10,000 times higher than conventional methods. By day 3, all viruses had been detected by FA at the highest dilution done. This is illustrated in Fig. 1, which shows influenza A2(HK) at a high dilution 48 hr after infection. This increased sensitivity was especially evident with RS and adenovirus T5, in which an inoculum of approximately 1 TCID50 was detected within 24 hr. In light of this ability to rapidly detect small virus inocula, the RS and parainfluenza 1 and 3 conjugates were tested by using pharyngeal washings from patients with respiratory infections collected in our earlier epidemiological studies (5). These specimens had previously been shown to contain one of these three viruses. [Table 4 footnotes: a, nine individual garglings from which RSV had been previously isolated in HEp-2; b, not done; c, three tubes were inoculated with each specimen, and in specimen 9 two of the three tubes were contaminated.] Each specimen was inoculated into five cover slip tissue cultures and three regular tissue culture tubes. As shown in Table 3, by 48 hr 50% of the parainfluenza 1 and 100% of the parainfluenza 3 were FA-positive (Fig. 2). On day 3, seven of eight parainfluenza 1 specimens were FA-positive, whereas only 50% of these specimens showed hemagglutination.
Hemadsorption done on the standard tube tissue cultures on days 5 and 9 confirmed that virus was present in all throat specimens tested. With the nine clinical specimens containing RS virus, FA detection was equally rapid (Table 4). The FA-positive tissue cultures on day 1 were very lightly infected, with only several positive cells or nests per monolayer. By 48 hr, however, all specimens were easily read as positive (Fig. 3). Cytopathic effect was apparent on day 3 in only one specimen, with the other specimens becoming positive at various times after that, except for specimens 8 and 9, which were still negative at 14 days. DISCUSSION The techniques described above allow the production of specific, sensitive fluorescein-labeled antiserum within 24 hr, and several sera can be prepared simultaneously. Rhodamine counterstain and tissue culture cell absorption are quite effective in eliminating NSS, and this latter procedure does not result in the loss of conjugate volume or antibody which occurs with tissue powder absorption (16). The quantitative recovery of serum globulins free from all albumin by ammonium sulfate precipitation was an unexpected but desirable finding which has not been previously reported. In a careful study of serum fractionation procedures, Lewis et al. (14) found that globulin yield could be increased from 60 to 80% by increasing the molarity of (NH4)2SO4. Labeled albumin also increases problems with NSS (4). Before adopting the procedure outlined, our serum fractions by ammonium sulfate precipitation consistently showed loss of globulin protein as well as 4 to 8% albumin contamination. The changes made in our original procedure consisted of raising the pH of the ammonium sulfate to 6.5 and then increasing the molarity of the solution by dissolving all of the ammonium sulfate at 56 C and using it at room temperature for the first precipitation. In addition, the time of reaction after the first precipitation was increased from 1 to 2.5 hr at 4 C, and the pH of the distilled water used to redissolve the globulin was raised to 7.3. These changes were made simultaneously and have not yet been studied systematically, so that the factor or factors responsible for the improved results are unknown. We have subsequently fractionated rabbit serum by the same technique, with results exactly like those reported here for goat serum. From our experience, the immunofluorescent technique is immediately applicable to the identification of virus agents previously isolated in tissue culture. Numerous homologous and heterologous viruses, previously identified by conventional techniques, were stained without any false-positive or false-negative results. In a small confirmatory study, 25 myxoviruses previously isolated during epidemiological investigations (5) were submitted as unknowns for FA identification. The 10 parainfluenza 1 and 12 parainfluenza 3 strains were correctly identified, and the three influenza B viruses included were negative to both conjugates. Positive results were easily distinguished, with no suggestion of cross-reactions. Identification takes only 24 hr, which is considerably faster than neutralization tests and simpler than identification by hemagglutination-inhibition. The only difficulty encountered in identification was, as mentioned above, the staining of an unknown agent in one batch of uninfected monkey kidney controls with the parainfluenza 3 conjugate.
The identity of this virus was not determined, but it is possible that the agent stained was actually parainfluenza 3, since Hsiung (11) showed that 26% of rhesus monkeys studied manifested seroconversion to parainfluenza 3 during the holding period. This experience again confirms the necessity of including, in every test, uninfected controls of the same batch of tissue culture used to grow the virus. A more important potential of this technique than mere identification, however, is the rapid detection of viruses from clinical specimens, either directly or after inoculation into tissue culture. Of the 23 throat washings inoculated in this study, virtually all of the viruses could be detected and identified within 48 hr after culture, and half were positive by 24 hr. This confirms other reports (17,19) with RS virus and adds parainfluenza 1 and 3 as capable of rapid detection. These three viruses account for a large number of the serious lower respiratory tract viral infections in young children (2), and, in many cases of croup, bronchiolitis, and pneumonia, it would be feasible to have an etiologic diagnosis in the same time that bacterial cultures take, using only these three conjugates. Gray et al. (9) reported good success in the immediate diagnosis of RS infections by staining smears of pharyngeal mucosal cells, much as has been done with influenza (10,15,22). However, the specificity of their data has been questioned (7), and more experience with the staining of nasopharyngeal smears for RS virus must be accumulated. In the absence of specific antiviral therapy, there has not been much urgency among clinicians for rapid diagnosis of viral infections. The advent of such therapy seems likely in the near future, and this will create the same diagnostic needs as with bacterial infections. Additional work with the application of the immunofluorescent technique in the detection and identification of respiratory viruses is necessary to fully define its usefulness. We feel, however, from our own work and from that reported by others, that this method is feasible and holds great potential for making diagnostic virology simpler and more rapid.
2020-12-10T09:04:16.443Z
1970-08-01T00:00:00.000
{ "year": 1970, "sha1": "44a061345fd6c085d75476fa79864793db0787be", "oa_license": null, "oa_url": "https://aem.asm.org/content/aem/20/2/233.full.pdf", "oa_status": "GOLD", "pdf_src": "ASMUSA", "pdf_hash": "20fcd0cae2468db9f1d1ecedc9da48df59776a73", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
234203436
pes2o/s2orc
v3-fos-license
Near‐Infrared Emission by Tuned Aggregation of a Porphyrin Compound in a Host–Guest Light‐Emitting Electrochemical Cell The synthesis of 5,10,15,20‐tetrakis((5,10‐bis((2‐hexyldecyl)oxy)dithieno[3,2‐c:3′,2′‐h][1,5]naphthyridin‐2‐yl)ethynyl)porphyrin zinc(II) (Por4NT), a near‐infrared (NIR) emitting compound, comprising a zinc porphyrin core linked with triple bonds through its meso positions to four 5,10‐bis((2‐hexyldecyl)oxy)dithieno[3,2‐c:3′,2′‐h][1,5]naphthyridine (NT) arms is reported. Por4NT featured high solubility in common non‐polar solvents, which is ideal for easy processing through solution techniques, and high photoluminescence (PL) efficiency of ≈30% in dilute toluene solution. It also exhibited a strong tendency for aggregation because of its flat conformation, and this aggregation resulted in a strongly redshifted emission and a drop in PL efficiency. A well‐matched PBDTSi‐BDD‐Py "host" terpolymer is therefore designed, which is capable of mitigating the aggregation of the Por4NT "guest". An optimized blend of the host, guest, and an ionic‐liquid electrolyte is utilized as the active material in a light‐emitting electrochemical cell (LEC), which delivered strong NIR radiance of 134 µW cm⁻² with a long wavelength maximum at 810 nm at a low drive voltage of 5.0 V. The attainment of the strong NIR emission from the host–guest LEC is attributed to a tuned aggregation of the Por4NT emitter, which resulted in the desired aggregation‐induced redshift of the emission at a reasonably retained efficiency. Introduction Materials that emit in the near-infrared (NIR) range are critical for the realization of a wide range of applications in, for example, optical communication, security authentication, and medicine, [1] with the latter applications in part enabled by the NIR window between 700-1000 nm of biological tissue. [2] Soluble organic NIR emitters are of particular interest for such applications since they can allow for cost-efficient solution-based fabrication of flexible devices. [3] However, organic NIR emitters have a drawback in that they typically exhibit a significantly lower emission efficiency than the corresponding visible emitters, [2] because of the so-called energy-gap law and their tendency to aggregate into low-emissive H-aggregates. [4] The energy-gap law states that the probability of non-radiative transitions increases with increasing wavelength (decreasing energy), due to a concomitant increase in overlap
between the lower vibrational levels of the emissive first excited singlet/triplet state (S1/T1) and the higher vibrational levels of the singlet ground state (S0). [5] The formation of face-to-face H-aggregates results in the creation of two excited states with different energies, with the probability of radiative transition from the lower-energy state being very low. [4,6] A practical approach to mitigate the issues pertaining to aggregation and to improve the emission efficiency is to disperse the emitter in a host matrix. [7] Two devices that utilize organic emitters for the practical electroluminescent (i.e., "cold" emission) conversion of electric current to light emission are the organic light-emitting diode (OLED) and the light-emitting electrochemical cell (LEC). The LEC is distinguished from the OLED by the existence of mobile ions in the active material, which allow for in situ electrochemical doping of the electroactive compound (often the emitter) during operation. This process eventually results in the formation of a light-emitting p-n junction in the active material. [8] This particular LEC operation is of interest since it has paved the way for the fabrication of light-weight, [9] flexible, [10] stretchable, [11] fiber-shaped, [12] and large-area LEC devices [13] at very low cost [14] using scalable printing and coating methods. [10b-d,13a,15] Recently, it was also demonstrated that well-designed host-guest LEC devices can deliver strong luminance at high efficiency. [16] A recent review by Pilkington et al. [19] nicely summarized the current status of NIR-emitting LEC devices. The majority of NIR-emitting LECs to date comprise ionic transition metal complexes (iTMCs) based on Ru [17] and Ir [7c] as the emitter, although NIR-emitting LECs based on Os [18] have also been reported. In 2008, Xun et al. [20] reported a series of Ru-based NIR emitters, which delivered long-wavelength NIR emission at ≈880-900 nm in LEC devices, but which also suffered from a low peak radiance of <10 µW cm⁻². Ho and co-workers [21] employed an Ir complex and a laser dye in a host-guest LEC, but this device also delivered a very low radiance output of <10 µW cm⁻². More recently, a set of NIR-LECs comprising Ir-based iTMCs as the emitter was reported, which featured NIR emission from excimers. These devices exhibited high radiance (143-303 µW cm⁻²) and EQE (0.26-0.57%) for an EL peak wavelength >800 nm. [22] However, for many applications it is preferable to employ emitters that are free from expensive and rare metals in the Pt group, [23] and in this context Pertegas et al. [24] reported LECs comprising two cyanine dyes as the host-guest blend, which featured a radiance of 170 µW cm⁻² at a peak wavelength of ≈700 nm and an EQE of 0.44%. Tang and co-workers [25] reported the synthesis of a donor-acceptor copolymer, which was employed as the single emitter in a NIR-LEC that delivered a radiance of 129 µW cm⁻² at a peak wavelength of 705 nm and a low drive voltage of 3.4 V.
Finally, Murto et al. [13b] incorporated a set of designed polymeric emitters in host-guest LECs, which exhibited a high radiance of 1500 µW cm⁻² at a peak wavelength of 725 nm. Porphyrins are organic compounds employed by nature for a number of different tasks, notably for photosynthesis [26] and for oxygen transport in the bloodstream. [27] They have also been employed as emitters in OLEDs. [28] The optical properties of porphyrins can be modified to achieve emission in the NIR region via the selection of the central metal atom and by the modification of the chemical constituents attached to the meso and β positions of the porphyrin core. [4,29] Recently, we reported the synthesis of a star-shaped diketopyrrolopyrrole-substituted Zn porphyrin that delivered deep NIR emission with a peak wavelength of 900 nm when introduced in an LEC device. However, the peak radiance and the external quantum efficiency (EQE) attained were quite low at 36 µW cm⁻² and 0.028%, respectively. [30] In this work, we report the synthesis and characterization of a NIR-emissive 5,10,15,20-tetrakis((5,10-bis((2-hexyldecyl)oxy)dithieno[3,2-c:3′,2′-h][1,5]naphthyridin-2-yl)ethynyl)porphyrin zinc(II) (Por4NT) and a compatible conjugated polymer (PBDTSi-BDD-Py) with a well-matched larger energy gap. Through systematic experimentation and modeling, we established that Por4NT has a strong tendency to form aggregates both in solution and in the solid state, and that this aggregation is manifested in a strong redshift and decreased efficiency of the emission. However, we found that PBDTSi-BDD-Py is capable of efficiently alleviating this aggregation, in particular because of the introduced pyridine moiety in its backbone. By optimizing a Por4NT:PBDTSi-BDD-Py:electrolyte blend for the active material in an LEC device, we obtained a strong NIR radiance of 134 µW cm⁻² (peak wavelength = 810 nm) with an EQE of 0.121% at a low drive voltage of 5.0 V. Zn was selected as the porphyrin core metal over, e.g., Pt or Pd, because of its abundance, low toxicity, and low price. [31] The choice of the large fused tetracyclic NT ring moiety was motivated by its extended conjugation and planarity, which was anticipated to result in redshifted emission, significant solid-state packing, and high charge-carrier mobility. Each NT unit was further decorated with two long and branched (2-hexyldecyl)oxy side chains to endow the porphyrin compound with high solubility in common organic solvents for facile solution processing. Thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) of solid Por4NT (Figure S1, Supporting Information) revealed a high decomposition temperature of 300 °C under inert atmosphere, and no thermal transition between −50 and 250 °C. Results and Discussion Scheme S1, Supporting Information, shows the synthesis of the NT monobromide units following the procedure outlined by Kroon et al. [32] The porphyrin core of the guest emitter, Por4NT, was synthesized by condensation of 3-(triisopropylsilyl)propiolaldehyde with pyrrole followed by oxidation, as described by Lindsey. [33] The porphyrin core was further metallated by reaction with zinc acetate dihydrate to afford ZnP-TIPS4. [30] Desilylation of ZnP-TIPS4 with TBAF followed by Sonogashira coupling of the intermediate with NT monobromide afforded Por4NT, as depicted in Scheme S2, Supporting Information. Previously, we reported the design, synthesis, and functional application of PBDTSi-BDD.
[30] PBDTSi-BDD-Py was prepared by the replacement of 20% of the 1,3-bis(5-bromothiophen-2-yl)-5,7-bis(2-ethylhexyl)-4H,8H-benzo[1,2-c:4,5-c′]dithiophene-4,8-dione (BDD) moieties in PBDTSi-BDD with pyridine units. The presence of pyridine units in the PBDTSi-BDD-Py terpolymer was expected to facilitate axial coordination to the zinc metal core of Por4NT through the lone pair of electrons on the nitrogen atom, leading to improved solubility and dispersity of the guest in the host matrix. [34] In addition, pyridine derivatives are commonly good electron transport materials because of their electron-withdrawing nature. [35] Moreover, the presence of pyridine reduces the polymer regularity and thereby renders the polymer backbone more flexible, which is expected to allow for more facile ion migration and electrochemical doping in the active material during LEC operation. [36] The third host polymer, F8BT, was included in this study because it is a common benchmark host material that is frequently employed in high-performance host-guest light-emitting devices. [37] The host polymer PBDTSi-BDD-Py was synthesized by the Stille coupling reaction between ((2,6-bis(trimethylstannyl)benzo[1,2-b:4,5-b′]dithiophene-4,8-diyl)bis(thiophene-5,2-diyl))bis(tributylsilane) (BDTSi-Sn2), the 1,3-bis(5-bromothiophen-2-yl) BDD monomer, and the pyridine comonomer. Density functional theory (DFT) calculations were employed to study the molecular conformations and the electronic transitions of Por4NT. Figure 2a presents a side view (upper part) and top view (lower part) of the ground-state conformation (S0, red structure) and the first-excited-state conformation (S1, green structure) of Por4NT. The DFT data reveal that Por4NT adopts a highly coplanar conformation in both the ground state and the excited state. The modeling was performed with isobutoxy side chains of a C4h point group conformation, but the complementary systematic DFT study presented in Figures S2 and S3 shows that the detailed conformation (C4h and D2h point groups) and the exact length of the side chains (methoxy, isobutoxy, 2-ethylbutoxy, and 2-propylpentyloxy) have a negligible influence on the optimized ground-state conformation and the excited-state energy. Figure 2b presents the DFT-calculated natural transition orbitals (NTOs) of the lowest-energy electronic transition. We find that the hole and the electron distribution are both localized at the center part of the molecule, and that they are strongly overlapping in space. It is thus anticipated that the localized excited state is effectively protected from the surroundings by the bulky side chains, which in turn is expected to lead to a high yield of emissive excitons. [38] This hypothesis is further supported by the observation that the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) distributions in the ground state were also calculated to be co-localized at the central part of the molecule (Figure S4, Supporting Information). The DFT further predicted that the vertical excitation energy of Por4NT is 1.92 eV, and that the energy of the geometry-optimized S1 state, that is, the adiabatic excitation energy, is 1.78 eV. Figure 3a presents the absorption spectrum of the Por4NT emitter at different concentrations in toluene solution and as a thin film. Por4NT exhibited two absorption maxima at ≈515 nm and ≈705 nm (see Table 1), which are attributed to the S0-S2 ("the Soret band") and the S0-S1 ("the Q band") transitions, respectively.
[39] We find that the Soret band is slightly blue-shifted with increasing Por4NT concentration, while the Q-band is slightly redshifted. In the solid Por4NT film, the Soret band peaked at 495 nm and the Q band at 724 nm. The optical energy gap of Por4NT was calculated from the onset of absorption in the solid state to be 1.61 eV. Figure S5a, Supporting Information, presents a plot of the absolute value of the molar extinction coefficient of the Por4NT solution as a function of wavelength; it peaked at a high value of ≈5 × 10⁵ M⁻¹ cm⁻¹ in the Soret band region and reached ≈1 × 10⁵ M⁻¹ cm⁻¹ in the Q band region. Figure 3b displays the normalized photoluminescence (PL) spectrum of Por4NT at concentrations ranging from 1 × 10⁻⁷ M to 1 × 10⁻⁴ M in toluene. Two PL peaks at ≈730-740 nm and ≈810-820 nm are observed for all concentrations, but the relative intensity of the longer-wavelength peak increases strongly with the Por4NT concentration. We also note that the longer-wavelength peak is somewhat redshifted with increasing concentration, from 809 nm at 1 × 10⁻⁷ M to 823 nm at 1 × 10⁻⁴ M. A comparison with the absorption data in Figure 3a reveals that the Stokes shift of the S0-S1 transition is low at ≈20 nm (=44 meV). Table 1 further reveals that the PL quantum yield (PLQY) drops significantly with increasing Por4NT concentration, from 29.3% at 1 × 10⁻⁷ M to 3.8% at 1 × 10⁻⁴ M. Notably, we could not detect any measurable PL from the solid Por4NT film. Our observations from the emission studies suggest that Por4NT has a strong propensity for aggregation both in solution and in the solid state. [40] This behavior is in agreement with the conformational DFT results depicted in Figure 2a, which revealed that Por4NT prefers to adopt a highly flat conformation in both the ground state and the excited state. Accordingly, we suggest that the aggregation of Por4NT takes the form of flat-on packing. We thus assign the higher-energy PL peak at ≈730-740 nm to the emission from "aggregation-free Por4NT" molecules, and the lower-energy PL peak at ≈810-820 nm to "partially aggregated Por4NT" molecules. It is worth noting that solid films of the "fully aggregated Por4NT" molecules do not exhibit measurable PL. We also note that the comparatively concentration-invariant absorption (Figure 3a) suggests that the influence of aggregation is most prominent in the emission. The three host polymers were optically characterized as regards their capacity to function as efficient hosts for the Por4NT guest. Figure S5b, Supporting Information, presents the normalized absorption spectra of thin films of the host polymers, which show absorption maxima at 465, 600, and 560 nm for F8BT, PBDTSi-BDD, and PBDTSi-BDD-Py, respectively. The optical energy gap, as derived from the onset of absorption, is 2.32 eV for F8BT, 1.83 eV for PBDTSi-BDD, and 1.84 eV for PBDTSi-BDD-Py. Figure 4a presents the normalized PL spectra of the three host polymers and the absorption spectrum of the Por4NT guest emitter. F8BT exhibited the highest-energy emission with the PL peak at 558 nm. PBDTSi-BDD featured the lowest-energy emission with its PL peak at 748 nm, whereas the PL peak of PBDTSi-BDD-Py is positioned at 690 nm. The solid-state PLQY was measured to be 4.1% for F8BT, 5.2% for PBDTSi-BDD, and 7.6% for PBDTSi-BDD-Py.
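The nanometer-to-electronvolt conversions quoted above follow from E = hc/λ; a quick check (the ≈770 nm onset is inferred here from the stated 1.61 eV gap, and λ ≈ 750 nm is taken as a representative Q-band wavelength, neither being a number given explicitly in the text):

$$E_g = \frac{1240\ \mathrm{eV\,nm}}{\lambda_{\mathrm{onset}}} = 1.61\ \mathrm{eV} \;\Rightarrow\; \lambda_{\mathrm{onset}} \approx 770\ \mathrm{nm}, \qquad \Delta E_{\mathrm{Stokes}} \approx \frac{1240\,\Delta\lambda}{\lambda^{2}} \approx \frac{1240 \times 20}{750^{2}}\ \mathrm{eV} \approx 44\ \mathrm{meV},$$

consistent with the ≈20 nm (44 meV) Stokes shift quoted above.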
The incorporation of the pyridine moiety in the backbone of the terpolymer PBDTSi-BDD-Py resulted in a significant blueshift of both the absorption and the emission, as well as an increase in the PL efficiency, in comparison to the copolymer PBDTSi-BDD. Time-resolved PL (TRPL) measurements were also conducted with an excitation wavelength of 400 nm, and the lifetimes (τ) of the neat host films and the host-guest blend films with a guest concentration of 7 mass% were measured. The normalized spectra are presented in Figure S8, Supporting Information, and the lifetime was estimated by the 1/e method, considering the instrumental response function. Table S2, Supporting Information, presents the derived rates of the radiative and the non-radiative processes using the values for the PLQY and the PL lifetimes. Note that the non-radiative processes in blend polymers can include carrier dissociation processes after photogeneration. The TRPL data show that the neat host films exhibit very short lifetimes on the order of picoseconds, and that the lifetime is even shorter for the blend films. This indicates that the energy transfer process from host to guest is very fast. It also confirms that the emission is in the form of short-lived fluorescence and not long-lived phosphorescence. Importantly, we find that the PL spectrum of F8BT exhibits a partial overlap with the higher-energy Soret-band absorption of the Por4NT guest emitter, whereas the PL spectra of PBDTSi-BDD and PBDTSi-BDD-Py feature almost perfect overlap with the lower-energy Q-band absorption of Por4NT. This observation indicates that efficient Förster resonance energy transfer from host to guest can take place for all three host-guest blends. A more direct investigation of the merits of host-to-guest energy transfer is provided by Figure 4b, which presents the PL spectra of thin films of the host-guest blends. With F8BT as the host polymer, two significant PL peaks are observed, with the one at 560 nm being due to the F8BT host and the one at 821 nm originating from the Por4NT guest; this shows that the host-to-guest energy transfer for the F8BT system is not complete. A more complete host-to-guest energy transfer is observed for the other two host-guest systems, with the major PL peak being positioned at 820 and 817 nm for the PBDTSi-BDD:Por4NT and PBDTSi-BDD-Py:Por4NT blends, respectively. This majority peak originates from the Por4NT guest emitter, while only a very weak remnant higher-energy PL emission is observed from the host. More specifically, for the PBDTSi-BDD:Por4NT blend the remnant host PL corresponded to ≈15% of the total PL, while for PBDTSi-BDD-Py:Por4NT it was even lower at ≈11%. Finally, we also measured the solid-state PLQY of the host-guest blends, and it was 7.6% for F8BT:Por4NT, 3.7% for PBDTSi-BDD:Por4NT, and 4.0% for PBDTSi-BDD-Py:Por4NT.
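For completeness, the radiative and non-radiative rates referred to in Table S2 follow from the standard single-exponential decomposition of the measured PLQY and lifetime (a textbook relation, not a formula given in the paper):

$$\Phi_{\mathrm{PL}} = \frac{k_r}{k_r + k_{nr}}, \qquad \tau = \frac{1}{k_r + k_{nr}} \;\;\Longrightarrow\;\; k_r = \frac{\Phi_{\mathrm{PL}}}{\tau}, \qquad k_{nr} = \frac{1-\Phi_{\mathrm{PL}}}{\tau}.$$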
The electrochemical properties of the three host polymers and the Por4NT guest molecule are critical for the operation of LEC devices, and they were investigated by cyclic voltammetry (CV). Figure 5a shows the CV traces recorded on solid-state thin films, which reveal that all three host polymers exhibit highly reversible electrochemical oxidation and reduction reactions. The CV trace for the Por4NT guest indicates a highly reversible oxidation reaction, whereas the reduction reaction is less reversible, presumably because the reduced Por4NT is highly reactive in the CV solution. [41] We tentatively assigned the observed oxidation and reduction reactions to the conductivity-enhancing processes of p-type doping and n-type doping, respectively, and more support for this assignment is given below. The HOMO and LUMO energy levels could be estimated from the measured onset potentials for oxidation (Eox) and reduction (Ered) in CV, using the equations HOMO = −(Eox + 5.13) eV and LUMO = −(Ered + 5.13) eV. Figure 5b presents a summary of the derived HOMO and LUMO levels, which are −5.66 and −3.80 eV, respectively, for the Por4NT guest. The electrochemical energy gap of Por4NT is accordingly 1.86 eV, which is higher than the measured optical gap of 1.61 eV. The derived HOMO values of the PBDTSi-BDD and PBDTSi-BDD-Py host polymers are −5.82 and −5.87 eV, while their LUMOs are positioned at −3.48 and −3.34 eV, respectively. The electrochemical energy gaps are thus 2.34 and 2.53 eV for PBDTSi-BDD and PBDTSi-BDD-Py, respectively, which are larger than their corresponding optical energy gaps of 1.83 and 1.84 eV. The LUMO level of F8BT is positioned in between those of the other two host polymers at −3.44 eV, whereas the HOMO is much deeper at −6.17 eV. This translates into the largest electrochemical energy gap of 2.73 eV for F8BT. We note that the energy levels of the Por4NT guest are located within the energy gap of all three host materials. This implies that both electrons and holes will be trapped on the Por4NT guest, which is beneficial for the host-to-guest energy transfer during electrical driving of host:guest blends in devices. [37a] We now turn our attention to the performance of LEC devices based on the new host and guest materials. For the electrolyte in the active material, we selected the ionic liquid tetrahexylammonium tetrafluoroborate (THABF4, mass concentration = 5%), because its broad electrochemical stability window is expected to inhibit undesired electrolyte-induced side reactions. [42] The LEC devices were fabricated in an indium-tin-oxide (ITO)/poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS)/active material/Al architecture. We find that all three device types feature a decreasing voltage and an increasing radiance during the initial operation at a constant current density of 75 mA cm⁻². This is the characteristic behavior of a well-behaved LEC device that features in situ conductivity- and injection-enhancing p-type and n-type doping at the two electrode interfaces, [43] and this observation thus yields further support to our previous conclusion that all three polymer hosts are functional LEC materials that can be both p- and n-type doped; see Figure 5 and related discussion. We note that the minimum drive voltage is lowest at 3.9 V with the host polymer being PBDTSi-BDD and highest at 6.9 V for the F8BT polymer, which is in agreement with the electron/hole trap depths at the guest molecules being shallowest in the former system and deepest in the latter; see Figure 5b. A comparison between the host-only devices (Por4NT concentration = 0%) and the host-guest devices in Figures 6d-f reveals that all host-guest LECs feature a significant emission contribution from the host polymer (at ≈550 nm for F8BT, ≈670 nm for PBDTSi-BDD, and ≈660 nm for PBDTSi-BDD-Py), but that this host contribution, as expected, decreases with increasing Por4NT guest concentration.
We also find that the overall NIR performance of the F8BT-NIR-LEC is inferior to that of the other two host-guest LECs (see also Table 2), and we therefore focused on the two new polymer hosts for the subsequent optimization and analysis. This optimization showed that the best device performance was attained with a 7 mass% concentration of the Por4NT guest dispersed into the PBDTSi-BDD-Py host (and the THABF4 electrolyte). More specifically, Table 2 shows that this device delivers a strong peak radiance of 134 µW cm⁻² at an EQE of 0.121% and a drive voltage of 5.0 V. Importantly, a vast majority of this radiance (≈96%) is delivered in the NIR range above 700 nm, with the peak emission wavelength being 810 nm. Figure 6f shows that the fraction of NIR light is similar for the Por4NT guest concentration of 10 mass%, but Table 2 reveals that the drawback is a significantly lowered radiance and efficiency and an increased voltage. The steady-state EL spectra of all of the host-guest LECs are displayed in Figure S7, Supporting Information. The observation that the optimized PBDTSi-BDD-Py:Por4NT host-guest LEC emits efficient NIR light at 810 nm is interesting, since the PL data presented in Figures 3b and 4 strongly suggest that this emission originates from the "partially aggregated Por4NT" molecules. Remember that "fully aggregated Por4NT" molecules in a solid film do not feature any measurable emission in PL, and that isolated "free Por4NT" molecules emit at much shorter wavelengths. The device results also show that a Por4NT guest concentration above 7% results in a significant drop in radiance and efficiency, which suggests that the threshold between "partial aggregation" and "complete aggregation" for good device performance is positioned at a guest concentration of 7%. Further support for this conclusion is provided by the AFM study displayed in Figure S9, Supporting Information, which shows that minor surface aggregation can be visibly observed in the PBDTSi-BDD-Py:Por4NT active-material film at a Por4NT guest concentration of ≈7 mass%. We have also fabricated and characterized OLED devices, with PEDOT:PSS and Ca as the two electrodes, and Table S1, Supporting Information, shows that the host-guest LECs outperform the corresponding host-guest OLEDs by a factor of two for both the peak radiance and the efficiency, while the drive voltage is essentially the same. To summarize, we report on the design and synthesis of a new NIR porphyrin-based Por4NT emitter, which features a high solubility in common non-polar solvents and a high PL efficiency of ≈30% in dilute toluene solution. Por4NT exhibits a strong tendency to form aggregates because of its flat conformation, and this aggregation results in a strong redshift and a drop in efficiency of the emission. We therefore designed and synthesized a compatible PBDTSi-BDD-Py "host" terpolymer, which is capable of inhibiting the aggregation of the Por4NT "guest". An optimized blend of the host, guest, and an ionic-liquid electrolyte was utilized as the active material in an LEC, and such optimized host-guest LECs delivered a strong NIR radiance of 134 µW cm⁻² with a long peak wavelength of 810 nm at a low drive voltage of 5.0 V. We attribute the attainment of the strong NIR emission from the host-guest LEC to a tuned partial aggregation of the Por4NT emitter, which results in the desired aggregation-induced redshift of the emission at a reasonably retained efficiency.
These results suggest that the future design of efficient porphyrin-based NIR emitters could consider a tuning of the aggregation through the introduction of bulky substituents at the porphyrin periphery and through the synthesis of emitter-compatible host materials. Experimental Section Material Characterization: Nuclear magnetic resonance spectra were recorded on a Bruker 600 MHz instrument in chloroform-d and pyridine-d5. For the host materials, size exclusion chromatography (SEC) was performed on an Agilent PL-GPC 220 integrated high-temperature GPC/SEC system with refractive index and viscometer detectors and three sequential PLgel 10 µm MIXED-B LS 300 × 7.5 mm columns. The eluent was 1,2,4-trichlorobenzene and the operating temperature was 150 °C. The molecular weights were calculated relative to calibration with polystyrene standards. The absorbance spectra were measured using a Varian Cary 50 UV/Vis spectrophotometer in a 10 × 10 mm² quartz cuvette. PL and PLQY measurements were carried out with a C9920 Hamamatsu absolute PLQY spectrometer. AFM measurements were performed in tapping mode using a MultiMode SPM microscope equipped with a Nanoscope IV Controller (Veeco Metrology) on solid-state thin films. TGA was conducted using a Mettler Toledo TGA/DSC 3+ STAR System under a N2 atmosphere at a heating rate of 10 °C min⁻¹. DSC measurements were carried out on a Mettler Toledo DSC 2 STAR System under nitrogen atmosphere, using a heating/cooling rate of 10 °C min⁻¹, over a temperature range of −80 to 380 °C for PBDTSi-BDD, −80 to 350 °C for PBDTSi-BDD-Py, and −80 to 300 °C for Por4NT. Cyclic Voltammetry (CV): CV measurements were done on a CH-Instruments 650A Electrochemical Workstation in a three-electrode cell using a Pt wire as the working electrode, a Pt wire as the counter electrode, and a Ag wire as the quasi-reference electrode, calibrated using the ferrocene/ferrocenium (Fc/Fc⁺) redox couple. A 0.1 M solution of tetrabutylammonium hexafluorophosphate (Bu4NPF6) in anhydrous acetonitrile was the electrolyte, which was bubbled with nitrogen before each measurement. The compound under study was deposited as a thin film onto the working electrode from a chloroform solution. The oxidation and reduction scans were measured separately at a scan rate of 100 mV s⁻¹, using a minimum of four measurements for each material to ensure repeatability. The HOMO and LUMO levels were derived from the first oxidation and reduction onset potentials (Eox and Ered) by setting the Fc/Fc⁺ oxidation onset potential versus the normal hydrogen electrode (NHE) to 0.63 V and the NHE to −4.50 V on the Fermi vacuum scale, using the equations HOMO = −(Eox + 5.13) eV and LUMO = −(Ered + 5.13) eV. Device Fabrication and Characterization: All materials were dissolved separately in anhydrous chlorobenzene. The concentration of the Por4NT guest was 10 mg mL⁻¹ while that of the polymer hosts was 15 mg mL⁻¹. For the host-guest blends, the host-polymer and guest-emitter solutions were blended in the desired mass ratio. The active-material inks were prepared by adding the tetrahexylammonium tetrafluoroborate (THABF4) ionic liquid into the host-guest blend at 5 mass%. The LEC devices were fabricated by sequentially spin-coating a poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS, Clevios P VP AI 4083, Heraeus) ink at 4000 rpm for 60 s and the active-material ink at 2000 rpm for 60 s onto carefully cleaned indium-tin-oxide (ITO) coated glass substrates (20 Ω per square, Thin Film Devices, US).
The thickness of the dry PEDOT:PSS film was 40 nm, while that of the active-material film was 80 nm. A set of four Al electrodes was deposited on top of the active material by thermal evaporation at p < 5 × 10⁻⁴ Pa. The light-emission area, as defined by the cathode-anode overlap, was 0.2 × 0.2 cm². The LECs were driven by a constant-current circuit, and the voltage was logged by a microcontroller board (Arduino UNO) connected to a computer. The ITO electrode was invariably biased as the positive anode and Al as the negative cathode. The emitted radiance was measured with a calibrated Si photodiode (S2387-33R, Hamamatsu), and the emission spectrum was detected with a spectrometer (USB2000+, Ocean Optics). All of the above procedures, except for the deposition of the PEDOT:PSS layer, were carried out in two interconnected N2-filled glove boxes ([O2] < 1 ppm, [H2O] < 0.5 ppm). Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
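As a rough consistency check on the reported operating point (134 µW cm⁻² at 810 nm under 75 mA cm⁻²), the EQE can be re-derived from the radiance and drive current. The sketch below ignores outcoupling and detector-geometry corrections, so the close agreement with the reported 0.121% should be read as indicative only.

```python
# Re-derive the LEC external quantum efficiency from the reported
# radiance and drive current; geometry/outcoupling factors are ignored.
E_CHARGE = 1.602e-19  # elementary charge, C
H = 6.626e-34         # Planck constant, J s
C = 2.998e8           # speed of light, m/s

def eqe(radiance_w_cm2, wavelength_nm, current_a_cm2):
    """Emitted photons per injected electron."""
    e_photon = H * C / (wavelength_nm * 1e-9)   # photon energy, J
    photon_flux = radiance_w_cm2 / e_photon     # photons / (s cm^2)
    electron_flux = current_a_cm2 / E_CHARGE    # electrons / (s cm^2)
    return photon_flux / electron_flux

# Reported operating point: 134 uW/cm^2 at 810 nm and 75 mA/cm^2.
print(f"EQE ~ {100 * eqe(134e-6, 810, 75e-3):.3f}%")  # ~0.117%, vs 0.121% reported
```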
2021-05-11T00:03:57.776Z
2021-01-14T00:00:00.000
{ "year": 2021, "sha1": "4ae00608fa92af23db2142bb72c6140454dd7328", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adom.202001701", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "17994902c2386f0e5f8b8b8af79d574000c62ed2", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
9687290
pes2o/s2orc
v3-fos-license
More about Kaluza-Klein Regularization

We study the so-called "Kaluza-Klein regularization". We calculate one-loop corrections to the Higgs mass due to Kaluza-Klein modes explicitly in a model with SUSY breaking mass splitting between bosonic and fermionic modes. We perform the proper time cutoff at $1/\Lambda^2$ and the KK level truncation at $\ell$. It is shown that the finite result is obtained as long as $\ell \gg R\Lambda$ for the compactification radius $R$.

Recently, much effort has been devoted to studying supersymmetry (SUSY) breaking in extra dimensions, as well as other phenomenology in extra dimensions, and novel aspects have been found. Among many interesting aspects, in Refs. [1]-[5] it has been found that one-loop radiative corrections to the Higgs mass are finite in models with the Scherk-Schwarz SUSY breaking mechanism [6,7] and also in models with a localized SUSY breaking 3-brane. In such perturbative calculations, the so-called "Kaluza-Klein (KK) regularization" is used, that is, the infinite summation over all KK modes is taken first. However, doubts about the validity of the KK regularization have arisen [8,9]. It is claimed that the exchange of the infinite sum and the infinite integral would lead to an incorrect result. In Ref. [8], the momentum cut-off Λ is introduced and the KK tower is truncated at the level ℓ. In addition, it is also claimed that physically the levels that contribute should be truncated around the momentum cut-off Λ. In Ref. [8], such a physical truncation is realized by a sharp cut, ℓ ≈ ΛR, where R is the compactification radius. Then, a quadratic divergence is found to appear in the Higgs mass corrections due to the finite number of KK modes. Moreover, there is a debate that the result of Ref. [8] may be an artifact of the sharp cut of the KK tower, and that such a procedure spoils symmetries of the underlying theory. Instead of such a truncation, in Ref. [10] the suppression by a Gaussian brane distribution is considered. There the infinite number of KK modes is summed, but the couplings of higher modes are suppressed by the finite width of the brane. Then the result is the same as the KK regularization, that is, the Higgs mass correction is finite. However, it is also pointed out that there are other distributions leading to a linear sensitivity to the momentum cut-off Λ. Also, the Pauli-Villars regularization is discussed in Ref. [11].

In this letter, we consider the grounds for finiteness of the KK regularization. We calculate explicitly the correction to the Higgs mass due to the KK modes by performing both the momentum cut-off and the KK level truncation. Our purpose is to show in which case the correction becomes insensitive to details of physics at the ultraviolet (UV) scale. We discuss this by using the proper time regularization for the UV divergence, because the proper time regularization does not spoil four-dimensional symmetries. Also, the proper time regularization is in a sense smooth compared with the sharp momentum cut-off. Actually, the proper time regularization was used to derive the power-law behavior of gauge couplings due to KK modes [14,15,16]. It was shown in Ref. [16] that the transition to the power-law behavior at the compactification scale appears smoothly in the proper time regularization compared with other regularization schemes, although qualitatively the same results for the power-law behavior are obtained in different regularization schemes. Hence, we use here the proper time regularization as a first trial.
Then it will be shown that the finite result is obtained as long as ℓ ≫ RΛ for the compactification radius R. This is seen even when the sharp momentum cut-off is used [8].

Suppose that we have the following bosonic and fermionic KK modes, with n = 0, ±1, ±2, ..., where we have normalized the compactification radius R with the factor $\sqrt{\pi}$ for convenience of the later calculations. For simplicity, we have assumed that only the bosonic modes have SUSY breaking masses. However, the following discussion is applicable to the more generic case. On top of that, their coupling to the (zero-mode) Higgs field is denoted by g, which is assumed to be universal between bosonic and fermionic modes, and between lower and higher KK modes. In this setup, the fermionic contribution to the Higgs mass is proportional to the integral (2). Thus, how to calculate this and the corresponding bosonic integral is the point to be investigated. Here we use the Schwinger representation, that is, we use the corresponding identities, and the above integral (2) can be rewritten accordingly. Here we truncate the KK modes at the level ℓ and put the cut-off $1/\Lambda^2$ on the proper time integral. Then what should be evaluated is the resulting truncated integral. Now, both the summation and the integral are finite, and we can safely exchange the summation and integration, giving Eq. (6). Eq. (6) includes the suppression factor $e^{-\pi t n^2/R^2}$ for higher KK modes $n \gg \Lambda R/\sqrt{\pi}$. Thus, naively thinking, we can replace the finite summation by the infinite summation, which is the replacement (7). We shall discuss the implication of this replacement later. If the replacement (7) is allowed, the calculation is rather simple. We use the Poisson summation formula, i.e., the modular transformation property of the $\theta_3(iA)$ function, such that we obtain the result (10), where $C \equiv \int_{1/\Lambda^2}^{\infty} dy\,\sqrt{y}\,e^{-y}$, and that is finite. In particular, we have $C = \Gamma(3/2)$ in the limit $\Lambda^2 \to \infty$. Thus we have the $\Lambda^3$ divergence for the fermionic modes only. Similarly, we can calculate the contribution due to bosonic modes with the masses $(m_{b(n)})^2 = \pi(n+a)^2/R^2$. The divergent term is exactly the same as in Eq. (10), although the finite part is different. Thus, if the replacement (7) is allowed, we can deduce that the Higgs mass correction due to bosonic and fermionic KK modes with the mass spectrum (1) is finite at the one-loop level of perturbative calculations.

In the above calculations, we have used the replacement (7) of the finite summation by the infinite summation. Such a replacement may be doubtful from the viewpoint of the sharp truncation. Here we discuss more about the replacement (7). First of all, each fermionic (bosonic) KK mode would have a quadratically divergent correction $\Lambda^2$ to the Higgs mass. If we have (2k+1) KK modes contributing to the Higgs mass corrections, we have in total corrections of the order of $(2k+1)\Lambda^2$. We compare this naive estimation with the result (10), whose divergent part is proportional to $\Lambda^3$. That implies that with the proper time regularization and the replacement (7) the KK modes higher than $k \approx \Lambda R$ are decoupled effectively, but not by an explicitly sharp truncation; that is, the infinite summation seems not essential to obtain the finite result, but the summation over $\ell \gg k \approx \Lambda R$ is enough. This result is also in agreement with the sharp cutoff case examined in Ref. [8]. Next we examine the replacement (7). For concreteness, we use the case with a = 1/2.
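Several displayed equations did not survive extraction. For orientation, the standard ingredients the text invokes are sketched below in our own hedged reconstruction; the mass normalization matches the factors quoted in the prose, but the equation numbering and overall normalization of the original paper are not reproduced.

% Hedged reconstruction of standard ingredients referenced in the text:
% the assumed KK spectrum (1), the Schwinger proper-time identity, and the
% Poisson resummation behind the theta_3 modular transformation.
\begin{align}
  m_{f(n)} = \frac{\sqrt{\pi}\, n}{R}, \qquad
  m_{b(n)} = \frac{\sqrt{\pi}\,(n+a)}{R}, \qquad n = 0, \pm 1, \pm 2, \ldots \\
  \frac{1}{p^2 + m^2} = \int_0^\infty dt\; e^{-t\,(p^2 + m^2)}
  \qquad \text{(Schwinger representation)} \\
  \sum_{n=-\infty}^{\infty} e^{-\pi t n^2 / R^2}
  = \frac{R}{\sqrt{t}} \sum_{k=-\infty}^{\infty} e^{-\pi R^2 k^2 / t}
  \qquad \text{(Poisson resummation)}
\end{align}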
For the moment, we require two points: 1) that the positive and negative modes be treated on an equal footing, and 2) that the number of bosonic KK modes involved be the same as the number of corresponding fermionic modes in Eq. (6). Hence, the integral $I_b$ (11) corresponding to $I_f$ (6) is obtained. Now we estimate what we have added in the replacement (7). We have added a contribution $I'_f$ for the fermionic modes; in the similar replacement for the bosonic contribution, we have added a contribution $I'_b$. Then the difference $I'_b - I'_f$ is written as Eq. (14). This difference $I'_b - I'_f$ contributes to the Higgs mass if ℓ is finite. The terms in the bracket of Eq. (14) can be expanded as in Eq. (15). The terms corresponding to the quadratic divergence $\Lambda^2$ seem to be cancelled, but the terms corresponding to the logarithmic divergence $\log \Lambda^2$ are not cancelled. However, note that there is the suppression factor $e^{-\pi t n^2/R^2}$ in Eq. (14) with $n \ge \ell+1$ and $t \ge 1/\Lambda^2$. Thus, if ℓ is large enough, such a logarithmic divergence $\log \Lambda^2$ is decoupled. That is consistent with the previous result. On the other hand, around ℓ ≈ RΛ, such a suppression factor does not work. Then we have the logarithmic divergence $\log \Lambda^2$ for each mode, and the divergence is enhanced by the number of relevant KK modes. Thus, again, whether we have the divergence or not depends on whether we are allowed to take ℓ large enough. We do not need an infinite ℓ.

The splitting of the modes with the masses n + 1/2 and n − 1/2 in Eq. (11) might look artificial. The reason for such a representation is that we took care of the edge modes with n = ±(ℓ + 1/2). However, it is now obvious that such higher modes give no contribution for sufficiently large ℓ because of the suppression factor $e^{-\pi\ell^2/(\Lambda R)^2}$. Actually, the integral can be written out explicitly, and the result is the same; that is, we have the finite result for large ℓ ≫ ΛR. Even if we choose different truncation levels for the fermionic and bosonic modes, say $\ell_f$ and $\ell_b$, we have the finite result for sufficiently large $\ell_f, \ell_b \gg R\Lambda$.

So far we have not been concerned with the running of the gauge coupling (and Yukawa couplings, if any). In extra dimensions the scale dependence is given by a power law. We can incorporate this running coupling into the Higgs mass corrections by examining the renormalization group (RG) equations for $I_b$ (11) and $I_f$ (6). Indeed, the finiteness is clearly seen from the RG point of view. Here we consider the RG equations in the Wilson sense, given by Eqs. (17) and (18) with a = 1/2. Actually, it is not necessary to give the beta function for the gauge coupling in order to see the finiteness of the correction. What concerns us is $\epsilon_b(R\Lambda) - \epsilon_f(R\Lambda)$. If this difference vanishes for the region RΛ > 1, the corrections to the Higgs mass would be UV insensitive and finite. For example, we take ℓ = 10. Fig. 1 shows $\epsilon_b(x)$ and $\epsilon_f(x)$. The behaviors of the two curves are almost the same for x > 0.5. In the region with x < 10, i.e., RΛ < ℓ, the curve of $\epsilon_{b,f}(x)$ behaves linearly. This behavior is consistent with the previous result (10) that the divergence behaves like $\Lambda^3$ for RΛ ≪ ℓ. For large x, $\epsilon_{b,f}(x)$ approaches the constant $2(2\ell+1)$. Fig. 2 shows the difference $\epsilon_f(R\Lambda) - \epsilon_b(R\Lambda)$. The difference damps rapidly above x ∼ 1. That implies that for 1 ≤ RΛ < ℓ, no correction arises to the Higgs mass. Therefore, as long as we keep the KK level truncation at RΛ ≪ ℓ, the correction is finite and of $O(1/R^2)$. It was mentioned above that we need large ℓ ≫ ΛR in order to obtain the finite result.
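As a rough numerical check of this picture, the sketch below (our own simplified construction, not the paper's exact Eqs. (6)/(11); the normalization and edge-mode treatment are simplified) evaluates a truncated proper-time sum for the fermionic (a = 0) and bosonic (a = 1/2) towers. The boson-fermion difference should stay roughly Λ-independent when ℓ ≫ RΛ, but drift with Λ when ℓ ≈ RΛ.

# Truncated proper-time sum, roughly
#   I(ell, Lambda) ~ int_{1/Lambda^2}^{t_max} dt t^(-2) sum_{n=-ell}^{ell}
#                    exp(-pi t (n+a)^2 / R^2),
# evaluated on a log-spaced grid with a manual trapezoid rule.
import numpy as np

def I_truncated(ell, Lam, R=1.0, a=0.0, t_max=1e3, npts=8000):
    t = np.logspace(np.log10(1.0 / Lam**2), np.log10(t_max), npts)
    n = np.arange(-ell, ell + 1)
    integrand = t**-2 * np.exp(-np.pi * np.outer(t, (n + a) ** 2) / R**2).sum(axis=1)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

for Lam in (2.0, 4.0, 8.0):
    big = I_truncated(200, Lam, a=0.5) - I_truncated(200, Lam)            # ell >> R*Lambda
    edge = I_truncated(int(Lam), Lam, a=0.5) - I_truncated(int(Lam), Lam)  # ell ~ R*Lambda
    print(f"R*Lambda = {Lam:>4}: boson-fermion difference "
          f"(ell >> R*Lambda) = {big:+.4f}, (ell ~ R*Lambda) = {edge:+.4f}")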
This numerical calculation shows how large ℓ must be, and ΛR/ℓ < 0.9 seems sufficient, although the explicit value 0.9 has no serious meaning. That suggests a huge gap between ℓ and ΛR is not required. There is a small lump starting from x ≈ 10, i.e., ΛR ≈ ℓ. That corresponds to the sum of logarithmic divergences mentioned in the level-by-level calculation around Eq. (15), and it is also consistent with the calculation in Ref. [8] showing the presence of a divergence, where the level truncation is taken around ℓ ≈ ΛR. The difference $\epsilon_f(R\Lambda) - \epsilon_b(R\Lambda)$ behaves as $x^{-2}$ for x ≫ ℓ. This implies that the corrections depend on the UV cut-off Λ logarithmically (with constant g) when we truncate the KK modes at a much lower scale than Λ. We have calculated numerically the case with a = 1/2. We can easily extend this to other cases, e.g., to other values of a. For any value of a, the difference $\epsilon_f(R\Lambda) - \epsilon_b(R\Lambda)$ behaves like Fig. 2 in the region with x < ℓ. That implies that for ℓ > ΛR the correction to the Higgs mass is finite and of $O(1/R^2)$.

We have used the proper time regularization. However, we can also apply other regularization schemes, e.g., the sharp momentum cut-off. Explicitly, we examine the beta function and the similar function for the bosonic contributions. Then the power-law behavior does not depend on the regularization scheme, except for the KK threshold corrections [16]. Therefore we obtain qualitatively the same results for the finiteness of the corrections. Also, similar discussions seem useful not only for the model with the mass spectrum (1), but for the more generic case. For any other mass spectra, we can define the difference $\epsilon_f(R\Lambda) - \epsilon_b(R\Lambda)$ as in Eqs. (17) and (18). Then it can be used as an index for the presence of quadratic divergences.

To summarize, we have calculated explicitly the Higgs mass correction due to KK modes. The momentum cut-off Λ and the level of truncation ℓ have been included. We have used the proper time regularization, which effectively decouples the higher KK modes n > ΛR, and that would fit the philosophy of Ref. [8] that such higher modes should be decoupled. However, the decoupling by the proper time regularization is smooth compared with the sharp cut of Ref. [8]. Our results are as follows. Finiteness or the appearance of divergences does not depend on whether we put a finite truncation on the KK modes or sum the infinite number of KK modes, but it depends on where we take the finite truncation ℓ. If we are allowed a truncation sufficiently large compared with the momentum cut-off, ℓ ≫ ΛR, we obtain the finite result. The numerical study shows we need a large truncation ℓ, but not a huge gap between ℓ and ΛR. On the other hand, if we put ℓ ≈ ΛR, the divergence appears. That might be rather obvious, because truncating around the momentum cut-off means that the result is sensitive to how we truncate the modes. Therefore, finiteness depends on whether the theory allows ℓ ≫ ΛR. For the spectrum (1), it might be artificial to pair the bosonic state of mass $m_{b(n)} = \sqrt{\pi}(n+q)/R$ with the fermionic state of mass $m_{f(n)} = \sqrt{\pi}\,n/R$; the corresponding bosonic state might instead be the state with $m_{b(n)} = \sqrt{\pi}(n-1+q)/R$. For example, suppose a theory that should be insensitive to such an artificial treatment of the edge. In this case one has to take ℓ ≫ ΛR, such that low energy physics becomes insensitive to how the edge is treated. The concept of locality in extra dimensions, which is discussed in Refs. [1]-[5], might forbid the truncation of KK modes around the cut-off.
On the other hand, suppose a theory where the cut-off Λ has a real meaning, e.g., the string scale. In this case, we have to truncate at ℓ ≈ ΛR, above which new modes might appear. In such a case, we have to know the initial conditions at Λ to discuss low energy physics. The radiative corrections in string theory with a large compactification radius may be well approximated by using the effective field theory with an infinite tower of KK modes [17]. There the corrections are represented by a kind of proper time cutoff, with which the KK modes heavier than the string scale are decoupled. Therefore the correction to the Higgs mass is supposed to be insensitive to the string scale.

Finite results for the Higgs mass correction due to KK modes were first found for models with SUSY breaking by the Scherk-Schwarz mechanism, Refs. [1]-[5]. However, what we have studied is the calculation of the correction under the mass spectrum (1). Whether the spectrum is obtained by the Scherk-Schwarz mechanism or by other SUSY breaking mechanisms is irrelevant to the calculations. Thus, finite results are generic for models with this mass spectrum. To repeat, the key point is whether models allow us to take a large truncation ℓ ≫ ΛR. Our discussion of finiteness is valid at the one-loop level of perturbation theory. The two-loop level [18] and beyond is outside the scope of this letter.
2014-10-01T00:00:00.000Z
2001-08-08T00:00:00.000
{ "year": 2001, "sha1": "ba4cd7e8f23c4b8e63112dce57ea54b50d2e8ad4", "oa_license": null, "oa_url": "https://academic.oup.com/ptp/article-pdf/107/4/785/5343249/107-4-785.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "adb66a75d41e1540c3ad8d89ac0afe5863af6b33", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
3308273
pes2o/s2orc
v3-fos-license
Biologically-Targeted Detection of Primary and Micro-Metastatic Ovarian Cancer

Ovarian cancer is the leading cause of morbidity/mortality from gynecologic malignancy. Early detection of disease is difficult due to the propensity for ovarian cancer to disseminate throughout the peritoneum. Currently, there is no single accurate test to detect primary or recurrent ovarian cancer. We report a novel clinical strategy using PPF: a multimodal, PET and optical, folate receptor (FR)-targeted agent for ovarian cancer imaging. The capabilities of PPF were evaluated in primary human ovarian cancer cells, in vivo xenografts derived from primary cells, and ex vivo patient omentum, as the heterogeneity and phenotype displayed by patients is retained. Primary cells take up PPF in an FR-dependent manner, demonstrating approximately a 5- to 25-fold increase in fluorescence. By both PET and fluorescence imaging, PPF specifically delineated FR-positive ovarian cancer xenografts, with similar tumor-to-background ratios of 8.91±0.91 and 7.94±3.94, and micro-metastatic studding (<1mm), which demonstrated a 3.5-fold increase in PPF uptake over adjacent normal tissue. Ex vivo patient omentum demonstrated selective uptake of PPF by tumor deposits. The ability of PPF to identify metastatic deposits <1mm could facilitate more complete debulking (currently, optimal debulking is <10mm residual tumor) by providing a more sensitive imaging strategy, improving treatment planning, response assessment and residual/recurrent disease detection. Therefore, PPF is a novel clinical imaging strategy that could substantially improve the prognosis of patients with ovarian cancer by allowing pre-, post- and intra-operative tumor monitoring, detection and possibly treatment throughout all stages of therapy and tumor progression.

Introduction

Epithelial ovarian cancer is the leading cause of morbidity/mortality from gynecologic malignancy [1], with the high-grade serous ovarian cancer (SOC) histotype representing the largest proportion (65%) of cases [2]. SOC frequently presents at an advanced stage and has a poor overall survival, largely because the ovaries are located deep within the pelvis and the disease presents with few persistent, and usually subtle, symptoms. Consequently, almost 90% of patients are diagnosed at Stage III/IV, with widespread peritoneal carcinomatosis, and a five-year survival of less than 30% [3]. Earlier detection of small volume disease, although essential for cure, is difficult with current modalities due to the propensity for peritoneal dissemination early in the course of disease. Most patients respond to current therapies, including cytoreductive surgery and chemotherapy. However, the majority (70-90%) of patients recur and eventually die of their disease [4]. Increased residual tumor volume after surgery increases the risk of relapse and decreases survival of SOC patients. Currently, there is no single accurate test to detect primary or recurrent disease. Therefore, methods that enhance the detection of SOC, before, during and after surgery, might improve the prognosis of patients with this deadly disease. Several studies have revealed that up to 90% of human ovarian tumors, particularly those of the SOC subtype, overexpress folate receptor-α (FR) [5].
By contrast, most normal tissues express low to negligible levels of FR, raising the possibility that agents targeted to this receptor might be useful for imaging and/or drug delivery in SOC and other FR-overexpressing tumors. Indeed, several studies have shown that folate can be used as a vehicle to deliver therapeutics or imaging agents directly to FR-overexpressing tumors [6-9]. Furthermore, one study demonstrated that a folate-targeted, single photon emission computed tomography (SPECT) imaging agent identified ovarian cancer patients who were more likely to benefit from folate-targeted therapy [9]. Recently, a folate-targeted, FITC-tagged small molecule for intraoperative FR-specific fluorescence imaging was reported [8]. Based on this evidence, FR has clinical potential as an imaging and therapeutic target for SOC. However, improvements to probe design that allow the use of multiple imaging modalities could increase the clinical applicability of FR-targeted agents. Previously described folate receptor-targeted probes are either fluorescence- or SPECT-based [6-9]. However, positron emission tomography (PET) probes have multiple advantages, including: (1) shorter half-lives of most PET radioisotopes, allowing for faster clearance and higher dose administration; (2) ease of attenuation correction, producing better resolved images; (3) increased sensitivity (by 2-3 orders of magnitude), providing greater accuracy; and (4) wider clinical acceptance [10]. By contrast, PET lacks the ability to image patients in real-time with high resolution, generating increased interest in dual-modality imaging agents to aid in all stages of treatment management. For instance, by combining the non-invasive sensitivity of radionuclide imaging with the real-time, high sensitivity and high resolution of optical imaging, pre-operative and post-operative assessment of tumor burden by PET could initially be used to map disseminated lesions, and fluorescence imaging could then aid in image-guided surgery to more precisely delineate tumor margins. Here we demonstrate the first report of a dual, PET and optical, FR-targeted contrast agent with these capabilities for use as a novel clinical imaging strategy in ovarian cancer management. Given the success of folate-targeted agents in the clinic, we targeted our multimodal PET/optical probe to the FR. This previously reported porphyrin-based probe (PPF) comprises 3 modules: (1) a multimodal porphyrin, Pyropheophorbide-α, (2) an FR-homing molecule, folate, and (3) a pharmacomodulation peptide linker conjugating Pyro to Folate [11]. By exploiting the stable metal chelation capabilities of porphyrins, we previously demonstrated a simple and stable radiolabeling method for generating this ⁶⁴Cu-PPF PET imaging probe [12]. In addition, we demonstrated the optical imaging and optical tuning capabilities of PPF, allowing tumor detection at multiple wavelengths [11,13]. Herein, we report a novel clinical imaging strategy using the dual, PET and optical, FR-targeted PPF for ovarian cancer management. We uniquely demonstrate that PPF can non-invasively delineate FR-positive, primary human SOC xenografts as well as micro-metastases in the peritoneum using both PET and fluorescence imaging modalities, potentially addressing the current clinical needs for SOC detection. Early passage xenografts derived from primary SOC recapitulate the inter- and intra-patient heterogeneity observed in SOC [14], unlike cell line-derived xenografts [15].
Therefore, in this study we used either primary cells (freshly isolated from primary patient tumor samples) or xenografts derived from these cells. The clinical characteristics of the 31 patients from whom tissue was used are listed in Table 1. As expected from previous studies [5], immunohistochemical staining of a panel of early passage, primary human SOC xenografts revealed that most express FRα (Figure 1A), although the staining intensity and percentage of positive cells showed some variability. We used flow cytometry and confocal imaging to evaluate the uptake of PPF (50 µM) by primary SOC and xenograft cells ex vivo [14]. In ascites (n=7) and xenograft (n=3) samples, the fluorescence intensity was 5- to 25-fold higher in cells incubated with PPF, compared with DMSO-treated control cells (Figure 1B, Supplementary Material: Figure S1C), or cells incubated with PPF in the presence of excess folic acid (n=3, Supplementary Material: Figure S1A). Cell viability was unaffected by PPF exposure (Supplementary Material: Figure S1B). Likewise, cells incubated with PPF showed detectable intracellular fluorescence (Figure 1C) compared with control cells (without PPF, Figure 1D). Taken together, these observations confirm that primary human SOC cells take up PPF in an FR-dependent manner. We used our established primary mammary fat pad xenograft assay as a pre-clinical model to evaluate the in vivo imaging capability of PPF in multiple patient-derived samples (n=6). ⁶⁴Cu-PPF distinguished SOC tumors from other tissues at 24 hours post-injection (Supplementary Material: Figure S2). Although the highest uptake was observed in the kidneys and liver, the tumor-to-muscle ratio of ⁶⁴Cu-PPF was 4.97±0.61 at 4 hours, and 8.91±0.91 at 24 hours, post-injection (Supplementary Material: Figure S2C), indicating rapid clearance of ⁶⁴Cu-PPF from non-target tissues and probe retention in the tumor. This result was confirmed using fluorescence imaging, as the tumor-to-muscle ratio of PPF at 24 h was 7.94±3.94. A strong fluorescence signal was localized within the tumor at 24 h post-injection (Supplementary Material: Figure S2D), and was confirmed by confocal microscopy of frozen sections from these tumors (Supplementary Material: Figure S2E). To compare our PPF to the previously reported FR-targeted fluorescein isothiocyanate (FITC) probe, we conjugated PPF to different fluorophores: FITC (PPF488), Pyropheophorbide-α (PPF) and Bacteriochlorophyll-α (PPF740), creating probes in the green, red and near-infrared range, respectively (Supplementary Material: Figure S3). As expected, with increasing excitation wavelength, the tumor-to-background ratio increased, likely due to decreased auto-fluorescence and increased penetration depths of longer wavelengths of light (Supplementary Material: Figure S3C). These results demonstrate the capacity of PPF to image human SOC by either fluorescence or PET imaging. Although mammary fat pad xenografts recapitulated the heterogeneity of SOC, they do not model other disease manifestations, such as peritoneal studding and ascites generation. Furthermore, the long imaging time point and unfavorable uptake within the abdomen after an intravenous injection are not ideal when trying to image SOC. Therefore, we also generated intraperitoneal xenografts from primary SOC, and tested the ability of PPF to accumulate in small metastases.
In order to decrease the time between drug administration and imaging (1 hour), increase the drug concentration at target sites, and decrease off-target accumulation, we administered PPF by a single intraperitoneal injection. A mixture of ⁶⁴Cu-PPF (500 µCi ⁶⁴Cu, 6 nmol of PPF) and PPF (2.25 mg/kg, 30 nmol PPF) was injected, and one hour later, the animals were imaged by PET/CT. A strong PET signal was evident in the abdominal cavities of animals with ascites, whereas healthy control mice showed no evident uptake of ⁶⁴Cu-PPF (Figure 2A). Likewise, we could detect small metastatic studding (<1mm in size) on the peritoneal wall of animals with ascites by in situ fluorescence imaging after exposing the peritoneal cavity (Figure 2B&C). Fluorescence uptake into metastases was 3.5-fold higher than into adjacent normal tissue (p<0.001, n=5; Figure 2D). Fluorescence microscopy of frozen peritoneal slices also revealed the selectivity of PPF for malignant cells (Figure 2E). Histological analysis of serial H&E-stained sections confirmed the presence of small, fluorescent deposits of malignant cells (Figure 2F). These results demonstrate the ability of PPF to identify animals with peritoneal spread of SOC by PET and fluorescence imaging. Finally, we evaluated the potential of PPF to be taken up by tumor deposits in the peritoneum ex vivo. Primary omentum from SOC patients (including tumor and adjacent normal tissue/stroma) was incubated in PPF (10 μM). After 30 minutes, fluorescence was clearly detectable in the omentum (Figure 3A&B). Fluorescence microscopy of frozen omental slices again demonstrated selective uptake of PPF by cancer cells (Figure 3C), an assessment confirmed by histological analysis (Figure 3D). These data suggest that PPF also could be used intra-operatively to identify residual tumor during surgical debulking procedures.

(Figure 3 caption, continued: representative (C) fluorescence (red) images were compared to sequential (D) H&E-stained slices of (i) magnified and (ii) full tissue slices, confirming microscopically the uptake and selectivity of PPF for cancerous cells. Frozen 10 μm slices were DAPI-stained (blue). Pyro excitation 410±70 nm, detection 685±40 nm.)

Discussion

Here, we report the application of a targeted, multimodal (PET/optical) probe for imaging ovarian cancer. We confirm the high specificity of PPF in primary human models of SOC (cell suspensions and xenografts). We demonstrate the ability of systemically or intraperitoneally injected PPF to clearly delineate FR-positive, primary human SOC xenografts, as well as bulk tumor and micro-metastatic studding in the peritoneum, using PET and fluorescence imaging. In addition, we validate the ex vivo uptake of PPF by metastatic deposits in primary human omentum, similar to a previous report showing the utility of a folate-targeted FITC probe [8]. PPF thus has the capacity to act as a "one size fits all" probe for detecting and monitoring SOC. Primary human SOC cells and in vivo models derived from 31 patients were used for these studies because they retain the heterogeneity and phenotype that is displayed by patients [14]. Many studies of ovarian carcinogenesis, drug response and imaging efficacy have used immortalized cell lines that have been shown to poorly recapitulate the disease [15]. By using tumor cells derived from primary patient samples to evaluate the sensitivity of our multimodal PPF probe, we expect our results to better predict efficacy in SOC patients.
The value of PET [16-18] and optical imaging [19,20] has been evaluated in ovarian cancer, although separately. Nevertheless, translation of such probes to the clinic requires the development of improved contrast agents to increase tumor sensitivity and specificity. Because FR is over-expressed in SOC, FR-targeted imaging and therapeutic agents have shown promise in clinical trials [6-9,11], cementing FR as a viable molecular target in this disease. Two studies independently demonstrated the utility of FR-targeted probes, by SPECT or fluorescence, for identifying FR-positive ovarian cancer [8,9]. These promising studies showed the clinical potential of FR-targeted imaging agents in SOC management; however, they highlighted a niche for a multi-modal agent. We have developed such an agent, PPF, whose appeal and novelty lie in its: (1) complementary imaging capabilities: specifically, the non-invasive, deep tissue penetration and quantitative nature of PET, combined with high-resolution, real-time fluorescence, ideal for surgical guidance; (2) targeted uptake, increasing the signal-to-noise ratio, and retention in tumors; and (3) applicability as a single agent, reducing concerns about variability in tumor uptake specificity, pharmacokinetics and pharmacodynamics. PPF could aid the current clinical treatment strategy for ovarian cancer patients in several ways. As a PET agent, it could be used for staging pre- or post-operatively, allowing high-resolution evaluation of the extent of disease before and after treatment. Concomitantly, the fluorescence imaging properties of PPF could aid in image-guided surgery to precisely delineate tumor margins and/or residual disease. Optimal debulking (<10mm residual tumor) results in a significantly improved outcome for SOC patients [21], and patients with no detectable tumor at the time of resection demonstrate even greater survival [22]. The ability of PPF to identify metastatic deposits smaller than 1mm could facilitate more complete debulking than is possible currently. PPF also has potential as a disease-monitoring and recurrence-detection tool. Currently, CA125 allows detection of relapse approximately three months sooner than CT or MRI modalities [23], and combined with PET/CT, further expedites recurrence diagnosis [24]. At present, patients treated immediately upon biochemical relapse show no significant improvement in survival over those detected only when bulk disease recurs [23]. However, the current lack of benefit of detecting recurrence earlier most likely reflects the paucity of effective treatment options for relapsed disease. The future advent of new, targeted therapeutics and/or immunotherapies could make the diagnosis of smaller tumor bulk not only important, but essential. In turn, more sensitive imaging methods to improve treatment planning, response assessment and residual and/or recurrent disease detection will be needed. Moreover, 10-20% of SOC patients do not produce CA125, and at present can only be monitored by radiologic methods [24,25]. These patients also would benefit from more sensitive detection methods, including targeted PET and optical agents such as PPF. Finally, it should be noted that Pyro is a potent photodynamic agent [11,13]. Intraperitoneal PDT was evaluated in Phase II clinical trials, but did not demonstrate significant complete responses or long-term tumor control, with the ineffectiveness attributed to lack of tumor specificity in photosensitizer (Photofrin) uptake [26].
The high degree of tumor specificity of PPF might circumvent this limitation, by markedly reducing the risk of collateral damage to normal tissues within the peritoneum exposed to the photo-activating light. Studies evaluating the PDT potential of PPF are currently underway. Thus, in addition to its utility as an imaging agent, PPF might aid in the eradication of residual intraperitoneal tumor and microscopic metastatic deposits by applying a tumor-targeted PDT treatment to the entire surgical bed post-resection. Taken together, our results demonstrate that PPF is an "all-in-one" novel clinical imaging strategy that could substantially improve the prognosis of patients with SOC and other malignancies over-expressing FR, such as endometrial cancer and colon cancer, by allowing pre-, post- and intra-operative tumor monitoring, detection and possibly also treatment throughout all stages of therapy and tumor progression.

Materials and Methods

Tumor Samples and Cells: High-grade SOC samples were obtained from the University Health Network Tissue Bank with patient consent and Research Ethics Board approval, and were pathologist-verified. Tumors were procured within 2-4 h of excision. Samples were processed as reported previously [14]. Briefly, solid tumors were minced and digested with collagenase/hyaluronidase (Stem Cell Technologies) in DMEM at 37 °C for 2 h. Red blood cells were lysed in 0.16 M ammonium chloride, and the remaining cells were filtered through a 70-μm mesh and counted. Ascites cells were collected by centrifugation at 300×g and red blood cells lysed as above. Cells can be revived and form tumors after viable freezing [14].

Xenografts: All animal studies were carried out under institutional approval (University Health Network, Toronto, Canada). CD45-depleted cells (10⁶) in 1:1 HBSS:growth factor-reduced Matrigel (BD Biosciences) were injected into the mammary fat pad (xenograft model) or peritoneum (ascites model) of Non-Obese Diabetic/Severe Combined Immunodeficient (NOD/SCID) or NOD/SCID/Il2rγ⁻/⁻ (NSG) mice. Mice were monitored for tumors for up to 6 months post-injection or until moribund. After euthanization, tumors and tissues were harvested for subsequent histology and/or biodistribution studies.

MicroPET/CT Imaging: MicroPET imaging was performed with a Siemens Focus 220 MicroPET scanner (Siemens, Munich, Germany). Tumor-bearing mice were anesthetized with 2% isoflurane in oxygen, injected with ~500 μCi of ⁶⁴Cu-PPF (6 nmol of PPF) via their tail veins, and placed near the center of the field of view, where the highest resolution and sensitivity are obtained. A 10 minute static PET image was obtained at 4 h post-injection, and 30-45 min static PET images were acquired at 24 h post-injection. CT scans were obtained immediately after each PET imaging session. To this end, mice remained anesthetized throughout PET imaging, and then were transferred without any movement directly to a GE Locus Ultra microCT scanner (GE Healthcare, Little Chalfont, UK), together with the supporting bed.

Biodistribution studies: Biodistribution studies were performed using NOD/SCID mice bearing primary human SOC xenografts in the mammary fat pad. The ⁶⁴Cu radiotracer (~500 μCi in 0.1 mL saline) was administered into each animal via the tail vein. Animals were euthanized 4 or 24 h post-injection under 2% isoflurane, and exsanguinated by opening the thoracic cavity and withdrawing blood samples from the heart using a syringe.
Organs were excised, washed with saline, dried with absorbent tissue, weighed and counted on a γ-counter (Perkin-Elmer Wizard-1480). Organs examined included the tumor, heart, spleen, lungs, liver, kidneys, adrenal, stomach, intestine, muscle, bone and brain. Organ uptake was calculated as the percentage of the injected dose per gram of tissue (%ID/g). Biodistribution data and target-to-background (T/B) ratios are reported as the mean and standard deviation based on results from three animals at each time point. Comparisons between radiotracers were made using the two-way ANOVA test (GraphPad Prism 5.0, San Diego, CA). The level of significance was set at p < 0.05.

Optical imaging studies: A solution of 2.25 mg/kg of PPF (30 nmol of PPF) was formulated in 150 μL of an aqueous solution containing 5 μL of DMSO and 1.5 μL of Tween 80. When tumors reached 5-10 mm in diameter, mice were injected with the PPF solution intravenously via their tail veins under isoflurane anesthesia. Whole-body in vivo fluorescence imaging was performed before and at multiple time points (30 min, 2 h, 6.5 h and 24 h) after injection using the Maestro™ system (CRi): PPF, 661 nm (641 to 681 nm) excitation, 700 nm longpass detection.

Ascites imaging studies: In order to reduce absorption by the cellular fraction of the ascites, fluid (0.5-1 mL/animal) was drained from the abdominal cavity of tumor-bearing mice using a 27G needle before injection and imaging. A mixture of ⁶⁴Cu-PPF and PPF was administered intraperitoneally to animals with ascites and healthy controls. PET/CT images were captured over 10 minutes at 1 h post-injection. Animals were then euthanized, and ex vivo fluorescence imaging of the peritoneal cavity was performed (Maestro™, CRi: PPF, 680 nm excitation, 700 nm longpass detection, auto-exposure integration time, total fluorescence signals normalized by exposure time and ROI area (total signal/(ms * pixels))). Comparisons between small metastatic deposits and background signals were made using the two-sample homoscedastic Student's t-test, with the level of significance set at p < 0.05.

Omentum optical imaging: Omental samples were imaged prior to incubation with PPF (10 μM) at 37 °C. Ex vivo fluorescence imaging was performed at multiple time points (30 min, 1 h, 2 h, 4 h, 5.5 h and 24 h), similar to the in vivo optical imaging protocol reported above. At 24 h, the omentum was snap-frozen in liquid nitrogen with OCT media and stored at −80 °C. Frozen sections (10 μm) were cut on a cryostat. Frozen tissue slices were immersed in PBS for 5 min, dried, and 10 μL of mounting solution with DAPI (Vector Laboratories, Inc.) were added as a nuclear stain. Sections were overlaid with coverslips and imaged (Olympus Upright Tiling Microscope, BX50; excitation 410±70 nm, emission 685±40 nm). A section adjacent to the imaged frozen section was stained with Hematoxylin & Eosin (H&E) to confirm the presence of peritoneal carcinomatosis.
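As a small worked example of the %ID/g calculation described above, the sketch below divides decay-corrected organ counts by the injected dose and organ mass. The decay-correction convention and all numbers are our own illustrative assumptions, not data or settings from the study.

# Minimal sketch of the %ID/g biodistribution metric, with a simple decay
# correction for 64Cu (half-life ~12.7 h). All counts/masses are placeholders.

CU64_HALF_LIFE_H = 12.7

def percent_id_per_gram(organ_cpm, organ_mass_g, injected_dose_cpm, hours_elapsed):
    # correct the injected-dose reference counts for radioactive decay
    decay_corrected_dose = injected_dose_cpm * 0.5 ** (hours_elapsed / CU64_HALF_LIFE_H)
    return 100.0 * (organ_cpm / decay_corrected_dose) / organ_mass_g

# e.g. hypothetical tumor uptake at 24 h post-injection
print(f"{percent_id_per_gram(85_000, 0.42, 9_500_000, 24.0):.2f} %ID/g")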
2014-10-01T00:00:00.000Z
2013-05-25T00:00:00.000
{ "year": 2013, "sha1": "c10533f55b5f3ea888db1729ad25b97104f27cb7", "oa_license": "CCBYNCND", "oa_url": "http://www.thno.org/v03p0420.pdf", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "c10533f55b5f3ea888db1729ad25b97104f27cb7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
249889054
pes2o/s2orc
v3-fos-license
Winning the CVPR'2022 AQTC Challenge: A Two-stage Function-centric Approach

Affordance-centric Question-driven Task Completion for Egocentric Assistant (AQTC) is a novel task which helps an AI assistant learn from instructional videos and scripts and guide the user step-by-step. In this paper, we deal with AQTC via a two-stage Function-centric approach, which consists of a Question2Function Module to ground the question with the related function and a Function2Answer Module to predict the action based on the historical steps. We evaluated several possible solutions in each module and obtained significant gains compared to the given baselines. Our code is available at \url{https://github.com/starsholic/LOVEU-CVPR22-AQTC}.

Introduction

Intelligent assistants are increasingly becoming a part of users' daily lives. Along this line, Affordance-centric Question-driven Task Completion (AQTC) [22], which aims to guide the user through unfamiliar tasks step-by-step with the knowledge learned from instructional videos and scripts, has been newly introduced. Different from existing tasks such as Visual Question Answering (VQA) [3] or Visual Dialog [7], the question in AQTC is about a specific task and the answer is multi-modal and multi-step, which makes it more challenging. To solve this problem, we propose a novel two-stage Function-centric approach, which consists of a Question2Function Module and a Function2Answer Module. Our main motivation is that only part of the instructional video is helpful for answering the question, and taking the entire video into account could introduce unnecessary noise. Along this line, we first define several schemas to segment the scripts into textual function-paras. Then we design a text-similarity-based method to select the specific video clips as well as paras that are closely related to the user's question. After obtaining the relevant context information, we formulate the multi-step QA as a classification task and leverage a neural network to retrieve the correct answer for each step. With our model and several training tricks, we achieved a substantial performance boost compared to the given baselines.

Measurement of Text Similarity

Generally speaking, in order to calculate text similarity, it is important to represent the text as numerical features that can be computed directly; approaches can be categorized into two groups: string-based methods and corpus-based methods. String-based methods aim to measure the similarity between two text strings based on string sequences or character composition, including character-based methods [14,21] and phrase-based methods [11,15]. Different from string-based methods, corpus-based methods leverage textual features or co-occurrence probabilities to calculate text similarity at the corpus level, which is usually achieved in three ways: bag-of-words models like Term Frequency-Inverse Document Frequency (TF-IDF) [20] (a standard formulation is sketched after this subsection), distributed representation methods like Word2Vec [18] and BERT [10], and matrix factorization methods like Latent Semantic Analysis (LSA) [8] and Latent Dirichlet Allocation (LDA) [5].

Visual Question Answering

VQA is a task to answer questions based on images [4] or videos [17], and approaches can be roughly divided into attention-based methods [2,25] and bilinear-pooling-based approaches [13,16,26]. [2] developed different attention modules to adaptively attend to the relevant image regions based on the question representation. [13] proposed to employ compact bilinear pooling methods to combine the visual and linguistic features.
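For reference, a standard tf-idf weighting with cosine similarity takes the following textbook form; this is one common variant, and the exact smoothing and normalization used in [20] or in our implementation may differ.

% One common tf-idf variant with cosine similarity; smoothing conventions vary.
\begin{align}
  w_{t,d} &= \mathrm{tf}(t,d)\,\log\frac{N}{\mathrm{df}(t)}, \\
  \mathrm{sim}(q,f) &= \frac{\mathbf{w}_q \cdot \mathbf{w}_f}
                           {\lVert \mathbf{w}_q \rVert\,\lVert \mathbf{w}_f \rVert},
\end{align}

where tf(t,d) is the frequency of term t in document d, df(t) is the number of documents containing t, and N is the number of documents.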
However, these tasks mainly focus on the third-person perspective, while the AQTC task concentrates on egocentric scenes.

Proposed Method

In this section, we first present the problem statement for our proposed two-stage Function-centric approach to solving the AQTC task. Then, we introduce the technical details of each module in our framework step-by-step, as shown in Fig. 1.

Problem Statement: Given an instructional video V, the video's corresponding textual script S, the user's question Q, and the set of candidate answers A_ij, where A_ij denotes the j-th potential answer in the i-th step, we aim to select one correct answer in each step. Specifically, to ground question Q in the instructional video V and script S, we first segment V and S into the function set {f_1, f_2, ..., f_n} and then match Q with the related functions. Note that each function f consists of a function-clip f^v and a function-para f^t. Afterwards, taking the weighted function set {f_1, f_2, ..., f_n}, the question Q and the candidate answer A_ij as input, we formulate the multi-step QA as a classification task in a supervised way.

Question2Function Module: We now turn to the technical details of the Question2Function Module. Since the instructional videos are used to guide the user or AI assistant in a step-by-step manner, we first segment both script and video into individual functions, instead of sentences or frames, to ensure the completeness of each step. Meanwhile, it is critical to ground the specific question with the related function, as the correct answer often co-occurs with the corresponding function. Specifically, we first segment the script S into the textual function-paras f^t according to the pre-defined schema (see details in our project's repo), and then divide the corresponding video V into the visual function-clips f^v via the aligned script timestamps. In this way, the instructional video and script are divided into the function set {f_1, f_2, ..., f_n}, and each function f not only contains a textual description f^t but also includes visual guidance f^v. We further match the specific question with the function set based on text similarity. Because of the small volume of the dataset, and because the corresponding functions are not highly semantically similar to the given question, we calculate the similarity score between Q and {f_1, f_2, ..., f_n} via the TF-IDF model [19] instead of deep learning based methods. The ablation result in Sec. 4.2 indicates that the traditional statistics-based TF-IDF method performs much better than the deep learning based approach.

Function2Answer Module: After determining the related function with the former module, we formulate the following multi-step QA as a classification task. Since the candidate answers are given as textual action descriptions and visual button images, we need to predict the correct action in each step as well as the corresponding button according to the historical steps.

Input Features: For the text embedding, we encode the function-para, question and candidate answers into E^t_f, E^t_q and E^t_a via XL-Net [24], which performs much better than the BERT [9] backbone as shown in Tab. 1, since XL-Net is good at processing long contexts. For the visual part, we encode the frames of the function-clip and the button image in candidate answers into E^v_f and E^v_a via a vision transformer (ViT) [12], following [23].

Steps Network and Prediction Head:
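For concreteness, a minimal sketch of the Question2Function grounding step, assuming the function-paras have already been segmented from the script, is given below. It uses scikit-learn's TfidfVectorizer with default settings; the paper's exact tokenization and scoring configuration are not specified, and the example texts are hypothetical.

# Minimal sketch: TF-IDF grounding of a question against function-paras.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def ground_question(question, function_paras):
    """Return one similarity weight per function-para for the given question."""
    vectorizer = TfidfVectorizer()
    para_vecs = vectorizer.fit_transform(function_paras)  # one row per function
    q_vec = vectorizer.transform([question])
    return cosine_similarity(q_vec, para_vecs)[0].tolist()

# hypothetical example
paras = ["long press the top-left button to adjust the mode",
         "rotate the dial to set the temperature"]
print(ground_question("How do I change the mode?", paras))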
As in the baseline [23], we use a GRU [6] to leverage the historical steps and predict the final score for each answer via a two-layer MLP followed by softmax activation.

Loss Function: Taking as input the embeddings of the question E^t_q, the candidate answer E_a = {E^t_a, E^v_a} and the weighted function set E_f = {E^t_f, E^v_f} given by the former module, we formulate the multi-step QA as a classification task, where ŷ_i = Pred_head(Steps_network(E_f, E^t_q, E_a)) and y_i represents the ground truth.

Other Attempts: Considering the changing views within the same instructional video and the occlusion of buttons, it is really challenging to link the button's image to its function. Therefore, we tried to use finger detection [1] in the video as additional information in the Function2Answer module (see details in our project's repo). However, owing to the strong assumptions and the cumulative error introduced by the detection module, the performance of this purely inference-based method is poor, as shown in Tab. 2.

Dataset and Parameter Settings: Following [23], we trained our model on the training set containing 80 instructional videos and evaluated the model performance on the testing set, which contains 20 instructional videos. We use the same evaluation metrics as in [23], i.e., Recall@k, Mean rank (MR) and Mean reciprocal rank (MRR).

(Figure 2 example, video frame and script: "The button left is the mode selection …", "long press the top left adjust …")

Ablation Study: In this section, we compare the performance of possible solutions in each module separately. Specifically, in the Question2Function module, we evaluate the different segmentation methods (sentence-centric vs. function-centric) as well as the grounding approaches (cross attention vs. TF-IDF), as shown in Tab. 1. Compared with the baseline using raw settings, we observe that the adjustment of the text encoder (BERT → XL-Net) and optimizer (SGD → Adam) has a significant impact on performance (30.2 R@1 → 39.3 R@1). Meanwhile, the function-centric segmentation approach performs better than the sentence-centric one on most of the metrics, which demonstrates the significance of step completeness. For the grounding approach, the TF-IDF model performs much better than the cross-attention mechanism (39.3 R@1 → 44.4 R@1 on sentence-centric and 38.5 R@1 → 45.2 R@1 on function-centric). We also show a case (see Fig. 2) comparing the two different grounding approaches, which reveals that the TF-IDF model can help us find the related functions effectively. For the Function2Answer module, we evaluate the impact of the visual and textual parts of functions and answers via extensive ablation studies. As we can see, the visual guidance of the function helps the model choose the correct answer significantly (41.0 R@1 → 45.2 R@1). However, the inclu-

Conclusion

In this paper, we introduced a two-stage method to solve the novel AQTC task. Our model achieved a significant performance boost compared to the given baseline. For future work, we will attempt other solutions to model this interesting task.
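A minimal sketch of the Function2Answer scoring head described above (a GRU over per-step context followed by a two-layer MLP and a softmax over candidates) is given below. The feature fusion, embedding dimensions, and hidden sizes are illustrative guesses, not the authors' exact configuration.

# Minimal sketch: GRU over historical steps + two-layer MLP + softmax scoring.
import torch
import torch.nn as nn

class Function2Answer(nn.Module):
    def __init__(self, feat_dim=768, hidden=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, step_feats):
        # step_feats: (num_candidates, num_steps, feat_dim), holding fused
        # question/function/answer embeddings per candidate at each step
        out, _ = self.gru(step_feats)
        scores = self.head(out[:, -1, :]).squeeze(-1)  # one score per candidate
        return scores.softmax(dim=0)

model = Function2Answer()
probs = model(torch.randn(4, 3, 768))  # 4 candidate answers, 3 historical steps
print(probs)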
2022-06-22T01:16:40.538Z
2022-06-20T00:00:00.000
{ "year": 2022, "sha1": "a913ad902723fe4fcc4ec4cfbe07d9928683aaaf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d865c1d25a5631b2a15a00e2c2ded88c8000fa1b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
17854488
pes2o/s2orc
v3-fos-license
Giant Cell Tumor Developing in Paget's Disease of Bone: A Case Report with Review of Literature

Introduction: Paget's disease of bone (PDB) is a disease of the elderly characterized by disorganized bone remodeling. Development of a secondary neoplasm in PDB is a known but rare phenomenon. Development of giant cell tumor in PDB (GCT-PDB) is extremely rare, and little is known about its etiopathogenesis and management. We present a case report of such a development with a review of the literature and the role of the various new treatment modalities available in the management of this rare condition.

Case Report: A 40-year-old gentleman presented with back pain and on evaluation was diagnosed as a case of polyostotic PDB. He was treated with intravenous bisphosphonates, calcium, and vitamin D supplements. After an asymptomatic period of 3 years, he presented with a gluteal mass involving the ilium and sacrum, which was confirmed as GCT on biopsy. Serial angioembolization was attempted, but the mass progressed, so surgery was performed with excision and curettage of the lesion. He presented with a local recurrence 2 years later, with a large soft tissue component. He was started on denosumab, a RANKL inhibitor, with the aim of downstaging the lesion. The patient showed a good response after 6 doses, with reduction of the soft tissue mass, after which he underwent surgery with a partial Type-1 internal hemipelvectomy and curettage of the sacrum. Currently, the patient is asymptomatic at a follow-up of 15 months.

Conclusion: GCT-PDB is a rare phenomenon occurring mainly in polyostotic PDB and is associated with more severe manifestations of the disease. The management is challenging and requires a multimodality approach. Pharmacological agents include bisphosphonates and the RANK ligand inhibitor denosumab. Although surgery is the mainstay of treatment for GCT, other treatment modalities such as RANK ligand inhibitors (denosumab), selective arterial embolization, or radiation therapy have to be used for inoperable cases or where surgery would be functionally too morbid, especially in cases of GCT-PDB, where the disease more commonly affects the axial skeleton.

histiocytoma, and very rarely a locally aggressive tumor like giant cell tumor (GCT) [1]. The reported cases of GCT complicating Paget's disease occur mainly in polyostotic disease [2]. In view of its rarity, there is only modest information available about the etiology and management of GCT-PDB. In addition, newer developments in the management of both these disorders have enabled new avenues for combined multimodal management in this rare coexistence. We present a case of GCT developing in the background of polyostotic PDB. We also discuss the mechanism of development and the multimodality treatment available for management in such an uncommon situation.

Case Report

A 40-year-old gentleman presented with low backache of 5 years' duration. He was evaluated clinically and radiologically, and a diagnosis of PDB was made. Bone scan revealed multiple-site involvement (skull, sternum, dorsal and lumbar spine, pelvis, ribs, femur, and tibia) (Fig. 1, 2, 3, 4). Bone biopsy from the iliac region revealed numerous multinucleated giant cells with haphazard new bone formation, and the diagnosis of polyostotic Paget's disease was confirmed. The patient was treated with intravenous (IV) bisphosphonate every 3 weeks (pamidronate 60 mg IV infusion) with vitamin D and calcium supplements at another institution.
He was apparently all right for 3 years, when he noticed a lump in the right gluteal region. It was associated with dull, aching, localized pain. Radiographs revealed a lytic lesion in the right posterior ilium, and magnetic resonance imaging (MRI) of the pelvis showed a large lesion with an extraosseous soft tissue component involving the iliac bone and adjacent sacral ala. Blood investigations showed an elevated serum alkaline phosphatase. Computed tomography-guided trucut biopsy from the gluteal mass was diagnosed as GCT. Serial angioembolizations (Fig. 5) were attempted with the intent to control the disease without surgery, owing to the complex anatomy and associated morbidity of surgery, but the mass progressed in size. A decision for surgical excision was taken. Excision of the soft tissue mass with curettage and cementation of the right sacroiliac component of the lesion was done. The final histopathology report confirmed GCT in a case of PDB. Two years after the surgery, he presented with a large local recurrence involving the posterior ilium and sacral ala. The recurrence was confirmed with a repeat biopsy. He was started on denosumab, a monoclonal RANK ligand inhibitor, in an attempt to downstage the lesion and thus reduce the morbidity of surgery. Denosumab was administered at a dose of 120 mg subcutaneously at monthly intervals, with loading doses on days 1, 8, and 15. A repeat MRI evaluation after 6 injections of denosumab revealed a good response, with shrinkage of the soft tissue mass. He then underwent a partial Type-1 internal hemipelvectomy and curettage of the sacral lesion (Fig. 6-10). At a 15 months follow-up, the patient is asymptomatic and disease-free.

Discussion

In spite of a reported increased incidence of GCT of bone in the Asian population, most of the cases of GCT-PDB have been reported in Caucasians. This may be explained by the fact that the incidence of PDB is higher in Western populations as compared to Asians. Available data also suggest a significant prevalence of familial GCT occurring in PDB in Italy. A systematic review [3] was conducted to identify cases of GCT occurring in PDB patients (PROSPERO database registration number: CRD42014007030). The analysis revealed some unique characteristics of GCT developing in Paget's disease. The majority of patients in this study (82%) were Caucasians of European ancestry, and around 8% were Asian. GCT in PDB more commonly involves the craniofacial bones, followed by the pelvis and spine, as compared to conventional GCT, which commonly occurs at an epiphyseal location. The understanding of the genetics and pathophysiology of GCT has increased considerably since the discovery of the RANK/RANKL (receptor activator of nuclear factor-kappa B ligand) pathway, an important osteoclast-differentiating factor. Osteoblasts and stromal stem cells express RANKL, which binds to its receptor RANK on the surface of osteoclasts and their precursors. This regulates the differentiation of precursors into multinucleated osteoclasts. Studies have shown that RANKL is overexpressed by stromal cells in GCT tissue. These stromal cells are the neoplastic mononuclear cells in GCT, and it is postulated that molecular signals from these stromal cells promote the formation of multinucleated osteoclast-like cells. The multinucleated giant cells cause the osteolysis.
The pathogenesis of PDB involves mutations/polymorphisms identified in the genes TNFRSF11A (encoding RANK), TNFRSF11B (encoding osteoprotegerin), VCP (encoding the valosin-containing protein p97), and SQSTM1 (encoding p62), all of which play a role in the RANK-NFKB signaling pathway [4,5]. Thus, the RANK/RANKL pathway seems to be an interlinking pathway in the development of GCT in Paget's disease. However, how GCT may develop in PDB is not yet clearly established. Multimodality management in GCT-PDB GCT-PDB cases should be treated on the same principles as those of GCT. A thorough clinical and hematological evaluation is a must to understand both the pathologies. Surgery is the standard of treatment in GCT, which may be either function-preserving curettage or a more morbid procedure like resection. Traditionally, bisphosphonates have been used in the management of surgically nonresectable GCT, either alone or in combination with angioembolization. This induces osteoclast apoptosis, leading to decreased bone resorption. Bisphosphonates are also the treatment of choice in PDB. Introduction of newer agents such as denosumab [6,7], a human monoclonal antibody, has shown early promising results in managing GCT at difficult sites. Denosumab inhibits the development of osteoclasts and their activity, thereby reducing bone resorption, increasing bone density, and shrinking the tumor mass. Although its use has been popularized in reducing skeletal-related events in some carcinomas such as breast and prostate carcinoma, it is proving to be effective in controlling the growth of axial GCT where resection may be highly morbid [8]. A literature search has shown cases where denosumab is used as a therapeutic agent targeting the pathophysiology in PDB [9]. Although currently not included as a standard of care in PDB, it is expected to become one of the potential therapeutic agents in the future. The duration of denosumab therapy is still a matter of debate, but complete cessation of therapy has seen resurgence of disease in most of the cases. Histopathological studies have shown complete apoptosis of multinucleate giant cells, but stromal cells do show resistance to the treatment and are considered responsible for the recurrence of the disease. This modality may be combined with arterial embolization [10] to achieve effective disease control at axial sites. For refractory cases, radiotherapy [11] can be tried in carefully selected patients. Conclusion GCT developing in the background of Paget's disease is a rare occurrence which is commonly seen in polyostotic PDB. GCT-PDB is associated with more severe manifestations of the disease with reduced life expectancy. Various modalities of treatment can be used depending on the individual case. Pharmacological agents include the use of bisphosphonates and RANK ligand inhibitors. Although surgery is the mainstay of treatment for GCT, denosumab, selective arterial embolization, or radiation therapy may have to be used for inoperable cases or where surgery would be functionally too morbid, especially in cases of GCT-PDB where the disease affects more commonly the axial skeleton. Clinical Message Management of GCT-PDB is challenging and requires multimodality management.
Surgery is the mainstay of treatment, but one should be aware of the full armamentarium available, including bisphosphonates, RANK ligand inhibitors, angioembolization, and radiation therapy, to downstage the lesion facilitating surgery or as a definitive modality of treatment in inoperable cases. The recent advances have led to successful cure and lower morbidity in the treatment of this rare condition.
2017-10-19T16:37:42.545Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "21b220ea7a63486f46f8aced5d9a2101338c3d46", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "21b220ea7a63486f46f8aced5d9a2101338c3d46", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235766356
pes2o/s2orc
v3-fos-license
Mining in the Newspapers: Local and Regional Media Representations of Mineral Exploration and Mining in Finland, Germany, and Spain The understanding of public debates over mineral exploration and mining largely originates from exceptional situations such as mining accidents or conflicts. Less is known about how mining is portrayed and understood in more conventional settings. What storylines dominate the local day-to-day public debate? This article presents results from a comparative case study focusing on newspaper coverage of mineral exploration and mining in three European countries representing different geological and socio-economic contexts. Newspaper articles from the Geyer-Erzgebirge region in Germany, the Andalusia region in Spain, and Northern Finland are studied. The sample looks into the period between September 2018 and February 2020 and shows that regional newspapers report about mining issues relatively intensively even in the absence of major accidents or other media events causing peaks of attention. The tone of the articles is generally neutral to positive towards mining activities, reflecting the specific local settings, historical experiences, and future expectations. Despite the different contexts of the three countries, there were considerable similarities in the topics highlighted, including common themes of mining revival, mining events and social interaction, history of mining, and damages related to mining. Past, present, and future employment opportunities related directly or indirectly to the mining sector are key storylines. Another recurrent underlying theme is the need to balance environmental and safety risks and socio-economic prosperity, typically covered through ordinary disputes among the mining sector, public authorities, regional non-governmental organizations, and local initiatives. Introduction The responsibility and sustainability of the mining sector have been publicly discussed under two general narratives. One narrative focuses on the economic importance of the sector and its fundamental role as a provider of key raw materials needed for the basic functions of the economy [1][2][3]. This narrative has traditionally emphasized the gradual accumulation of wealth and prosperity through the utilization of natural resources by manufacturing and industries. More recently, this narrative has also highlighted the need for rapid socio-technical change to meet the sustainability challenges and the role of the mining sector as the provider of rare earth metals and other critical resources needed to transform societies to become carbon-neutral [4][5][6]. For example, it has been cautioned that the contribution of wind power and solar photovoltaics to the EU transition to green energy may be limited due to shortages of several critical materials [7]. It is stated that the recycling of materials and substitutes obtained from synthetic or biogenic renewable materials will not be able to entirely replace the extraction and utilization of non-renewable mineral deposits in the foreseeable future. The other narrative is more critical towards the mining sector. It casts light on harmful environmental and social impacts, accidents, and risks. Heated debates over planned, operating, or closed mines direct criticism towards the whole sector and underline the need for responsible practices during all phases of mining [8,9].
The narrative includes both actual effects and risks, often making claims about insufficient regulatory frameworks, poor public accountability, a lack of transparent and fair governance, an unfair division of benefits, corruption, and a neglect of the rights of indigenous people and the livelihoods of local communities [10][11][12][13]. Critical discussions around mining taxes and royalties are also a part of this narrative. In order to address these and other concerns, concepts such as "green mining" or "social license to operate" have been introduced and procedures concretely supporting "sustainable mining" have been developed [14][15][16]. However, under this narrative, such initiatives can be considered attempts to legitimize the use of natural resources without properly ensuring democratic processes at the local level and a "right to say no" of the local citizens [17]. Media is a key arena where these narratives are constructed and reproduced. Traditionally, newspapers have played a key role as gatekeepers of public debate and arenas of interplay between news sources, journalists, and audiences. However, relatively few research articles focus on the newspaper coverage of mining [8,[18][19][20][21][22]. More studies use media reporting as one source of data complementing document analysis, interviews, or public surveys [9,[23][24][25][26][27]. While such mixed-material studies are certainly useful, they are unable to paint a larger picture of the media debate and its potential influence. These and other previous studies of mining debates have mainly focused on narrowly defined and isolated case studies [28,29]. They have provided a rich understanding of the highly variable contexts of mining debates and have provided valuable lessons on possibilities to prevent problems [30][31][32]. However, such lessons are often closely tied to a certain local context, and our understanding of what lessons may be more widely applicable remains thin. As the mining industry is increasingly international, there is a pronounced need for comparative studies allowing learning from multiple cases. Previous studies have typically focused on public debates during exceptional circumstances, such as the aftermaths of mining disasters or other short periods of heated controversies [3,8]. Focusing on such extreme cases is important in order to learn how to avoid such situations, but it is also important to better understand what kind of storylines and topics emerge or dominate the debate during the longer periods of non-heated routine media reporting that sets the baseline of public and policy agendas. Framings and tones of such conventional or everyday reporting influence the long-term development of public opinion and awareness that strengthens or erodes trust towards the mining sector. This, in turn, influences policy decisions and willingness to invest. Furthermore, previous studies indicate that the public mining debate may not be as polarized as it may be assumed based on the high-profile controversies. Polls and studies aiming to determine the average citizen's position towards mining, within the European Union and in other contexts like Canada or Australia, suggest that mining activities are mostly tolerated if they adjust to local governance [33][34][35]. Trust is clearly a key factor for mining acceptance, but the role of media coverage deserves further attention.
This research aims to contribute to filling these gaps by focusing on the media debate of mining from a comparative case study and routine reporting perspective by asking: • How does the regional and local newspaper coverage address mining and mineral exploration activities in three European case regions? • What storylines dominate the debate, and what are the main topics in each case region? • Who are the key actors presenting their views? • What is the tone of the news coverage towards mining and exploration? • How are different risks and opportunities of mining and exploration brought up? The next section briefly outlines the cases studied here and describes methods for data collection and analysis, followed by a presentation of the results and discussion. Finally, conclusions summarizing key results are presented. Methods and Materials Our materials originate from the INFACT project (Innovative, Non-Invasive and Fully Acceptable Exploration Technologies) that aimed to develop socially accepted, environmentally friendly, and technologically advanced methods for raw material exploration in the European Union. The project conducted detailed studies in three European mining regions. The selected case regions represent different geological, political, and socio-economic contexts but share a cross-national EU framework. The combination of regional cases was designed to allow an examination and comparison of differences and similarities across different contexts of mining and exploration. The cases are the Geyer-Erzgebirge region in Eastern Germany, the Andalusia region in Southern Spain, and the Sodankylä region in Northern Finland [33]. The analysis presented here focuses on regional level newspaper coverage covering these and adjacent areas. Regional level newspaper coverage was considered the best basis for the analysis since exploration and mining are place-based activities with considerable regional and local social, economic, and physical effects, as well as high awareness among the local community [36,37]. In addition to geological differences of the sites, the three regions represent different economies, policy systems, and media systems that provide distinctive contexts for public mining debate. The media and communication systems in Germany and Finland have been grouped under the democratic corporatist media model [38]. In this system, the media is coupled with a strong welfare state and democratic corporatism [39]. Media's autonomy is highly appreciated. Newspapers are privately owned, and the societal role of the regional and local press has been relatively strong. Ownership of the commercial media has been somewhat concentrated, and public service broadcast companies have an independent and strong position. The media system of Spain has been labelled as a polarized pluralist model typical for relatively young democracies, countries with strong government intervention in the economy, or countries that have an elite-oriented press [38]. In this polarized model, journalism is less professional and the links between political actors and journalists are strong. The communication industry is dominated by big multimedia corporate groups, coexisting with fast-growing independent digital newspapers. Despite the increasing importance of social media, the fragmentation of the media industry, and the decline of readership of most printed newspapers, regional newspapers still play a key role as hubs connecting top-down and bottom-up information flows [9,40].
Newspapers have been widely read, and the societal role of high-quality newspapers is still relatively strong in Finland and Germany [41,42]. Spaniards have a low level of trust towards the media, especially in comparison to Finland [43]. Spain is considered to have a satisfactory press freedom ranking (ranked 29 by the World Press Freedom Index 2020 [44]), close to France (34) or the UK (35), but far from Finland (2) or Germany (11). Newspapers influence the public and policy agenda by presenting or omitting certain topics and by framing the presented topics in a certain way [45,46]. While these are not necessarily conscious choices made by the media entities, they define the topics for the public discourse. The agenda-setting function of the newspapers means that those issues that are highlighted by the news are likely to be the most influential and societally salient [47]. Framing denotes the practices of news production that highlight (or omit) certain aspects of a perceived reality to intentionally or unintentionally promote a certain problem definition, causal interpretation, moral judgement, or policy recommendation [48]. The sample studied here covers an 18-month period from September 2018 to February 2020. Common procedures for collecting and analyzing the data were agreed on in order to ensure the best possible comparability among the three cases, but flexibility was needed to enable efficient data collection and analysis, considering the different media systems, differences in mining vocabulary and expressions in three languages, and the availability of news articles via electronic databases. Materials include news stories, editorials and columns by journalists, and opinion pieces by representatives of the public and the stakeholders. The German case included media articles on mining, raw material, and exploration, focusing on the regional level of "Erzgebirge/Saxony" in East Germany, an old mining region in transformation. Here, direct access to the online newspaper, supplemented through the news platform "Genios," provided the data. The data originated from the regional newspaper "Freie Presse," using the German keywords for "mining," "mineral exploration," and "raw material." After screening out irrelevant hits, a total of 214 media articles with full-text access were examined. The Spanish case focused on the Huelva and Seville provinces of the Autonomous Region of Andalusia in southern Spain, a traditional and active mining region. The newspapers consulted included the local level newspaper "Tinto Noticias" and regional newspapers with readership focusing on more populated areas within these provinces. These regional newspapers are "Huelva Informacion" and "Huelva Ya" for the Huelva province, and "Diario de Sevilla" and "ABC de Sevilla" for the Seville province. The data was screened by using the Spanish keywords for "mining" and "mineral exploration." A total of 177 articles were gathered for the selected timeframe. In Finland, the regional-level newspaper coverage of Northern Finland, a relatively new mining region, was screened based on the "ePress" database. The newspapers were available as digital replicas of the original printed ones. The regional focus was limited to Finnish Lapland, resulting in hits from the following newspapers: "Kittilälehti," "Koti-Lappi," "KotoSalla," "Kuriiri," "Lapin Kansa," "Lappilainen," "Lounais-Lappi," "Luoteis-Lappi," "Pohjolan Sanomat," "Uusi Rovaniemi."
Most hits originated from the leading regional newspaper "Lapin Kansa" that is read mostly in northern Finland, where most of the mines of the country are located. It publishes six issues per week, while most of the other newspapers have a smaller circulation and publish one or two issues per week. Based on the testing of different search strings, the Finnish equivalent for "ore prospecting/exploration" was considered to be the most suitable one. Search strategies focusing more widely on mining were unfeasible because they produced a sample that was too wide and unfocused. Furthermore, initial screening suggested that the news items mentioning exploration often have mining as their main focus. The Finnish sample included 142 articles. The sample from all three regions allowed charting of the overall volume and temporal development of the mining debate, as well as identification of the main topics presented. Because of the heterogeneity of the materials, a qualitatively oriented content analysis [49] was employed in order to identify the tone of the news item towards mining, key actors having a voice, and the specific topics highlighted. The categorization of the tone of the articles was based on simple categories of "positive," "neutral," or "negative" in order to secure reliable interpretations and comparability between cases and with earlier research [9,37]. The key actors that were present in the news were coded from the texts and aggregated to a higher level in order to allow country comparisons. Identification of the key topics involved subjective assessment focusing not only on the manifest content but also on the latent content of the news items and visual materials, when available [18]. Interpretations were based on iterative rounds of reading by researchers and common discussions aimed to produce a shared understanding of the correspondence of topics identified across different countries. Overall Characteristics and Volume of the Coverage In all three countries, mining issues were addressed by the newspapers on a regular basis without significant breaks (Fig. 1). The volume of reporting tended to increase over time, but due to the relatively short study period, trends should be examined with care. It remains open whether the trend will continue, peak, or drop for any reason or incident. In Germany, approximately 12 articles of the newspaper "Freie Presse" addressed mining and mining-related topics per month. The number varied from 0 in March 2019, to up to 27 articles in November, and 24 in December 2019. The peak was partly influenced by the Christmas time and the festive events around mining, which take place at the end of each year. In Spain, on average, 2 news items were published per newspaper and month (10 articles per month). The trend during the 18-month period was influenced by peaks covering certain relevant events, accidents, and lawsuits, like the III International Mining and Minerals Hall in Seville (October 2019). Also, the results from the Finnish case, based on a narrower focus on exploration, suggest that the regional level newspapers cover mining and mineral exploration regularly. The regional newspaper "Lapin Kansa" published, on average, over four (4.3) news items per month mentioning or focusing on mineral exploration.
The coverage was characterized by fluctuations related mainly to the activity of exploration and mining firms and related administrative processes. Tone of the Debate Most of the news items were characterized by a positive or neutral tone towards mining and exploration (Fig. 2). Nearly half of the 214 German articles gave positive messages. Roughly one-third of the tone could be characterized as "neutral," while 17% were clearly on the negative side. Likewise, in Spain, three out of four news items were positive, while only about 10% had a negative tone towards mining. The strong historical, social, and economic links between the mining sector and the local stakeholders were a key explanation for the rather benevolent attitude towards mining in regional media. As examples, in Spain, the legal aspects of a partially failed environmental authorization of a mine were covered by many articles, some quite illustrative: "[Locals] claim the environmental authorization has been paralyzed for 400 days" (TintoNoticias, 21 January 2020), "Miners and mayors threaten to protest if the authorization is not solved soon" (TintoNoticias, 9 January 2020), "[Union] joins defense of mining activity in [the region]" (TintoNoticias, 2 July 2019) or "[Political party] urges for the granting of the authorization of the mine" (HuelvaYa, 20 January 2020). There were many positive articles covering the International Mining Hall celebrated in the region as well, e.g., "[Regional capital] mining global showcase" (Diario de Sevilla, 15 October 2019). In Germany, a number of articles focused on the new era of mining, which is often positively framed. Headlines like "First steps for the return of mining" (Freie Presse, 18 September 2018), "Saxony will remain a mining country" (Freie Presse, 17 June 2019), and "Bergeschrey will shape Luchsbach valley again for decades" (Freie Presse, 10 April 2019) showed a neutral to positive tone in the media. However, a mixed tone with both positive and negative (concerned) arguments and news was linked to the new mine in Poehla, as these headlines indicate: "The ore mine in Pöhla will mine tungsten and fluorspar from 2021" (Freie Presse, 3 July 2019), "Ore mine/city insists on their interests and those of the citizens" (Freie Presse, 13 April 2019), and "Mining plans in Poehla move people" (Freie Presse, 28 February 2020). In Finland, the share of news items with a neutral tone was high. The results suggest that the share of news items with a positive tone towards exploration and mining decreased in 2019 if compared with 2018. However, the first two months of 2020 showed a clear dominance of positive tone (52% of the coverage) and a sharp decrease of coverage with a negative tone (to 9%). This was mostly explained by the lack of critical opinion pieces and several editorials supporting the mining industry by the regional newspaper "Lapin Kansa." Overall, the editorials written by the senior journalists were dominated by a positive stance towards mining, for example, a piece making a direct plea highlighting the expected economic and employment gains: "Let's not expel the mines" (Lapin Kansa, 7 June 2019).
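To make the tone comparison concrete, the following is a minimal sketch of how such category shares can be tallied from coded items; the records, function name, and values are hypothetical illustrations of the three-category coding described in the methods, not the project's actual tooling.

```python
from collections import Counter

# Hypothetical coded records: (country, tone) pairs assigned by researchers
# using the "positive" / "neutral" / "negative" categories described above.
coded_items = [
    ("Germany", "positive"), ("Germany", "neutral"), ("Germany", "negative"),
    ("Spain", "positive"), ("Spain", "positive"), ("Spain", "neutral"),
    ("Finland", "neutral"), ("Finland", "positive"),
]

def tone_shares(records):
    """Share of each tone category per country, for Fig. 2-style comparisons."""
    counts = {}
    for country, tone in records:
        counts.setdefault(country, Counter())[tone] += 1
    return {country: {tone: round(n / sum(cnt.values()), 2)
                      for tone, n in cnt.items()}
            for country, cnt in counts.items()}

print(tone_shares(coded_items))
```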
This result suggests that mining (including all phases of mining, historical, recent, and future prospects) is strongly governed by the local level and by authorities (e.g., institutions in charge of mining, such as Oberbergamt Freiberg). Surprisingly, only a few mining companies were cited in the news, the one starting a new mine at Poehla dominating the scene. The tone towards the mining company varied over time but had clear downward tendencies due to the results of a public hearing and critical requests from the local community. Furthermore, the location of the shaft which will be used later for the underground mine, and the effects on the landscape, traffic, and environment raised concerns. In the Spanish case, the mining companies were the most salient stakeholders, followed by public authorities. Also, the local communities were quite frequently cited, mostly because of the local and regional scope of the newspapers. The visibility of environmentalists, mining institutions, unions, and academia was much lower, but their views were contrasted against the major actors. Regional media recurrently featured actors related to mining and mining institutions, events, investments, policies, and budgets. On the one hand, the local community and stakeholders demanded to be informed frequently about mining. Mining is not just an economic activity in the region, but part of the identity of its inhabitants. On the other hand, this could be explained by the mining companies' desire to influence the public agenda to maintain a positive attitude towards mining. In Finland, the mining and exploration companies were the most prominent group of actors, followed by regional-level authorities, and representatives of municipal administration and policymakers. The public visibility of environmental non-governmental organizations (NGOs) was weak. In many cases, the exploration and mining firms were able to strongly influence the agenda-setting by being the only party interviewed by journalists. Short news items were often based on press releases and other communication material provided by the exploration companies, while longer news items included interviews with representatives of the companies and occasional comments from local or national policymakers, authorities, and researchers. The tone of short pieces was typically neutral, while longer pieces presenting commentaries from representatives of exploration or mining companies often had a positive tone. Interestingly, even though the economic importance and the potentials of mining were often referred to, actual assessments of the local, regional, or national economic impacts of mining were almost completely missing from the debate. The local community was cited and present in all three regions, but had a secondary role. Local concerns were addressed by the media, but usually, they were introduced by other actors, mainly public authorities. Other stakeholders present in the media debate were academia and research institutions, together with environmental NGOs, either supporting certain argumentations or pushing for their own agenda. Unions and political parties were only relevant in Spain, suggesting a more politicized discussion than in the other countries. Key Topics and Storylines Several common topics were identified across the countries, while some key topics of the debate reflected specific country contexts (Table 1).
In Germany, due to the long and rich history of mining in the region of Erzgebirge and the highly active and successful use of tourism potential, the most common topics included historical mining in general, specific events for mining (often associated with the history of mining), and UNESCO world heritage, which means that parts of Erzgebirge belong to the world list of most important heritage sites. Damages from old mining were another topic high on the agenda, reflecting the long and costly restoration of mining sites and remains dating from the Middle Ages up to the recent past. The popularity of the topics new mining in the ore mountains (11% of the articles) and exploration (6%) indicates, on the one hand, hopes for a new era of mining and its increasing importance in different sectors, and, on the other hand, tensions between society and key actors in the mining area. The topic of new mining in the ore mountains addressed general issues, such as the potential for new mines or the chances of geothermal power and the long-lasting reputation of a traditional mining region. One specific project was addressed frequently (9%). The development of a new underground mine in the municipality of Poehla was described through articles informing the citizens about the exploration phase, about public events, and also reporting about complaints, fears, and hopes for the future and what a mine means for the local community. The main topics in the Spanish newspaper coverage were lawsuits, mentioned in 46% of the articles, followed by mining revival (32%), and the safety of mines (24%). Safety issues were increasingly mentioned towards the end of the study period. The media treatment of the other topics remained at a constant level. Mining dependency awareness and mining decline were secondary topics. Somewhat surprisingly, past mining accidents were not a major topic despite the legacy of the infamous Aznalcollar disaster. This was the worst environmental catastrophe in Spain in recent times, caused by a dam accident that released mining wastes into the Guadiamar River, one of the main water sources of the wetlands of Doñana National Park [50][51][52]. Instead, more attention was given to current issues, such as lawsuits directed at the Rio Tinto mining company, Atalaya Mining, as a result of an incomplete environmental permit process in 2019. Most of the attention was directed at metallic mining, while exploration received less attention. Because of the selected search strategy, mineral exploration was the most prominent (55% of the articles) topic in the Finnish sample. However, a substantial share of the news items (31%) focused on mining and only mentioned exploration in passing. Both types of news generally emphasized the importance of the mining industry for the economy and employment and often discussed the interactions between the mining sector and the rest of the society. The news items that had exploration as their main topic most often informed about permit decisions for plans to study certain areas. These items were typically short news based on announcements by the exploration firms. News about concrete progress of the exploration process was less prominent. Safety issues related to the mining industry were not present in the sample.
Likewise, coverage specifically focusing on past mining accidents was missing from the sample, despite the importance of the recent problems and accidents in the infamous Talvivaara mine in Eastern Finland [8,27]. Occasional news items related to legal processes typically described complaints against permits for exploration. Discussions over the need to renew the Finnish mining act brought up related fundamental questions of ownership of mineral resources and fairness of the distribution of socio-economic benefits. More detailed observations revealed some interesting insights. In Finland, several front-page news items and long reportages with a positive tone towards exploration were published. For example, one two-page reportage titled "The smell of gold wafts from the deeps of the fell" introduced two persons from a Canadian exploration firm. The comments by these interviewees built highly positive framings of exploration with strong connotations of gold mining as a desirable and beneficial activity. The reportage reproduced the myth of a gold rush and evoked emotive responses often attached to the utopian views of northern resources [53] by depicting Finnish Lapland as a place of unimaginable richness: "Here you can accidentally trip and fall because of a piece of stone containing gold" (Lapin Kansa, 9 August 2019). In Spain, the most positive news items were linked to the celebration of the International Mining and Minerals Hall in Seville, as well as the reports on the mining company's economic growth and investments. In Erzgebirge, Germany, the mineral exploration at Poehla and the operational activities to open a new mine brought about many critical viewpoints from the media. This demonstrates how easily and quickly a public debate can turn negative. In all three countries, environmental risks were often mentioned in passing, or they were a latent theme of the discussion. The focus was on potential future risks, while only a few items specifically focused on current or past environmental effects of mining. The actual environmental effects of exploration received relatively little attention. In addition to environmental risks, the environmental benefits of the mining industry were discussed. In Finland, one subtheme of the topic of mining revival was the importance of minerals for the development of an emerging battery industry cluster and, more widely, for the transition of the whole energy system. Women's careers were a topic present to varying degrees in all countries but, generally, did not dominate the debate. Actors cited in the news were typically male, and women were presented as external actors joining the mining sector or as being underrepresented. For example, in Spanish newspapers, the focus was on training programs for women and emerging women self-empowering initiatives within the mining sector ("The association Women in Mining and Industry Spain comes into existence in Huelva," Huelva Ya, 11 February 2020). Discussion Media representation is one key part of the complex societal dynamics affecting the public image and social acceptance of the mining sector. Media, policy, and public agendas are intertwined in different ways in different societies, and it is, therefore, challenging to draw general lessons. However, our cases indicate that some topics are reported regularly, while others are more likely to be addressed irregularly or only occasionally. Our cases suggest that the mining news regularly feature issues related to employment, investments, and social events.
The results also suggest that ordinary reporting of mining issues, often characterized by a neutral or positive tone, differs from reporting focusing on specific extraordinary events characterized by a negative tone. High-profile events such as lawsuits or mining accidents are characterized by rapidly fluctuating coverage and a potentially long shadow of suspicion and mistrust, as indicated by the Talvivaara and Aznalcollar cases [8,52]. This highlights the need to take the specific history of risks into account when communicating about present activities [54]. Past experiences of mining are reproduced by the news and influence current perceptions partly regardless of the current practices of exploration and mining companies [11,55]. The cautionary tales from history easily become overemphasized because the public generally lacks knowledge of the latest technologies, such as non-invasive exploration technologies [34]. Also, the criticism directed towards specific cases and accidents is easily perceived as criticism towards the entire mining sector. Mining revival was a key general theme in all the countries. Newspapers covered it in different ways, from broad reportages to focused storylines describing exploration activities, plans for new mines, private investments, innovative mining-related technologies, and employment or education opportunities. The Finnish newspapers reported on exploration in various locations and presented speculations over several potential new mines or the reopening of old mines. This was different from German and Spanish coverage of new mines focusing more on the future potentials of a limited number of certain individual mines. Mining revival raised concerns regarding social issues, economic implications, and safety and environmental risks. They were addressed in all three case regions from a double perspective. First, there was a short-term point of view highlighting environmental damages and accidents, as well as current issues related to perceived societal fairness and inequalities (gender gap, legal issues, and distribution of economic gains). Second, there was a long-term point of view on the integration of mining activities into a successful model for regional development. The future scenarios discussed by the regional newspapers were often contrasted with specific historical mining experiences and past accidents. Therefore, future outlooks become deeply linked with the region's past, heritage, and identity, as well as wider considerations of world markets of raw materials [9,11]. These include the EU's concerns of critical minerals and uncertainty of the markets in Asian ore production. The COVID-19 crisis has only deepened the mistrust in Asian markets [56], serving as an example of rapid changes in mining debates. Overall, the short-term concerns shared more common elements across all three case regions, while the representations of long-term concerns were more versatile. The long-term future outlooks in Finland focused mostly on the knowledge-intensive economy and transition of the energy sector, stressing the importance of critical minerals for information and communication technologies and the prospects of the batteries industry. In Germany, the future outlooks were built based on the long history of mining and its cultural implications, stressing the importance of tourism and the service sector.
In Spain, the debate revolved around the mid-term continuity of the metallic mining sector and investment possibilities guiding the region towards a more sustainable and responsible economic model with high employment and resilience to future crises. Regarding environmental issues, in Spain, the environmental permit of one single mine (Aznalcollar) was the most frequently referred-to topic in all newspapers. Environmental concerns were raised in other main storylines as well, linking the reopening of the Aznalcollar mine and the accident in Cobre Las Cruces with potential water pollution. While not such a big topic in the Geyer-Erzgebirge region in Germany, environmental concerns were raised relating to the new mine in Poehla and its location close to a protected area and nature reserve. In Finland, environmental concerns were mainly brought up by occasional and often indirect references to the notorious Talvivaara mine and by speculations about potential environmental risks of future mining activities. However, contrary to some expectations [9,46], the regional-level debate was not dominated by a long-lasting confrontation between the mining industry and its adversaries spearheaded by environmental NGOs. Lawsuits were not frequently covered in Finland or Germany, but were a hot topic in Spain. This was largely coincidental and due to specific legal processes during the study period. In Finland, the debate over a potential need to renew mining legislation was a prominent theme, especially related to the responsibilities of mining firms and the distribution of economic profits and socio-ecological risks. In Germany, this debate was dominated by the transformation of the historical coal mining sector, which is relevant for the region, and the phase-out of coal mining nationwide in the long term [19,57,58]. Despite the negative and neutral news items addressing ongoing lawsuits against mining companies, the tone of news items in Spain was most often positive, reflecting the high level of social acceptance and regional support for mining activities. In Andalusia, there is a high interest from public authorities and mining companies in promoting a positive image for the mining sector, which leads to the publicizing of investments and the funding of events with a strong social component [59]. For Erzgebirge, Germany, the tone of the articles was also neutral or slightly positive. This was mainly explained by the focus of the majority of articles on less controversial issues of historical mining heritage and its use for tourism, as well as events attached to recreation activities. The tone used by the media was more positive towards exploration than towards the entire mining sector. Mineral exploration was mostly represented as an innovative mining-related activity. Exploration was framed as an activity ultimately benefitting the mining companies, but also as an effort conducted by public authorities and academia that may have a beneficial impact on future regional development. In Spain, less than 20% of the mining news during the study period covered exploration. It is a secondary topic in the Iberian Pyritic Belt, where the active mining operations dominate the media debate. Future-oriented coverage focusing on exploration and prospects of new mining or revitalization of old mining areas was more pronounced in Finland and in Germany.
In Finland, the news items focusing on exploration had a particularly positive tone, often related to the expectations of economic and employment benefits and the prospects of new energy technologies related to Finland's aim to be a zero-carbon society in 2035. Even though the positive views dominated the debate, a polarization between views emphasizing the opportunities for economic and employment benefits of mining and views emphasizing potential negative impacts on nature and local livelihoods was present. Such a polarization can be deepened in the absence of integrating concepts [60]. Environmental risks of mining, potential threats to tourism, and, increasingly, issues related to fairness, liability, and ownership of mineral resources were concerns that were raised. In Finland, editorials written by the newspaper staff were clearly in favor of mining and exploration, while critique was presented mainly in opinion pieces. Journalistic news generally had a neutral or positive tone towards exploration and mining. Our results support earlier studies showing that a major part of media's reporting, possibly as high as one-third, is based heavily on press releases produced by various societal actors, including companies [61]. This could mean that at the local and regional levels, exploration companies are able to emphasize the positive economic expectations, while downplaying economic risks or realities, which serves the companies' interests and their need to portray the project in an attractive manner to potential investors. This can build unrealistic expectations for the local communities and skew the local deliberation on whether they should accept or reject such projects in their region. Our results suggest that this risk exists both in the context of the democratic corporatist media model of Finland and Germany and in the polarized pluralist media model of Spain. The relatively weak voice of local communities was a surprising result, given the regional and local focus of the newspapers studied here. Also, academics could play a more active role in regional media outlets to increase public engagement on mining issues and related challenges, such as energy transitions and climate change [62]. Especially in Finland, the rather low visibility of environmental NGOs was surprising, given the high visibility of social movements such as Stop Talvivaara and several smaller local movements elsewhere and the active opposition towards exploration in the case study area [11]. The reasons leading to actors being vocal or silent in the news clearly require further investigation. The relatively weak visibility of NGOs may also be because these organizations increasingly focus on influencing through other channels, such as campaigning and networking through social media [55,63]. While the future role of regional newspapers remains unclear, it appears that there is room for a more integrative role bringing different voices together as newspapers are increasingly read through digital platforms. One caveat in our study is that the debate in certain case regions during a certain time period allows only tentative conclusions about the debate elsewhere and at other times. Therefore, further comparative analysis and follow-up studies are needed in order to better anticipate the emerging concerns and changes in media framings and public attitudes. Conclusions Whether an issue is highlighted or omitted by the media reporting obviously influences the awareness of stakeholders and the general public.
In addition, the tones and framings of news coverage have implications for investment decisions and policy priorities. The results presented here suggest that the tone of routine reporting by local and regional newspapers is more positive towards the mining sector than the tone during exceptional situations that typically describe accidents, risks, or controversies. The positive tone of regional-level newspaper coverage reflects not only different mining histories of specific areas, but also different socially constructed perceptions and expectations. There are two competing but intertwined general storylines, one emphasizing positive economic and employment opportunities and the other negative environmental and social implications of mining. These storylines are influenced by stakeholders with pro-mining or anti-mining agendas. The negative storyline is primarily pushed by associations and locals concerned with environmental issues and risks, such as water usage, waste management, and sustainable future alternatives. The negative storyline easily becomes strengthened by isolated cases of accidents and misconduct serving as cautionary examples, affecting the perceptions of the entire mining sector far beyond specific cases. The positive storyline covers the benefits of the mining sector, including activities of the mining companies at a regional level and the funding of social activities and events. These are related to prospects of economic growth, new investments and employment opportunities, and public support of mining from different local stakeholders. This positive attitude of the media towards mining is partially dependent on the influence of the mining sector on the local economy, and it may rapidly dwindle if the economic importance of the sector decreases. Over the longer term, the positive storyline may persist because of high and even increasing societal demand for many products of the mining industry. However, if the expected regional benefits are not achieved or decline, for example due to automation or the use of non-local workforce, while the "environmental costs" continuously make headlines, it is possible that the rationale for accepting mining activities is questioned. This highlights that unless practices and business models that genuinely meet responsibility and sustainability criteria are convincingly implemented, the currently prevailing positive storyline will be increasingly challenged. Author Contribution The study conception, design, and data collection were performed by Jari Lyytimäki, Ludger Benighaus, and Javier Gómez. The first draft of the manuscript was written by Jari Lyytimäki. All authors contributed to the analysis and read and approved the final manuscript. Funding Open access funding provided by Finnish Environment Institute (SYKE). This research has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement nº 776487. Conflict of Interest The authors declare no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-07-09T05:19:58.962Z
2021-07-06T00:00:00.000
{ "year": 2021, "sha1": "c00e85899b74250aa45595ea7bc01475ef133854", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s42461-021-00453-4.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c00e85899b74250aa45595ea7bc01475ef133854", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
216449508
pes2o/s2orc
v3-fos-license
Effect of Cell Size on the Performance and Temperature Distribution of Molten Carbonate Fuel Cells Molten carbonate fuel cells (MCFCs) are high-operating-temperature fuel cells with high efficiency and fuel diversity. Electrochemical reactions in MCFCs are exothermic. As the size of the fuel cells increases, the amount of heat from the fuel cells and the temperature of the fuel cells increase. In this work, we investigated the relationship between the fuel cell stack size and performance by applying computational fluid dynamics (CFD). Three flow types, namely co-flow, cross-flow, and counter-flow, were studied. We found that when the size of the fuel cells increased beyond a certain value, the size of the fuel cell no longer affected the cell performance. The maximum fuel cell temperature converged as the size of the fuel cell increased. The temperature and current density distributions with respect to the size showed a very similar pattern. The converged maximum temperature of the fuel cells depended on the gas flow condition. The maximum temperature of the fuel cell decreased as the amount of gas on the cathode side increased. Introduction The operating temperature of molten carbonate fuel cells (MCFCs) is 580 °C or more [1]. The MCFC has the advantage of using an inexpensive catalyst and internal reforming due to its high operating temperature. However, an MCFC has the disadvantage that the fuel cell reactions are exothermic, which reduces the lifetime of the stack [2]. A high cell temperature evaporates the liquid electrolytes [3]. A non-uniform electrochemical reaction in the fuel cell causes large temperature deviations. Large temperature differences in MCFCs induce thermal stress that accelerates fractures in the electrolyte matrix, compromising the long-term operation [4]. There has been considerable research into MCFC stacks. Yoshiba et al. [5] studied cell performance and temperature profiles in various flow geometries and types. They found that a stack with a co-flow has some performance advantages and produces a reduced maximum temperature. Koh et al. [6] investigated an internal-reforming-type 5-kW stack with various parameters, including fuel cell size, gas utilization, and operating temperatures. Kim et al. [7] developed simulation procedures for MCFCs using three-dimensional computational fluid dynamics (CFD) analysis. The current collector was assumed to be a porous medium consistent with Darcy's law. Kim et al. [8] compared the effects of flow types on performance, current density, and temperature. In their work, counter-flow type MCFCs showed the best performance. Lee et al. [9] studied the temperature distribution and performance of the newly developed 100-cm² cell frame. At this size, counter-flow type MCFCs produced more uniformly distributed temperatures and current densities without any hot spots. However, many previous studies have only analyzed one or two stack sizes. Consequently, it has been difficult to determine the relationship between temperature distribution, performance, and size in the design of large stacks. In this study, we analyzed the flow, mass transfer, and electrochemistry to determine current-density and temperature distributions relative to the fuel cell size and flow characteristics. We studied co-flow, counter-flow, and cross-flow configurations in rectangular-shaped fuel cells with lengths of 0.1 m, 0.5 m, 1 m, 1.5 m, and 2 m.
We used CFD to determine the relationship between cell size and the temperature distribution. Reaction Models MCFCs are composed of a cathode, an anode, and a matrix with electrolytes, such as K2CO3, Na2CO3, or Li2CO3. MCFCs use NiO as the cathode, γ-LiAlO2 as the matrix [9], and an alloy of Ni and Ni-5wt%Al as the anode. The structures of the electrodes and matrix used in previous studies are shown in Figure 1. When fuel gases, such as H2, H2O, and CO2, are supplied to the anode, an oxidation reaction occurs between the hydrogen and carbonate ions that releases electrons. The electrons are returned to carbonate ions through a reduction reaction with the oxidant gas at the cathode, producing electricity. Oxidation and reduction of carbonate ions are the basic reactions of MCFCs. Equation (1) describes the reactions in the anode and cathode: Anode: H2 + CO3^2− → H2O + CO2 + 2e−; Cathode: 1/2 O2 + CO2 + 2e− → CO3^2−. The voltage of the molten carbonate fuel cells (V_cell) was assumed to be constant throughout the cell. Here, the Nernst potential E_Nernst is given by Equation (2), E_Nernst = E_0 + (RT/2F) ln[(P_H2 · P_O2^(1/2) · P_CO2,cathode)/(P_H2O · P_CO2,anode)], where the first term on the right-hand side is the standard potential (E_0) from the change in the molar Gibbs free energy. The local current density is defined in Equation (3), J(x,y) = (E_Nernst(x,y) − V_cell)/R_total(x,y). Total resistance is defined as the sum of the internal resistance, anode resistance R_a, and cathode resistance R_c, i.e., R_total = R_ohm + R_a + R_c. Equation (4) shows the calculation for R_ohm as the internal resistance including the contact resistance, with R_a and R_c as the resistance components due to polarization and diffusion; for the anode, R_a = 2.27 × 10^−9 exp(6435/T) P_H2^−0.42. We used the results from the experimental literature [10,11]. Unlike SOFCs (solid oxide fuel cells) [12] and PEMFCs (polymer electrolyte membrane fuel cells) [13], MCFCs show very complicated reduction and oxidation reactions; therefore, it is difficult to apply reaction-based equations [11]. The heat generated from the electrochemical reaction was determined from the sum of the enthalpy change of the reaction minus the electrical power produced.
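As a numerical illustration of the reaction model, the sketch below evaluates the Nernst potential of Equation (2) and the local current density of Equation (3) at the paper's inlet composition; E_0, V_cell, R_c, and R_ohm are assumed placeholder values chosen only to yield a plausible current density near 1000 A/m², not values reported by the authors.

```python
import math

R = 8.314      # J/(mol K), universal gas constant
F = 96485.0    # C/mol, Faraday constant
T = 853.15     # K, the 580 C operating temperature

# Inlet composition from the paper: anode H2:CO2:H2O = 0.72:0.18:0.10,
# cathode Air:CO2 = 0.7:0.3 (O2 fraction 0.7 * 0.21), used as partial pressures.
p_h2, p_co2_a, p_h2o = 0.72, 0.18, 0.10
p_o2, p_co2_c = 0.7 * 0.21, 0.3

E0 = 1.02      # V, assumed standard potential at 853 K (placeholder)

# Nernst potential, Eq. (2)
E_nernst = E0 + (R * T) / (2 * F) * math.log(
    (p_h2 * math.sqrt(p_o2) * p_co2_c) / (p_h2o * p_co2_a))

# Anode resistance from the correlation visible in the text
R_a = 2.27e-9 * math.exp(6435 / T) * p_h2 ** -0.42
R_c = 1.5e-4   # ohm m^2, assumed cathode resistance (placeholder)
R_ohm = 5.0e-5 # ohm m^2, assumed internal resistance (placeholder)
R_total = R_ohm + R_a + R_c

V_cell = 0.85  # V, assumed constant cell voltage
J = (E_nernst - V_cell) / R_total   # local current density, Eq. (3)
print(f"E_Nernst = {E_nernst:.3f} V, J = {J:.0f} A/m^2")
```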
The heat produced by the fuel cell reaction was evaluated following [14]. In addition, a fast-acting water-gas shift (WGS) reaction occurs at the anode, as shown in Equation (7): CO + H2O ⇌ CO2 + H2. The equilibrium constant of the WGS reaction [15] is presented in Equation (8), and the enthalpy change is expressed using Equation (9). Flow Types MCFCs are categorized according to the relative flow directions between the anode and the cathode gases [8]. A co-flow type presents the anode and cathode gas flows in the same direction. A counter-flow type presents opposing flow directions. A cross-flow type presents perpendicular flow directions. These flow types are shown in Figure 2. Simulation Model To confirm that the current collector plate is a porous medium with an equivalent pressure drop and heat transfer coefficient, we applied unit cell analysis to determine the heat transfer coefficient and permeability coefficients. The current collector plate of MCFCs is a repeated structure with an open sheared trapezoidal shape. The length and height of each trapezoidal shape are 4 mm and 2.4 mm, respectively. Using a full model of the current collector plate requires excessive computation due to the plate's complex structure [16]. Consequently, we analyzed the current collector by assuming a porous medium with an equivalent pressure drop and thermal conductivity. We modeled the gas flow in the porous medium with Darcy's law, presented in Equation (10), u = −(κ/μ)∇p [17]. The detailed calculation result and the included homogenized properties are published in a previous paper [18]. The permeability (κ) of the current collector plate was 1.75 × 10^−7 m². As the heat transfer is produced by conduction, we applied the linear heat conduction equation shown in Equation (11), q = −K_eff ∇T. Using the difference in temperature as a result of the analysis, the K_eff for each direction was obtained and was used as an equivalent physical property. The heat transfer coefficient for each direction was calculated to be K_eff,X = 3.11 W/mK, K_eff,Y = 1.42 W/mK, and K_eff,Z = 1.19 W/mK [18]. Seven elements were used in the thickness direction, and a total of 80,640 hexahedral elements were used. In the analysis, we increased the number of elements by 30% and determined the number of elements whose analysis error in the next step decreased by 1%. The simulation model in this work is shown in Figure 3. The simulation used COMSOL Multiphysics v 5.4 with a conjugated heat flow model. We assumed a laminar inflow as the gas flow channel input boundary condition.
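The two homogenized properties can be exercised with a few lines of arithmetic, as sketched below; only κ = 1.75 × 10^−7 m² and K_eff,X = 3.11 W/mK come from the text, while the gas viscosity, velocity, and unit-cell heat-run numbers are assumed for illustration.

```python
# Darcy's law (Eq. 10): pressure drop across the homogenized collector plate.
kappa = 1.75e-7   # m^2, permeability reported in the text
mu = 3.9e-5       # Pa s, assumed gas viscosity at ~853 K
u = 0.5           # m/s, assumed superficial gas velocity
L = 1.0           # m, flow path across a 1 m cell
dp = mu * u * L / kappa
print(f"Darcy pressure drop: {dp:.0f} Pa")

# Eq. (11) used in reverse: effective conductivity from a unit-cell heat run,
# K_eff = Q * L / (A * dT). The inputs below are illustrative values chosen
# to reproduce the reported K_eff,X = 3.11 W/mK.
def k_eff(q_watts, length_m, area_m2, delta_t_k):
    return q_watts * length_m / (area_m2 * delta_t_k)

print(f"K_eff,X = {k_eff(0.311, 0.004, 0.004 * 0.0024, 41.7):.2f} W/mK")
```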
In the simulation of a fuel cell system, heat transfer is one of the most important problems. In a stack, heat transfer occurs between adjacent cell surfaces. Because the heat discharged from the adjacent cells and the heat discharged from the cell itself are the same, we calculated the heat transfer at these surfaces under an adiabatic condition; the analysis in the next section validates this assumption.

Simulation Conditions

The temperature of the inlet gas was the same as the operating temperature of 580 °C, and the operating pressure was 1 atm. The reference gas utilization of O2 on the cathode side and of H2 on the anode side was 0.4 at a current density of 1000 A/m^2, meaning that 40% of the gases were consumed during the electrochemical reaction at an average current density of 1000 A/m^2. The gas composition on the anode side was H2:CO2:H2O = 0.72:0.18:0.1, and on the cathode side Air:CO2 = 0.7:0.3 [6]. The thermal properties of the anode, the cathode, and AISI 316L, the material of the cell frame and the current collector plate, are summarized in a previous study [9].
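The utilization condition fixes the inlet flows through Faraday's law: at a utilization of 0.4 and an average current density of 1000 A/m^2, the required feeds per unit cell follow directly. A minimal sketch using the paper's compositions and a 1 m × 1 m cell:

F = 96485.0          # C/mol
i_avg = 1000.0       # A/m^2, average current density
area = 1.0           # m^2 (1 m x 1 m cell)
u_f = 0.4            # reference utilization for both H2 and O2

# H2 consumed: 2 electrons per H2 molecule
n_h2_consumed = i_avg * area / (2 * F)          # mol/s
n_h2_in = n_h2_consumed / u_f                   # feed needed for 40% utilization
n_anode_total = n_h2_in / 0.72                  # anode feed, H2:CO2:H2O = 0.72:0.18:0.10

# O2 consumed: 4 electrons per O2 molecule
n_o2_in = (i_avg * area / (4 * F)) / u_f
print(f"H2 feed:    {n_h2_in * 1e3:.2f} mmol/s")
print(f"anode feed: {n_anode_total * 1e3:.2f} mmol/s")
print(f"O2 feed:    {n_o2_in * 1e3:.2f} mmol/s")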
Thermal properties of the anode and cathode gas mixtures were recalculated continuously during the simulation because the gas composition changes through the electrochemical reactions. The specific heat, thermal conductivity, and viscosity were calculated using an ideal gas mixture rule [19].

Stacking Effects

We analyzed the stacking conditions of the fuel cell. MCFCs stack many unit cells together to produce hundreds of kW. In the simulation, the size of the fuel cell was 1 m × 1 m, and we analyzed the heat transfer conditions between unit cells. For the upper and lower surfaces of the stack, the heat transfer was modeled with a natural convection coefficient of 5 W/m^2K. The operating conditions were an operating temperature of 580 °C and an average current density of 1000 A/m^2. We compared the maximum cell temperature against the number of stacked unit cells; the results are shown in Figure 4.

With a single layer, the maximum cell temperature was 622.1 °C. The maximum cell temperature increased as the number of layers increased, as did the average temperature and the average current density. Beyond a certain number of layers, the cell temperature converged to a constant value. This converged state was compared with a single cell under an adiabatic condition, whose maximum temperature was 697.1 °C; the converged temperature corresponds to the case where insulating conditions are imposed on the upper and lower surfaces of the cell. One cell in the stack can therefore be simulated under adiabatic conditions, because the heat it receives from neighboring cells and the heat it discharges to them balance. Accordingly, the upper and lower faces of stacked cells can be treated as adiabatic, and all of the following analyses were conducted with these heat transfer boundary conditions.
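The ideal gas mixture rule mentioned at the start of this section can be illustrated with mole-fraction-weighted properties. The pure-species values below are rough placeholders for roughly 850 K, not data from the paper, and the simple linear rule is one common choice; Wilke-type rules are a frequently used refinement for viscosity and conductivity.

# Ideal-gas mixture rule sketch: mole-fraction-weighted properties.
# Pure-species values are illustrative placeholders for ~850 K.
species = {          # (cp J/(mol K), k W/(m K), mu Pa s)
    "H2":  (29.9, 0.38,  1.8e-5),
    "CO2": (51.0, 0.055, 3.6e-5),
    "H2O": (38.7, 0.078, 3.0e-5),
}
x = {"H2": 0.72, "CO2": 0.18, "H2O": 0.10}   # anode inlet composition

cp = sum(x[s] * species[s][0] for s in x)
k  = sum(x[s] * species[s][1] for s in x)
mu = sum(x[s] * species[s][2] for s in x)
print(f"cp = {cp:.1f} J/(mol K), k = {k:.3f} W/(m K), mu = {mu:.2e} Pa s")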
Effects of Cell Size with Respect to the Flow Types

Molten carbonate fuel cells are classified into co-flow, cross-flow, and counter-flow types according to the relative directions of the cathode and anode gases. The analysis was carried out according to the length of the cell and the relative flow direction, and we compared the temperature and current density distributions for each cell length and flow type. The fuel cell in the simulation was square, with side lengths of 0.1 m, 0.5 m, 1 m, 1.5 m, and 2 m. In the cases of co-flow and counter-flow, only the direction normal to the gas flow affects the temperature distribution [20,21]; therefore, the effect of size can be analyzed even for a square shape. The current density and temperature distributions were compared on a normalized length scale, and the corresponding figures are drawn with the same normalized length scale.

Co-Flow Cell

Our first analysis examined the effects of cell size on the temperature and current density distributions in the co-flow cell; Figure 5 shows the results. At a side length (L) of 0.1 m, the maximum current density occurred near the cell center. However, when the cell length increased to 0.5 m or beyond, the maximum current density occurred near the anode gas outlet. The maximum temperature occurred at the anode and cathode gas outlets: the generated heat did not concentrate at the cell center but was carried along the gas flow direction. As a result, the maximum temperature and current density occurred at the gas outlet, and the minimum values were observed at the inlet.
Figure 6 presents the temperature and current density distributions along the y-direction at the cell center (x = 0.5L), where the y-direction is the anode and cathode flow direction. Figure 6a,b show that L = 0.1 m produced results different from the other lengths. At cell lengths of 0.5 m or more, the current density and temperature distributions were identical, showing that cell performance was independent of size when the length of the cell was 0.5 m or greater (Figure 6: (a) normalized temperature and (b) current density distribution relative to the length for the co-flow type).

Counter-Flow Cell

The temperature and current density distributions for the counter-flow type are shown in Figure 7. The maximum current density occurred toward the anode inlet and the cathode outlet; at the inlet of the anode gas (the outlet of the cathode gas), the electrochemical reaction occurred the most. The maximum temperature occurred near the cathode outlet because the sensible heat of the cathode gas exceeded that of the anode gas; as a result, the heat flowed along the cathode gas flow direction. When the cell was longer than 0.5 m, the point of maximum temperature moved in the cathode gas flow direction, as in the co-flow case. At cell sizes of 1 m or more, the temperature and current density distributions were similar.
Figure 8 presents the temperature and current density distributions, where a normalized length (y/L) of 0 refers to the inlet of the anode gas and the outlet of the cathode gas. As the length of the cell increased, the positions of the maximum temperature and current density moved toward the cathode gas outlet. Furthermore, when the cell length exceeded 0.5 m, the concentrated current density and temperature were observed near the cathode gas outlet. The temperature and current density distributions were nearly identical at cell lengths of 1 m and greater.
Cross-Flow Cell

The simulation results for the cross-flow type are presented in Figure 9, which shows that the maximum temperature was concentrated toward the anode and cathode outlets. Because the cathode gas carries a large sensible heat, the maximum temperature was biased toward the cathode gas flow direction. The maximum current density occurred toward the anode gas inlet. For cell sizes of 0.5 m or greater, the temperature and current density distributions showed similar results (Figure 9: current density and temperature distribution relative to the length for the cross-flow type).
Figure 10 presents the normalized temperature and current density distributions for the cross-flow type. The temperature was measured along the cathode gas flow direction, where a normalized length (x/L) of 0 refers to the cathode gas inlet and 1 to the outlet. The current density increased along the cathode gas flow direction but dropped significantly at the cathode gas outlet. The temperature increased along the cathode gas flow direction. At cell lengths above 0.5 m, the temperature and current density distributions were very similar.

Discussion

We compared MCFCs using co-flow, cross-flow, and counter-flow configurations with a square cell geometry and five side lengths (0.1 m, 0.5 m, 1.0 m, 1.5 m, and 2.0 m). In all three flow types, the maximum cell temperature converged as the cell length increased. Figure 11 presents the maximum temperature of each flow type relative to the cell size. As the cell size increased, the temperature difference between the flow types varied significantly. The cross-flow type showed the lowest maximum temperature at a cell length of 0.1 m. At a 0.5 m cell length, co-flow had the smallest temperature difference and counter-flow had the largest. Co-flow and counter-flow showed no significant temperature deviation once the cell size reached 1.0 m or greater; indeed, all three flow types showed temperature convergence above 1 m. The converged temperatures of co-flow, counter-flow, and cross-flow at 2 m were 673.8 °C, 836.9 °C, and 737.1 °C, respectively, and the temperature differences between the 1.5 m and 2 m cells were 0.4 °C, 0.8 °C, and 1.1 °C, respectively.

The maximum temperature became constant as the cell size increased because MCFCs reach an equilibrium state beyond a certain size. For a small length such as 0.1 m, the fixed temperature at the gas inlet strongly influences the temperature distribution; as the length of the cell increases, this inlet effect diminishes. Because the heat generation per unit area is proportional to the current density according to Equation (5), the same current density distribution results in the same temperature distribution.

Fuel cell performance can be expressed in terms of the current density as a function of voltage, most commonly shown as an I-V curve; a higher voltage at the same current density indicates higher cell performance. Figure 12 presents I-V curves for each cell length and all three flow types. Figure 12a shows the co-flow data: at 0.1 m, the I-V curve showed a low current density at a given cell voltage, but once the cell size increased beyond 0.5 m there was no significant difference. For counter-flow and cross-flow, the maximum temperature converged only for cells of 1 m or greater, and the 1 m and 2 m cells showed no difference in their I-V curves, meaning that these cells showed similar performance.
As the anode polarization resistance, the cathode polarization resistance, and the internal resistance in Equation (4) all depend on temperature, the performance became similar once the temperature distribution became similar.
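The temperature sensitivity behind this statement can be seen directly from the Arrhenius factor in the partially reconstructed Ra correlation of Equation (4); the absolute values below are therefore indicative only.

import math

def r_a(T, p_h2=0.72):
    # Partially reconstructed Eq. (4); prefactor and 6435/T term from the text.
    return 2.27e-9 * math.exp(6435.0 / T) * p_h2 ** -0.42

for T_c in (580, 630, 680):
    T = T_c + 273.15
    print(f"{T_c} C: R_a = {r_a(T):.2e} ohm m^2")
# A ~50 K temperature change shifts R_a by tens of percent, so matching
# temperature fields imply matching I-V behavior.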
Generally, as the fuel cell size increases, the maximum temperature increases [6]. However, our analysis shows that once the cell size increased above a certain value, the maximum cell temperature converged to a fixed temperature, and the overall temperature and current density distributions also converged to similar values. These results can be used to design MCFCs with desired temperature and current density distributions.

As the gas input on the cathode side increased, the gas utilization decreased, and the cell temperature decreased because more heat was carried away by the cathode gas. Increasing the cathode gas input therefore lowered the maximum cell temperature and made the temperature distribution more uniform. As presented in Figure 13, the maximum temperature in the counter-flow configuration could be lowered significantly: at a cathode gas utilization of 0.8, the maximum cell temperature exceeded 950 °C, beyond the operational range, whereas at a utilization of 0.2, the peak temperature was 695.4 °C, a decrease of over 250 °C. Increasing the cathode gas flow decreased the maximum temperature and made the temperature and current density distributions more uniform. Therefore, even as the cell size increases, the cell temperature can be kept uniform by increasing the gas input at the cathode.
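The underlying lever is a simple energy balance: at a fixed current, and hence roughly fixed heat generation, a larger cathode flow means lower utilization and a smaller gas temperature rise. A rough sketch of that balance, with the waste-heat level and gas heat capacity assumed for illustration:

F = 96485.0
i_avg, area = 1000.0, 1.0        # A/m^2, m^2
q_gen = i_avg * area * 0.25      # assumed waste heat, ~0.25 V overpotential-equivalent, W

cp = 34.0                        # assumed molar heat capacity of cathode gas, J/(mol K)
n_o2_consumed = i_avg * area / (4 * F)   # mol/s of O2 at this current

for u in (0.8, 0.4, 0.2):
    n_o2_in = n_o2_consumed / u
    n_cathode = n_o2_in / 0.147          # Air:CO2 = 0.7:0.3 -> x_O2 ~ 0.147
    dT = q_gen / (n_cathode * cp)        # bulk gas temperature rise, K
    print(f"utilization {u:.1f}: cathode feed {n_cathode * 1e3:.1f} mmol/s, dT ~ {dT:.0f} K")
# Lower utilization -> larger cathode feed -> smaller temperature rise,
# matching the trend in Figure 13.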
Conclusions

In this study, we analyzed the temperature and current density distributions of a molten carbonate fuel cell with respect to gas flow direction and cell size. Because experiments with molten carbonate fuel cells of various sizes are difficult, the analysis was performed using computational fluid dynamics with a reaction model. Co-flow, cross-flow, and counter-flow configurations were analyzed, with square cells of 0.1 m, 0.5 m, 1 m, 1.5 m, and 2 m side length. The simulation results showed that all three flow types converged to a constant temperature once the cell size increased above a certain value. Co-flow and cross-flow converged to a constant temperature above 0.5 m, with co-flow showing the lowest convergence temperature; counter-flow converged above a 1 m cell size at the highest temperature. At larger lengths, the temperature and current density distributions also converged. Increasing the cathode gas input reduced the maximum cell temperature, achieving the most important MCFC design goal.
2020-03-19T10:18:04.080Z
2020-03-15T00:00:00.000
{ "year": 2020, "sha1": "ca786c60a2821cf7cea71d491dab79493c0533e2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/13/6/1361/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "12b9d978ecae00232337346e546b5dbdeeecc8bd", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
1609978
pes2o/s2orc
v3-fos-license
Attosecond double-slit experiment

A new scheme for a double-slit experiment in the time domain is presented. Phase-stabilized few-cycle laser pulses open one to two windows ("slits") of attosecond duration for photoionization. Fringes of varying visibility, depending on the degree of which-way information, are observed in the angle-resolved energy spectrum. A situation in which one and the same electron encounters a single and a double slit at the same time is discussed. The investigation of the fringes makes interferometry on the attosecond time scale possible. The number of visible fringes, for example, indicates that the slits are extended over about 500 as.

The conceptually most important interference experiment is the double-slit scheme, which has played a pivotal role in the development of optics and quantum mechanics. In optics its history goes back to Young's double-slit experiment. Its scope was greatly expanded by Zernike's work and continues to deliver new insights into coherence to the present day [1]. One of the key postulates of quantum theory is interference of matter waves, experimentally confirmed by electron diffraction [2,3]. More than 30 years later, Jönsson was the first to perform a double-slit experiment with electrons [4]. Of particular importance for interpreting quantum mechanics have been experiments with a single particle at any given time in the apparatus [5,6]. More recent work has illuminated the fundamental importance of complementarity in which-way experiments [7] and of quantum information in quantum-eraser schemes [8]. In this letter a novel realization of the double-slit experiment is described. It is distinguished from conventional schemes by a combination of characteristics: (i) the double slit is realized not in the position-momentum but in the time-energy domain; (ii) the role of the slits is played by windows in time of attosecond duration; (iii) these "slits" can be opened or closed by changing the temporal evolution of the field of a few-cycle laser pulse; (iv) at any given time there is only a single electron in the double-slit arrangement; (v) the presence and absence of interference are observed for the same electron at the same time.

[FIG. 1: Temporal variation of the electric field E(t) = E0(t) cos(ωt + ϕ) of few-cycle laser pulses with phase ϕ = 0 ("cosine-like") and ϕ = −π/2 ("sine-like"). In addition, the field ionization probability R(t), calculated at the experimental parameters, is indicated. Note that an electron ionized at t = t0 will not necessarily be detected in the direction opposite to the field E at time t0, owing to deflection in the oscillating field.]

Interference experiments in the time-energy domain are not entirely new. Interfering electron wave packets were created by femtosecond laser pulses [9]. Accordingly, the windows in time (or temporal slits) during which these wave packets are launched were comparable to the pulse duration. In the present experiment, in contrast, the slits are open during a small fraction of an optical cycle, which gives the attosecond width. A number of experiments, in particular in intense-laser atom physics, can be and have been interpreted in this spirit (for a review see, for example, [10]), and such schemes have also been extended to the microwave region [11]. Here, however, the optical cycles are precisely tailored by controlling the phase of few-cycle laser pulses (also known as the absolute or carrier-envelope phase). This provides an unprecedented degree of control for the double-slit arrangement.
Not only are the principles of quantum mechanics beautifully demonstrated, it is also likely that applications exploiting interferometric techniques for measuring attosecond dynamics will emerge. Argon atoms are ionized by intense few-cycle 850-nm laser pulses. Photoionization under these conditions is a highly nonlinear process whose first step can be described by optical field ionization. This immediately explains the generation of one attosecond window (or slit) in time per half-cycle, close to its extremum; see Fig. 1. By using phase-controlled few-cycle laser pulses [12], it is possible to manipulate the temporal evolution of the field, thus gradually opening or closing the slits and controlling the which-way information. Depending on the field, one or two half-cycles (or anything in between) contribute to the electron amplitude for a given direction and electron energy. This corresponds to a varying degree of which-way information and, accordingly, to varying contrast of the interference fringes. Subsequent half-cycles emit electrons in opposite directions. The temporal slits are therefore spaced by approximately the optical period, resulting in a fringe spacing close to the photon energy. The experimental setup is quite similar to that described in [13]. The laser beam mentioned above intersects an atomic gas jet inside a vacuum apparatus. The laser polarization is horizontal, and electrons emitted in opposite directions ("left" and "right") are detected by two opposing time-of-flight (TOF) detectors. The phase of the field can be controlled by delaying the envelope of the pulse with respect to the carrier by means of a glass wedge shifted into or out of the beam. The phase of the field is measured as described in [13]. Figure 2 displays the measured electron spectra. In Fig. 2(a) the spectra recorded at the left and the right detectors are shown for ±cosine-like and ±sine-like pulses as defined in Fig. 1. A problem in presenting such spectra is that they quickly roll off with increasing electron energy. This roll-off was eliminated by dividing the spectra by the average of all spectra over the pulse's phase. Clear interference fringes with varying visibility are observed, as expected from the discussion above. The highest visibility is observed for −sine-like pulses in the positive ("right") direction. For the same pulses, the visibility is very low in the opposite direction. Changing the phase by π reverses the roles of left and right, as expected. The most straightforward explanation, which will be detailed by a simple model below, is to assume that, for −sine-like pulses, there are two slits and no which-way information for the positive direction and just one slit and (almost) complete which-way information in the negative direction. The fact that the interference pattern does not entirely disappear is caused by the pulse duration, which is still slightly too long to create a perfect single slit. It should be noted at this point that there is only a single photoelectron involved at a time, because single ionization is observed. At the same time, this single electron interferes in one direction and does not in the other. The fringe pattern exhibits an envelope.

[FIG. 2: Photoelectron spectra of argon measured with 6-fs laser pulses at an intensity of 1 × 10^14 W/cm^2 as a function of the phase. Panel (a) displays the spectra for ±sine- and ±cosine-like laser fields.
The red curves are spectra recorded with the left detector (negative direction), while the black curves relate to the positive direction. For ϕ = π/2 the fringes exhibit maximum visibility for electron emission to the right, while minimum fringe visibility is observed in the opposite direction. In addition, the fringe positions are shifted. Panel (b) displays the entire measurement, where the fringe visibility is coded in false colors. The fringe positions vary as the phase ϕ of the pulse is changed; this causes the wave-like bending of the stripes in these figures. Both panels, in principle, show redundant information, because a phase shift of π mirrors the pulse field in space and thus reverses the roles of the positive and negative directions. However, these data were in fact measured simultaneously, and thus single- and double-slit behavior is observed for the same electron at the same time.]

From Fig. 2(a), a width of this envelope of about 4 fringes is inferred. Just as for a double-slit experiment, the width of this envelope can be associated with the width of the slits. It will turn out, however, that what is seen here is not the width of the slit; rather, each slit can be resolved into a pair of slits whose separation is inversely proportional to the width of the envelope. Disregarding the changing visibility, the peaks observed in the spectra resemble the well-known above-threshold ionization (ATI) peak pattern, and they are certainly related to it. However, the relationship is non-trivial: besides the visibility of the fringes, their positions also change as the phase of the field is varied. Details of the fringe shifts can be seen in Fig. 2(b). For conventional ATI, one would try to explain this in terms of the ponderomotive potential U_P. This does not work here, because the concept of the ponderomotive potential, defined as the cycle-averaged kinetic energy of an electron quivering in an oscillating electric field, is questionable in the few-cycle regime. In contrast, an interpretation based on the double-slit analogy is obvious. In a spatial double slit, the fringe pattern would shift if a phase shifter (for light, simply a glass plate) were placed in front of one of the slits. For nontrivial particle trajectories one needs to consider the action S along the particles' paths and use the fact that the particles' phases are given by S/ħ. In order to exclude other scenarios, we compare the experimental data with results obtained by numerically solving the time-dependent Schrödinger equation (TDSE) in three spatial dimensions. The self-consistent effective argon potential was calculated numerically within density-functional theory using the optimized effective potential approach proposed in Ref. [14]. During time propagation, only the 3p valence electron was considered active, moving in the combined field of the effective potential and the 6.5-cycle sin^2 laser pulse of peak intensity 10^14 W/cm^2. The directional spectra were calculated for 32 different carrier-envelope phases using a method described in [15]. Finally, the spectra were divided by the phase-averaged spectrum, using the same procedure that was applied to the experimental data underlying Fig. 2. The numerical TDSE result for the right-going electrons is shown in Fig. 3, to be compared with the experimental result in the right panel of Fig. 2(b). Virtually all details found in the measurement can also be found in the calculation. This confirms that single-electron dynamics are sufficient to explain the fringes.
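The time-energy Fourier relations behind these statements are easy to check: slits separated by one optical period give a fringe spacing of one photon energy, and an envelope spanning about 4 fringes implies sub-slit structure several times finer than the period. A small sketch (only the 850-nm wavelength and the 4-fringe envelope width are taken from the text; the rest is standard arithmetic):

h_evfs = 4.1357      # Planck constant, eV fs
c = 299.792458       # speed of light, nm/fs

wavelength = 850.0               # nm, laser wavelength from the experiment
T_opt = wavelength / c           # optical period, fs
fringe_spacing = h_evfs / T_opt  # eV, for slits separated by one period
print(f"optical period  : {T_opt * 1e3:.0f} as")
print(f"fringe spacing  : {fringe_spacing:.2f} eV (close to the photon energy)")

# Envelope of ~4 fringes -> sub-slit separation ~ T_opt / 4
n_fringes = 4.0
sub_slit_sep = T_opt / n_fringes
print(f"sub-slit spacing: {sub_slit_sep * 1e3:.0f} as "
      f"(same order as the ~500 as quoted in the text)")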
For an interpretation we resort to a classical model, the so-called simple-man's model [16], which, together with various extensions and modifications, has proven extremely helpful for understanding strong-field laser-atom interaction; for a review see, for example, [10]. Alternatively, Keldysh-type models, which can be interpreted as an approximation of Feynman's path integral [17], could be used. Respective results can be found in the literature: Ref. [18] predicts effects analogous to those described in this letter for circular polarization, and Refs. [12,19,20] explain related classical effects for electromagnetic XUV radiation produced by high-harmonic generation. For the present problem, the classical and the quantum model lead to qualitatively the same results. The classical model assumes that an electron is launched into the continuum at some time t0. Evidently, only for times t0 where the electric field is close to its maximum strength is there an appreciable probability of such a process. Another crucial assumption of the model is that the electron's velocity is zero at t = t0. This means that p − eA(t0) = 0, where p is the momentum of the electron at the detector, A(t) is the vector potential of the field, and e = −|e| is the electron's charge. It is largely this relationship that explains the double-slit behavior of few-cycle photoionization. The strength of the classical model is the intuitive insight it provides. In the following, hardly more than the number and position of the solutions of p − eA(t0) = 0 for given p will be used to explain the double-slit behavior. The respective solutions t0(p) in a Keldysh-type model are complex, thus allowing access to classically forbidden electron energies; however, the symmetry of these solutions stays the same, and so, qualitatively, do the results. In Fig. 4 the vector potential A(t) is drawn for a −sine-like pulse. The solutions of p − eA(t0) = 0, and thus all trajectories of momentum p that could interfere, can be found by intersecting A(t) with a horizontal line at p/e. It is now important to recall that a fringe pattern of maximal visibility requires equally "strong" slits, i.e., minimal which-way information. For a few-cycle pulse whose envelope is maximal at t = 0, the "strength" of a slit decreases very quickly with increasing |t0| and is essentially zero for |t0| > 2π/ω, because of the highly nonlinear dependence of photoionization on the field strength. As the maximum of the pulse envelope was chosen to be at t = 0, the condition of equally strong slits is identical to requiring that the solutions of p − eA(t0) = 0 be symmetric with respect to t = 0. This is the case for −sine-like pulses with electrons emitted in the negative direction and for +sine-like pulses with electrons emitted in the positive direction. In both cases, the respective opposite direction can be considered to act like a single slit, as long as the pulse is short enough. Figure 4 also shows that each slit is, on closer inspection, a pair of slits and that the temporal separation of these sub-slits depends on the electron energy [21]. The experimental data must be considered a measurement of the time difference of the two sub-slits, which is approximately 500 as. This is a first simple example of using interferometry on the attosecond time scale to investigate electronic dynamics.
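The slit-counting argument can be reproduced numerically: the birth times contributing to a final momentum p are the real solutions of p − eA(t0) = 0, weighted by the instantaneous field strength. The sketch below uses an assumed sin^2 envelope, scaled units with e = −1, and an arbitrary eighth-power field weighting as a crude stand-in for the tunneling rate; which direction shows one or two slits depends on these sign conventions. Note that each "slit" shows up as a pair of sub-slit crossings, as in the text.

import numpy as np

# Few-cycle pulse: A(t) = A0 * sin^2(pi t / T_p) * sin(w t + phi), 0 <= t <= T_p
# (illustrative envelope, not the experimental pulse shape)
w, n_cycles, A0, phi = 1.0, 5, 1.0, -np.pi / 2
T_p = 2 * np.pi * n_cycles / w
t = np.linspace(0.0, T_p, 200000)
A = A0 * np.sin(np.pi * t / T_p) ** 2 * np.sin(w * t + phi)
E = -np.gradient(A, t)                       # E = -dA/dt

def slits(p):
    """Birth times t0 with p - e*A(t0) = 0 (e = -1 -> roots of p + A),
    weighted by a steep power of |E(t0)| as a crude tunneling rate."""
    f = p + A
    idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    t0 = t[idx]
    if t0.size == 0:
        return t0
    weight = np.abs(E[idx]) ** 8             # illustrative nonlinearity
    return t0[weight > 0.05 * weight.max()]  # keep only "open" slits

for p in (+0.5 * A0, -0.5 * A0):
    t0 = slits(p)
    print(f"p = {p:+.2f}: {t0.size} dominant birth times (sub-slits), "
          f"t0 in optical cycles = {np.round(t0 * w / (2 * np.pi), 2)}")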
In addition, Fig. 2(a) shows that the relative phase of the sub-slits differs for sine- and cosine-like pulses, resulting in a shift of the fringe envelope. It should be noted that the simple-man's model does not reproduce the dependence of the fringe visibility on electron energy observed experimentally and in the solution of the TDSE. Therefore, the direction for which interference is predicted by the simple model may be wrong, depending on the energy. Using several theoretical models (3D TDSE, 1D TDSE, Keldysh-type, and classical), we were able to show that this is not a fundamental problem of the classical theory; rather, it is an effect of the atomic binding potential, which deflects the outgoing photoelectrons. The solution of the one-dimensional TDSE (which cannot deflect) with a soft-core potential, for example, agrees qualitatively very well with the classical and Keldysh-type models. In particular, it does not show a pronounced energy dependence of the fringe visibility, and it predicts the interferences in the same direction as the simple models. More insight can be gained from the classical model by treating the electrons as de Broglie waves, computing their actions S, calculating their phases S/ħ, and adding them coherently. This allows the fringe positions to be predicted: just as for any other double slit, fringe maxima are observed if the phase difference is n · 2π, where n is the order of the fringe. Indeed, such calculations show a phase-dependent fringe shift resembling the one observed experimentally. In the same way, the maxima and minima of the fringe pattern's envelope can be calculated as a function of the phase ϕ. However, quantitative agreement is not to be expected, given that the classical model neglects the atomic potential entirely. In conclusion, we have realized an intriguing implementation of the double slit in the time domain. The observation of interference and its absence at the same time for the same electron is a beautiful demonstration of the principles of quantum mechanics. It should also be noted that attosecond slits were used and that the interferograms reflect the attosecond dynamics of electronic transitions. Further experimental and theoretical progress should make it possible to use interferometric techniques for attosecond science.
2014-10-01T00:00:00.000Z
2005-03-19T00:00:00.000
{ "year": 2005, "sha1": "b6f841bb3cc52c9c2230f9ca1dd25403c005b2f0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/quant-ph/0503165", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b6f841bb3cc52c9c2230f9ca1dd25403c005b2f0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
119007013
pes2o/s2orc
v3-fos-license
Phase-Sensitive measurements on the corner junction of iron-based superconductor BaFe1.8Co0.2As2

We have made a phase-sensitive measurement on the corner junction of the iron-based superconductor BaFe1.8Co0.2As2 and observed the typical Fraunhofer-like diffraction pattern. The result suggests that there is no phase shift between the a-c face and b-c face of a crystal, which indicates that the superconducting wavefunction of the iron-based superconductor is different from that of a cuprate superconductor.

The iron-based high-Tc superconductors discovered several months ago have become the focus of interest in condensed-matter physics because they have displayed Tc up to 55 K so far [1] and include a magnetic element in the crystalline structure. The similarities between the iron-based and cuprate superconductors, i.e. the layered crystal structure, the approximately 2D conduction layer [2][3], and the closeness to a long-range antiferromagnetic order [4], all suggest that the iron-based superconductors may have the same superconducting mechanism as the cuprate superconductors. Recent heat capacity [5] and photoemission spectroscopy [6] measurements seem to favor this opinion. However, some other recent experiments, such as ARPES [7,8,9] and infrared spectroscopy [10], would rather support that the iron-based superconductors act more like conventional superconductors with regard to pairing behavior. Meanwhile, there are two different kinds of theories attempting to disclose the underlying superconducting mechanism: one is based on the strong-coupling approach [11,12,13], which emphasizes onsite correlations applicable to the high-Tc cuprate superconductors; the other is based on the weak-coupling approach [14,15,16], which emphasizes itinerant-electron physics. These debates indicate that much more work is needed to determine the superconducting wavefunction, as well as its underlying mechanism, in the iron-based superconductors, among which the phase-sensitive experiment is obviously one of the most important, drawing much attention. The first phase-sensitive experiment on a cuprate superconductor was reported in 1993 [17]. Since then, phase-sensitive experiments based on different configurations, especially the corner junction [18,19,20,21,22,23], have played an important role in studying the high-Tc cuprate superconductors, and the technique has been regarded as the most direct and key tool for studying intrinsic properties of the superconducting wavefunction. Until now, it has remained the only tool that directly detects the superconducting phase. For an ideal corner junction based on a conventional superconductor, the critical current as a function of magnetic field takes the following form [19] (shown in Fig. 1a):

Ic(Φ) = Ic(0) |sin(πΦ/Φ0) / (πΦ/Φ0)|,  (1)

where Φ = Blt is the total magnetic flux threading the junction, l is the length of the corner junction, t is the magnetic barrier thickness, and Φ0 is the flux quantum. It reaches a peak at zero magnetic field. For a corner junction based on a cuprate superconductor, the critical current takes a different form [19] (shown in Fig. 1b):

Ic(Φ) = Ic(0) |sin^2(πΦ/2Φ0) / (πΦ/2Φ0)|.  (2)

Obviously, at zero applied magnetic field there is a minimum in the critical current, because the π phase difference of the superconducting wavefunction between the two faces of the crystal corner leads to destructive interference of the superconducting current.
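The two diffraction patterns of Equations (1) and (2), as reconstructed above, are easy to compare numerically; the sketch below evaluates both and confirms the central maximum versus the central minimum.

import numpy as np

def ic_conventional(phi):
    """Eq. (1): Fraunhofer pattern; phi is flux in units of Phi_0."""
    return np.abs(np.sinc(phi))          # np.sinc(x) = sin(pi x)/(pi x)

def ic_corner_pi(phi):
    """Eq. (2): symmetric corner junction with a pi phase shift."""
    x = np.pi * phi / 2.0
    out = np.zeros_like(phi)
    nz = x != 0
    out[nz] = np.abs(np.sin(x[nz]) ** 2 / x[nz])
    return out

phi = np.linspace(-4, 4, 801)
print("conventional: maximum at Phi/Phi_0 =", phi[np.argmax(ic_conventional(phi))])
print("pi corner:    I_c at Phi = 0 ->", ic_corner_pi(phi)[400])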
Therefore, the diffraction pattern of the critical current of a corner junction can be used as typical and direct evidence of whether or not the wavefunction of a superconductor is like that of the cuprate superconductors. In this letter, we present phase-sensitive measurements on corner junctions of the iron-based superconductor BaFe1.8Co0.2As2. To our knowledge, this is the first phase-sensitive experiment on the iron-based superconductors. We fabricated single-crystal BaFe1.8Co0.2As2 corner junctions in the way described in Ref. [17]. Our single crystals, with Tc = 22 K, were obtained by a flux-melt technique similar to that described in [24]. All of the samples were cleaved into small, thin sheets with typical thicknesses of 20-40 µm, and all the faces used for the corner junctions were cleavage planes, smooth and flat. After masking the sample (leaving uncovered the corner we needed, which is even on both faces), we sputtered about 40 nm of Au on the sample and then 300 nm of Pb over the Au layer. The typical lengths of both sides of the corner junctions in our experiment were 100-200 µm, and the geometric asymmetry of the corner junctions was less than 15% (according to Ref. [18], this small asymmetry does not affect the final conclusion). The critical current of the junctions used for measurement at 2 K was 20 µA-3 mA, which is feasible to measure at low temperature. We manufactured two superconducting cans with an inner layer of Pb and an outer layer of Nb to ensure a good shielding effect. Measurements were taken in the temperature range 1.8-4.2 K, far below the transition temperatures of Nb and Pb. The electrical characteristics of our corner junction are shown in Fig. 2: the I-V curve (Fig. 2a) exhibits typical resistively shunted current-voltage behavior, as expected from a high-quality superconductor-normal metal-superconductor (SNS) junction. The very sharp transition in the dynamic resistance curve (Fig. 2b) makes it feasible to detect the critical current precisely. The magnetic field modulation of the critical current is shown in Fig. 3. It displays a typical, symmetric Fraunhofer diffraction pattern, completely different from that of the corner junctions of cuprate superconductors, which show a minimum instead of a maximum at zero field. The diffraction pattern of our edge junction is the same as that of our corner junction. The symmetric Fraunhofer diffraction pattern of the corner junction indicates that there is no phase shift between the a-c face and the b-c face of the corner. The possibility of flux trapping in the corner can be ruled out [18], because this diffraction pattern was reproduced in different samples and in the same sample over several thermal cycles between the measurement temperature and 25 K. It should be mentioned that there has been a common belief that a π phase shift is direct evidence for d-wave pairing symmetry and, conversely, that zero phase shift is evidence for s-wave pairing symmetry [19][22]. Therefore, the result that we have reported here seems to support s-wave pairing symmetry. However, other points of view [27] question the theory [25][26] underlying this belief; since the aim of this letter is to report the experimental result, we leave the theoretical debate open for further research.
In summary, we performed a phase-sensitive experiment on the iron-based superconductor BaFe1.8Co0.2As2; the typical Fraunhofer diffraction pattern shows that the critical current is maximum at zero magnetic field, which means there is no phase shift between the a-c face and b-c face of the crystal corner. This indicates that the superconducting wavefunction of the iron-based superconductor is definitely not like that of a cuprate superconductor. This work was supported by the 973 project of the Ministry of Science and Technology of China, the National Natural Science Foundation of China, and the Knowledge Innovation Project of the Chinese Academy of Sciences.
2008-12-17T13:53:09.000Z
2008-12-17T00:00:00.000
{ "year": 2008, "sha1": "a567751f023a09ad82143971d141c736a2ebd513", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a567751f023a09ad82143971d141c736a2ebd513", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
231727787
pes2o/s2orc
v3-fos-license
A hypomorphic variant in EYS detected by genome-wide association study contributes toward retinitis pigmentosa The genetic basis of Japanese autosomal recessive retinitis pigmentosa (ARRP) remains largely unknown. Herein, we applied a 2-step genome-wide association study (GWAS) in 640 Japanese patients. Meta-GWAS identified three independent peaks at P < 5.0 × 10−8, all within the major ARRP gene EYS. Two of the three were each in linkage disequilibrium with a different low-frequency variant (allele frequency < 0.05): a known founder Mendelian mutation (c.4957dupA, p.S1653Kfs*2) and a non-synonymous variant (c.2528G>A, p.G843E) of unknown significance. mRNA harboring c.2528G>A failed to restore rhodopsin mislocalization induced by morpholino-mediated knockdown of eys in zebrafish, consistent with the variant being pathogenic. c.2528G>A solved an additional 7.0% of Japanese ARRP cases. The third peak was in linkage disequilibrium with a common non-synonymous variant (c.7666A>T, p.S2556C), possibly representing an unreported disease-susceptibility signal. GWAS successfully unraveled genetic causes of a rare monogenic disorder and identified a high-frequency variant potentially linked to the development of local genome therapeutics. Genetic diagnosis of heterogeneous inherited disorders became less challenging after next-generation sequencing became widely available. However, although the technological development has substantially improved diagnosis rates, the genetic basis of the disease remains unknown in a large proportion of patients, highlighting the limits of the next-generation sequencing approach. Retinitis pigmentosa, which lacks effective treatment options, is the most common form of inherited retinal degeneration. It is initially characterized by the loss of rod photoreceptors, which mediate night vision, and then involves the loss of cone photoreceptors, which are responsible for daylight vision. Retinitis pigmentosa affects ~1 in 3000 people worldwide. The disease is highly heterogeneous, presenting with a variety of hereditary patterns1 ranging from classical Mendelian inheritance to non-Mendelian inheritance due to incomplete penetrance2,3, hypomorphic alleles4-8, or oligogenicity4,5,9,10. However, despite the number of reports of non-Mendelian inheritance causing retinitis pigmentosa, its significance in the context of the overall genetic pathology of the disease is yet to be demonstrated. In Japan, the genetic basis of retinitis pigmentosa remains unknown in up to 70% of cases even after sequencing of all the coding regions and the flanking immediate exon-intron boundaries of the known retinitis pigmentosa genes with a next-generation sequencer (NGS panel test), whole-exome sequencing, or whole-genome sequencing4,11-13. There is an urgent need for genetic diagnosis of these unsolved cases, particularly those with the autosomal recessive inheritance pattern of retinitis pigmentosa (ARRP), caused by loss-of-function mutations, because such patients may be amenable to adeno-associated virus (AAV)-mediated gene supplementation therapy14. Furthermore, there is growing interest in the detection of prevalent founder mutations, as they may serve as potential candidates for mutation-specific therapy, including genome-editing therapy that targets a small specific area of the genome15-18. Therefore, its best application would be founder mutations found in a large number of patients.
These cases are also candidates for antisense oligo therapy7,16,19, which allows local treatment of the retinal genome and can target larger genes that cannot be treated with conventional gene supplementation therapy. Genome-wide association studies (GWAS) are a type of analysis most often applied to identify susceptibility loci for common traits, each with a relatively small genetic influence20. At the same time, GWAS can also uncover rare variants with strong genetic effects in complex diseases that behave almost as Mendelian alleles in monogenic diseases21-23. However, GWAS has never been used to directly search for genetic risks in rare monogenic diseases, and its usefulness for such purposes remains unknown. By comparing differences in allele frequency between cases and controls, GWAS provides an unbiased means of detecting disease-associated loci evenly across the genome, with little assumption about the inheritance pattern. This contrasts with case-oriented sequencing approaches, which are often obliged to focus around exons and their boundaries to identify mutations that follow classic Mendelian inheritance. Thus, GWAS can in theory complement these widely used sequencing approaches by searching for any significant genetic risks that remain undetected. Here, we report the detection of three disease-associated signals/variants in patients with presumed ARRP (see below), using an integrated approach that combined GWAS with the NGS panel test. Results Detection of the EYS locus with GWAS and NGS panel test. We gathered a total of 944 DNA samples from unrelated Japanese patients who had been diagnosed with retinitis pigmentosa consistent with the autosomal recessive mode of inheritance, typically with at least one affected sibling and no affected members in other generations. In addition, isolated cases with no family history as well as offspring of consanguineous parents were also included. Control samples comprised 924 Japanese individuals, most of whom had been confirmed to have normal fundus through ocular examination. The cases were mostly from either northeastern or southern Japan, whereas the controls were mostly from the northeast (see "Methods" section for detail). All samples were genotyped with a single-nucleotide polymorphism (SNP) array. To search for undetected genetic risks contributing to ARRP, we carried out a meta-GWAS using two independent data sets. The data sets and the workflow of the GWAS are summarized in Table 1. Of the 644 cases and 620 controls genotyped in the first GWAS, 432 cases and 603 controls were used in the final analysis after removing 63 cases and 21 controls that failed quality control (QC) and an additional 149 solved cases in which the NGS panel test had identified pathogenic mutations accounting for the cause of disease11. Similarly, after removing 14 cases and 13 controls that failed QC and excluding 78 cases genetically solved with the NGS panel test11, the second GWAS included 208 cases and 287 controls. The results of the two GWASs are summarized in Supplementary Fig. 1 and Supplementary Tables 1 and 2. These two GWAS data sets were then combined (for a total of 640 cases and 890 controls) to carry out a meta-GWAS (Fig. 1a). In this analysis, only the locus encompassing EYS, the most frequent ARRP-associated gene in Japanese patients, remained significant (OR = 3.95, P = 1.18 × 10−13).
Among signals that did not reach genome-wide significance (P < 5.0 × 10−8), there were 12 other peaks of possible relevance (P < 1.0 × 10−5), none of which included known retinitis pigmentosa genes (Supplementary Table 3). To investigate whether the EYS locus contains multiple variants independently associated with the disease that are not included in the same linkage disequilibrium block, we performed a conditional analysis (Fig. 1b).

Fig. 1 Genome-wide association study (GWAS) of ARRP patients and detection of three independent signals in EYS. a Results of a meta-GWAS displayed as a Manhattan plot. Genome-wide significance (P = 5.0 × 10−8) and possible significance (P = 1.0 × 10−5) are marked with red and blue lines, respectively. A single peak at the EYS locus surpassed genome-wide significance. b Results of a conditional analysis presented as a regional plot. Three independent peaks at P < 5.0 × 10−8 were delineated after conditioning (Peaks 1-3). c Linkage disequilibrium plot using all non-synonymous variants (identified in >5% of cases) and the lead variants for Peaks 1-3 identified in the GWAS in presumed ARRP patients. The linkage disequilibrium plot was generated using Haploview (ver. 4.1). The default color setting of the software was used for the block colors (D′/LOD). The numbers on the blocks indicate r² × 100; numbers are shown only for pairs with r² > 0.3. Peaks 1, 2, and 3 were in linkage disequilibrium with G843E, S2556C, and S1653Kfs, respectively. The lead variants for Peaks 1-3 are shown in red. Reported pathogenic founder mutations11 are shown in green, while non-synonymous variants linked to the lead variants are shown in blue. Note, S1653Kfs, a reported founder mutation linked to a GWAS lead variant, is shown in green.

After conditioning, three independent peaks were delineated (Peaks 1-3; Fig. 1b). The first peak (Peak 1) was in linkage disequilibrium with a low-frequency non-synonymous variant, c.2528G>A (p.G843E; hereafter termed G843E; Table 2). G843E, with an allele frequency (0.0171) unusually high for ARRP, has been described in conflicting ways in past reports: as having uncertain significance23, as non-pathogenic24, and as possibly pathogenic (although without sufficient supporting evidence)25; it was unreported in the two largest genetic screening projects targeting Japanese retinitis pigmentosa patients9,11. The second peak, with a much higher allele frequency and lower odds ratio (Peak 2; rs59178556, allele frequency = 0.2161, odds ratio = 1.83, P = 3.79 × 10−10), was in strong linkage disequilibrium (r² = 0.97) with a common non-synonymous variant, c.7666A>T (p.S2556C; hereafter termed the S2556C variant; Table 2), registered as benign/likely benign in ClinVar (https://www.ncbi.nlm.nih.gov/clinvar/). Peak 3 (rs79476654, allele frequency = 0.0005, odds ratio = 16.46, P = 2.45 × 10−8) was in linkage disequilibrium (r² = 0.78) with c.4957dupA (p.S1653Kfs*2) (hereafter termed S1653Kfs; Table 2), recognized as a founder autosomal recessive mutation11. It remained statistically significant even after removing solved cases with biallelic EYS mutations (including homozygotes and compound heterozygotes with S1653Kfs) screened by the NGS panel test prior to GWAS, because a large number of heterozygous carriers of the S1653Kfs mutation remained genetically unsolved9,11,24.
Haplotype analysis of EYS based on SNP array data and the results of the NGS panel test in retinitis pigmentosa patients confirmed that none of the lead variants of the identified signals were in linkage disequilibrium with c.8805C>A (p.Y2935X) or c.6557G>A (p.G2186E) (hereafter termed G2186E), the two other known founder mutations in this gene9, in contrast to S1653Kfs, which was in linkage disequilibrium with Peak 3 (Fig. 1c, Table 2 and Supplementary Table 4). This suggests that Peaks 1 and 2 represent under-recognized genetic risks in EYS. Thus, GWAS in combination with the NGS panel test successfully detected disease-associated variants overlooked by simple sequence-based approaches in a rare monogenic disease. Expression analysis of the G843E allele in genome-edited patient-derived lymphoblasts. The allele frequency (0.2140) of S2556C, linked to Peak 2, was undoubtedly too high for a pathogenic Mendelian mutation causing a rare monogenic disease. On the other hand, the allele frequency of G843E, linked to Peak 1, was much lower (0.017), yet still too high for a classical AR allele, raising the possibility that it represents a reduced-penetrance or hypomorphic ARRP allele. Meanwhile, it is also possible that a true ARRP mutation in linkage disequilibrium with Peak 1 exists deep in the non-coding region. However, the vast majority of the known pathogenic mutations in EYS are either nonsense, frameshift, or splice-site mutations11 that would presumably result in a qualitative alteration of the mRNA sequence. Thus, we directed our search to variants that could affect the coding sequence. For this purpose, we carried out two experiments. First, we performed whole-genome sequencing (WGS) in two G843E homozygotes and two compound heterozygotes (G843E and S1653Kfs or G2186E). We found no obvious structural variants in EYS that affected the coding sequence. A splice-site prediction analysis24 also detected no coding or non-coding variants that could alter splicing in these patients. Second, we established patient-derived lymphoblasts (lymphoblastoid cell lines; LCLs) from homozygotes of G843E and S1653Kfs and studied the expression of EYS mRNA by forced transcription of EYS through the insertion of a constitutively active CAG promoter immediately upstream of the initiation codon of the gene (Supplementary Fig. 3a, b)15. Among the seven main transcript variants reported for EYS25, the retina-specific long isoforms (transcript variants 1 and 4) are considered essential for photoreceptor biology25,26. RT-PCR followed by Sanger sequencing indicated that mRNA containing G843E (exon 16) was expressed without loss of the C-terminal end of the retina-specific long isoform (Fig. 2a-c). This was unlike transcripts with homozygous S1653Kfs, which resulted in loss of the long isoform via nonsense-mediated decay and which was successfully rescued by replacing the mutation with the wild-type sequence through genome editing (Fig. 2d). These results argue against the presence of an intronic mutation in linkage disequilibrium with Peak 1 that results in altered splicing and a presumed premature termination of the reading frame, but support G843E as the causal mutation linked to Peak 1. Meanwhile, the presence of a non-coding variant that causes the disease through changes in the regulation of EYS transcription cannot be ruled out. Functional analysis of EYS G843E in zebrafish. The EYS gene is absent in mammalian laboratory animals such as mice, rats, and rabbits.
Zebrafish (Danio rerio) is the only model in which loss-of-function mutations in the homologous eys have been shown to recapitulate the photoreceptor degeneration observed in retinitis pigmentosa patients with EYS mutations27-29. Importantly, G843, residing in the Epidermal Growth Factor-like domain, is conserved across various species, including zebrafish and chicken (Fig. 3). As expression of the G843E mutant mRNA had been confirmed with the patient-derived LCL (Fig. 2), we used zebrafish to directly assess the function of the mutant allele. Endogenous Eys protein localized near the basal interface of the connecting cilium of the photoreceptors in adult fish (Fig. 4a, b). During development, Eys expression was observed after 4 days post-fertilization (dpf; Fig. 4c-f). There was some staining in the inner nuclear layer, inner plexiform layer, and ganglion cell layer, in line with that reported in human30. However, the role of Eys outside of photoreceptors is unknown. To perform specific and effective knockdown of eys, we prepared three different splice-site morpholinos (SPMO1-3) and compared their effects on eys expression. SPMOs sometimes yield abnormal splicing variants; however, none of the three SPMOs showed evidence of such abnormalities (Fig. 4g), which allowed us to assess their knockdown efficacy by quantitative RT-PCR analyses.

Table 2 Information on the three independent peaks detected in this study and the exonic variants in linkage disequilibrium with them. *The P-value for S2556C was calculated after conditional analysis. †The odds ratio and P-value for S1653Kfs were not available (NA) because the variant was not included in the imputed genotypes of the GWAS analysis.

Fig. 2 Degradation of the EYS G843E mutant mRNA in patient-derived lymphoblastoid cell lines (LCLs). a A schematic map of the RT-PCR primers designed in relation to the exon-intron structure and mutations (G843E and S1653Kfs) in EYS and the published transcript variants (Tv)25. The locations of G843E (exon 16) and S1653Kfs (exon 26) are indicated by arrows. Exon numbers are based on Tv1. Note, Tv5 was identified only in fibroblasts25. b RT-PCR analysis. The regions for exons 5-6, exons 14-18, and exons 40-43 of EYS were amplified from cDNA generated from patient-derived lymphoblastoid cell lines with wild-type EYS (normal), homozygous S1653Kfs, and homozygous G843E. The Y79 retinoblastoma cell line was used as a positive control. Note, C-terminal exons of the long isoform Tv1 were detected in LCLs with homozygous G843E but not in those with homozygous S1653Kfs. Sanger sequencing of the RT-PCR amplicon confirmed the expression of the G843E mutation using a primer pair targeting exons 14-18. Meanwhile, mRNA for exons 4-5 and 14-18 was detectable, possibly reflecting the differential expression of distinct EYS isoforms25. c Chromatogram of the RT-PCR amplicon (exons 14-18). Note, the G843E variant is present in the patient's mRNA. d RT-PCR analysis after mutation replacement genome-editing treatment (GE) or inhibition of nonsense-mediated mRNA decay (NMD) in LCLs from an S1653Kfs homozygote, after which expression of exons 40-43 was detected.

All three SPMOs showed suppression of eys mRNA (range, 58-88%; Fig. 4h, Supplementary Data 1). However, as SPMO1 appeared least effective, it was co-injected with ATG-MO targeting the first methionine for downstream experiments. Using these morpholinos, we examined the mislocalization of rhodopsin.
Rhodopsin mislocalization has been reported in mouse models of RP31-33. In addition, as eys is localized to the connecting cilium, mislocalization of rhodopsin caused by defective ciliary transport is a reasonable phenotype. Moreover, we have demonstrated that this mislocalization can cause photoreceptor cell death in zebrafish34. Furthermore, a zygotic mutant (knockout of eys in zebrafish) also shows the same phenotype29. Injection of the different morpholinos (SPMO1 + ATG-MO, SPMO2, and SPMO3) induced mislocalization of rhodopsin in photoreceptors (Fig. 4i-l). This precedes photoreceptor death and is the expected phenotype for eys knockdown in zebrafish. More importantly, this phenotype, also observed at 7 dpf following injection of SPMO1 + ATG-MO or SPMO2 (Fig. 4m, n), was rescued by co-injection of human wild-type EYS mRNA (Fig. 4p). On the other hand, the rescue effect was significantly diminished when EYS mRNA carrying the G843E variant was co-injected with SPMO1 + ATG-MO or SPMO2 (Fig. 4o, q and Supplementary Data 1). These results provide direct evidence for the dysfunction of EYS caused by G843E. Enrichment of G843E in genetically unsolved heterozygous carriers of another EYS mutation. A recent large-scale mutation screening project in 1204 Japanese retinitis pigmentosa cases revealed an unusually high frequency of carriers of heterozygous deleterious mutations in EYS, accounting for 25.1% of the unsolved cases11 and strongly indicating that there are autosomal recessive mutations in EYS yet to be identified. Keeping this in mind, when we specifically looked at retinitis pigmentosa patients who were still genetically unsolved after the NGS panel test11, we found that G843E was highly enriched in patients with a heterozygous deleterious mutation in EYS (allele frequency = 17.0%) compared with those without (allele frequency = 6.9%; odds ratio = 2.46, P = 8.51 × 10−7, Fisher exact test; Table 3) or with the general population in a public database (allele frequency = 1.7%; odds ratio = 10.0, P = 2.21 × 10−32, Fisher exact test; Table 3). This strongly suggests that the G843E allele contributes to retinitis pigmentosa in trans with another EYS mutation, as in ARRP. Similarly, the frequency of G843E homozygotes was significantly higher (odds ratio = 97.0, P = 9.89 × 10−12) in genetically unsolved retinitis pigmentosa patients (13/640) than in the general population (1/4773), establishing that the G843E allele contributes to retinitis pigmentosa in homozygosity as well, as is typical for an ARRP mutation. Meanwhile, analysis of Peak 2, linked to S2556C, also revealed significant enrichment of the variant in unsolved patients with a heterozygous deleterious mutation in EYS compared with those without (P = 2.56 × 10−7, Fisher exact test), although the difference was relatively small (allele frequency = 39.1% vs 31.2%; odds ratio = 1.25). Taken together, the G843E mutation may cause retinitis pigmentosa when both alleles of EYS are affected, either in a compound heterozygous or a homozygous state, as observed for an ARRP allele. Meanwhile, Peak 2 may confer a different pathomechanism given the high frequency of the pathogenic allele (see "Discussion" section).
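As an illustration of the homozygote comparison above (not code from the paper), the counts reported in the text can be placed in a 2×2 table and tested with Fisher's exact test in Python; the exact odds ratio depends on how the table is constructed, so small deviations from the reported value of 97.0 are expected.

```python
from scipy.stats import fisher_exact

# Homozygote counts from the text: 13/640 genetically unsolved patients
# versus 1/4773 individuals in the general-population database.
table = [[13, 640 - 13],
         [1, 4773 - 1]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, P = {p_value:.2e}")
```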
Segregation analysis. Although G843E is consistent with an ARRP variant according to the analyses above, it is unlikely that the G843E allele acts as a simple Mendelian allele, considering its relatively high allele frequency in the general population (1.7%). Theoretically, homozygotes of G843E alone would occur in at least 1 in 3460 births under a modest assumption of random mating, which is more frequent than the reported overall prevalence of ARRP in Japan (1 in 7000)35. Furthermore, although the allele frequency of G843E (1.7%) is 3.8-fold higher than that of the founder variant S1653Kfs (0.44%) in the general population (Table 2), the observed frequency of G843E homozygotes (14/867) is actually lower than that of S1653Kfs homozygotes (24/867) in retinitis pigmentosa patients. This could be accounted for by G843E causing incomplete penetrance or a mild retinal phenotype, both of which could lead to a large underestimation of disease frequency. To explore this possibility, we carried out a segregation analysis in 18 unaffected (and 1 affected) siblings of index patients with G843E (either in a compound heterozygous or a homozygous state; 13 families; Supplementary Fig. 2). None of the unaffected siblings of the patients carried biallelic EYS mutations, except for the brother of YWC133, who was unexpectedly found to be compound heterozygous for G843E and S1653Kfs. This 75-year-old man was considered unaffected according to a local ophthalmologist who had carried out cataract surgeries on both of his eyes within the preceding year. Re-assessment of the patient at Tohoku University Hospital revealed a mildly but clearly constricted visual field, accompanied by moderate attenuation of the retinal vessels and diffuse alteration of the retinal pigment epithelium with modest retinal thinning in both eyes, although he had normal visual acuity (20/20). Nevertheless, the marked reduction in the electrical response of the patient's retina to light stimuli, probed by electroretinogram, indicated that he also had a mild form of retinitis pigmentosa (Supplementary Fig. 3). Thus, the results are consistent with G843E being a hypomorphic ARRP allele and show that it can indeed sometimes cause mild retinal disease that may be overlooked without a thorough assessment. This may partly account for the gap between the known prevalence of retinitis pigmentosa and the allele frequency of G843E, which complicated its interpretation in the past11,13,36,37. Assuming that G843E is an ARRP allele, the mutation would account for an additional 7.0% of Japanese cases of retinitis pigmentosa, which would increase the proportion of genetically solved cases by 26.8%, either as compound heterozygotes or homozygotes.
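The 1-in-3460 figure above follows from the Hardy-Weinberg expectation that the homozygote frequency equals the square of the allele frequency, the calculation stated in the "Statistics and reproducibility" section. A one-line check (illustrative, not from the paper):

```python
# Hardy-Weinberg expectation under random mating: homozygote frequency = q^2
q_g843e = 0.017      # G843E allele frequency in the general population
q_s1653kfs = 0.0044  # S1653Kfs allele frequency (Table 2)

print(f"G843E homozygotes: 1 in {1 / q_g843e**2:.0f}")        # ~1 in 3460
print(f"S1653Kfs homozygotes: 1 in {1 / q_s1653kfs**2:.0f}")  # ~1 in 51700
```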
Discussion Although previous reports have used GWASs to identify rare penetrant pathogenic variants in complex diseases21-23, our study is the first to demonstrate that GWAS, with the help of the NGS panel test, can be applied effectively to identify genetic risks in heterogeneous monogenic disorders. We successfully identified three independent disease-associated signals, all in the gene EYS, including a signal in linkage disequilibrium with S1653Kfs, the known commonest founder mutation in EYS causing ARRP11. This confirmed the quality of the GWAS and its ability to effectively detect classical Mendelian mutations, although S1653Kfs could have been identified by sequencing alone in this case. At the same time, the successful application of GWAS depends on there being common founders, which may limit its use in a highly heterogeneous population. Another important factor appears to be the study size, as is the case for GWAS of common traits. While a few founder ARRP mutations, including those in EYS, USH2A, RP1, SAG, and RP1L1, have been reported in the Japanese population4,11,15,38, those in EYS are by far the most frequent11. It is likely that this limited the GWAS to detecting only exceedingly frequent founder mutations in EYS. However, increasing the number of cases and controls should greatly facilitate detection of less frequent founder mutations, as demonstrated by recent large-scale GWAS studies that have boosted the number of disease-associated loci from a few initially to often dozens, including those for glaucoma and age-related macular degeneration21,39,40. We detected a signal in linkage disequilibrium with G843E, a controversial variant that had been recognized by sequencing alone but did not previously fulfill the standard criteria required to determine pathogenicity11,13,37,41. Herein, we provide direct evidence of EYS dysfunction caused by G843E using zebrafish as a model. Furthermore, analysis of the NGS panel test data revealed that G843E was highly enriched in heterozygous carriers of another deleterious EYS mutation and in homozygotes compared with the general population, consistent with the allele mediating the autosomal recessive mode of inheritance. Yet, the relatively high allele frequency of G843E contradicts the known prevalence of ARRP. A segregation analysis identified an elderly asymptomatic patient who was compound heterozygous for G843E and S1653Kfs and had been erroneously assigned as unaffected, probably based on a lack of symptoms or typical features of retinitis pigmentosa. This is consistent with G843E being hypomorphic, sometimes causing a very mild phenotype later in life. In such instances, the disease may be overlooked without an assessment by electroretinogram, the most sensitive measure to detect retinitis pigmentosa.

Fig. 4 Functional assessment of the EYS G843E variant following morpholino-mediated knockdown of eys in zebrafish. a Immunostaining of Eys (green) in zebrafish retina at 1 year post-fertilization (ypf). b High-magnification image of photoreceptors. Eys (arrowhead, green) localized at the basal side of the connecting cilium (acetylated α-tubulin, red) of the photoreceptors. c-f Expression of Eys during development at 3, 4, 5, and 6 days post-fertilization (dpf). g RT-PCR of eys at 4 dpf (45 cycles) following injection of three different MOs. h Quantitative RT-PCR analyses of the morphants (biologically independent samples): SPMO1 (N = 5), SPMO2 (N = 3), and SPMO3 (N = 3). Eys expression was reduced by at least 50% at 4 dpf. Vertical bar: mean ± standard deviation. i-l Basal intracellular deposition of rhodopsin (rhodopsin mislocalization) observed following injection of three different MOs at 6 dpf. Note, injections of the three different MOs resulted in the same phenotype. m, n Rhodopsin localization in the photoreceptors at 7 dpf. m Rhodopsin is correctly localized at the photoreceptor outer segments in the control. n eys knockdown by MO induced rhodopsin mislocalization toward the basal and lateral membranes of the photoreceptors (N = 6 biologically independent fish). o, p Greater improvement of the rhodopsin mislocalization was achieved in eyes supplemented with wild-type human EYS mRNA (p) than in those injected with mutant human EYS mRNA carrying G843E (o) after SPMO2-mediated knockdown of eys, consistent with decreased EYS function caused by the mutation.
q A quantitative analysis of o (N = 9 biologically independent fish) and p (N = 9 biologically independent fish). Numbers of cells with mislocalized rhodopsin per retinal section were counted (vertical bar: mean ± standard deviation). The difference is significant (P = 0.00903; Wilcoxon rank-sum test); **P < 0.01. PRC, photoreceptors; Cont, control. Scale bar = 10 µm (a-e, i-p).

Unfortunately, a reliable phenotypic comparison between G843E carriers and non-carriers among patients with biallelic EYS mutations was not possible, because many patients with G843E and a milder phenotype are unlikely to be included in the genetic analysis to begin with. This is supported by the presence of an asymptomatic ARRP patient with G843E and S1653Kfs mutations (the brother of YWC133), who was erroneously assigned as unaffected prior to a thorough ocular examination, and is consistent with the disproportionately small number of G843E homozygotes relative to expectation. Nevertheless, the strong evidence from the segregation analysis (P < 0.01), the presence of G843E in trans with an established pathogenic variant in multiple families, and the in vitro expression and in vivo functional analyses supporting dysfunction caused by G843E have allowed us to reclassify the variant as pathogenic according to the standard guidelines41,42. G843E as a quasi-Mendelian variant will likely enable genetic diagnosis in an additional 7.0% of Japanese patients with ARRP, which would represent a 26.8% improvement in the diagnosis rate. At the same time, the importance of this finding extends far beyond the context of genetic diagnosis, as the detection of a founder mutation with an extremely large disease contribution provides a unique opportunity for the development of an AAV-mediated mutation replacement genome-editing gene therapy, which has shown promising in vivo outcomes15,16,43. This demonstrates the robustness of the approach, considering that mutations in novel retinitis pigmentosa genes, which are still continuously discovered by sequencing, rarely account for more than 1% of cases and are unlikely to be suitable targets for drug development because of the extremely low number of patients affected. Recently, enrichment of the G843E variant in EYS was reported in a group of patients with hereditary retinal degenerations (HRD) who carried a quasi-Mendelian allele in another gene (c.5797C>T/p.R1933* in RP1)4, indeed suggesting a non-Mendelian, oligogenic, or genetic-modifier role of EYS in retinal degeneration. In our study, RP1-R1933* was infrequent among carriers of EYS-G843E, but this may be attributable to gross differences in the clinical phenotypes considered (macular degeneration or cone-rod dystrophy in the previous reports4,44 vs. canonical ARRP, studied here). Furthermore, while in heterozygous carriers RP1-R1933* seems to exert its pathogenic functions via the co-presence of EYS-G843E and other hypomorphic alleles outside of the RP1 locus4, a reciprocal mechanism is not necessarily true, since the molecular pathology of EYS-G843E in ARRP may follow different routes, as clearly shown above. Taken together, these results emphasize the unexpected pleiotropic role of EYS-G843E with respect to the range of unconventional genetic influence and its effect on clinical phenotypes. GWAS also identified a novel retinitis pigmentosa-associated EYS signal (Peak 2) with no rare exonic or splice-site variants in linkage disequilibrium that could account for ARRP.
It is possible that another quasi-Mendelian mutation in linkage disequilibrium with Peak 2 remains undetected in the non-coding regions after the NGS panel test4,7,19. However, the higher frequency of the lead variant for this peak (allele frequency = 0.216) is distinct from those of the other peaks (allele frequencies = 0.041 and 0.0005), resulting in a lower OR (1.83), well within the range of those for more common retinal diseases such as age-related macular degeneration45. Therefore, it is possible that the true pathogenic variant(s) in linkage disequilibrium may be high-frequency variant(s) behaving in a non-Mendelian manner, similar to those presumed to account for susceptibility signals in common diseases, although this is less likely given that retinitis pigmentosa is a rare disease. For example, the risk variant may act in an oligogenic fashion or as a disease modifier in combination with mutations in other genes. At the same time, it is possible that the unknown true pathogenic variant(s), different from the S2556C variant, lies deep in a non-coding region, as is typical for signals detected by GWAS in common diseases. Although the exact mode of genetic influence remains to be elucidated for this peak, the findings stress the importance of breaking the stereotypical dogma of Mendelian inheritance in monogenic diseases and emphasize the importance of large-scale, genome-wide, case-control genetic studies in elucidating the genetic causes of inherited diseases largely unsolved by sequencing approaches. In conclusion, this study demonstrates the usefulness of GWAS in identifying disease-associated loci in so-called monogenic disorders, which depends on the presence of founder mutations. It also highlights the under-appreciated significance of high-frequency variants that may account for the undetermined heritability of various inherited diseases. At the same time, the significance of the identified variants may extend beyond genetic diagnosis, as they may simultaneously serve as ideal targets of local genome treatments. Methods Patients and controls. Nine hundred forty-four presumed-unrelated patients with retinitis pigmentosa were recruited from Kyushu University Hospital, Tohoku University Hospital, the Yuko Wada Eye Clinic, Nagoya University Hospital, and Juntendo University Hospital. The majority of the patients were recruited through a genetic screening project hosted by the Japan Retinitis Pigmentosa Registry Project (JRPRP), in which 83 genes associated with retinitis pigmentosa were analyzed by NGS panel test (sequencing of all the coding exons and the flanking immediate exon-intron boundaries of the 83 known retinitis pigmentosa genes)11. Japanese patients consistent with the autosomal recessive mode of inheritance, typically with multiple affected siblings and no affected members in other generations, were enrolled. In addition, isolated cases with no family history as well as offspring of consanguineous parents were also included. Most of the unaffected controls, in whom retinitis pigmentosa was ruled out by fundus examination, were recruited at Tohoku University Hospital and its affiliated hospitals39. The remaining control samples, from subjects with no documented history of ocular disease, were purchased from the National Institutes of Biomedical Innovation, Health and Nutrition (https://bioresource.nibiohn.go.jp/). Blood samples were collected for DNA extraction and for establishing patient-derived lymphoblastoid cell lines (LCLs). Genome-wide association study.
In the first GWAS, 644 cases and 620 controls, all from Japan, were genotyped with the CoreExome-24 v1.1 array (Illumina, San Diego, CA, USA). The total number of analyzed samples was reduced to 581 cases and 603 controls after quality control (QC). During QC, we excluded single-nucleotide variants (SNVs) with Hardy-Weinberg equilibrium (HWE) P < 0.0001 in the controls, a call rate < 99%, or three alleles. Data were also discarded if the sample had a call rate < 98%. In addition, closely related pairs (pi-hat > 0.1)46 and ancestral outliers, as determined with a PCA analysis using the 1000 Genomes Project (five Asian populations, CEU, and YRI) and PLINK software, were removed. One hundred and forty-nine cases with causal mutations identified by the NGS panel test11 were also removed. The remaining 432 cases and 603 controls were subjected to a GWAS using 10,673,864 variants following whole-genome imputation of 523,187 genotyped SNVs using phased haplotypes from the 1000 Genomes Project (Phase 3) as the reference panel. SHAPEIT was used for phasing, followed by minimac3 for genotype imputation47. Imputed variants with an estimated imputation accuracy of Rsq > 0.3 were selected. It should be noted that variants were not excluded based on minor allele frequency (MAF) in this study, because the assumed rare retinitis pigmentosa mutations may be tagged better by lower-frequency variants. Statistical analysis of the GWAS was performed using RVtests48. We used imputed genotype dosages and the top 10 principal components as covariates in the analysis input data. The principal component scores were calculated using PLINK. The association between each SNP and retinitis pigmentosa was modeled as a logistic regression with an allele-dosage effect, adjusted for the 10 principal component scores. The Wald test was used to determine the significance of association for each SNP. The second GWAS comprised 300 cases and 300 controls and was also carried out using genotyping with the CoreExome-24 v1.2 (Illumina). The total number of samples was reduced to 286 cases and 287 controls after applying QC procedures identical to those of the first GWAS. Samples were also removed if they overlapped with the first GWAS. Then, 78 cases with causal mutations identified through the NGS panel test were excluded11. The remaining 208 cases and 287 controls were subjected to a GWAS using 10,383,808 SNVs following whole-genome imputation of 522,207 genotyped SNVs selected with the same criteria as in the first GWAS. Since it was very difficult to estimate the outcome initially, because we could not find a GWAS targeting recessive Mendelian disorders, we estimated the size of the second GWAS based on the results of the first GWAS, assuming that a meta-GWAS was to be performed. The first GWAS was carried out using all the samples available at that time. Sample sizes for the second GWAS were calculated so that the top five signals would reach statistical significance, using an online sample size calculator (https://www.stat.ubc.ca/) and adopting a two-sided alpha level of 0.05 and 80% power. However, the size was eventually restricted by the availability of samples, because the disease studied is a rare disease with a prevalence of 1 in 4000.
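A minimal sketch (not from the paper) of the per-variant model just described: a logistic regression of case/control status on imputed allele dosage with the top principal components as covariates, taking the Wald P-value for the dosage term. The data here are synthetic placeholders; in the study this model was fitted by RVtests across all imputed variants.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
dosage = rng.uniform(0, 2, size=n)   # imputed allele dosage for one variant
pcs = rng.normal(size=(n, 10))       # top-10 principal component scores
status = rng.integers(0, 2, size=n)  # 1 = case, 0 = control (placeholder)

X = sm.add_constant(np.column_stack([dosage, pcs]))
fit = sm.Logit(status, X).fit(disp=0)

# Column 1 of X is the dosage term; its Wald test gives the per-variant P-value
print(f"OR = {np.exp(fit.params[1]):.2f}, Wald P = {fit.pvalues[1]:.3g}")
```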
Of the cases and controls actually used in the GWAS, 284, 284, and 72 cases were recruited via Tohoku University (northeastern region), Kyushu University (southern region), and Nagoya University (central region), respectively, whereas 797 and 93 controls were from Tohoku University and from an unknown region (purchased as a normal Japanese DNA set), respectively. A meta-analysis combining the first and second GWAS data sets was performed using METAL49. Stepwise conditional analysis has been used as a tool to identify secondary association signals at a locus50. The conditional analyses, starting with the top associated variant, were performed using the dosages of the target variants of the region as covariates. These steps of adding the variant dosages to the covariates one by one were repeated until no variants satisfied the genome-wide significance level (P < 5.0 × 10−8). To assess the linkage between non-synonymous variants identified through a previous NGS panel test11 and the GWAS peak variants positioned within 1.5 Mb of each other, correlation coefficients (r²) and D′/LOD were calculated using Haploview. Allele frequencies of variants in the Japanese general population were estimated using ToMMo (https://www.megabank.tohoku.ac.jp/english/), a genomic database from whole-genome sequencing of 4773 healthy Japanese individuals, which has recently expanded from 355251.
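For illustration only (not part of the paper's methods), a fixed-effects inverse-variance meta-analysis of per-study effect estimates, one of the standard schemes implemented in METAL; the effect sizes below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def ivw_meta(betas, ses):
    """Fixed-effects inverse-variance meta-analysis of per-study log-odds."""
    betas, ses = np.asarray(betas), np.asarray(ses)
    w = 1.0 / ses**2                      # weight each study by 1/SE^2
    beta = np.sum(w * betas) / np.sum(w)  # pooled effect
    se = np.sqrt(1.0 / np.sum(w))
    p = 2 * norm.sf(abs(beta / se))       # two-sided Wald P-value
    return beta, se, p

# Hypothetical log-odds estimates for one variant in the two GWAS data sets
beta, se, p = ivw_meta([1.30, 1.45], [0.20, 0.28])
print(f"pooled beta = {beta:.3f}, SE = {se:.3f}, P = {p:.2e}")
```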
WGS and Sanger sequencing. We performed WGS using the NovaSeq 6000 sequencer (Illumina, San Diego, CA, USA) with 151-bp paired-end reads. The sequencing library was constructed using the TruSeq Nano DNA Library Prep Kit (Illumina) according to the manufacturer's instructions. The sequenced reads were aligned to the human reference genome using BWA-mem (ver. 0.7.17). PCR duplicate reads were then marked using Picard tools (ver. 2.17.8). Base quality scores were recalibrated, and SNVs and short insertions and deletions were called, using GATK (ver. 4.1.2.0) according to the GATK Best Practices (https://software.broadinstitute.org/gatk/best-practices/). Structural variants were called using Manta52 according to the instructions for Single Diploid Sample Analysis (https://github.com/Illumina/manta/blob/master/docs/userGuide/README.md). In addition, we used IGV software to visually inspect reads for specific genes reported to carry structural variants. Sanger sequencing was carried out for genotyping of family members using the protocol described earlier53. In brief, genomic DNA was amplified by PCR using AmpliTaq Gold and a primer pair designed with Primer3 (ver. 0.4.0; http://bioinfo.ut.ee/primer3-0.4.0/). PCR amplification was performed in a 20-μl total volume containing 20 ng genomic DNA, 1× GoTaq buffer, 0.5 mM dNTPs, 10 μM of each primer, and 2 units (5 U/μl) of GoTaq polymerase (Promega, Madison, Wisconsin). The PCR amplicons were applied onto a 2% agarose gel with appropriate controls and markers. mRNA analysis using patient-derived lymphoblastoid cell lines. To generate patient-derived LCLs, lymphocytes were transformed with the Epstein-Barr virus at a core facility run by Tokyo Medical and Dental University. The origin of the LCLs was verified as the respective patients by partial sequencing of the genome. LCLs were cultured in RPMI 1640 medium (ThermoFisher Scientific, Waltham, MA) supplemented with 15% fetal bovine serum (FBS; ThermoFisher Scientific), 2 mM L-glutamine (ThermoFisher Scientific), and 1% penicillin/streptomycin (ThermoFisher Scientific) at 37°C in an atmosphere of 5% CO2. Cells were routinely tested for mycoplasma contamination. A plasmid for CAG promoter insertion genome editing50 was constructed as shown in Supplementary Fig. 4A, B. The donor template, which comprised the flanking micro-homology arms, the gRNA target site, and the donor sequence, was sub-cloned into the single CRISPR/Cas9 vector (pX601, Addgene #61591). gRNAs were designed (Supplementary Fig. 4C) and the T7E1 assay (Supplementary Fig. 4D) was performed according to the manufacturer's instructions (New England Biolabs, Ipswich, MA). In brief, PCR products amplified from genomic DNA were denatured at 95°C for 5 min, reannealed, and incubated with T7 Endonuclease I (New England Biolabs) at 37°C for 30 min. The reaction products were resolved by electrophoresis in a 2% agarose gel. DNA fragments were analyzed using ImageJ. The indel efficiency was calculated as 100 × (1 − (1 − cleaved band intensity/total band intensity)^(1/2)). The donor sequence included a CMV promoter (from pCAG-Neo, Wako, Osaka, Japan) for in-frame insertion upstream of the EYS start codon. A plasmid for mutation replacement genome editing was constructed as shown in Supplementary Fig. 4E, F. The donor template, which comprised the flanking micro-homology arms, the gRNA-1 or gRNA-4 target site, and the donor sequence, was sub-cloned into the vector (pX601) using a DNA ligation kit (Clontech, Mountain View, CA). To avoid repeated cleavage after mutation replacement, mutations were introduced into the flanking gRNA target sites within the donor template. The mutations introduced into the 5′ gRNA-1 and 3′ gRNA-4 target sites were selected using the codon optimization tool GENEius (http://www.geneius.de/GENEius/) based on the human codon table. The LCLs were transfected with a plasmid using Trans-IT XP transfection reagent (Mirus Bio, Madison, WI) and treated with or without a demethylating agent, 5-Aza-2′-deoxycytidine (1 μM; Abcam, Cambridge, UK), and hydralazine hydrochloride (0.2 μM; Abcam). To test whether transcripts were degraded by nonsense-mediated mRNA decay, LCLs were treated with emetine (Sigma-Aldrich, St. Louis, MO) at 60 μg/ml for 12 h before RNA extraction54. For mutation replacement gene editing (Supplementary Fig. 3E, F), LCLs were co-transfected with the CAG promoter insertion plasmid and the mutation replacement genome-editing plasmid (ratio 1:3). Total RNA was extracted 48 h post-transfection using the miRNeasy Plus Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. A 500-ng sample of total RNA was reverse-transcribed with SuperScript IV (ThermoFisher Scientific) and oligo(dT) primers (ThermoFisher Scientific) at 55°C for 30 min. The design of the primer sets for RT-PCR is shown in Supplementary Table 5. The RT-PCR was performed with KOD One DNA polymerase (Toyobo, Osaka, Japan) for 35 cycles of 98°C for 10 s, 60°C for 5 s, and 68°C for 5 s. PCR products were analyzed on agarose gels. Uncropped gel images are provided as Supplementary Fig. 5.
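The band-intensity calculation above, expressed as a small function (an illustrative sketch, not code from the paper); the square root corrects for the fact that cleaved duplexes arise from heteroduplexes formed between edited and unedited strands.

```python
def indel_efficiency(cleaved, uncleaved):
    """T7E1 assay: estimated indel % from gel band intensities (e.g., ImageJ)."""
    frac_cleaved = cleaved / (cleaved + uncleaved)
    return 100.0 * (1.0 - (1.0 - frac_cleaved) ** 0.5)

# Hypothetical intensities: summed cleavage products vs. the parental band
print(f"indel efficiency = {indel_efficiency(3500.0, 6500.0):.1f} %")  # ~19.4 %
```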
Total RNA was isolated from each morpholino injectant (at least 7 embryos) according to the manufacturer's instructions and resuspended in 20 μl of RNase-free water. After the RNA was treated with DNase, we further extracted the RNA with phenol-chloroform. The RNA was then reverse-transcribed with the SuperScript VILO cDNA Synthesis Kit (Life Technologies, ThermoFisher Scientific). The RT products (cDNA) were amplified and analyzed by quantitative real-time PCR (Applied Biosystems 7500 Fast Real-Time PCR System; Applied Biosystems, ThermoFisher Scientific) using TaKaRa SYBR Green PCR mixture (Takara Bio Inc., Kusatsu, Shiga, Japan). The expression of eys was normalized to the internal control (gapdh) and calculated by the comparative threshold cycle (ΔΔCt) method. Human EYS cDNAs with and without the c.2528G>A variant were subcloned into the pcDNA3.1(+) vector and transcribed with the mMessage mMachine T7 kit (Ambion, ThermoFisher Scientific, Waltham, Massachusetts, USA). In morpholino knockdown experiments with or without rescue mRNA, a mixture of 200 mM ATG-MO, SP-MO, and 300 ng/μl mRNA, or 380 mM ATG-MO alone, was applied, respectively. We injected morpholinos and mRNA into embryos within 40 min after fertilization. Quantitative analyses of mislocalized rhodopsin were performed by confocal microscopy. The total number of photoreceptors displaying rhodopsin mislocalization within a retinal section was counted by a person who was masked with regard to what had been injected. Statistics and reproducibility. The frequency of homozygotes of the G843E and S1653Kfs mutations was calculated as the square of the allele frequency of each mutation in the general population, assuming random mating. Numbers of cells with mislocalized rhodopsin in the field were counted, and the Wilcoxon rank-sum test was carried out using R software to assess the difference between the two groups. RT-PCR was confirmed to be reproducible by three independent assays. In zebrafish experiments, three blinded observers independently analyzed the data. Ethics approval and consent to participate. The study was initiated after ethical approvals were granted by the Institutional Review Boards of Kyushu University Hospital, Tohoku University Hospital, the Yuko Wada Eye Clinic, Tokyo Medical and Dental University, and Nagoya University Hospital. All procedures followed the tenets of the Declaration of Helsinki. Informed consent was obtained from all patients and controls before collecting blood samples for DNA extraction and establishing patient-derived LCLs. All zebrafish experimental procedures were conducted after approval by the relevant committees, including the animal ethics committee for animal experiments at Osaka University Graduate School of Medicine.
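A sketch of the comparative-Ct normalization described above (illustrative only; the Ct values below are hypothetical): eys is first normalized to gapdh within each sample, then expressed relative to an uninjected control as 2^(−ΔΔCt).

```python
def relative_expression(ct_eys, ct_gapdh, ct_eys_ctrl, ct_gapdh_ctrl):
    """Comparative threshold cycle (2^-ddCt) method."""
    d_ct_sample = ct_eys - ct_gapdh             # normalize to internal control
    d_ct_control = ct_eys_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for a morphant pool vs. an uninjected control pool
rel = relative_expression(ct_eys=26.8, ct_gapdh=18.0,
                          ct_eys_ctrl=24.5, ct_gapdh_ctrl=18.1)
print(f"relative eys expression = {rel:.2f}")  # ~0.19, i.e., ~81% knockdown
```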
2021-01-31T14:30:22.634Z
2021-01-29T00:00:00.000
{ "year": 2021, "sha1": "51dc3039ef99f875b84ff1b75f6686d8cd8aebe7", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s42003-021-01662-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f8d74d3ab10e064a8ccb850260fe8055c147863b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
231586325
pes2o/s2orc
v3-fos-license
Ultrastructural observations on the oncomiracidium epidermis and adult tegument of Discocotyle sagittata, a monogenean gill parasite of salmonids During their different life stages, parasites undergo remarkable morphological, physiological, and behavioral "metamorphoses" to meet the needs of their changing habitats. This is true even for ectoparasites, such as the monogeneans, which typically have a free-swimming larval stage (oncomiracidium) that seeks out and attaches to the external surfaces of fish, where they mature. Before any obvious changes occur, there are ultrastructural differences in the oncomiracidium's outer surface that prepare it for a parasitic existence. The present findings suggest a distinct variation in the timing of the switch from the oncomiracidial epidermis to the syncytial structure of the adult tegument, and so, to date, there are three such categories within the Monogenea: (1) nuclei of both the ciliated cells and the interciliary cytoplasm are shed from the surface layer and the epidermis becomes a syncytial layer during the later stages of embryogenesis; (2) nuclei of both the ciliated cells and the interciliary syncytium remain distinct and the switch occurs later, after the oncomiracidia hatch (as in the present study); and (3) the nuclei remain distinct in the ciliated epidermis but those of the interciliary epidermis are lost during embryonic development. Here we describe how the epidermis of the oncomiracidium of Discocotyle sagittata is differentiated into two regions, a ciliated cell layer and an interciliary, syncytial cytoplasm, both of which are nucleated. The interciliary syncytium extends between and underneath the ciliated cells and sometimes covers part of their apical surfaces, possibly the start of their shedding process. The presence of membranous whorls and pyknotic nuclei over the surface is indicative of membrane turnover, suggesting that the switch in epidermis morphology is already initiated at this stage. The body tegument and associated putative sensory receptors of subadult and adult D. sagittata are similar to those in other monogeneans. Introduction Most monogeneans are ectoparasitic, while a few genera are endoparasites of fish or other vertebrates. In aquaculture, such parasites can cause significant welfare issues and economic loss (Ogawa and Timi 2015; Trujillo-González et al. 2018). In adult monogeneans, the outer body covering is a syncytial tegument with connections to nucleated subtegumental cell bodies lying among parenchymal cells beneath the superficial muscle layers (e.g., Smyth and Halton 1983; Tyler and Tyler 1997; El-Naggar et al. 1991; Ramasamy et al. 1995; Cribb et al. 2003; Hodová et al. 2010; Poddubnaya et al. 2016). In contrast, the outer body covering of the oncomiracidium is an epidermis, which is often ciliated (Whittington et al. 1999). Only a few transmission electron microscope (TEM) studies have investigated the body surface of monogenean larvae; for monopisthocotyleans, these include Entobdella soleae (see Lyons 1973), Euzetrema knoepffleri (see Fournier 1976), and Monocotyle spiremae and Neoheterocotyle rhinobatidis (see Rohde et al. 1998), while those on polyopisthocotylean larvae are confined to a few species of the Polystomatidae (Fournier 1979; Cable and Tinsley 1992) and Zeuxapta seriolae (see Rohde 1998). These are probably not the most representative species of the Monogenea: E. knoepffleri is adapted to living in the bladder of a urodele, and the polystomatids are adapted to a mesoparasitic life in amphibians and reptiles. Perhaps not surprisingly, the epidermis of E.
knoepffleri was found to resemble that of the polystomatids. Whittington et al. (1999) recommended further studies on the oncomiracidial epidermis of more species of both monogenean subclasses, particularly those infecting fish. Rubio-Godoy et al. (2003) reported rapid killing of D. sagittata oncomiracidia incubated in naïve plasma from rainbow trout or brown trout, and their scanning electron microscope study revealed a breached epidermis. The only other ultrastructural study conducted on this parasite was that of Cable and Tinsley (2001), who documented spermiogenesis. The present study was conducted to describe the epidermis of the oncomiracidium and the tegument of subadult and adult D. sagittata using TEM, with the aim of comparing this transition phase with that of other monogeneans. It was also an opportunity to assess the tegumental ultrastructure of adults that were occasionally expelled from the gills and recovered in screening water, in order to record possible changes that may occur in the tegument after detachment from the host gills. Finally, the present study provided another opportunity to clarify the different types of putative sensory structures associated with the epidermis of the oncomiracidium and the tegument of the adult. Materials and methods Rainbow trout (Oncorhynchus mykiss) infected with Discocotyle sagittata were caught at a Government Fish Hatchery in Cornaa on the Isle of Man, UK, during the summer of 1994. Fish were transported to Bristol University and maintained in aquaria according to the methods outlined by Gannicott and Tinsley (1997). Parasite eggs were collected every 24 h by draining the water from the aquaria through a 125-μm sieve; the residue was resuspended in dechlorinated water and decanted into 200-ml crystallizing dishes. Using a dissecting microscope, eggs were collected into Petri dishes and incubated for 3-4 weeks at 13 ± 0.5°C. Recently emerged oncomiracidia were fixed for TEM or used to infect naive hosts (see Gannicott and Tinsley 1998b). The silver nitrate staining technique was used to reveal the boundaries of ciliated cells (Lynch 1933). Infected fish were pithed, each gill arch was transferred quickly to a Petri dish of dechlorinated water, and parasites of different developmental stages were separated and fixed individually for TEM. Individuals of D. sagittata maintained on their host in aquaria were occasionally expelled from the gills and recovered in screening water. Two such adults and two subadults (with a single pair of clamps on the opisthaptor) found alive in screening water were fixed and processed for TEM. For comparison, a further two (apparently healthy) adults were removed from their host, left in dechlorinated water at 13°C for 24 h (equivalent to the time spent off the host by the naturally expelled parasites), and then fixed for TEM. In addition, 9 larval stages (7 newly hatched oncomiracidia and 2 post-larvae retrieved from their host 24 h post-infection) were fixed, together with 7 adults and 4 subadults that were processed immediately after recovery from their hosts. All specimens were fixed at 4°C in 2.5% glutaraldehyde buffered with 0.1 M sodium cacodylate, washed overnight in the same buffer, post-fixed for 1 h in cacodylate-buffered 1% osmium tetroxide, washed again in buffer, dehydrated in ethyl alcohol, and embedded in Araldite resin. Ultrathin sections were double-stained with uranyl acetate and lead citrate and viewed on a JEOL 1200 EX or 1210 electron microscope operated at 80 kV.
Ethics Ethical considerations followed University of Bristol guidance existing at that time, and the studies were regulated by a UK Home Office Licence. Epidermis of the oncomiracidium Hatched oncomiracidia of Discocotyle sagittata measure 350 (240-430) μm in length and 120 (100-170) μm in maximum width. Silver nitrate staining revealed the boundaries of 28 ciliated cells arranged in six regions (two anterolateral, two mediolateral, and two posterior). These cells are bilaterally symmetrical and distributed as follows: five in each anterolateral region, six in each mediolateral region, and three in each posterior group. Ultrastructurally, the epidermis of recently emerged oncomiracidia of D. sagittata is differentiated into two regions, a ciliated cell layer and an interciliary, non-ciliated, syncytial cytoplasmic layer (Figs. 1, 2b). The elongated ciliated cells are plate-like and bound laterally to the adjacent interciliary layer by tight junctions (Fig. 2b). There are numerous cilia (average length 15-18 μm), each possessing the typical "9 + 2" pattern of axonemal microtubules (Fig. 2c, d). The basal bodies of the cilia are embedded in the surface layer of the cell, and each is connected at its annulus to a single, relatively long, cross-striated horizontal rootlet (average 2 μm in length) lying at an angle of about 90° to the insertion level of the cilia (Fig. 2e). The basal plasma membrane and associated basal lamina form a few relatively short tubular invaginations (infoldings) into the cytoplasm. The cell cytoplasm has a finely granular texture and is moderately electron dense (Fig. 2b, e). Each cell possesses an elongated nucleus (average 5.5 × 1.5 μm) with an irregular outline and distinctive, highly electron-dense chromatin patches, which fill most of the nucleus (Fig. 2). Most observed nuclei of the ciliated cells are pyknotic, and the cell cytoplasm is filled with elongate mitochondria (average 3.5 μm long), oriented at right angles to the cell surface, and a network of long electron-dense strands running parallel with the mitochondria and, in many places, contacting the cell surface between the cilia (Fig. 2b-e). Also, a few spherical translucent vesicles are seen in the cytoplasm, but no other cytoplasmic organelles are observed (Fig. 2f, g). A few short microvilli-like structures are found on the apical surface between the cilia (Fig. 2e, g). The lateral edges of the ciliated cells interdigitate with the interciliary layer, which in many sections extends beneath the ciliated cell or sometimes extends to cover a small part of its apical surface (Figs. 1, 2f, g). Moreover, in some regions of the ciliated cells, particularly between the cilia and in front of the pyknotic nucleus, an apical portion of the cytoplasm extends outward, forming a bulb-like structure containing a large vacuole filled with a finely granulated cytoplasmic matrix (Figs. 1, 2c, d). In many sections, the interciliary layer lying between neighboring ciliated cells extends outward beyond the level of the cell surface and constricts to form a nearly spherical bulb-like structure. The interciliary layer covering the general body surface of the oncomiracidium is nucleated and varies in thickness from 1 to 7 μm (N = 9) (Figs. 1, 3a-f). In most body regions it is apparently folded (Fig. 3a, d) but becomes thinner and unfolded in the haptor region (Fig. 3b). It is bounded externally by an apical membrane and internally by a basal plasma membrane (Fig. 3b).
The apical membrane is underlined by a thin, dense terminal web formed of microfilaments (Fig. 3b), while the basal plasma membrane is associated with a relatively thick and dense undulating fibrous basal lamina (Fig. 3b). Most nuclei are irregularly shaped and possess highly electron-dense, condensed chromatin patches (Fig. 3). However, some intact nuclei appear to be in the process of dissociation, as they become granulated and contain large vacuoles with heterogeneous vesicles (Figs. 1, 3c), while others have a large multilayered spherical body with a central electron-dense core (Fig. 3d). Most of these nuclei are elevated slightly, with the associated cytoplasm, above the level of the body surface (Fig. 3c, d). Also, they are often surrounded by multilayered whorls (Fig. 3c, d) and, in some cases, a narrow translucent layer is found between the nucleus and the surrounding cytoplasm (Fig. 3c). In addition to nuclei and multilayered whorls, the interciliary layer contains Golgi bodies (Fig. 3b, g), numerous mitochondria, glycogen granules, granular endoplasmic reticulum, ribosomes, and oval-shaped and circular, membrane-bound vesicles which are either translucent or moderately electron dense (Figs. 1, 3b, f, g). There are circular and longitudinal muscle layers underneath the basal lamina (Figs. 1, 3b, f, g). Nucleated cytons (tegumental cell bodies) are present underneath the muscular layers, packed with translucent and moderately electron-dense vesicles similar to those in the outer interciliary layer (Figs. 1, 3e, f). The cyton cytoplasm contains mitochondria, granular endoplasmic reticulum, and Golgi bodies that are concentrated at the periphery of the cell. However, cytoplasmic connections of these tegumental cell bodies with the outer interciliary layer could not be traced. In worms retrieved from their host 24 h post-infection, the surface layer contained very few organelles, nuclei were absent, and the ciliated cells had been shed (Fig. 3h).

Tegument of the adult

The general body tegument of subadult and adult D. sagittata is composed of an external syncytial layer, connected to subtegumental cell bodies (cytons) through cytoplasmic connections traversing the tegumental muscle layers (Figs. 4a, 5a). The syncytial tegumental layer varies in thickness from 0.5 to 7.5 μm. The apical plasma membrane is lined internally by a dense terminal web of microfilaments similar to that in the oncomiracidium (Figs. 4a, 5a, b). The basal plasma membrane is underlined by a conspicuous, moderately electron-dense basal lamina that forms numerous relatively short tubular invaginations into the outer syncytial tegumental layer (Figs. 4a, 5a, b). The outer syncytial layer has no cytoplasmic organelles except mitochondria, which are concentrated close to the basal plasma membrane (Fig. 5b). Spherical membrane-bound vesicles fill much of the syncytial layer, but they vary in appearance. Most vesicles are translucent, but some are moderately electron dense while a few are highly electron dense (Figs. 4a, 5a, b). Some large vacuoles are visible, and some of the moderately electron-dense vesicles were captured releasing their electron-dense particles into the ground substance (Fig. 5b). The tegument musculature consists of several layers of circular and longitudinal muscle fibers (Fig. 5b).
Each subtegumental cell body possesses a well-developed nucleus, and its cytoplasm is filled with Golgi bodies, ribosomes, granular endoplasmic reticulum, mitochondria, and characteristic secretory vesicles of different forms, as well as irregularly shaped vacuoles with heterogeneous contents (Figs. 4a, 5c). Each Golgi complex consists of 3-8 flattened cisternae with associated small vesicles (Fig. 5c, d). Cytoplasmic connections carrying secretory vesicles extend from these cells and open into the outer syncytial layer (Figs. 4a, 5a). There were no detectable differences in the tegument of those specimens that had been naturally dislodged from their host and those recovered in screening water, although there was an increase in surface layer vacuolation of control specimens that had been maintained in vitro for 24 h (Fig. 5e).

Fig. 2 Discocotyle sagittata. a The ciliated cell (cc) joins the interciliary syncytium (inc) by tight junctions (arrow) and possesses a pyknotic nucleus (N), mitochondria (m), and long electron-dense strands (ds). c, cilia; cm, circular muscle; lm, longitudinal muscle; pa, parenchyma. Scale bar = 1 μm. b The interciliary syncytium (inc) with glycogen particles (gl) and translucent vesicles (tv). sr, striated rootlets of cilia. Other abbreviations as in a. Scale bar = 2 μm. c Ciliated cell (cc) traversed by interciliary syncytium (inc) and the bulb-like structures (bs) extending above the surface of both regions and forming a constriction (arrows). c, cilia; sr, striated rootlets. Scale bar = 1 μm. d Ciliated cell (cc) with an elongated pyknotic nucleus (N) and bulb-like structure (bs) with large vacuole (lv). Scale bar = 2 μm. e The basal bodies (bb) of the cilia (c) and striated rootlets (sr) lying at an angle of about 90° to the insertion level of the cilia. ds, electron-dense strands; m, mitochondria; mi, microvilli. Scale bar = 0.5 μm. f Interciliary syncytium (inc) covering the lateral edges of a ciliated cell (cc), found beneath its basal region and covering a small part of its apical surface. c, cilia; cm, circular muscle; ds, electron-dense strands; lm, longitudinal muscle; m, mitochondria; sv, small translucent vesicle; arrow, tight junction. Scale bar = 2 μm. g Ciliated cell (cc) traversed by interciliary syncytium (inc) and containing mitochondria (m) and small translucent vesicles (sv). c, cilia; mi, microvilli; arrow, tight junction.

Two types of presumed sensory structures were detected on the body surface, particularly common around the anterior region of the oncomiracidium. These are a uniciliated receptor and a compound multiciliated receptor, both of which penetrate the interciliary syncytial layer (Figs. 4b, c, 5f, g, h). Only the uniciliated receptor was observed on the body surface of the anterior region of the adult. However, in some sections, two separate but closely attached uniciliated receptors could be seen in the interciliary syncytial layer of the oncomiracidium (Fig. 5f). The uniciliated receptor consists of a nerve bulb anchored to the tegumental layer by annular septate desmosomes and bears a single (9 + 2) cilium, 2.7 μm long, with a normal basal body but without a rootlet (Figs. 4b, 5f). At the level of the basal body, a layer of electron-dense thickening (a collar) is found on the inner surface of the bulb (Figs. 4b, 5f, g), while at the level posterior to the basal body, the bulb is supported by 8-10 annular, electron-dense rings (Fig. 5g).
The bulb contains a homogeneous, moderately electron-dense matrix in which are embedded mitochondria, neurotubules, and translucent vesicles (Figs. 4b, 5f, g). In some sections, the nerve bulb of the uniciliated receptor was seen in continuity with a neuron containing a relatively large spherical nucleus. The multiciliated receptor appears to consist of a single nerve bulb terminating in at least three cilia with the typical (9 + 2) structure, and it connects to the neighboring syncytial layer by septate desmosomes (Figs. 4c, 5h). A collar, basal bodies, and thin, electron-dense rings are present, but rootlets were not seen (Fig. 4c). Also, the nerve bulb contains membranous strands close to the dense rings (Fig. 4h). Similar membranous strands are seen outside the bulb in the syncytial tegumental layer (Fig. 4c).

Discussion

Typical of monogeneans, the tegument of adult Discocotyle sagittata is composed of a surface syncytial cytoplasmic layer separated from underlying tegumental cell bodies by a basal lamina complex and muscle blocks. The present findings, together with previous studies, suggest a distinct variation in epidermis ultrastructure among oncomiracidia of different monogenean species. The oncomiracidial epidermis of D. sagittata is differentiated into two regions, a ciliated cell layer and an interciliary, syncytial cytoplasm, both of which are nucleated. Moreover, the interciliary syncytium extends between the ciliated cells, covering their lateral sides and sometimes part of their basal and apical surfaces. Subepidermal cell bodies containing secretory vesicles similar to those in the interciliary syncytial layer were seen among parenchymal cells, with their cytoplasmic processes extending upwards, in close contact with the subtegumental muscles. Previous studies have shown that the two-layered tegumental structure of monogeneans develops from a primitive epithelium in which a nucleated apical layer becomes connected via cytoplasmic processes to underlying subtegumental cells (Lyons 1973; Smyth and Halton 1983; Bereiter-Hahn et al. 1984; Cable and Tinsley 1992). Three categories can be recognized for this switch in tegument morphology in the oncomiracidia of monogeneans studied so far. In the first, nuclei of both the ciliated cells and the interciliary cytoplasm are shed from the surface layer and the epidermis becomes a syncytial tissue during the later stages of embryogenesis, as occurs in Entobdella soleae and Pseudodiplorchis americanus (see Lyons 1973 and Cable and Tinsley 1992, respectively). In the second category, nuclei of the ciliated cells and interciliary syncytium of the oncomiracidium remain distinct and the switch in tegument morphology occurs after the oncomiracidia hatch, as in D. sagittata (present study), Polystoma integerrimum and Polystoma pelobatis (see Fournier 1979), and Zeuxapta seriola (see Rohde 1998). The third category is represented by oncomiracidia of Euzetrema knoepffleri, where the nuclei remain distinct in the ciliated epidermis but those of the interciliary epidermis are lost during embryonic development (Fournier 1976). In P. integerrimum, the slow-growing form reaches the host's bladder when the tadpole metamorphoses, whereas the fast-growing neotenic form matures on the gills of the tadpole; only the slow-growing form exhibits the delay in tegumental transformation (Fournier 1979). There is no obvious ecological or behavioral explanation to interpret this delayed switch in the life cycle of the mazocraidids D. sagittata and Z.
seriola, but perhaps this phenomenon is common and is simply a reflection of the rarity of ultrastructural detection of surface nuclei. Persistence of the nuclei of the ciliated cells and interciliary syncytium of the oncomiracidium of D. sagittata and other monogeneans indicates that these cytoplasmic layers continue to perform their synthetic and physiological activities, which are controlled by these nuclei. The nuclei of the ciliated cells may have a role in controlling shedding of the ciliated cells as soon as the oncomiracidium makes contact with the gills. A characteristic feature of the interciliary layer of the oncomiracidium of D. sagittata is the presence of numerous, large electron-dense whorls. Moreover, some intact nuclei contain membranous whorls or large vesiculated vacuoles. Most of these apparently dissociating nuclei project slightly above the surface of the parasite with a narrow layer of cytoplasm. Also, the nuclei of the ciliated cells appear pyknotic. All these features tend to be indicative of membrane turnover, suggesting that the switch in epidermis morphology is already initiated at this stage. This transition appears to be complete in worms retrieved from their host 24 h post-infection, where the surface syncytial tegumental layer contained very few organelles and had no nuclei, and the ciliated cells had been shed. In P. americanus, asynchronous shedding of ciliated cells occurs 1-2 h post-infection, and exciliation involves coalescence of basal vacuoles to form a large cavity which enlarges under the ciliated cell, causing rupture of the lateral septate desmosomes (Cable and Tinsley 1992). In the present study, none of these vacuoles was observed, suggesting that the shedding mechanism in the oncomiracidium of D. sagittata could be different. In the oncomiracidium of E. soleae, Lyons (1973) found no vacuoles between the ciliated cells and the underlying layer of discontinuous "presumptive adult tegument" and suggested that the process of shedding is under nervous control, since the ciliated cells are shed when the oncomiracidium attaches to the fish host. The author presumed that either the ciliated epidermis or the presumptive adult layer secretes a substance that acts on the intercellular cement. A similar shedding mechanism could occur in the oncomiracidium of D. sagittata, since the interciliary layer extends beneath the ciliated cells and covers their lateral surfaces. The large subtegumental cell bodies found underneath the subtegumental muscle bands resemble those of other monogeneans, where their cytoplasmic processes fuse with the interciliary cytoplasm that replaces the shed ciliated cells. The cytoplasmic bulb-like structures of both the ciliated cells and the interciliary cytoplasm of the oncomiracidium of D. sagittata are good evidence of the pinching off of parts of the epidermal cytoplasm, and consequently support the hypothesis that the switch in epidermis morphology occurs at this stage. In the digenean miracidium of Fasciola hepatica, large vacuoles are formed between the base of the cilia and the underlying subepidermal layers, followed by expansion of the cytoplasmic interciliary ridges to replace the lost ciliated cells (Southgate 1970). The cilia of the oncomiracidia of D. sagittata (present study), E. soleae (see Lyons 1973), Euzetrema knoepffleri (see Fournier 1976), P.
integerrimum (see Fournier 1979), and Pseudodiplorchis americanus (see Cable and Tinsley 1992) all have only a single horizontal cross-striated rootlet, while the cilia on the oncomiracidia of the monocotylids Neoheterocotyle rhinobatidis and Monocotyle spiremae have two rootlets, a well-developed horizontal rootlet and a much less-developed vertical rootlet. However, contrary to the vertical rootlets of turbellarians, which originate from the basal bodies, the vertical rootlets of these monocotylids originate from the basal parts of the horizontal rootlets. It was considered unlikely that they were homologous in the two groups, and they were therefore termed "false vertical rootlets" in the monocotylid species. In the oncomiracidium of the polyopisthocotylean Z. seriola, vertical rootlets are missing, although bundles of fine filaments extend from the basal bodies into the cytoplasm of the epidermal cells, straddling horizontal rootlets of cilia in the same longitudinal rows (Rohde 1998). These filaments may anchor the cilia in the cell body. In the ciliated cells of D. sagittata, long electron-dense strands were observed running parallel with the mitochondria and, in many places, contacting the cell surface between cilia; similar structures were described in the ciliated cells of E. soleae (see Lyons 1973) and P. americanus (see Cable and Tinsley 1992). These strands may serve as a supportive skeleton that prevents the ciliated cell cytoplasm from collapsing during locomotive strokes of the cilia.

Fig. 4 a Schematic drawing showing the ultrastructure of the tegument of adult Discocotyle sagittata in longitudinal section. bl, basal lamina; bm, basal plasma membrane; cm, circular muscle fiber; cx, cell body connection; dv, highly electron-dense vesicle; Go, Golgi body; ger, granular endoplasmic reticulum; lm, longitudinal muscle fiber; lv, large vacuole; m, mitochondria; mv, moderately electron-dense vesicle; N, nucleus; st, syncytial tegumental layer; tcb, tegumental cell body; ti, tubular invaginations of the basal plasma membrane; tv, translucent vesicle; tw, terminal web. Scale bar = 2 μm. b Schematic drawing of the ultrastructure of the uniciliated presumed sense organ in the oncomiracidium. bb, basal body; c, cilium; co, collar; dr, electron-dense rings; inc, interciliary cytoplasm; m, mitochondrion; nb, nerve bulb; nt, neurotubules; nv, neurosecretory vesicle; sd, septate desmosomes; tv, translucent vesicle. Scale bar = 1 μm. c Schematic drawing of the ultrastructure of the multiciliated presumed sense organ in the oncomiracidium. ms, membranous strands.

The body tegument of subadult and adult D. sagittata follows the general pattern described in other monogeneans (Smyth and Halton 1983; El-Naggar et al. 1991; Cribb et al. 2003; Hodová et al. 2010; Poddubnaya et al. 2016). There were no detectable differences between the tegument of adult and subadult specimens (with a single pair of clamps on the opisthaptor) of D. sagittata which had been naturally dislodged from their host and recovered in screening water, although there was an increase in the surface layer vacuolation of adult specimens which had been maintained in vitro for 24 h. The present findings indicate that the physiological activities of the tegument may continue for a specific period after natural dislodgement of the worms, and during this period they might be able to reattach to a host. The syncytial tegumental layer of adult D. sagittata contains membrane-bound vesicles, most of which are translucent.
These vesicles are likely manufactured by Golgi complexes in association with the granular endoplasmic reticulum. There was evidence that some vesicles release their contents into the ground substance of the syncytial layer. Possibly this contributes to the formation of the fibrous terminal web, which may have protective and supportive functions. Exocytosis of the contents of the tegument vesicles has been reported in many other monogeneans, for example, Allodiscocotyle diacanthi (see Ramasamy et al. 1995), and it has been suggested that they are involved in glycocalyx maintenance or the provision of a protective layer of mucus over the apical plasma membrane to minimize mechanical, osmotic, and immunological damage to the tegument surface. Uniciliated and multiciliated presumed sensory receptors were detected penetrating the interciliary syncytium of the oncomiracidium, while only the uniciliated receptor was observed on the body tegument of adult D. sagittata. Similar sensory receptors have been recorded in other monogeneans. Uniciliated receptors were reported in the adult and oncomiracidium of E. soleae, adult Gyrodactylus spp., adult Leptocotyle minor, juvenile Amphibdella, and adult Diclidophora and Polystomoides spp. (see Smyth and Halton 1983). Most of these receptors are located at the anterior region of both the adult and the oncomiracidium, possibly serving as tangoreceptors or rheoreceptors. The compound multiciliate receptors observed in the head region of the adult and larva of E. soleae, the spike sensilla of Gyrodactylus spp., and adult Polystomoides may have chemosensory or tangoreceptive functions (Smyth and Halton 1983). There may be more types of undetected sensory structures in the oncomiracidium and adult of D. sagittata; resolving this requires silver nitrate staining histology and further electron microscopy.

Authors' contributions JC and RCT designed the study; JC collected all data and together with MME drafted the manuscript, to which all authors provided comments and approved the final version.

Funding information Open Access funding provided by Cardiff University. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
Discovery of ASKAP J173608.2-321635 as a Highly-Polarized Transient Point Source with the Australian SKA Pathfinder

We report the discovery of a highly-polarized, highly-variable, steep-spectrum radio source, ASKAP J173608.2-321635, located $\sim$4\,deg from the Galactic center in the Galactic plane. The source was detected six times between 2020 January and 2020 September as part of the Australian Square Kilometre Array Pathfinder Variables and Slow Transients (ASKAP VAST) survey at 888\,MHz. It exhibited a high degree ($\sim 25$\%) of circular polarization when it was visible. We monitored the source with the MeerKAT telescope from 2020 November to 2021 February on a 2--4 week cadence. The source was not detected with MeerKAT before 2021 February 07 when it appeared and reached a peak flux density of 5.6\,mJy. The source was still highly circularly polarized, but also showed up to 80\% linear polarization, and then faded rapidly with a timescale of one day. The rotation measure of the source varied significantly, from $-11.8\pm0.8$\,rad\,m$^{-2}$ to $-64.0\pm1.5$\,rad\,m$^{-2}$, over three days. No X-ray counterpart was found in follow-up \textit{Swift} or \textit{Chandra} observations about a week after the first MeerKAT detection, with upper limits of $\sim 5.0\times10^{31}$\,erg\,s$^{-1}$ (0.3--8\,keV, assuming a distance $\sim10$ kpc). No counterpart is seen in new or archival near-infrared observations down to $J=20.8$\,mag. We discuss possible identifications for ASKAP J173608.2-321635 including a low-mass star/substellar object with extremely low infrared luminosity, a pulsar with scatter-broadened pulses, a transient magnetar, or a Galactic Center Radio Transient: none of these fully explains the observations, which suggests that ASKAP J173608.2-321635 may represent part of a new class of objects being discovered through radio imaging surveys.

INTRODUCTION

Many types of Galactic sources are known to be variable at radio wavelengths, including pulsars, stars, and magnetars. For example, Staelin & Reifenstein (1968) detected giant radio pulses from the Crab pulsar, Hallinan et al. (2007) found periodic radio bursts from the M9 dwarf TVLM 513-46546, and Camilo et al. (2006) detected transient pulsed radio emission from the magnetar XTE J1810-197. Exploring radio variability can help us better understand extreme astrophysical phenomena and possibly find unexpected sources (Fender et al. 2015). The development of large field-of-view radio interferometers, such as the Australian Square Kilometre Array Pathfinder (ASKAP; Hotan et al. 2021), enables us to investigate variable and transient phenomena more systematically over a wider parameter space. The ASKAP survey for Variables and Slow Transients (VAST; Murphy et al. 2013; https://vast-survey.org/) is designed to search for such sources. The VAST Phase I Pilot Survey (VAST-P1; Murphy et al. 2021) was conducted between 2019 August and 2020 August. The footprint of VAST-P1 consists of six regions, including a ∼250 deg² region covering the Galactic Center (with ∼356° < l < 10°, |b| ≲ 5°). We used the VAST transient detection pipeline (Pintaldi et al. 2021; Murphy et al. 2021) to search for highly variable radio sources. Given its high stellar density and ongoing star formation, the Galactic Center (GC) is a promising region for finding variable and transient radio sources (e.g., Lazio et al. 2006). Aside from transients of known origin like X-ray binaries (e.g., Bower et al. 2005; Zhao et al. 2020), 1A 1742-28 (Davies et al.
1976) and the Galactic Center Transient (GCT; Zhao et al. 1992) were the first two radio transients detected, and are only ∼arcmin away from the GC. Three Galactic Center Radio Transients (GCRTs) were discovered in the 2000s at lower frequencies: GCRT J1746-2757 (Hyman et al. 2002), GCRT J1745-3009 (Hyman et al. 2005), and GCRT J1742-3001 (Hyman et al. 2009). Unlike A1742-28 and GCT, the GCRTs are about a degree away from the GC, but they are all at low Galactic latitudes (|b| < 0.6°). Though the radio properties of these three GCRTs are not identical to each other, the spectra of all three are very steep and none of them has a clear counterpart at other wavelengths. The most well-studied of the three, GCRT J1745-3009, was detected in at least two different states: it emitted ∼1 Jy bursts every 77 minutes in 2002, and gave off weaker (∼50 mJy) single bursts in later observations. Hyman et al. (2007) suggest that GCRT J1745-3009 likely belongs to a new class of coherent emitters, while most radio transients are incoherent synchrotron sources. And there are yet further candidates in need of confirmation and follow-up (e.g., Chiti et al. 2016). In this paper we report the discovery of a highly polarized, variable source near the Galactic Center, ASKAP J173608.2−321635, detected at 888 MHz in VAST-P1 observations with ASKAP and redetected at 1.29 GHz with MeerKAT (Jonas & MeerKAT Team 2016; Camilo et al. 2018). We present the observations, including radio imaging, pulsar searching, X-ray searches, and near-infrared imaging, in Section 2, and discuss the possible nature of the source in Section 3.

ASKAP Observations

ASKAP J173608.2−321635 was first discovered as a compact radio source in a transients search of VAST-P1 data (Project Code AS107) using the VAST transient detection pipeline (Figure 1). It was detected in the adjacent fields 1724−31A and 1752−31A, observed 13 times between 2019 April 28 and 2020 August 29. The VAST-P1 survey incorporates the Rapid ASKAP Continuum Survey (RACS, Project Code AS110; McConnell et al. 2020) as its first epoch. Both RACS and VAST-P1 were conducted at a central frequency of 888 MHz with a bandwidth of 288 MHz, and they shared the same tiling footprints. The integration time for RACS was 15 min while that for VAST-P1 was 12 min, achieving rms noise levels of 0.36 mJy beam⁻¹ and 0.40 mJy beam⁻¹, respectively, for regions near the GC. Details of these survey observations and data reduction are given by McConnell et al. (2020) and Murphy et al. (2021).

Figure 1 caption (in part). Upper panels: each image is 10′ on a side, with north up and east to the left. We show the "off" image observed on 2019 April 28 in panel (a), the "on" image observed on 2020 January 11 in panel (b), and the Stokes V image from 2020 January 11 in panel (c). The color scales are the same for all of these images. Lower panels: MeerKAT L-band images of ASKAP J173608.2−321635. Each image is 10′ on a side, with north up and east to the left. We show the "off" image observed on 2021 January 19 in panel (d), the "on" image observed on 2021 February 07 in panel (e), and the Stokes V image from 2021 February 07 in panel (f). The color scales are the same for all of these images.

The detections showed a high degree of circular polarization. There were four additional ASKAP observations that cover our source (Table 1). These observations were calibrated using PKS B1934-638 for both the flux density scale and the instrumental bandpass. All observations were processed using standard procedures in the ASKAPsoft package (Guzman et al. 2019).
We note that there was a ∼50 mJy detection in a 10-hour observation at 943 MHz on 2020 November 01. However, the systematic error is high due to the source being located near the edge of the beam. To check for any shorter-timescale variability we imaged the source using data from the 2020 November 01 ASKAP observation with an integration time of 15 min (resulting in 40 images in total). This lightcurve showed a relatively low modulation index (standard deviation divided by the mean) of ∼13%, and had a reduced χ² relative to a constant model (a measure of the significance of the variability; see, e.g., Swinbank et al. 2015) of 1.6 for 39 degrees-of-freedom (40 observations minus one parameter for the mean). Overall we did not see any evidence for hour-scale variability (Figure 3).

Parkes Observations

Motivated by the possibility that ASKAP J173608.2−321635 is a pulsar, we conducted follow-up observations of it with the 64-m Parkes telescope on 2020 April 20 and 2020 July 29 using the pulsar searching mode with the Ultra-Wideband Low (UWL) receiver (Hobbs et al. 2020), which provides simultaneous frequency coverage from 704 to 4032 MHz. Each observation was 30 min with 32 µs time resolution and high frequency resolution (1024 channels per 128 MHz subband). We used Presto (Ransom 2001) to perform a standard pulsar search. We found no candidates in a search of dispersion measures (DMs) spanning 0-3000 pc cm⁻³ (corresponding to 25 kpc based on the YMW16 electron-density model, Yao et al. 2017, hereafter YMW16, or about two times the highest DM for pulsars discovered to date; e.g., Shannon & Johnston 2013), periods < 25 s, and accelerations up to ∼20 m s⁻² (assuming a pulsation period of 1 ms). We also found no single pulse above an SNR of 8 using the single pulse search procedure of Presto.

Figure 2 caption. Full radio lightcurve for ASKAP J173608.2−321635, including non-detections (times of X-ray observations are also indicated). The circular polarization fraction V/I is shown in the bottom panel for the detections. In the upper right panel, we show the detections with ASKAP from 2020 January. In the lower right panel, we show the observations close to the MeerKAT detections from 2021 February. We fit an exponential decay of the form S ∝ exp(−t/τ) for the four 1.3 GHz detections (blue dashed line) and find the timescale of decay to be ∼26 hours. We scale the UHF-band (800 MHz) detection to L-band (1.3 GHz) with the spectral index α ∼ −2.7 and show the scaled flux density as the purple diamond.

However, the lack of simultaneous imaging means we cannot determine whether the source was radio-loud during these observations. These non-detections (with an upper limit of ∼0.05 mJy, assuming the duty cycle of the pulsar (W/P) to be 10%) therefore do not rule out the presence of a pulsar.

MeerKAT Observations

To simultaneously search for pulsed and continuum emission from ASKAP J173608.2−321635, we observed it using the MeerKAT radio telescope at a central frequency of 1.28 GHz on a two-week cadence starting from 2020 November 19 (project code DDT-20201005-DK-01). Each observation had 12 minutes on the target, achieving an rms noise of 40 µJy beam⁻¹. Imaging and pulsar searching were performed simultaneously in all MeerKAT observations. We used PKS J1830-3602 for bandpass, flux density scale, and phase calibration. We reduced the image data using Oxkat (v1.0; Heywood 2020), where the Common Astronomy Software Applications (CASA; McMullin et al.
2007) package and Tricolour were used for measurement set splitting, cross-calibration, self-calibration, and flagging, and Wsclean (Offringa et al. 2014) was used for continuum imaging. We did not detect any source to a 5σ limit of 0.04 mJy in the first five epochs. However, we detected a source in our observation on 2021 February 07 at a flux density of 5.67 ± 0.04 mJy, but did not detect any pulsations. The best-fit position of the source is (J2000) R.A. 17h36m08.19s ± 0.03s, Dec. −32°16′35.0″ ± 0.3″, with Galactic coordinates (l, b) = (356.08°, −0.04°), based on the MeerKAT detection, where the uncertainties are based on a comparison of the positions of field sources to their RACS matches. We imaged the source with an integration time of 16 seconds (resulting in 40 images in total). The lightcurve showed a relatively low modulation index of ∼4% and had a reduced χ² of 0.8 for 39 degrees-of-freedom, with no evidence for minute-scale variability (Figure 3). The source was moderately circularly polarized (V/I = +8%) and had a steep radio spectrum within the bandpass (α = −2.7 ± 0.1, where S_ν ∝ ν^α; subband calibration has not been properly evaluated, and hence these estimates may include a ∼10% calibration error). We also found the source to be highly linearly polarized (|L|/I ∼ 80%) with a moderately low Faraday rotation measure (RM) of −11.2 ± 0.8 rad m⁻². The source also exhibited depolarization behavior towards lower frequencies: the fractional total polarization is nearly 100% at 1.6 GHz but only ∼20% at 0.9 GHz (Figure 4). We performed further tests to verify the polarization and RM variability, as discussed in Appendix A.

Figure 4 caption (in part). We show the circular polarization as green squares and the linear polarization as red diamonds. We fit a simple depolarization equation Π = Π₀ exp(−2σ²λ⁴) to the linear polarization data, shown as the red dashed line, where σ = 5.7 rad m⁻² is the RM dispersion of the Faraday screen (Farnes et al. 2014).

Further radio observations showed a very rapid decline, with an exponential timescale of ∼26 hr (Figure 2 inset). Our ASKAP observation 20 hours after the first MeerKAT detection gave a flux density of 2.4 ± 0.3 mJy at 1.3 GHz. Two further MeerKAT observations over the following days demonstrated that the source continued to fade exponentially, while the spectral shape remained similar (α = −3.4 ± 0.3). We found the source was still highly linearly polarized in these observations, although the RM changed significantly, from −11.2 ± 0.8 rad m⁻² on 2021 February 07 to −63.3 ± 1.5 rad m⁻² on 2021 February 09. The ionosphere usually contributes a Faraday rotation of order ∼1 rad m⁻² (Sotomayor-Beltran et al. 2013), which can potentially cause RM variations between epochs. We used IonFR to model the ionospheric Faraday depth at the dates of the observations. The ionospheric Faraday rotation is +0.65 ± 0.05 rad m⁻² and +0.75 ± 0.06 rad m⁻² on 2021 February 07 and 2021 February 09, respectively. The corrected RM of the source is therefore −11.8 ± 0.8 rad m⁻² and −64.0 ± 1.5 rad m⁻² on these days, after ionospheric RM corrections. The intrinsic polarization angle was consistent between the epochs (see justifications in Appendix A). We also obtained three 12-min observations in the ultra-high-frequency band (UHF; 544-1088 MHz) with MeerKAT, about one hundred hours after the first MeerKAT detection.
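As an aside, the cutout-lightcurve variability test used above (for both the 15-min ASKAP images and the 16-s MeerKAT images) reduces to two numbers: the modulation index, and a reduced χ² against a constant-flux model with N − 1 degrees of freedom. A minimal sketch of that calculation in Python is given below; this is not the authors' pipeline code, and the flux-density array is a synthetic placeholder.

import numpy as np

def variability_metrics(flux, flux_err):
    # Modulation index: standard deviation divided by the mean.
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    mod_index = np.std(flux) / np.mean(flux)
    # The best-fit constant model is the inverse-variance weighted mean.
    weights = 1.0 / flux_err**2
    mean_flux = np.sum(weights * flux) / np.sum(weights)
    chi2 = np.sum(((flux - mean_flux) / flux_err) ** 2)
    dof = flux.size - 1  # one fitted parameter (the mean)
    return mod_index, chi2 / dof

# 40 synthetic 16-s flux densities (mJy) around the Feb 07 detection level
rng = np.random.default_rng(0)
flux = rng.normal(5.67, 0.2, size=40)
m, chi2_red = variability_metrics(flux, np.full(40, 0.2))
print(f"modulation index = {m:.1%}, reduced chi2 = {chi2_red:.2f} for 39 dof")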
There was no detection in these individual UHF observations, but there was a ∼5σ detection when all three were summed coherently (see blue diamonds in Figure 2). This UHF-band detection is a factor of two higher than what we expected from the exponential decay (we corrected the UHF-band detection to 1.3 GHz assuming a spectral index of α = −2.7), suggesting that the spectrum may have steepened to α < −4 or that the decay slowed. During imaging observations with MeerKAT, the FBFUSE (Filterbanking Beamformer User Supplied Equipment; Barr 2017) instrument was used to produce high-time-resolution Stokes-I beams to enable pulsar and fast-transient searching. At both L- and UHF-band, FBFUSE was configured to produce a tiling pattern of 7 coherent beams with the central beam positioned at (J2000) R.A. 17h36m08.20s, Dec. −32°16′33.0″. The beams were arranged in a close-packed hexagonal grid with an overlap at their 70% power points (see W. Chen et al., submitted, for details of FBFUSE beam tiling). At L-band, FBFUSE produced 4096-channel data covering the 856 MHz band with a time resolution of 76.56 µs. At UHF-band, the instrument produced 4096-channel data covering the 544 MHz band with a time resolution of 120.47 µs. Data streams from FBFUSE were recorded to disk on the APSUSE (Accelerated Pulsar Search User Supplied Equipment; Barr 2017) cluster. The data were dedispersed to dispersion measures in the range 0-2000 pc cm⁻³ at UHF-band and 0-3000 pc cm⁻³ at L-band, with the different maximum DMs chosen to give roughly constant scattering timescales between the two bands. The resultant trials were searched for periodicities up to 10 s using the GPU-accelerated Peasoup software (https://github.com/ewanbarr/peasoup), with the resultant candidates folded modulo the detected periodicities using PulsarX (https://github.com/ypmen/PulsarX.git). To retain sensitivity to binary systems, the data were time-domain resampled (Johnston & Kulkarni 1991) to constant acceleration values between −150 and 150 m s⁻² before searching. Folded candidate signals were inspected by eye. No significant pulsed emission was detected above a signal-to-noise threshold of 9. The MeerTRAP real-time single-pulse pipeline running on the TUSE instrument (Transients User Supplied Equipment; Stappers et al., in prep.) was run in parallel with all of the MeerKAT observations. It operated on the same central beam as the pulsar search described above, with a time resolution of 306.24 µs for the L-band observations and 361.4 µs for the UHF observations. Single pulses greater than an S/N limit of 8 were searched for over dispersion measures from 23-5000 pc cm⁻³ in L-band and 23-1500 pc cm⁻³ in the UHF, over a range of widths from the time resolution up to 196 ms and 231 ms for the two frequencies, respectively. No astrophysical pulses were detected above the S/N threshold.

ATCA Observations

After our ASKAP and MeerKAT monitoring observations ended, we observed ASKAP J173608.2−321635 with the Australia Telescope Compact Array (ATCA) in three bands (centered at 2.1 GHz, 5.5 GHz, and 9.0 GHz) for 80 min each on 2021 April 25 (project code: C3431). The observation was calibrated using PKS B1934−638 for the flux density scale and the instrumental bandpass. PMN J1733−3722 was used for phase calibration. We used Miriad (Sault et al. 1995) to perform the data calibration and Casa to perform the continuum imaging. We detected a source with a flux density of 4.41 ± 0.14 mJy at 2.1 GHz.
We did not find any detection at 5.5 GHz or 9.0 GHz, which places 3σ upper limits of 78 µJy beam⁻¹ and 60 µJy beam⁻¹ at 5.5 GHz and 9.0 GHz, respectively. The non-detection at the higher frequency (5.5 GHz) constrains the spectral index to be α < −4.2. We measured the spectral index to be α = −5.6 ± 0.1 across the 2.1 GHz bandpass (Figure 5), which is consistent with the constraints from the non-detection at 5.5 GHz. The source was moderately circularly polarized, with V/I ∼ +6%, which is consistent with the MeerKAT observation that the fractional circular polarization is lower at higher frequencies (Figure 4).

X-ray Observations and Analysis

We identified archival observations covering ASKAP J173608.2−321635 with the Neil Gehrels Swift Observatory (Swift; Gehrels et al. 2004), restricting observations to those using the X-ray Telescope (XRT; Burrows et al. 2005) in the photon counting mode. We used 4 observations between 2012 February 01 and 2012 September 09, with a summed exposure time of 2.3 ks. There was no source within 15″ of ASKAP J173608.2−321635, and we determine a 95% count-rate upper limit of 8.4 × 10⁻⁴ s⁻¹ (over the default energy range of 0.2-10 keV). Following the MeerKAT detections of ASKAP J173608.2−321635, we were awarded Director's Discretionary Time observations with Swift (observation IDs 00014071001 and 00014071002). We obtained 1.7 ks on 2021 February 10.95 and another 0.8 ks on 2021 February 11.35. There was 1 count within 15″ of ASKAP J173608.2−321635, but this is consistent with the background (mean expectation within 15″ of 0.3 counts), so we set an upper limit of 1.2 × 10⁻³ s⁻¹ (0.2-10 keV). We estimated the upper limit on the H I column density for the position of our source, based on the H I 4π survey (HI4PI Collaboration et al. 2016) and using the HEASARC web-based PIMMS, to be 1.59 × 10²² cm⁻² (through the entire Galaxy). Assuming a power-law photon index of Γ = 2.0 (Hyman et al. 2021), the non-detection in the Swift observations yields an upper limit on the unabsorbed flux (0.3-8 keV) of 2.0 × 10⁻¹³ erg cm⁻² s⁻¹. The corresponding upper limit on the X-ray luminosity at a distance d is ∼2.4 × 10³³ (d/10 kpc)² erg s⁻¹. Finally, we were awarded Director's Discretionary Time with the Chandra X-ray Observatory. We used the back-illuminated ACIS-S3 detector with the thin filter, and the 1/8 subarray to maintain sub-second temporal resolution. ASKAP J173608.2−321635 was observed on 2021 February 17.61 for 25.1 ks (observation ID 24966). We filtered the data to 0.3-10 keV. There are 0 events within 1″, and based on the observed background rate we set a 95% upper limit of 1.0 × 10⁻⁴ s⁻¹. Likewise, we estimate the upper limit on the X-ray luminosity (0.3-8 keV) based on the Chandra non-detection to be ∼5.0 × 10³¹ (d/10 kpc)² erg s⁻¹.

Near-Infrared Data

We searched for near-IR counterparts in the VISTA Variables in the Via Lactea Survey (VVV; Minniti et al. 2010). There is no counterpart visible in the VVV DR2 catalog. We find 3σ upper limits of J > 19.25 mag, H > 17.65 mag, and Ks > 16.70 mag from VVV within a 2.5″ radius (corresponding to a 5σ positional error). We observed the source using Gemini Flamingos-2 in J-band (1.2 µm) for 40 min on 2021 April 28 and 2021 April 29, and in Ks-band (2.15 µm) for 18.5 min on 2021 May 24 (project code GS-2021A-FT-210). We used Gemini Dragons (Labrie et al. 2019) to reduce the data and Sextractor (Bertin & Arnouts 1996) to perform the photometry.
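The photometric tie to VVV described in the next paragraph amounts to estimating a zero-point from matched, well-behaved stars. A rough Python sketch of such an estimator follows; the sigma-clipped median offset is an illustrative choice rather than the documented procedure, and the magnitude arrays are placeholders for matched Sextractor/VVV photometry.

import numpy as np

def zero_point(inst_mag, cat_mag, clip_sigma=3.0, n_iter=3):
    # Zero-point = catalog magnitude minus instrumental magnitude,
    # estimated robustly with a sigma-clipped median over matched stars.
    offset = np.asarray(cat_mag, dtype=float) - np.asarray(inst_mag, dtype=float)
    keep = np.ones(offset.size, dtype=bool)
    for _ in range(n_iter):
        zp = np.median(offset[keep])
        scatter = np.std(offset[keep])
        keep = np.abs(offset - zp) < clip_sigma * scatter
    zp = np.median(offset[keep])
    zp_err = np.std(offset[keep]) / np.sqrt(keep.sum())
    return zp, zp_err

For scale, ~270 matched J-band stars with a per-star scatter of ~0.3 mag would give a zero-point uncertainty of order 0.3/sqrt(270) ≈ 0.02 mag, consistent with the value quoted below.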
We used the VVV catalog as both astrometric and photometric reference to correct the Gemini data. For astrometry, we used 340 sources that we identified as not blended or badly saturated for J-band, and 96 sources for Ks-band. The uncertainty is ≈0.15″ in each coordinate. For photometry, we used fewer sources, to avoid sources that showed signs of saturation or non-linearity: 270 sources in J-band and 90 sources in Ks-band. We estimated zero-point uncertainties of 0.02 mag for J-band and 0.04 mag for Ks-band. The seeing of both observations was ∼0.7″. There is a faint source within 2.5″ of the radio position with J = 20.8 ± 0.2 mag and Ks = 17.6 ± 0.1 mag. This infrared source is just within the 5σ error circle of the radio position (Figure 6); we therefore consider it unlikely to be associated with the radio source, but we examine this in more detail in Section 3.1. Finally, just to the south of that source is a fainter source visible in both J and Ks bands but with magnitudes at or fainter than our 3σ limits. We are unable to measure its properties reliably, but given the density of such sources in the image we do not believe the association to be significant.

Figure 6 caption (in part). … (Minniti et al. 2010). The small pink contour is the best astrometric constraint from MeerKAT (at the 5σ confidence level). We show one well-detected source from the Gemini observations that is within 2.5″ of the radio position as the red star in the inset; there is a fainter source just to the south of that, but it is consistent with our upper limits.

Archival Radio Data

This source was not detected in previous radio surveys, including the quick-look images from the Karl G. Jansky Very Large Array Sky Survey (VLASS); the corresponding upper limits are listed in Table 1. We have also searched for any archival VLA and ATCA data but did not find any other observation that covers our source.

Table 1 notes. (a) The upper limit is derived from Equation 1, assuming the duty cycle of the pulsar (W/P) to be 10%. (b) The location of the source is close to the edge of the primary beam; the systematic error can be as high as ∼10 mJy. (d) The spectral index across the bandpass is α = −3.4 ± 0.3; the RM is −64.0 ± 1.5 rad m⁻² after ionospheric RM correction. (e) We combined these three UHF observations and obtained a detection with a flux density of 0.73 ± 0.17 mJy beam⁻¹. (f) The spectral index across the bandpass is α = −5.6 ± 0.3.

DISCUSSION

We can summarize the most important characteristics of ASKAP J173608.2−321635 before we discuss interpretations:

• Factor of >100 variability over a timescale of a week at 900 MHz, with a peak flux density of ∼10 mJy.

• Persistent emission for a few weeks, but able to decline on a timescale as short as 1 day.

• High degree of circular polarization and a steep radio spectrum.

• High degree of linear polarization with a small RM, and depolarization toward lower frequencies. The RM changes significantly across the observations within three days.

• No radio pulsations (searching DMs from 0-3000 pc cm⁻³ and accelerations up to 150 m s⁻²).

Based on its low RM, ASKAP J173608.2−321635 may be a Galactic source. In Figure 7 we show the pulsars with known RM and DM within 2° of the source from the ATNF pulsar catalog (Manchester et al. 2005), along with extragalactic sources with known RM within 2° from RMTable (v0.1.8; Van Eck et al., in prep.). The absolute values of the RMs for almost all nearby sources are much higher than that of our source. Furthermore, according to Hutschenreuter et al. (2021), the RM towards the direction of the source is ∼+450 rad m⁻², mainly contributed by the Milky Way.
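The neighborhood comparison above is essentially a positional cross-match followed by an |RM| comparison. A minimal sketch in Python using astropy is shown below; the two-entry catalog is a placeholder standing in for ATNF pulsar catalogue and RMTable entries, not real data.

import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

target = SkyCoord("17h36m08.19s", "-32d16m35.0s", frame="icrs")

# Placeholder catalog of sources with known RM (rad/m^2)
cat = SkyCoord(ra=[264.1, 265.0] * u.deg, dec=[-32.0, -33.1] * u.deg)
rm = np.array([450.0, -380.0])  # illustrative values only

nearby = target.separation(cat) < 2 * u.deg
print("RMs of sources within 2 deg:", rm[nearby])
# The corrected RM of ASKAP J173608.2-321635 (-11.8 to -64.0 rad/m^2)
# is far smaller in magnitude than typical values along this sightline.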
If we assume the source is extragalactic, its low RM would require a large (∼−450 rad m⁻²) additional contribution to cancel the Galactic RM. The shortest rise and decay timescales we can constrain for our source are τ ∼ 1 day, based on the factor of ∼2 rise between 2020 January 18 and 2020 January 19, and the ∼day-long decay following the MeerKAT detection on 2021 February 07, although the rise in particular is only weakly constrained. If we assume that the emitting region is less than cτ in size, then the brightness temperature of our source is T_B ∼ 10¹² K (d/1 Mpc)². The low RM for our source suggests that it is nearby, with d ≲ 10 kpc. If there is no shorter-timescale variability, we can constrain T_B ≲ 10⁸ K, which is far lower than the limit for coherent emission, ∼10¹² K (Readhead 1994). However, this limit cannot help us discriminate between coherent and incoherent sources, as some coherent emission can have a brightness temperature well below 10¹² K (e.g., type II and III solar bursts; Reid & Ratcliffe 2014). Even so, the high degree of circular polarization suggests some coherent process such as electron cyclotron maser emission may be operating (e.g., Dulk 1985; Pritchard et al. 2021, and see below). Significant changes in rotation measure as seen for ASKAP J173608.2−321635 are rare. Sources with short-timescale RM variations are usually extragalactic, such as AGNs with extreme environments (e.g., Zavala & Taylor 2003; Lico et al. 2017; Anderson et al. 2019) and some FRBs (FRB 121102; Hilmarsson et al. 2021). RM variations for Galactic sources are usually slow and small (e.g., Yan et al. 2011; Wahl et al. 2021), except for the Galactic Center magnetar PSR J1745-2900: Desvignes et al. (2018) found large changes in the observed RM of PSR J1745-2900, by up to 3500 rad m⁻² over four years. Even more interestingly, they found that the RM of PSR J1745-2900 changed by about 7.4 rad m⁻² per day in 2017. The RM variation is thought to come from a minimum scale of magneto-ionic fluctuations in the scattering screen. As we see no change in the intrinsic polarization angle for our source (Appendix A), we infer that the RM variation for ASKAP J173608.2−321635 is not intrinsic to the source but is probably external, related to a change in the intervening interstellar medium (ISM). With only two RM measurements, it is hard to put a strong constraint on the properties of the ISM along the line of sight. However, given the observational features of ASKAP J173608.2−321635, we can still describe the medium, as well as the source, more broadly. Based on typical magnetic field values for the interstellar medium (Ferrière 2001; Han 2017), the length scale of the Faraday region needed to give the change in RM is l_RM ∼ 250 pc (B/2 µG)⁻¹ (n_e/0.1 cm⁻³)⁻¹, where B is the magnetic field and n_e is the electron density of the interstellar medium. Since we see no turnover in our radio spectrum, the turnover frequency should be lower than ∼1 GHz if the source is a synchrotron emitter, which means the magnetic field of the source is ≲3 × 10⁴ G (e.g., Kellermann & Pauliny-Toth 1981). This is consistent with the argument above but not very constraining; moreover, the high degree of circular polarization suggests that this is not typical synchrotron emission.
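As a quick order-of-magnitude check of the brightness-temperature scaling quoted above, one can evaluate T_B ∼ S d²/(2 k_B ν² τ²) for an emitting region of size R ≲ cτ. This is a paraphrase of the standard light-travel-time argument with geometric factors of order unity dropped, not necessarily the authors' exact expression.

import astropy.units as u
from astropy.constants import k_B

S = 10 * u.mJy     # peak flux density
nu = 900 * u.MHz   # observing frequency
tau = 1 * u.day    # shortest rise/decay timescale
d = 1 * u.Mpc      # fiducial distance

# T_B ~ S d^2 / (2 k_B nu^2 tau^2) for a region of size <~ c*tau
T_B = (S * d**2 / (2 * k_B * nu**2 * tau**2)).to(u.K)
print(T_B)  # ~6e11 K, i.e. T_B ~ 1e12 K at 1 Mpc, scaling as d^2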
The optical depth to free-free absorption at the frequencies we observed should be much smaller than one, which implies n_e ≲ 10² cm⁻³ (T/10⁴ K)^0.675 (l_abs/250 pc)^−0.5, where T is the temperature and l_abs is the length scale of the absorber (e.g., Osterbrock 1989); again, this is consistent but not necessarily constraining. If we assume there is no change in the magnetic field, the RM variation implies a DM variation of ∼30 pc cm⁻³ (B/2 µG)⁻¹ in three days, much higher than those measured in pulsar timing (e.g., You et al. 2007; Demorest et al. 2013; Lam et al. 2018; Donner et al. 2020). It is still high (∼1 pc cm⁻³ yr⁻¹) even if we assume the magnetic field can be as high as that near the Galactic Center (∼0.8 mG; Eatough et al. 2013). Only a few types of radio sources are known to emit circular polarization at more than a few percent of their total intensity at low frequencies (<5 GHz). These include stars (e.g., Lynch et al. 2017) and pulsars (e.g., Johnston & Kerr 2018). Circular polarization has also been seen from jets in binaries, but the fractional polarization is low, ∼0.5% (e.g., Fender 2003; Macquart 2003), with similar values seen in extragalactic sources (e.g., Macquart et al. 2003). Indeed, recent circular polarization searches have identified both new pulsars (Kaplan et al. 2019) and the first brown dwarf discovered at radio wavelengths (Vedantham et al. 2020). In this section we discuss these possibilities.

Stellar interpretation

Low-mass flare stars and chromospherically-active binaries such as RS CVns often show polarized flares (e.g., Zic et al. 2019; Mutel et al. 1987).

Figure 8 caption. Color-magnitude diagram for the field of ASKAP J173608.2−321635. We plot J − Ks color versus Ks magnitude. We show sources from VVV and our deeper Gemini observations (both within a 3′ radius) as black dots and blue dots, respectively. The red star shows the possible infrared counterpart candidate of ASKAP J173608.2−321635. We also plot the errors for certain pairs of (J − Ks, Ks) values at center left as a reference. The purple dashed line shows the detection thresholds of our Gemini observation, with J < 21.5 mag and Ks < 19.1 mag. The red dashed line shows the location of the red clump for different distances. We assume the intrinsic color of the red clump to be J − Ks = 0.75 and the intrinsic luminosity M_K = −1.65 (Wainscoat et al. 1992; Hammersley et al. 2000). We adopt the extinction coefficients in Yuan et al. (2013) and assume an average extinction in the visual band of A_V/d ≈ 1.8 mag kpc⁻¹ (Whittet 1992). A reddening vector for A_V = 5 mag is also plotted.
As our source is about 3 magnitudes brighter than this limit, it is hard for this source to be a cool dwarf: more likely is a more distant red giant branch/red clump star. In general it does not stand out at all compared to the surrounding population, suggesting that it is not a unique object. We come to a similar but less robust conclusion about the fainter object in Figure 6, which we cannot measure reliably (also see Kaplan et al. 2008). The high radio flux density of ASKAP J173608.2−321635, together with non-detections at X-ray and near-IR wavelengths, also makes a stellar interpretation unlikely. X-ray and radio luminosities for various types of active stars are typically correlated (the Güdel-Benz relation; Güdel & Benz 1993;Driessen et al. 2020). In contrast, ASKAP J173608.2−321635 has an X-ray upper limit too low by at least 2 orders of magnitude. Even for ultracool dwarfs (known to be radio over-luminous relative to their X-ray luminosity; e.g., Williams et al. 2014), the X-ray limit of our source is lower than most of the ultracool dwarfs ( Figure 9) 11 . Similarly, based on the brightest possible object that we cannot rule out in infrared (excluding the object in Figure 6), we measure J > 20.8 mag from our Gemini observation. Empirically, we can examine different types of active stars with circularly polarized emission ( Figure 10). The vast majority of stars across different types (L/T dwarfs, magnetic CVs, and radio fluxlimited samples) have radio to near-IR flux ratios of 1. For the radio-discovered T dwarf BDR J1750+3809, the ratio is near 10. Except for the youngest, most energetic pulsars, this ratio is typically 10 3 (e.g., Zyuzin et al. 2016). ASKAP J173608.2−321635 itself has a ratio > 10 3 , depending on the radio state. We can do the same analysis a different way, based on the ratio of radio to bolometric flux from ultra-cool dwarfs. We determined a lower limit on the distance of a stellar/substellar counterpart (spectral type from late L to mid-M) to be ∼150-1400 pc based on the observed population of ultra-cool dwarfs (Reid et al. 2008). At this distance, we would expect low extinction, about 0.5 mag in J-band. Based on this lower limit on the distance (applying the extinction correction), we calculated upper limits on the radio flux density at 888 MHz to be < 0.3 − 0.6 µJy, assuming L radio /L bol = 10 −7 (which is the typical value for M dwarfs, Berger et al. 2010). The ratio of radio luminosity to bolometric luminosity for later L dwarfs can be as high as 10 −5 : for 11 Also see https://github.com/AstroLaura/GuedelPlot. BDR J1750+3809, it can reach 2 × 10 −5 . The limit would give an expected radio flux density of < 120 µJy. Even for a slightly beamed emission (such as for Jupiter, Burningham et al. 2016), the expected flux density would be 0.9 mJy. This is considerably lower than our measured values of ∼10 mJy, suggesting that ASKAP J173608.2−321635 is either a star with an extreme near-IR to radio ratio or another kind of source entirely. To summarize, we excluded ASKAP J173608.2−321635 as a star based on • Compared to its color (J −K s ), the IR source that we detect is too bright in K s -band (Figure 8). • The ratio of X-ray luminosity to radio luminosity is too low for stars (Figure 9). • The source is too bright in radio compared to Jband ( Figure 10). Figure 9. Soft X-ray versus radio luminosity plot for active stars from Güdel & Benz (1993); Benz & Güdel (1994); Williams et al. (2014) and references therein, adapted from Figure 12 of Driessen et al. 
(2020). Gray circles are RS CVn binaries, red triangles are dM/dMe stars, blue diamonds are dKe stars, and green pentagons are ultracool dwarfs. Black dashed lines connect the same source in different states, with the quiescent state as hollow markers and the flaring state as solid markers. We plot the X-ray luminosity upper limit (0.04-2 keV, based on the model we assumed earlier) for ASKAP J173608.2−321635 at different distances (as labeled) as the black dot-dashed line, limiting the source to the shaded region at the upper left.

Pulsar interpretation

Though we found no pulsations in our data, the high degree of polarization and the steep spectrum suggest the source may be a pulsar.

Figure 10 caption. Fractional circular polarization versus radio-to-near-IR flux ratio for stellar sources. We show stars measured in the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) survey at 1.4 GHz as small green circles (Helfand et al. 1999; no polarization information was available), magnetic CVs measured at 8 GHz as orange pentagons (Barrett et al. 2020), auroral emission from L/T dwarfs measured at 6 GHz as cyan pentagons for quiescence (open symbols) and peak (filled symbols; Kao et al. 2016), the T dwarf BDR J1750+3809 measured at 150 MHz as the blue hexagon (Vedantham et al. 2020), and stars identified in RACS as blue circles (Pritchard et al. 2021). ASKAP J173608.2−321635 is the large red star. When available, dashed lines connect different radio states of the same source. The near-infrared data were taken from VVV (Minniti et al. 2010) and the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006).

We can use our MeerKAT observations to constrain the pulsar-like properties of ASKAP J173608.2−321635. The expected signal-to-noise ratio of a pulsar at the beam center can be estimated as (Lorimer & Kramer 2012)

S/N = [S G / (β T_sys)] √(N_pol τ_obs Δν) √((P − W)/W),

where S is the flux density of the pulsar, G = 2.8 K/Jy is the gain of the MeerKAT telescope, N_pol = 2 is the number of polarizations recorded, τ_obs = 700 s is the length of the observation, Δν ∼ 856 MHz is the bandwidth, T_sys ∼ 40 K is the system temperature (which includes the sky temperature in this direction), β is a correction factor due to downsampling, W is the pulse width of the pulsar, and P is the period of the pulsar. The effective pulse width is a combination of the intrinsic pulse width, pulse broadening due to dispersion, and scattering:

W_e = √(W² + δt_disp² + δt_scat²),

where W_e is the effective pulse width, δt_disp = 8.3 × 10⁶ DM ν_MHz⁻³ δν ms (with δν the channel bandwidth in units of MHz) is the smearing time due to dispersion across a channel observed at frequency ν_MHz, and δt_scat is the smearing time due to scattering. We considered scattering as a function of DM based on Bhat et al. (2004; note that this is consistent with the very high degree of scattering from Camilo et al. 2021). A wide effective pulse width reduces the pulsation S/N. For example, Hyman et al. (2021) argue that C1709-3918 and C1748-2827, with steep spectra and 10%-20% circular polarization but no pulsations detected, may be pulsars with scatter-broadened pulses. At the most conservative, if ASKAP J173608.2−321635 is a pulsar with a 1 ms pulsation period, then, considering the effects of dispersion and scattering, the non-detection in our MeerKAT pulsar search with an S/N = 9 threshold implies that the duty cycle (W/P) of the pulsar would have to be >99% for a source with a DM below 200 pc cm⁻³ (∼3 kpc based on YMW16). For longer pulse periods we would have a duty cycle limit of >99% at DMs up to 1000 pc cm⁻³ (∼6 kpc based on YMW16).
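For concreteness, the zero-smearing version of this limit follows directly from the radiometer equation above: setting S/N equal to the search threshold and writing x = W/P gives (1 − x)/x = q², i.e. x = 1/(1 + q²), with q = (S/N)_lim β T_sys / (S G √(N_pol τ_obs Δν)). A short Python check with the numbers quoted in the text, taking β = 1 and ignoring dispersion and scattering smearing (the most optimistic, low-DM case):

import numpy as np

S = 5.67e-3      # flux density on 2021 Feb 07 [Jy]
G = 2.8          # MeerKAT gain [K/Jy]
N_pol = 2        # polarizations summed
tau_obs = 700.0  # integration time [s]
dnu = 856e6      # bandwidth [Hz]
T_sys = 40.0     # system temperature [K]
beta = 1.0       # downsampling correction factor (assumed)
snr_lim = 9.0    # search detection threshold

q = snr_lim * beta * T_sys / (S * G * np.sqrt(N_pol * tau_obs * dnu))
duty_cycle_min = 1.0 / (1.0 + q**2)
print(f"duty cycle must exceed {duty_cycle_min:.4%}")  # ~99.96%, i.e. >99%

At higher DMs, dispersion and scattering smearing reduce the achievable S/N, which is why the limits quoted above only hold below DM ∼ 200-1000 pc cm⁻³.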
Compared to pulsars in the ATNF pulsar catalog, the highest duty cycle is ∼80% (see Figure 11). However, at the highest DMs considered in our search (up to 3000 pc cm⁻³), we would not be sensitive to even the longest period pulsars with typical scattering behavior. Observing at higher frequencies can help us minimize the effect of scattering. We will also employ fast folding algorithms (Staelin 1969) to search for longer periods when they become available for MeerKAT data. An alternative way to smear pulsations would be through orbital acceleration in a tight binary (e.g., Maan et al. 2018; de Gasperin et al. 2018). Our MeerKAT searches were shorter than the Parkes observations, so most binary orbits would not be too smeared out. Based on the range of accelerations searched, we exclude pulsars in binary systems with orbital periods P_B ≳ 5 hr (assuming a circular edge-on orbit, a pulsar mass of 1.4 M⊙, and a companion mass of 0.1 M⊙).

Figure 11. Duty cycle lower limit for a non-detection in the pulsar search for the MeerKAT data from 2021 February 07 (at 856-1712 MHz). We considered pulse broadening effects from dispersion and scattering (Bhat et al. 2004) as a function of DM. We also plot DM versus duty cycle (width of pulse at 50% peak/period) of the pulsars in the ATNF pulsar catalog (Manchester et al. 2005) with colors to indicate their period.

The decline in flux seen in our MeerKAT detections (lower right panel of Figure 2) is a factor of >10 faster than the initial detections seen with ASKAP (upper right panel in Figure 2), with several intermediate values between the "high" state and non-detections. This suggests that what we see is not "on versus off" behavior, as might be expected for a standard intermittent pulsar (Kramer et al. 2006; Lyne 2009). These intermediate flux levels may also rule out effects such as random sampling of eclipses from a "black widow" (e.g., Fruchter et al. 1988) or "redback" (e.g., Roberts 2013) system, where radio pulses can be periodically eclipsed when the companion's wind obscures the line of sight; this can both smear out pulsations (e.g., Stappers et al. 1996) and block the continuum flux (Broderick et al. 2016; Polzin et al. 2020). Typical orbital periods for those systems are < 10 hr, so samples taken days/weeks apart would be very unlikely to end up during the short (< 1 hr) ingress/egress periods. Some systems have been observed to have more complex flux density/eclipse variations (e.g., Polzin et al. 2020), but still generally not the large degree of continuum flux variability seen here. Similarly, the precession of a pulsar will result in emission that comes and goes on a timescale of hours (e.g., Zhu & Xu 2006). The multiple detections with fading behavior over 50 hours in 2021 February, and multiple non-detections over three months, make eclipsing and precession unlikely interpretations. Hence we conclude that the observed emission is unlikely to be due to common pulsar-related origins.

Magnetars are neutron stars with extremely strong magnetic fields (up to ∼10¹⁵ G; Duncan & Thompson 1992; Kaspi & Beloborodov 2017). There are 31 known magnetars and magnetar candidates to date (Olausen & Kaspi 2014; see http://www.physics.mcgill.ca/~pulsar/magnetar/main.html), but only five are detected in the radio as pulsars (Camilo et al. 2006, 2007; Levin et al. 2010; Eatough et al. 2013; Shannon & Johnston 2013; Rea et al. 2013; Karuppusamy et al. 2020; Lower et al. 2020).
All the radio detections of magnetars happened during periods of X-ray outburst (Kaspi & Beloborodov 2017; Esposito et al. 2021), and faded eventually. Magnetars with confirmed radio pulsations show large pulse-to-pulse variability, including in pulse morphology (Kaspi & Beloborodov 2017) and polarization (e.g., Dai et al. 2019). The persistent X-ray luminosity for these radio magnetars is typically ∼10³³ erg s⁻¹ (Rea et al. 2012), and can reach as high as ∼10³⁶ erg s⁻¹ during an outburst (e.g., Rea & Esposito 2011). Our upper limit based on the Chandra observation is comparable to the persistent luminosity of radio magnetars but much lower than those during outbursts (Figure 12). All radio magnetars show very high degrees of polarization, but their flat radio spectra (Shannon & Johnston 2013), in contrast to what we see for ASKAP J173608.2−321635, make a magnetar an unlikely interpretation (although see Pearlman et al. 2018). Similarly, the rotation period of magnetars is typically ∼1−10 s (Kaspi & Beloborodov 2017), and that range is excluded by our MeerKAT searches for most sources (DM ≲ 1000 pc cm⁻³, corresponding to ≲6 kpc based on YMW16; Figure 11). As we discussed earlier, pulsations can be smeared out due to scattering, but the Galactic Center magnetar PSR J1745−2900 has scattering of only 1.3 s at 1 GHz, considerably lower than that expected from DM models (Spitler et al. 2014; Pearlman et al. 2018), so we may actually be sensitive to higher DMs than Figure 11 implies. Regardless, higher radio frequency observations may help to rule out or confirm a magnetar origin. We note that our search did not exclude sources with extremely long periods, such as the ultra-long-period magnetar 1E 161348−5055.1 (with a rotation period of 6.67 hr; De Luca et al. 2006). Further monitoring observations may help us find such periodic activity.

Figure 12. X-ray and radio luminosities of radio-detected magnetars in quiescent and outburst states, with data compiled in Esposito et al. (2020). Note that the radio flux density at an outburst state may not match the X-ray flux corresponding to the same outburst event, as not all sources were measured in both bands for the same outbursts. Radio fluxes of magnetars can be very variable on very short timescales, and we used the average flux density here.

Other transient classes

We now consider whether ASKAP J173608.2−321635 could be an X-ray binary or extragalactic transient. The polarization and extremely steep spectrum are inconsistent with expectations for emission produced by a steady jet (α ∼ 0 at these radio frequencies) such as from low-mass X-ray binaries, or optically thin ejecta (α ∼ −0.7) such as gamma-ray bursts (e.g., Fender 2006). The short timescale (∼days) of our source also rules out sources such as supernovae (∼years; e.g., Dubner & Giacani 2015) and tidal disruption events (∼months; e.g., Gezari 2021).

Variability due to extrinsic effects

The large variability (∼100×) is inconsistent with standard diffractive scintillation, which has a modulation index of order unity (e.g., Cordes & Lazio 1991; Narayan 1992) for compact sources. Refractive scintillation will produce even less variability, particularly due to the proximity of ASKAP J173608.2−321635 to the Galactic centre. While the total electron column density is unclear due to the unknown source distance, the expected variability due to refractive scintillation at 900 MHz ranges from a few tens of percent if the source is nearby, to as little as 2% if it is more distant (Walker 1998; Cordes & Lazio 2002).
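The scintillation comparison rests on the modulation index; a minimal sketch of that statistic follows, with a made-up light curve standing in for the real flux measurements (the survey selection discussed below uses a threshold of 0.9).

```python
import numpy as np

def modulation_index(fluxes):
    """m = sigma_S / <S>, the statistic used to characterize variability;
    diffractive scintillation of a compact source gives m of order unity,
    refractive scintillation substantially less."""
    s = np.asarray(fluxes, dtype=float)
    return s.std(ddof=1) / s.mean()

# Illustrative light curve (mJy): detections fading toward non-detections.
print(f"m = {modulation_index([10.0, 5.0, 1.0, 0.1, 0.1]):.2f}")
```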
Intraday variables (IDVs) similarly have typical modulations of up to a factor of two (e.g., Quirrenbach et al. 1992), although they can be slightly higher (e.g., Dennett-Thorpe & de Bruyn 2000). The linearly polarized flux can vary with higher amplitude and on faster timescales, but the polarization fraction is < 10% (Kraus et al. 2003), inconsistent with ASKAP J173608.2−321635. However, we consider whether the observed emission could be caused by other forms of extrinsic variability, or a combination of both extrinsic and intrinsic effects. For example, one could invoke a compact radio source undergoing an Extreme Scattering Event (ESE; e.g., Fiedler et al. 1987; Bannister et al. 2016), in which the emission is lensed by plasma in the intervening medium; however, this does not explain the high circular polarization we observe. In this case, the lightcurve variability would be caused by propagation effects, while the change in polarization between the two periods of detectability would be intrinsic to the source. Similar variability could be caused by gravitational lensing or plasma lensing. In general, gravitational lensing is achromatic while plasma lensing is highly chromatic (e.g., Wagner & Er 2020), but this is only true when the source is unresolved by the lens. If the source has finite size with spectral variations across it, then even gravitational lensing can have a chromatic effect as different regions are magnified/demagnified. For instance, a star with an active region could have different parts of that region magnified, which could increase the radio flux relative to other bands (Sec. 3.1) and give rise to highly polarized emission. However, it might still be difficult to explain multiple lensing events with similar magnifications. Further multi-wavelength searches during bright states and better characterization of the light curve could help resolve this scenario.

A GCRT-like interpretation

As the source is located only 4 degrees from the Galactic Center, we consider whether it could be another Galactic Center Radio Transient. The GCRT sources share some properties with ASKAP J173608.2−321635. GCRT J1742-3001 has a spectral index of −2 and GCRT J1745-3009 has a spectral index varying from −4 to −13, while that of our source varies from −2.7 to −5.6. Both our source and GCRT J1745-3009 are highly polarized. GCRT J1745-3009 was ∼100% circularly polarized at 325 MHz (Roy et al. 2010). Our source was found to be 100% linearly polarized at ∼1.6 GHz and as high as ∼40% circularly polarized at ∼0.9 GHz. Our source would have a flux density of ∼0.25 Jy extrapolated to 300 MHz, which is comparable to that of GCRT J1745-3009 (∼0.3 Jy). There is no X-ray detection for any of the GCRT sources when they are radio-bright. However, some properties of our source differ from those of the GCRTs. GCRT J1745-3009 is thought to be a coherent emitter based on its very rapid variability (∼10 min), while our source shows no rapid variability and therefore may not emit coherently. The variability timescale for GCRT J1742-3001 is of order one month, comparable with the initial "flare" detected in ASKAP but much longer than the timescale for the latest detections. GCRT J1745-3009 varies on much faster timescales: it emits flare-like emission for about 10 min out of its 77-min period at a relatively constant flux density. Hyman et al. (2007) showed that GCRT J1745-3009 has been detected in three different states.
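The 300 MHz comparison above is a simple power-law extrapolation; a sketch, assuming a bright-state flux density of 10 mJy at 888 MHz and a spectral index of −3 (a value within the −2.7 to −5.6 range quoted for the source):

```python
# Extrapolate a power-law spectrum S ~ nu^alpha to a new frequency.
def extrapolate(s_ref_mjy, nu_ref_mhz, nu_mhz, alpha):
    return s_ref_mjy * (nu_mhz / nu_ref_mhz) ** alpha

s_300 = extrapolate(10.0, 888.0, 300.0, alpha=-3.0)
print(f"S(300 MHz) ~ {s_300 / 1000:.2f} Jy")   # ~0.26 Jy, near GCRT J1745-3009's ~0.3 Jy
```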
We have detected ASKAP J173608.2−321635 in two significantly different observation states so far (bright for a week versus fast fading). In general the sparse observations of ASKAP J173608.2−321635 and the GCRTs limit conclusions based on their temporal properties, and it is not even clear that all of the GCRTs share a common origin. Further monitoring will help resolve this.

CONCLUSIONS

We have presented the discovery and characterization of ASKAP J173608.2−321635: a highly-polarized, variable radio source located near the Galactic Center and with no clear multi-wavelength counterpart. We have largely ruled out most possible origins of ASKAP J173608.2−321635, including stars, normal neutron stars, and X-ray binaries. An intriguing remaining possibility comes from similarities to steep-spectrum radio sources discovered in recent imaging surveys (e.g., de Gasperin et al. 2018; Maan et al. 2018). Galactic sources with steep spectra are usually pulsars (e.g., Bates et al. 2013). However, pulsation searches for most of these sources have been unsuccessful (e.g., Crawford et al. 2000; Maan et al. 2018; Hyman et al. 2019; Crawford et al. 2021). As discussed by Maan et al. (2018) and de Gasperin et al. (2018), possible explanations for the unsuccessful pulsar searches include short-period or eccentric binary systems (Ng et al. 2015), scattering in the interstellar medium, a bias towards short-period pulsars in the search, or alignment of the magnetic and rotation axes (Perry & Lyne 1985). Our searches, especially the short MeerKAT observations, should have had sufficient sensitivity to detect binary systems, but the other two effects may be at play here as well. Or, these sources along with ASKAP J173608.2−321635 may belong to a new class of steep-spectrum sources, possibly related to the GCRTs. In order to constrain the origin of ASKAP J173608.2−321635, continued radio monitoring, pulsation searches at higher frequencies, and multiwavelength observations are necessary.

ASKAP J173608.2−321635 is one of the first sources identified from our searches for transient, polarized sources in the VAST-P1 Survey, and while it is among the most extreme in terms of its variability and polarization properties, it is not the only transient polarized source. However, most other such sources have straightforward identifications with known stars (Pritchard et al., in prep.). Some do not, and these are the subject of further investigation (e.g., Y. Wang et al., in prep.). ASKAP J173608.2−321635 is further notable for its location toward the Galactic Center, although we do not yet know whether that is a coincidence or if that location is related to its nature: similar questions could be raised about the GCRT sources. Future comprehensive searches will quantify the exact number of such sources at different locations in the sky, including the Galactic plane, high-latitude regions, and the Magellanic Clouds (see Murphy et al. 2021 for the VAST Pilot-1 sky coverage). We found three variable sources above a modulation index of 0.9, from which ASKAP J173608.2−321635 easily stood out as it is the most variable source, the only polarized source, and the only source with no clear infrared counterpart. Given that ASKAP J173608.2−321635 is typically not detected and can turn off on timescales from several weeks to as quickly as a day, our sparse sampling (12 epochs over 16 months) suggests that there could be other similar sources in these fields.
Increasing the survey cadence and comparing the results of this search to other regions will help us understand how truly unique ASKAP J173608.2−321635 is and whether it is related to the Galactic plane, which should ultimately help us deduce its nature.

This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2015-05948, and by the Canada Research Chairs program. This research was supported by the Sydney Informatics Hub (SIH), a core research facility at the University of Sydney. This work was also supported by software support resources awarded under the Astronomy Data and Computing Services (ADACS) Merit Allocation Program. ADACS is funded from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government and managed by Astronomy Australia Limited (AAL). Parts of this research were conducted by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), project number CE170100004. The Australian Square Kilometre Array Pathfinder is part of the Australia Telescope National Facility which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. We acknowledge the Wajarri Yamatji as the traditional owners of the Murchison Radio-astronomy Observatory site. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. The scientific results reported in this article are based in part on observations made by the Chandra X-ray Observatory. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. The Australia Telescope Compact Array is part of the Australia Telescope National Facility which is funded by the Australian Government for operation as a National Facility managed by CSIRO. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. This research has made use of NASA's Astrophysics Data System Bibliographic Services.

Software: SExtractor (Bertin & Arnouts 1996), matplotlib (Hunter 2007), scipy (Virtanen et al. 2020), astropy (Astropy Collaboration et al. 2013).

APPENDIX A. POLARIZATION VERIFICATION

ASKAP J173608.2−321635 appears to be circularly polarized. We examined whether the Stokes V detection is intrinsic or the result of polarization leakage. We identified a few field sources with Stokes V detections at > 5σ significance in individual observations. As shown in Figure A1, the field sources with Stokes V detections are usually bright sources (detection SNRs > 100), and their Stokes V detections are due to a modest level of leakage (< 1%). We can confirm that the circular polarization from our source is real, as its fractional circular polarization is much higher than 1% (also see Kaplan et al. 2019). We also attempted to verify whether the change in rotation measure for ASKAP J173608.2−321635 was due to instrumental effects or was intrinsic to the source. Besides measuring the RM value based on RM-synthesis and after RMClean, we also used a direct λ²−χ fitting method to measure the RMs.
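A minimal sketch of such a direct λ²−χ fit: for a Faraday-thin source the polarization angle follows χ(λ²) = χ₀ + RM·λ². The band, angles, and RM below are synthetic placeholders, not the ASKAP measurements; real data additionally require careful handling of the nπ ambiguity in χ.

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def fit_rm(freqs_hz, chi_rad):
    """Least-squares slope of chi versus lambda^2 gives RM in rad/m^2."""
    lam2 = (C / np.asarray(freqs_hz)) ** 2
    # unwrap 2*chi (period 2*pi, since chi has period pi) before fitting
    chi = np.unwrap(2 * np.asarray(chi_rad)) / 2
    rm, chi0 = np.polyfit(lam2, chi, 1)
    return rm, chi0

freqs = np.linspace(800e6, 1080e6, 8)            # synthetic observing band
chi_true = 0.5 + (-63.0) * (C / freqs) ** 2      # synthetic RM = -63 rad/m^2
rm, chi0 = fit_rm(freqs, chi_true)
print(f"RM = {rm:.1f} rad/m^2, chi_0 = {chi0:.2f} rad")
```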
As shown in Figure A2, the RMs we measured with the different methods are consistent, and both methods show clear changes between the epochs. The Stokes Q and Stokes U spectra (Figure A3) clearly show that the RM is different in the two epochs. We also found a linearly polarized field source (J173641.8−320029) with RMs of +259.9 ± 1.9 rad m⁻² and +256.2 ± 2.2 rad m⁻² in the two epochs. This field source demonstrates that the RM stability between epochs is sufficient to draw conclusions about the temporal variability of ASKAP J173608.2−321635's RM. The absence of a dedicated polarization calibration means that we cannot trust the absolute intrinsic polarization angle of our data. However, the changes of intrinsic polarization angle between epochs for ASKAP J173608.2−321635 and the field source (J173641.8−320029) are consistent. The intrinsic polarization angle for our source changed from 109.7 ± 0.7 deg to 18.6 ± 5.4 deg, while that for the field source changed from 116.0 ± 20.0 deg to 23.9 ± 22.6 deg. Therefore it is likely that the intrinsic polarization angle for ASKAP J173608.2−321635 did not change between epochs.

Figure A1. Fractional circular polarization in our images. We show the V/I flux density fraction against Stokes I flux density in four ASKAP observations with V/I detections (2020-01-11, 2020-01-19, 2021-02-07, and 2021-02-09). ASKAP J173608.2−321635 is shown as a diamond, and field sources (dominated by leakage) are shown as circles. All sources are detected at > 5σ in the Stokes V images, but the field sources have V/I < 1%.
Phase 2 Randomized Study of Oral Ibrexafungerp Versus Fluconazole in Vulvovaginal Candidiasis

Abstract

Background
Vulvovaginal candidiasis affects approximately 75% of women in their lifetime. Approved treatment options are limited to oral or topical azoles. Ibrexafungerp, a novel, first-in-class oral triterpenoid glucan synthase inhibitor, has demonstrated broad fungicidal Candida activity and a favorable tolerability profile. The primary objective of this dose-finding study was to identify the optimal dose of oral ibrexafungerp in patients with acute vulvovaginal candidiasis.

Methods
Patients with a vulvovaginal signs and symptoms score ≥7 were randomized equally to 6 treatment groups: 5 treatment doses of oral ibrexafungerp or oral fluconazole 150 mg. The primary endpoint was the percentage of patients with a clinical cure (complete resolution of vulvovaginal signs and symptoms) at the test-of-cure visit (day 10).

Results
Overall, 186 patients were randomized into the 6 treatment groups. Results, using the modified intent-to-treat population (baseline positive culture), are reported for ibrexafungerp 300 mg twice daily (BID) for 1 day (n = 27), which was the dose selected for phase 3 studies, and fluconazole 150 mg for 1 day (n = 24). At day 10, the clinical cure rates for ibrexafungerp and fluconazole were 51.9% and 58.3%, respectively; at day 25, the percentages of patients with no signs or symptoms were 70.4% and 50.0%, respectively. During the study, ibrexafungerp patients required fewer antifungal rescue medications compared with fluconazole patients (3.7% vs 29.2%, respectively). Ibrexafungerp was well tolerated, with the most common treatment-related adverse events being mild gastrointestinal events.

Conclusions
Ibrexafungerp is a well-tolerated novel antifungal with comparable efficacy to fluconazole in the treatment of acute vulvovaginal candidiasis.

Clinical Trials Registration: NCT03253094

Vulvovaginal candidiasis (VVC), more commonly known as vaginal yeast infection, is one of the most common causes of vaginitis. In 80%-95% of women, VVC is caused by Candida albicans, with the remaining infections caused by non-albicans Candida species including Candida glabrata, Candida parapsilosis, Candida tropicalis, Candida krusei, or other fungi [1]. Approximately 75% of women will have ≥1 episode of VVC in their lifetime, and 40%-45% of women will experience ≥2 episodes [1,2]. By the age of 25 years, an estimated 50% of all women will have experienced ≥1 episode of VVC after the onset of sexual activity [3]. Current treatment options for VVC are predominantly limited to the azole class of fungistatic agents and include short courses of topical formulations of various agents or oral fluconazole given as a 150-mg single or multiple dose regimen [2,4,5]. Limitations of available treatments include concerns of intolerance, adverse events (AEs), and, with fluconazole, a potential risk of miscarriage and fetal harm [6-9]. More recent VVC treatment limitations include an increasing prevalence of fluconazole resistance and intrinsic resistance or low susceptibility of non-albicans Candida species to azole antifungals [10,11]. Currently, there are no Food and Drug Administration (FDA)-approved oral non-azole treatment options available for patients with VVC. This absence negatively impacts VVC treatment options for patients with azole-resistant organisms, those not responding to or not tolerating fluconazole, or those for whom its use is contraindicated.
New agents should ideally have broader fungal coverage, minimal AEs, no risk of fetal harm, and limited drug-drug interactions. Ibrexafungerp (formerly SCY-078) is a first-in-class triterpenoid antifungal [12]. Similar to the echinocandins, its mechanism of action targets the glucan synthase enzyme, resulting in decreased (1,3)-β-D-glucan polymers, which weakens the fungal cell wall, leading to fungal cell lysis and death. Due to its unique structure, ibrexafungerp binds to a site on glucan synthase that only partially overlaps with the echinocandin binding site [13]. As a non-azole antifungal, ibrexafungerp exerts broad-spectrum activity against an extensive range of Candida isolates, including many with fks1 and fks2 point mutations that cause echinocandin resistance among C. glabrata and Candida auris [14]. Because glucan synthase is uniquely found in fungal cell walls and not human cells, there is less chance of off-target effects (eg, cytochrome P450 interactions) as observed with azole treatments [15]. Ibrexafungerp has demonstrated activity against several Candida species, including C. albicans and non-albicans Candida species such as C. glabrata, C. krusei, and C. auris [14,16]. Preclinically, ibrexafungerp demonstrated good vaginal penetration, with tissue levels 2- to 9-fold higher than plasma levels [16,17], and unlike fluconazole, the activity of ibrexafungerp is not negatively affected by the low vaginal pH (<4.5) typical of the vaginal milieu in VVC patients [16]. Given the high combined clinical cure rate (35/50 [70%] at day 24; ibrexafungerp 1250 mg loading dose followed by either 750 mg for 2 or 4 days) and favorable tolerability profile demonstrated in a phase 2 proof-of-concept study evaluating ibrexafungerp in women with moderate-to-severe acute VVC, we furthered the investigation of ibrexafungerp [18]. In this study, we selected a range of doses that were well tolerated in previous phase 1 studies [19,20]. Here we report results of DOVE (Double-Blind Oral SCY-078 Acute VVC Evaluation; ClinicalTrials.gov NCT03253094), a phase 2, double-blind, randomized, active-control, dose-finding study of ibrexafungerp compared with fluconazole 150 mg, with a focus on patients receiving ibrexafungerp 300 mg twice daily (BID) for 1 day, the dose selected for phase 3 study evaluation based on patient convenience, safety, and efficacy.

Study Design and Patients

Enrolled patients were ≥18 years of age with a diagnosis of symptomatic moderate-to-severe acute VVC determined by a vulvovaginal signs and symptoms (VSS) score of ≥7. Other eligibility criteria included a positive microscopic examination with 10% potassium hydroxide (KOH) of a vaginal sample revealing yeast forms (hyphae/pseudohyphae) or budding yeasts, and a vaginal pH of ≤4.5.
Exclusion criteria included patients with any vaginal condition that would interfere with the diagnosis and evaluation of VVC (suspected or concurrent causes of vulvovaginitis and/or cervicitis including bacterial vaginosis, Trichomonas, active herpes virus or human papillomavirus infection, positive tests for Neisseria gonorrhoeae or Chlamydia trachomatis, or other mixed infections); the use of antifungal treatments (topical or systemic) within 28 days of the baseline visit; vaginal contraceptives; use of CYP3A4/3A5 inducers and strong time-dependent CYP3A4/3A5 inhibitors within 14 days before enrollment and during treatment; strong or moderate reversible CYP3A4/3A5 inhibitors, including azoles or grapefruit juice, within 48 hours before enrollment and during treatment; select CYP2C8 substrates (ie, amiodarone, amodiaquine, paclitaxel, repaglinide, montelukast, pioglitazone, and rosiglitazone) within 48 hours before enrollment and during treatment; select P-gp substrates (ie, digoxin, colchicine) within 48 hours before enrollment and during treatment; patients menstruating at the baseline visit; patients with uncontrolled diabetes mellitus, HIV infection, or active cervical or vaginal cancer; and patients who were pregnant. This study was conducted in accordance with the general principles of the Declaration of Helsinki. Each study site obtained institutional review board approval for the protocol, informed consent form, recruitment flyers, and other written information before study initiation.

Randomization and Masking

Patients were randomized in equal allocations (at a 1:1:1:1:1:1 ratio) to 1 of 6 treatment groups: oral ibrexafungerp at doses of 750 mg on day 1; 300 mg BID for 1 day; 450 mg BID for 1 day; 150 mg BID on days 1-3; 300 mg BID on days 1-3; or fluconazole 150 mg on day 1. Randomization was completed electronically through an interactive web response system. Fluconazole tablets were encapsulated to maintain treatment blinding. All randomized patients received matching ibrexafungerp placebo tablets and/or matching fluconazole placebo capsules based on treatment assignment, in a double-blind, double-dummy fashion. Both active and placebo ibrexafungerp tablets were manufactured by Corealis Pharma; fluconazole active tablets were manufactured by Teva Pharmaceuticals and encapsulated by Corealis Pharma for blinding purposes. All site and sponsor personnel were blinded to treatment assignment except for a sponsor representative who was involved in safety assessments.

Study Assessments

Vulvovaginal samples for 10% KOH microscopic assessment (assessed locally) and fungal culture (assessed by a central laboratory) were collected at baseline, test-of-cure (TOC, day 10), and follow-up (day 25) visits. Susceptibility testing was performed per Clinical Laboratory Standards Institute M27-A3 guidelines for all cultures positive for Candida species. A vulvovaginal sample was also assessed at baseline for pH and other pathogens (eg, bacterial vaginosis, trichomoniasis, N. gonorrhoeae, C. trachomatis). VSS were assessed using a standardized, predefined scale for which each sign and symptom was given a numerical rating based on severity (absent = 0, mild = 1, moderate = 2, and severe = 3) to calculate a total composite score (range, 0-18). Vulvovaginal signs (edema, erythema, and excoriation or fissures) were rated by the investigator (scale of 0-3 each; maximum score of 9) and vulvovaginal symptoms (itching, burning, irritation) were rated by the patient (scale of 0-3 each; maximum score of 9) at baseline, days 1-10 (TOC), and day 25.
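The composite score just described is simple to make concrete; a sketch, with item keys of our own choosing (the protocol's exact case report form wording is not given):

```python
# Composite VSS score: three investigator-rated signs plus three
# patient-rated symptoms, each item 0 (absent) to 3 (severe), summed
# to a 0-18 composite. Ratings below are illustrative.
SIGNS = ("edema", "erythema", "excoriation_or_fissures")   # max 9
SYMPTOMS = ("itching", "burning", "irritation")            # max 9

def vss_score(ratings: dict) -> int:
    assert all(0 <= ratings[k] <= 3 for k in SIGNS + SYMPTOMS)
    return sum(ratings[k] for k in SIGNS + SYMPTOMS)

patient = {"edema": 1, "erythema": 2, "excoriation_or_fissures": 0,
           "itching": 3, "burning": 2, "irritation": 1}
print(vss_score(patient))   # 9; enrollment required a score >= 7
```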
Safety was assessed by physical exams, hematology and blood chemistry laboratory tests, and vital signs at baseline and day 10, and by continuous AE monitoring throughout the study.

Outcomes

The primary objective was to identify an optimal dose of oral ibrexafungerp in patients with moderate-to-severe acute VVC. The primary efficacy endpoint was the percentage of patients with a clinical cure (complete resolution of signs and symptoms; VSS = 0) at TOC; clinical failure was defined as no response to therapy, incomplete resolution of VSS, or the need for additional vulvovaginal or systemic antifungal therapy before the TOC visit. Secondary objectives were to evaluate the efficacy of oral ibrexafungerp in patients with VVC based on mycological and clinical outcomes and to evaluate the safety and tolerability of ibrexafungerp. Secondary efficacy endpoints were the percentage of patients with mycological eradication (negative fungal culture) at TOC and day 25, the percentage of patients with both clinical cure and mycological eradication at TOC, the percentage of patients with both absence of VSS and mycological eradication at day 25, and the percentage of patients with continued clinical response (ongoing absence of symptoms in patients achieving clinical cure at TOC) at day 25. Exploratory endpoints included the percentage of patients at day 25 who were symptom-free (absence of symptoms regardless of clinical outcome at TOC; patients who received additional antifungal therapy were considered not free of VSS at day 25). Post hoc analyses included VSS score ≤1 (clinical improvement) at TOC and day 25 and the use of antifungal rescue medications.

Statistical Analysis

As a dose-finding study with no formal sample size calculation performed, this study was not statistically powered. All analyses are descriptive in nature. Approximately 180 patients were planned to be enrolled and equally randomized to the 6 study treatment groups. Thirty patients per group was estimated to be adequate for an initial assessment of safety and potential efficacy. The intent-to-treat (ITT) population included all randomized patients. The modified ITT (mITT) population included all randomized patients who had a positive KOH test and a confirmed positive mycological culture for yeast at baseline; all efficacy results are reported using the mITT population. The safety population included all randomized patients who received ≥1 dose of study drug and had ≥1 postbaseline evaluation.

Role of the Sponsor

The role of the sponsor in the design, execution, analysis, reporting, and funding is fully disclosed. The authors' personal interests, financial or nonfinancial, relating to this research and its publication have been disclosed.

RESULTS

Between August 2017 and May 2018, 293 patients were screened for eligibility, and 186 were enrolled and randomized into 1 of 6 treatment groups (Figure 1, Supplementary Table 1); 153 patients had a confirmed culture for yeast at baseline and were included in the mITT population. All patients included in the mITT population had moderate-to-severe VVC, with VSS scores ranging from 7.0 to 16.0. Based on patient convenience with 1-day dosing and our results demonstrating an increase in gastrointestinal treatment-related treatment-emergent adverse events (TEAEs) with larger doses without a corresponding increase in efficacy, ibrexafungerp 300 mg BID for 1 day was selected as the dose for the phase 3 studies (Supplementary Tables 2 and 4).
Therefore, results reported here are limited to ibrexafungerp 300 mg BID for 1 day and fluconazole. A total of 62 patients were enrolled in these 2 treatment groups (ibrexafungerp 300 mg BID for 1 day, n = 30; fluconazole, n = 32); 51 patients were included in the mITT population (ibrexafungerp 300 mg BID for 1 day, n = 27; fluconazole, n = 24). Results for the other treatment groups are provided in the supplement (Supplementary Tables 1, 2, and 4). Overall, there were no differences between treatment groups for baseline characteristics (Table 1). In a post hoc analysis, antifungal rescue medication use was reported in 3.7% of patients receiving ibrexafungerp 300 mg BID for 1 day versus 29.2% of patients receiving fluconazole (Figure 2C). Details of rescue medication use are provided in Supplementary Table 3. Treatment-related TEAEs were reported in all treatment groups, with higher proportions seen with ibrexafungerp compared with fluconazole (Supplementary Table 4). No serious AEs or deaths were reported for any treatment group. Two patients in the 750 mg day 1 group discontinued treatment due to gastrointestinal-related TEAEs that resolved within a day. A normal pregnancy and delivery with no birth defects was reported in 1 patient in the ibrexafungerp 150 mg BID days 1-3 group.

DISCUSSION

Based on patient convenience and the efficacy and safety data in this study, ibrexafungerp 300 mg BID for 1 day was the dosage selected for further evaluation in phase 3 studies. Our study suggested comparable clinical cure rates between ibrexafungerp 300 mg BID for 1 day and fluconazole at TOC. In this phase 2 study, we found it encouraging that certain parameters, such as improved and sustained VSS scores, mycological eradication at day 25, and the need for rescue medication, appeared to be improved with ibrexafungerp compared with fluconazole. Because our study was not statistically powered, these initial findings will need further study to see if the observed differences are meaningful. In 2019, the FDA provided pharmaceutical industry guidance for drug development in the treatment of VVC and recommended clinical cure, defined as the complete absence of all VSS, as the primary efficacy endpoint [21]. In our study, a clinical cure was defined as the complete resolution of VSS (VSS score = 0) by the TOC visit without the need for further antifungal treatment. Mycological eradication was not a primary endpoint, as Candida is normally found in the vagina [2]. Given variations in efficacy outcome definitions in previous studies of VVC, historical comparisons are difficult [22-25]. However, some previous studies have reported a decreased sustained response with azole treatment from day 14 to day 35 [24,25]. In our study, ibrexafungerp was generally well tolerated, with self-limited (generally 1-day duration), mild to moderate gastrointestinal TEAEs. The incidence and nature of treatment-related TEAEs for fluconazole were similar to those reported for single-dose use in VVC [7]. The results of our study have clinical implications. Since the approval of fluconazole for the treatment of VVC, no other medications have been approved for this indication. This exploratory phase 2 study, which included a single-dose fluconazole treatment group, suggests that ibrexafungerp may have a potential role in managing a very common infection.
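For the two arms reported here, the headline percentages can be reproduced from the mITT denominators; the event counts below are back-calculated from the reported percentages (n = 27 and n = 24) and are our inference rather than tabulated values.

```python
# Reproduce the reported proportions from inferred event counts.
arms = {
    "ibrexafungerp 300 mg BID x1": {"n": 27, "cure_d10": 14, "no_ss_d25": 19, "rescue": 1},
    "fluconazole 150 mg x1":       {"n": 24, "cure_d10": 14, "no_ss_d25": 12, "rescue": 7},
}
for name, a in arms.items():
    pct = lambda k: 100.0 * a[k] / a["n"]
    print(f"{name}: cure {pct('cure_d10'):.1f}%, "
          f"symptom-free d25 {pct('no_ss_d25'):.1f}%, rescue {pct('rescue'):.1f}%")
# -> 51.9%/70.4%/3.7% versus 58.3%/50.0%/29.2%, matching the text
```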
With its novel fungicidal mechanism of action [15] and retained in vitro activity at the lower vaginal pH values where fluconazole activity is decreased [16], ibrexafungerp has theoretical advantages over existing azole therapies. Furthermore, because preclinical penetration of ibrexafungerp into vaginal tissue is 2- to 9-fold higher than plasma levels, as opposed to ratios of 0.4-0.7 reported clinically with fluconazole [6,15,16], the drug seems to be delivered very effectively to the area that requires treatment. However, it is uncertain how the benefit of these preclinical characteristics of ibrexafungerp will translate into clinical use.

Figure 2. Efficacy outcomes for ibrexafungerp 300 mg BID for 1 day from baseline to the TOC (day 10) and follow-up (day 25) visits. A, Clinical cure, VSS scores ≤1, and rate of mycological eradication at the TOC visit (day 10). B, VSS score of 0, VSS scores ≤1, and rate of mycological eradication at follow-up (day 25). C, Percentage of patients who required antifungal rescue medication while participating in the study. Clinical cure was defined as a complete resolution of VSS of acute vulvovaginal candidiasis at the TOC visit (day 10); the VSS scale is a standardized, predefined assessment for which each sign and symptom was given a numerical rating based on severity (absent = 0, mild = 1, moderate = 2, and severe = 3) to calculate a total composite score (range, 0-18); mycological eradication was defined as a negative fungal culture. Rescue medications used in these 2 treatment arms included fluconazole, Lotrisone (clotrimazole-betamethasone dipropionate), and terconazole. Abbreviations: BID, twice daily; TOC, test-of-cure; VSS, vulvovaginal signs and symptoms.

This dose-finding study is limited in that no formal sample size calculation was performed and the study was therefore not statistically powered. All analyses were descriptive in nature only. Also, sample sizes for the treatment groups were small. One strength of the study was the use of an active comparator to ibrexafungerp, fluconazole, instead of a placebo. Although patients in this study received only 1 dose, we recognize that current practice guidelines recommend 2-3 doses of fluconazole for patients with severe Candida vulvovaginitis, which would most likely have affected the outcomes of the fluconazole arm in our study [2,5]. Single-dose fluconazole was used to provide an active FDA-approved comparator to ibrexafungerp. In conclusion, ibrexafungerp provides a safe, new, and novel treatment for moderate-to-severe acute VVC. Establishment of the clinical role for single-day ibrexafungerp treatment in women with acute VVC will follow analysis of the 2 large phase 3 studies, VANISH-303 (NCT03734991) and VANISH-306 (NCT03987620), in which the potential advantages of ibrexafungerp against azole-resistant isolates and different Candida species can be evaluated.

Supplementary Data

Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Elevated β1,4-Galactosyltransferase I in Highly Metastatic Human Lung Cancer Cells

The elevated levels of β1,4-galactosyltransferase I (GalT I; EC 2.4.1.38) are detected in highly metastatic lung cancer PGBE1 cells compared with its less metastatic partner PGLH7 cells. Decreasing the GalT I surface expression by small interfering RNA or interfering with the surface GalT I function by mutation inhibited cell adhesion on laminin, the invasive potential in vitro, and tyrosine phosphorylation of focal adhesion kinase. The mechanism by which GalT I activity is up-regulated in highly metastatic cells remains unclear. To investigate the regulation of GalT I expression, we cloned the 5′-region flanking the transcription start point of the GalT I gene (−1653 to +52). Cotransfection of the GalT I promoter/luciferase reporter and the Ets family protein E1AF expression plasmid increased the luciferase reporter activity in a dose-dependent manner. By deletion and mutation analyses, we identified an Ets-binding site between nucleotides −205 and −200 in the GalT I promoter that was critical for responsiveness to E1AF. It was identified by electrophoretic mobility shift assay that E1AF could bind to and activate the GalT I promoter in PGLH7 cells and COS1 cells. A stronger affinity of E1AF for DNA has contributed to the elevated expression of GalT I in PGBE1 cells. Stable transfection of the E1AF expression plasmid resulted in increased GalT I expression in PGLH7 cells, and stable transfectants migrated faster than control cells. Meanwhile, the content of the β1,4-Gal branch on the cell surface was increased in the stably transfected PGLH7 cells. GalT I expression can also be induced by epidermal growth factor and dominant active Ras, JNK1, and ERK1. These data suggest an essential role for E1AF in the activation of the human GalT I gene in highly metastatic lung cancer cells.

The enzyme β1,4-galactosyltransferase I (GalT I; EC 2.4.1.38) is a constitutively expressed type II membrane-bound glycoprotein in vertebrates (1). It is unusual in that it resides in two distinct subcellular compartments, the trans-Golgi network and the cell surface (2, 3). In the trans-Golgi complex, GalT I is one of the key enzymes involved in sugar chain synthesis that catalyzes the transfer of galactose from UDP-Gal to terminal N-acetylglucosamine, forming the Galβ1→4GlcNAc structure (4). Cell surface GalT I acts as a recognition molecule and participates in a number of cellular interactions, including neurite extension, cell growth, sperm-egg interaction, cell spreading, and migration (5-9). Neoplasms undergo various changes in the carbohydrate moieties of their glycoconjugates, which indicates that the glycosyltransferases themselves may change in malignancies. Consistent with this hypothesis, the importance of specific sialyltransferases, fucosyltransferases, and N-acetylglucosaminyltransferases in tumorigenesis and metastasis has been demonstrated (10-12). Although the precise role of oligosaccharides in metastasis is presently unknown, accumulated evidence has shown that a number of highly metastatic murine and human cell lines are characterized by elevated levels of cell surface GalT I (13, 14). In seven of eight human adrenal carcinoma cell lines, the levels of GalT I correlate with their relative degree of in vitro invasiveness. Additionally, in two B16 murine melanoma sublines with distinct in vivo metastatic abilities, cell surface GalT I activity is elevated in the highly metastatic variant.
Moreover, the degree of metastasis is actually influenced by the relative expression of cell surface GalT I (15). Increasing cell surface GalT I expression in cells of low metastatic potential promoted their invasive potential in vitro, and decreasing cell surface GalT I expression in highly metastatic cells reduced their invasive potential in vitro and metastatic potential in vivo. In a nude mouse model, the number of peritoneal dissemination foci of antisense GalT I-transfected ovarian tumor cells was smaller than that of control cells, which indicated that GalT I was involved in the invasive and metastatic potentials of ovarian cancer (16). However, the mechanism by which GalT I activity is differentially up-regulated in highly metastatic cells is still unknown.

Metastasis of cancer cells is a complex process involving multiple steps (17). Metastatic characteristics are partly derived from the deregulation of genes whose normal role is to control the division, differentiation, and migration of embryonic cells (18). The Ets transcription factor family has been reported to be involved in tumor metastasis through enhancement of angiogenesis and the expression of genes such as vascular endothelial growth factor, urokinase plasminogen activator, matrix metalloproteases, and integrins in a variety of cancer cell lines and tumor tissues (19-21). Recent studies demonstrated that Ets-1 played a significant role in regulating N-acetylglucosaminyltransferase V expression in a variety of cancer cells and might be involved in tumor metastasis via the up-regulation of N-acetylglucosaminyltransferase V (22). In this study, we sought to determine which transcription factor is preferentially involved in the up-regulation of the human GalT I gene in highly metastatic lung cancer cells. Differential GalT I expression was detected in PGLH7 and PGBE1 cells, two lung cancer cell sublines with different metastatic potentials. Our results indicated that the up-regulation of GalT I in highly metastatic cells was mediated by E1AF acting on the GalT I promoter.

EXPERIMENTAL PROCEDURES

Materials-Restriction enzymes, bovine calf serum, RPMI 1640 medium, Trizol reagent, and the mammalian expression vector pcDNA3.0 were from Invitrogen. G418, PMSF, aprotinin, pepstatin, and epidermal growth factor (EGF) were from Sigma. The Prime-A-Gene random primer labeling kit was from Promega. Hybond™ N+ nylon membrane, [α-³²P]dATP, [γ-³²P]dATP, and the enhanced chemiluminescence (ECL) assay kit were from Amersham Biosciences. Sialidase was from Roche Applied Science. The Takara RNA PCR kit (AMV version 2.1) and Takara MutanBEST kit were from Takara. PEA3 antibody (sc-113 and sc-113X), anti-human FAK, and anti-human FAK-P antibodies were from Santa Cruz Biotechnology. Anti-human β1 integrin antibody was from Pharmingen. Anti-GFP antibody was purchased from Roche Applied Science. Anti-human F-actin antibody was from Oncogene. Anti-mouse HRP secondary antibody and anti-rabbit HRP secondary antibody were purchased from New England Biolabs. Other reagents were commercially available in China.
Cell Lines and Cell Transfections-PGLH7 and PGBE1 cells, two cell sublines isolated from the metastatic human lung giant cell carcinoma (PG) with different spontaneous metastatic potentials (23), were obtained from the Department of Pathology, Peking University Health Science Center, and were cultured in RPMI 1640 medium containing 10% bovine calf serum, 100 units/ml penicillin, and 50 µg/ml streptomycin at 37 °C in a humidified CO₂ incubator (5% CO₂, 95% air). COS1 cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum and antibiotics (100 units/ml penicillin and 50 µg/ml streptomycin). Cell transfections were performed with Lipofectamine (Invitrogen) according to the manufacturer's instructions. Cells were harvested 48-72 h after transfection. For stable transfection, the cells were selected 72 h after transfection in RPMI 1640 medium containing G418 (400 µg/ml). After 2-3 weeks of growth in G418-containing medium, individual G418-resistant clones were selected and expanded.

Preparation of cDNA Probe and Northern Blot Analysis-To prepare the GalT I cDNA probe, RT-PCR products were separated and recovered from agarose electrophoresis. After purification and quantification, the probe was labeled with [³²P]dATP using the Prime-A-Gene random primer labeling kit (Promega) according to the manufacturer's instructions. Northern blot analysis was performed as described previously (25). Briefly, 40 µg of total RNA was separated on formaldehyde gels and transferred to a Hybond™ N+ nylon membrane. The membranes were hybridized with a GalT I fragment as the probe and a glyceraldehyde-3-phosphate dehydrogenase fragment as an internal control. The blotted membranes were washed and exposed to x-ray film (Kodak) with an intensifying screen at −80 °C for 72 h.

Preparation of Nuclear Extracts and Western Blot Analysis-Nuclear proteins were isolated according to the method of Schreiber et al. (26). Briefly, cell pellets were resuspended in 400 µl of buffer A (10 mM HEPES (pH 7.9), 10 mM KCl, 0.1 mM EDTA, 0.1 mM EGTA, 1 mM dithiothreitol, 0.5 mM PMSF) on ice for 15 min, and then 25 µl of 10% Nonidet P-40 was added. After centrifugation, the nuclear pellets were resuspended in 50 µl of ice-cold buffer C (20 mM HEPES (pH 7.9), 0.4 M NaCl, 1 mM EDTA, 1 mM EGTA, 1 mM dithiothreitol, 1 mM PMSF), and the tubes were vortexed at 4 °C for 15 min. After centrifugation, the supernatants were collected, and the protein concentration was determined using the method of Lowry et al. (27). A total of 30 µg of protein from each sample was electrophoresed by 10% SDS-PAGE and transferred to a PVDF membrane. After blocking with TBS containing 5% nonfat milk and 0.1% Tween 20 for 2 h, the membrane was incubated with the primary antibody at 4 °C overnight. After washing with TBS containing 0.1% Tween 20 three times, 5 min each, the membrane was incubated with horseradish peroxidase (HRP)-labeled secondary antibody for 2 h at room temperature. The membrane was then developed using the enhanced chemiluminescence (ECL) detection system.

Immunoprecipitation of β1 Integrin-The cultured cells were washed with cold PBS and lysed by the addition of 200 µl of lysis buffer (50 mM HEPES (pH 7.4), 150 mM NaCl, 100 mM NaF, 1 mM MgCl₂, 1.5 mM EGTA, 1% Nonidet P-40, 10 µg/ml leupeptin and pepstatin, and 1 mM PMSF). Cell lysate containing 500 µg of protein (determined by the method of Lowry) was incubated with 2 µg of monoclonal antibody to β1 integrin at 4 °C for 1 h.
Then 20 µl of Protein G Plus-agarose suspension was added, and the sample was further incubated at 4 °C for 3 h to immunoprecipitate the integrin, followed by centrifugation and washing of the pellet. Finally, the β1 integrin samples were adjusted to the same protein concentration (30 µg/ml). The immunoprecipitated integrin subunits were treated with neuraminidase to remove the terminal sialic acids of the N-glycans on the integrins, using a routine method in our laboratory. After washing, a 0.45-µg sample was subjected to SDS-PAGE, transferred to a PVDF membrane, and treated with 1:300 HRP-RCA1 conjugate or 1:1000 diluted antibody to β1 integrin followed by a 1:500 HRP-labeled secondary antibody. Finally, the membrane was developed with ECL reagents and exposed to x-ray film.

Lectin Blotting-Cells were harvested, rinsed with PBS, and lysed with 1% Triton X-100 in PBS. Cell lysates containing 30 µg of protein were boiled in SDS sample buffer with β-mercaptoethanol, loaded on 8% SDS-polyacrylamide gels, and then transferred onto a PVDF membrane. After being blocked with 5% BSA, the membrane was incubated with a 1:100 dilution of HRP-RCA1 for 2 h at room temperature. The blots were washed and developed with the ECL detection system using x-ray film.

RNA Interference Assay-RNA interference was undertaken using the pSilencer2.0 vector (Ambion Inc.). RNA interference target sequences were selected from the human GalT I sequence (GenBank™/EBI accession number Y09723). Each candidate target sequence was analyzed by BLAST search to ensure that the hit would be unique to the GalT I mRNA. Target oligonucleotides were synthesized (AL1, 5′-AAGGCCGAGATCAGCAAAGTTCAAGAGACTTTGCTGATCTCGGCCTTTTTTTT-3′; and AL2, 5′-AATTAAAAAAAAGGCCGAGATCAGCAAAGTCTCTTGAACTTTGCTGATCTCGGCCTTGGCC-3′), annealed, and cloned into the pSilencer vector between the BamHI and HindIII sites. Recombinant plasmid DNA was prepared and tested for silencing activity against a GalT I-Myc chimeric mRNA expressed from myc-pcDNA3.1 (Clontech) as an N-terminal fusion of GalT I with Myc. A negative control vector comprising a scrambled sequence was also prepared. Increasing amounts of the siGalT constructs were cotransfected with myc-pcDNA3.1-GalT1 and EGFPN1 (Clontech) into COS1 cells or PGLH7 cells. 48 h later, lysates were prepared, and the levels of Myc and GFP were examined by immunoblotting. Specificity was assessed either by using the empty pSilencer plasmid, a vector containing an unrelated insert, or by cotransfecting siGalT with the myc-pcDNA3.1-HBO1 vector.
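The hairpin layout of the AL1 oligonucleotide above (sense target, TTCAAGAGA loop, reverse-complement antisense, poly-T terminator) can be generated programmatically; a sketch, where the helper names are ours and only the 17-nt AL1 target comes from the text:

```python
# Build a pSilencer-style hairpin (shRNA) sense oligo from a target sequence.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def shrna_sense_oligo(target: str, loop: str = "TTCAAGAGA") -> str:
    # 5' AA + sense target + loop + antisense (reverse complement) + poly-T 3'
    return "AA" + target + loop + revcomp(target) + "TTTTTTTT"

target = "GGCCGAGATCAGCAAAG"   # GalT I target embedded in oligo AL1
print(shrna_sense_oligo(target))
# -> AAGGCCGAGATCAGCAAAGTTCAAGAGACTTTGCTGATCTCGGCCTTTTTTTT, matching AL1
```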
Promoter Deletion Constructs-A 1705-bp fragment (containing nucleotides −1653 to +52) of the GalT I promoter was prepared by PCR amplification of human genomic DNA using a sense primer containing an XhoI restriction site and an antisense primer containing a HindIII restriction site. Primers were synthesized on the basis of the reported genomic sequence for human GalT I: forward, 5′-GTCTCGAGGTGTGTAAGGAGTAGGTTGCTGAG-3′, and reverse, 5′-ATAAGCTTGCTTTAAGAAGGGTGTGGGCTACAG-3′. Genomic DNA extracted from human peripheral blood was used as the PCR template. Following digestion with restriction enzymes, the GalT I promoter fragment was directionally cloned into the pGL2-Basic firefly luciferase expression vector (Promega) to generate a "full-length" GalT I reporter construct, and the correct insertion was confirmed by sequencing. Reporter genes containing sequentially truncated fragments (−930/+52, −571/+52, −495/+52, −318/+52, −261/+52, −215/+52, −139/+52, −26/+52, and −261/−138) of the GalT I promoter region were prepared in a similar manner using sense primers containing XhoI restriction sites and the antisense primer that was used to generate the full-length GalT I reporter construct.

Site-directed Mutagenesis-To prepare mutated promoters, the putative Ets transcription factor-binding site CTTCCC between nucleotide positions −205 and −200 was changed to CAACCC, and the resulting construct was named p-215M-luc. The mutation was created from p-215-luc by PCR using the Takara MutanBEST mutagenesis kit. Mutated constructs were sequenced, and the correct ones were selected for further experiments.

Luciferase and β-Galactosidase Assay-72 h after transfection, cells were lysed with 1× reporter lysis buffer (Promega), and luciferase and β-galactosidase activities were measured. Luminescence was measured over a 10-s interval on a plate luminometer and expressed in arbitrary units. β-Galactosidase was measured spectrophotometrically at 420 nm. The luciferase activity of each sample was normalized by the β-galactosidase activity from the same sample and standardized.
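A sketch of the normalization just described: firefly luciferase divided by β-galactosidase from the same lysate, then expressed as fold-activity over the promoterless pGL2-Basic control. The raw numbers are invented to reproduce the ∼110-fold figure reported under "Results":

```python
# Normalize reporter activity for transfection efficiency, then express it
# as fold-activity over the promoterless control.
def normalized_fold(luc, bgal, luc_basic, bgal_basic):
    return (luc / bgal) / (luc_basic / bgal_basic)

# e.g. the full-length p-1653-luc construct versus pGL2-Basic
print(normalized_fold(luc=5.5e5, bgal=0.50, luc_basic=5.0e3, bgal_basic=0.50))
# -> 110.0 (fold over pGL2-Basic)
```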
RCA-I Lectin Staining Procedures-To demonstrate binding reactions, the avidin-biotin-peroxidase complex (ABC) technique was employed according to Hsu et al. (28). RCA-I staining procedures were as described previously (29). Cells were plated on dishes. To eliminate terminal sialic acid moieties, cells were digested with sialidase. Endogenous peroxidase activity was blocked with 0.3% H₂O₂ in methanol for 30 min. To minimize nonspecific binding reactions, specimens were covered for 15 min with 0.1% bovine calf serum and with a solution of avidin and then with a solution of biotin in PBS. Following this, cells were rinsed three times in PBS and incubated at room temperature for 45 min in the presence of biotinylated lectins (10 µg/ml) in a humidified chamber. Subsequently, the samples were rinsed three times in PBS, incubated for 60 min with the ABC reagent, and again washed in PBS. The peroxidase-binding sites were visualized by incubation with a fresh solution of 0.02% hydrogen peroxide and 0.1% diaminobenzidine hydrochloride for 5 min. The cells were rinsed in tap water, followed by distilled water. Finally, the samples were dehydrated, cleared, and mounted. The mean density of RCA-I-positive labeling was measured from six different regions within the transfected PGLH7 cells and the controls. The values are expressed as the mean labeling density ± S.D. from three independent experiments using image cytometry.

Invasion and Migration Analysis-The Boyden chamber invasion assay was performed basically as described previously by Albini et al. (29). Polycarbonate filters with 8-µm pores were coated with 500 µg/ml Matrigel (BD Biosciences). The coated filters were washed with serum-free medium and dried immediately. Cells were then added to the upper compartment of the chamber (1 × 10⁵/100 µl of medium containing 0.1% BSA), and 800 µl of medium (containing 0.1% BSA) was added to the lower chamber. Cells were incubated and allowed to migrate for 24 h. After removal of nonmigrated cells, cells that had migrated through the filter were counted under a microscope in five fields at a magnification of ×400. Wound healing assays were performed as described (31). Briefly, subconfluent cells in 6-well plates were serum-starved overnight. Over 20 wounds were made on the cell monolayer by scratching with a 200-µl sterile tip. Cells were rinsed three times with PBS. Complete growth medium was then added to the plates, and cells were allowed to migrate for 0, 24, 48, and 72 h. For cells migrating out of agarose drop explants, 80% confluent cells were trypsinized and resuspended. 100 µl of agarose drop mixture was prepared (containing 1 × 10⁶ cells in suspension and a final concentration of 0.3% agarose). Each agarose drop explant contained 1.5 µl of mixture. At each of the following hours, the distance of the leading edge of migrated cells from the edge of the agarose droplet was determined on eight sides of each droplet, and five drops were used for each time point.

Gel Shift Assay-The gel mobility shift assay was carried out using the Gel Shift Assay System (Promega) as follows. The double-stranded oligonucleotide 5′-GCCCCGCCTTCCCGCCCTCGTCCAGAAAA-3′ and 3′-CGGGGCGGAAGGGCGGGAGCAGGTCTTTT-5′ (corresponding to the human GalT I promoter sequence −212/−184) and the Ets-1/E1AF consensus oligonucleotide 5′-GATCTCGAGCAGGAAGTTCGA-3′ and 3′-CTAGAGCTCGTCCTTCAAGCT-5′ were annealed, end-labeled with ³²P using T4 polynucleotide kinase, and purified using Sephadex G-25 quick spin columns (Roche Applied Science). Nuclear proteins were preincubated for 10 min with 9 µl of electrophoretic mobility shift assay buffer. Then the ³²P-end-labeled duplex oligonucleotide (1 µl, 10 fmol) was added, and the reaction was incubated for 20 min on ice. For competition experiments, unlabeled DNA probes were included at 100-fold molar excess over the ³²P-labeled DNA probe. For supershift experiments, 2 µg of rabbit anti-E1AF polyclonal antibody (Santa Cruz Biotechnology) was added to the reaction mixtures and incubated for 30 min prior to addition of the ³²P-labeled DNA probe. DNA-protein complexes were separated on 5% nondenaturing polyacrylamide gels in 0.5× Tris borate/EDTA (pH 8.4) at 4 °C and 35 mA. The gels were dried, and the DNA-protein complexes were visualized by autoradiography.

Statistics and Presentation of Data-All experiments were repeated at least three times. All numerical data are expressed as mean ± S.D. Data were analyzed using the two-tailed t test.

RESULTS

Highly Metastatic PGBE1 Cells Have a Higher GalT I mRNA Level Than Low Metastatic PGLH7 Cells-PGLH7 and PGBE1 cells, isolated from a metastatic human lung giant cell carcinoma (PG), are two cell sublines with different spontaneous metastatic potentials. We analyzed the behavior of PGLH7 and PGBE1 cells in a wound healing test, an agarose drop explant assay, and a Boyden chamber assay. As shown in Fig. 1A, PGBE1 cells readily migrate out of the agarose drop explants or into the wound in vitro (a mechanical scratch made on the surface of a growing cell culture) relative to PGLH7 cells. The difference in their invasive potentials was confirmed by the Boyden chamber assay: PGBE1 cells showed a higher ability to migrate through Matrigel-coated 8-µm pore-size membranes (Fig. 1B). We next analyzed GalT I mRNA expression in PGLH7 and PGBE1 cells by Northern blot analysis and semi-quantitative RT-PCR. As shown in Fig. 1C, GalT I mRNA expression was higher in highly metastatic lung cancer PGBE1 cells than in low metastatic PGLH7 cells.
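As a consistency check on the gel-shift probes given under "Gel Shift Assay" above, the two strands should be perfect complements and should contain the Ets core; a short sketch (the helper names are ours):

```python
# Verify the EMSA duplex and locate the Ets core on the top strand.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

top = "GCCCCGCCTTCCCGCCCTCGTCCAGAAAA"          # 5'->3', -212/-184
bottom_3to5 = "CGGGGCGGAAGGGCGGGAGCAGGTCTTTT"  # written 3'->5' in the text

bottom_5to3 = bottom_3to5[::-1]
assert bottom_5to3 == top.translate(COMPLEMENT)[::-1], "strands do not pair"

# The TTCC core (GGAA on the bottom strand) sits inside the -205/-200
# CTTCCC element mutated in p-215M-luc.
print("Ets core TTCC at promoter position", -212 + top.find("TTCC"))   # -204
```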
PGLH7 and PGBE1 Cells Have Similar Galactosylation Levels of Glycoproteins—To determine whether the levels of GalT I at the cell surface or a global alteration in galactosylation of glycoproteins might be associated with the metastatic potentials of PGLH7 and PGBE1 cells, we examined total galactosylated glycoprotein using RCA lectin blotting. Total cell lysates from PGLH7 and PGBE1 cells were separated by electrophoresis and labeled with biotinylated RCA lectin, which interacts specifically with oligosaccharides terminating with the Galβ1→4GlcNAc group (32). The results showed no significant differences in the galactosylation profiles of PGLH7 and PGBE1 cells (Fig. 2A), although the possibility exists that differences in one or a small number of glycoproteins would not be detected in this assay. Tumor cell binding to components of the basement membrane triggers intracellular signaling pathways. Galactosylation of the β1 integrin immunoprecipitated from PGBE1 and PGLH7 cells was observed (Fig. 2C), but the levels of expression and galactosylation of β1 integrin did not differ between PGBE1 and PGLH7 cells (Fig. 2, B and C). The GalT I and dominant negative, truncated GalT I (TL-GalT I) constructs used below are depicted in Fig. 3A (9). In order to visualize GalT I fusion constructs on the surface of live cells, we fused GalT I and TL-GalT I with GFP. Transient transfection showed that both GalT I-GFP and TL-GalT I-GFP were readily detected in the plasma membrane of COS1 cells and PGLH7 cells (data not shown), as reported previously (6). To evaluate precisely the relationship between GalT I expression and invasive behavior in vitro, we designed and synthesized three different duplex siRNAs complementary to human GalT I mRNA. GalT I siRNA specifically suppressed GalT I-myc expression (Fig. 3B), whereas it had no effect on HBO1-myc expression (Fig. 3C). Because cell surface GalT I mediates fibroblast spreading and migration on laminin but does not participate during cell interactions with fibronectin (34), PGBE1 cells transfected transiently with TL-GalT I-GFP and siGalT I were plated on laminin (15 µg/ml). The adhesion and spreading of pSilencer2- and myc-pcDNA3.1 vector-transfected cells were indistinguishable, whereas fewer siGalT I- and TL-GalT I-transfected cells showed cell spreading (Fig. 3D). We next reduced surface GalT I expression by siRNA or introduced the dominant negative mutant GalT I in highly metastatic PGBE1 cells, and we tested the invasion behavior using a modified Boyden chamber (15). As expected, PGBE1 cells transfected with siGalT1 or TL-GFP were less invasive than PGBE1 controls (Fig. 3E). Clustering of cell surface GalT I induces transient tyrosine phosphorylation of focal adhesion kinase in NIH3T3 cells (36). To address the effects of decreasing surface GalT I expression or targeted mutation of surface GalT I on laminin-mediated signaling, the levels of FAK expression and FAK phosphorylation in PGBE1 cells transfected with siGalT1 or TL-GFP were analyzed. The results showed that FAK phosphorylation was decreased in PGBE1 cells transfected with siGalT1 or TL-GFP, but the level of FAK expression was not altered (Figs. 3F and 4G). All these results suggested that cell surface GalT I was involved in the invasion and metastasis of PGBE1 cells.

FIG. 5. Ets family transcription factors involved in activation of the GalT I promoter. A, the 267-bp GalT I regulatory region carries the majority of the basal promoter activity. GalT-Luc constructs containing various lengths of GalT I promoter regions were transiently transfected into PGLH7 cells. Luciferase activity was normalized to β-galactosidase activity and standardized to the normalized activity from pGL2-Basic. Each value is the mean ± S.D. of at least three independent experiments. B, activation of the GalT I promoter by the Ets transcription factor E1AF. The p-1653-luc construct and vectors containing Ets-1, Ets-2, E1AF, ETV1, ETV5, Elk-1, Net, or the empty control vector were cotransfected into PGLH7 cells. Normalized luciferase activity was standardized to p-1653-luc with vector alone. Each value is the mean ± S.D. of at least three independent experiments.

FIG. 6. Activation of the GalT I promoter by E1AF. A, elevated expression of E1AF protein in nuclear extracts from PGBE1 cells. 30 µg of nuclear extract from each cell type was loaded onto a 10% denaturing polyacrylamide gel, and E1AF protein levels were determined by Western blotting using the anti-PEA3 antibody. The size of the E1AF protein was 60 kDa. B, E1AF dose dependence of GalT I promoter activation. Increasing amounts of E1AF expression plasmid were cotransfected into PGLH7 cells along with the p-930-luc construct. Results shown are the means ± S.D. of at least three independent experiments.

Cloning of the Human GalT I Gene 5′-Flanking Region and Identification of the Major Regulatory Region—We next investigated the transcriptional regulation of the GalT I gene in highly metastatic PGBE1 cells. A search of the GenBank™ human genomic sequences resulted in the identification of genomic sequences upstream of the GalT I transcriptional start site (designated as +1). To determine whether this sequence (GenBank™ accession number NT_008421, at nt 732212-742549) included the GalT I promoter region, a fragment extending from −1653 to +52 was amplified by PCR from human genomic DNA and cloned into the promoterless pGL2-Basic, creating the reporter plasmid p-1653-luc. Transient transfection of PGLH7 cells with this plasmid resulted in luciferase levels some 110-fold higher than the promoterless control plasmid pGL2-Basic. Computer analysis of the human GalT I promoter revealed a highly GC-rich content in its promoter region. The GalT I promoter lacks a typical TATA box, as seen with many GC-rich promoters. The TRANSFAC search program predicted a number of potential transcription factor-binding sites near or upstream of the putative transcription initiation site, including Sp1, AP4, C/EBP, Ets-1, E1AF, and GATA-1 (Fig. 4). To examine the promoter region for GalT I basal transcription, luciferase reporter constructs containing progressive deletions of the 1705-bp genomic DNA fragment were generated. Each construct, as well as the control vector pGL2-Basic, was transiently transfected into PGLH7 cells and assayed for reporter activity. Our results showed that deletion of sequences from nt −1704 to −215 did not appreciably reduce promoter activity (Fig. 5A). In contrast, the p-139-luc construct had much lower activity than the p-215-luc construct, indicating that sequences between nt −215 and −139 were critical for basal GalT I transcription. Deletion analysis in transiently transfected HeLa and SMMC-7721 cells also demonstrated that construct p-215-luc had minimal luciferase activity (data not shown).
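The logic of the truncation series can be made explicit with a short sketch: the critical basal element lies between the shortest construct that retains activity and the first one that loses it. A minimal Python illustration follows; the construct endpoints come from the text, while the relative activities (normalized to β-galactosidase, expressed over pGL2-Basic = 1) are invented placeholders chosen only so that the drop falls between −215 and −139 as reported.

```python
# Deletion-series reasoning as code. Endpoints are from the paper;
# the relative activities are hypothetical placeholders.
constructs = [
    (-1653, 110), (-930, 105), (-571, 102), (-495, 100),
    (-318, 98), (-261, 95), (-215, 90), (-139, 12), (-26, 8),
]

for (longer, act_l), (shorter, act_s) in zip(constructs, constructs[1:]):
    if act_l / act_s > 3:  # arbitrary threshold for a "drastic" drop
        print(f"critical region between nt {longer} and {shorter}")
```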
E1AF Can Induce the GalT I Promoter Activity—To assess the importance of the members of the Ets transcription factor family in the regulation of GalT I promoter activity, we cotransfected PGLH7 cells with the plasmid p-1653-luc and vectors containing Ets family members, such as Ets-1, Ets-2, E1AF, ETV1, ETV5, Elk-1, Net, or the empty control vector. The fold stimulation of luciferase was calculated as the normalized luciferase activity obtained in cells expressing Ets family members divided by the luciferase activity of samples originating from vector-transfected control cells (Fig. 5B). The highest activation of the GalT I promoter was obtained by E1AF. Expression of Ets-1 also stimulated the luciferase reporter gene 2.4-fold compared with the control vector, whereas Ets-2, ETV1, ETV5, and Elk-1 did not show significant activation of the GalT I promoter. On the contrary, Net reduced the GalT I promoter activity. In conclusion, these results demonstrate that the Ets family member E1AF can mediate regulation of the GalT I gene.

Highly Metastatic PGBE1 Cells Have Higher E1AF Levels Than Low Metastatic PGLH7 Cells—To test further the hypothesis that E1AF activates GalT I expression in highly metastatic lung cancer cells, we analyzed the expression of E1AF in PGLH7 and PGBE1 cells. Western blot analysis showed that nuclear extracts from PGBE1 cells had increased expression of E1AF protein (Fig. 6A). The elevation of E1AF protein correlated with an increased level of mRNA in PGBE1 cells, as assayed by semi-quantitative RT-PCR (data not shown). To determine the trans-activating effects of E1AF on the GalT I gene, transfection studies using the GalT I reporter construct p-930-luc and increasing amounts of E1AF expression plasmid were performed. The forced expression of E1AF potently stimulated the GalT I promoter in a dose-dependent manner in PGLH7 cells, with a maximum activation of 7.8-fold (Fig. 6B).

Identification of the Cis-elements Responsible for the Effect of E1AF—Deletion analysis was then performed to define functionally important cis-elements in this 1705-nt region. Luciferase assays showed that a deletion from −215 to −139 resulted in a drastic decrease in the promoter activity and loss of E1AF activation as compared with that of the p-261/−138-luc construct (Fig. 7A). The minimal inducible promoter activity is located within the −215/−139 region of the GalT I promoter. Inspection of this 76-nt region revealed potential Ets protein-binding sites. To determine whether this potential binding site was necessary for GalT I transcription, we introduced site-directed mutations into this Ets element (−205 to −200). It was found that the luciferase reporter activity was decreased to almost the same level as pGL2-Basic. Mutation of the consensus Ets site deprived E1AF of responsiveness (Fig. 7B). These results indicate that the Ets element is an important cis-element for the transcriptional activation of the human GalT I.

Identification and Characterization of Transcription Factors Binding to the Ets-binding Element by EMSA—Having shown that the Ets-binding site upstream of the GalT I transcription start site is necessary for E1AF responsiveness, it was imperative to identify the protein interacting with the site. Incubation of the double-stranded 28-mer oligonucleotide probe (Table I) containing Sp1-binding sites and one Ets-binding site between nt −212 and −184 with nuclear extracts and analysis by EMSA revealed at least three specific protein-DNA complexes (Fig. 8A, 1st lane).
The bands marked with an asterisk were markedly reduced by incubation with the labeled Ets mutation probe M1, which contains mutations in the Ets-binding site (Fig. 8A, 2nd lane), but were not reduced by incubation with the labeled Sp1 mutation probes M2 and M3 (Fig. 8A, 3rd and 4th lanes). Thus, the asterisked bands represented proteins binding to the Ets site. To identify specific proteins that bind to the Ets-binding site, we used antibodies against E1AF. The antibody against E1AF supershifted the protein-DNA complexes (Fig. 8B, 6th lane), consistent with our competition experiments. The formation of these complexes was inhibited by the addition of a 50- and 100-fold excess of the unlabeled oligonucleotide (Fig. 8B, 7th and 8th lanes). We then asked whether elevated E1AF binding to the Ets-binding site contributes to the increased promoter activity in PGBE1 cells. To address this question, we examined the binding capability of the same amount of nuclear extract from PGBE1 and PGLH7 cells to the GalT I promoter by EMSA. Our results, shown in Fig. 9C, indicated that nuclear proteins of PGBE1 formed much stronger bands than those of PGLH7. It is concluded that the Ets family member E1AF binds to the Ets-binding site between nt −205 and −200 in the GalT I promoter, promotes GalT I transcription, and contributes to the differential expression of GalT I in PGBE1 and PGLH7 cells.

E1AF Can Induce the GalT I Promoter Activity in COS1 Cells—Our results demonstrated that E1AF can promote GalT I transcription in PGLH7 and PGBE1 cells. To ensure that the observed response is not limited to PGLH7 and PGBE1 cells, we used COS1 cells, which express relatively low levels of PEA3 (37). Myc-tagged E1AF plasmids were expressed in COS1 cells (Fig. 9A), and the E1AF protein was localized to the nucleus (data not shown). E1AF increased GalT I promoter activity 5-6-fold compared with mock-transfected cells (Fig. 9B). To address whether E1AF can bind to the Ets-binding site between nt −205 and −200 of the GalT I promoter in COS1 cells, EMSAs were performed using nuclear extracts from COS1 cells transfected with WT-E1AF-myc. The nuclear extracts from WT-E1AF-myc-transfected COS1 cells formed a complex with the probe (Fig. 9C, 2nd lane). Anti-Myc antibody added to the EMSA reaction mixture resulted in a supershift of the band (Fig. 9C, 4th lane), whereas anti-actin antibody did not shift any bands (Fig. 9C, 3rd lane). E1AF was thus demonstrated to bind to and activate the GalT I promoter in COS1 cells.

Expression of GalT I in E1AF-transfected PGLH7 Cells—pcDNA3.0-E1AF was stably transfected into PGLH7 cells, and its effect on GalT I expression and its biological activities were assessed. The results in Fig. 10A show an increase in GalT I mRNA following transfection with the pcDNA3.0-E1AF vector. We further compared the GalT I promoter activity in PGLH7 cells and E1AF-transfected PGLH7 cells (Fig. 10B). E1AF-transfected PGLH7 cells showed about three times higher GalT I promoter activity than PGLH7 cells. Because the gene expression of GalT I was altered, whether the galactosylation of proteins was also changed was further investigated. To determine whether Galβ1→4GlcNAc was expressed differently on N-glycans in E1AF-transfected PGLH7 cells, cell samples were subjected to RCA-I lectin staining analysis. It was found that E1AF transfection enhanced the content of β1,4-Gal branches in the cell surface glycoconjugates of PGLH7 cells (Fig. 10C).
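All of the reporter numbers quoted in this section rest on the same two arithmetic steps described under "Luciferase and β-Galactosidase Assay": normalization to β-galactosidase from the same lysate, then expression as fold over a control. A hedged Python sketch follows, with invented raw readings:

```python
# Reporter arithmetic: luciferase / beta-galactosidase corrects for
# transfection efficiency; dividing by the vector-only control gives
# fold stimulation. The raw readings below are placeholders.
def normalized_luc(luc, beta_gal):
    return luc / beta_gal

control = normalized_luc(luc=5200, beta_gal=0.42)   # empty vector
e1af = normalized_luc(luc=38600, beta_gal=0.40)     # + E1AF plasmid
print(f"fold stimulation by E1AF: {e1af / control:.1f}")
```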
Overexpression of E1AF Promoted Cell Migration—We next examined the differences in cell migration ability between PGLH7 cells and E1AF-transfected PGLH7 cells. E1AF-transfected PGLH7 cells migrated out of the agarose drop explants faster than PGLH7 cells (Fig. 10D). The PGLH7 cells were still not ready to migrate out of the explants 18 h after the agarose drop explants were prepared, whereas E1AF transfectants had already migrated outside.

GalT I Expression Can Be Induced by EGF and Dominant Active Ras—Because Ets transcription factors have been well defined as nuclear effectors of a central signal transduction pathway, the Ras/MAPK signaling pathway (38), we next explored the possible relationship between Ras/MAPK and E1AF in GalT I induction. GalT I mRNA levels in serum-starved and EGF-stimulated HeLa cells were assessed by Northern blot analysis. Fig. 11A shows GalT I mRNA induction by EGF (10 ng/ml). GalT I mRNA increased gradually following the addition of EGF. Analysis of time-response relationships demonstrated maximal GalT I mRNA activation after 4 h of EGF exposure, which corresponds well with the results obtained in GalT I promoter studies (Fig. 11B). Transient transfection of the GalT I reporter construct p-215-luc into PGBE1 cells showed dose-dependent reporter gene activity in response to serum stimulation (3.1-fold increase, Fig. 11C). To determine whether Ras signaling pathways were involved in serum-induced GalT I transcriptional activation, we transiently cotransfected PGLH7 cells with the GalT I p-215-luc reporter and either the dominant negative expression construct RAS-DN or a constitutively activated RAS-DA expression construct. As expected, expression of RAS-DN decreased GalT I promoter activity in a dose-dependent manner, whereas expression of RAS-DA caused a similarly dose-dependent activation (Fig. 11D), indicating a role for Ras in GalT I induction. Ras signaling can alter gene expression through three distinct MAPK cascades (39-41). To investigate further the importance of MAPK in mediating the activity of the GalT I p-215-luc promoter, a series of transient transfections was performed (Fig. 11E). Transient overexpression of ERK1 or JNK1 in PGLH7 cells led to a significant increase in GalT I p-215-luc promoter activity. Site-directed mutagenesis of the putative Ets site at position −205 to −200 abolished the activation of the GalT I promoter by RAS-DA (Fig. 11F).

FIG. 11. Activation of GalT I by EGF and dominant active Ras. A, induction of the GalT I mRNA level by EGF. HeLa cells were cultured in serum-free RPMI 1640 medium for 24 h; after that, the cells were stimulated with EGF and harvested at various times. Total RNA (30 µg) was electrophoresed and probed with GalT I cDNA and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as an internal control. B, time-dependent induction of the GalT I promoter by EGF. HeLa cells were transfected with the p-1653-luc construct. 36 h after transfection, cells were incubated in serum-free medium for 24 h; after that, cells were stimulated with EGF and harvested at various times. Luciferase assays were performed as described above. Results shown are the means ± S.D. of six replicates. C, induction of the GalT I promoter by serum. p-215-luc was transiently transfected into PGLH7 cells. After transfection, cells were treated with the indicated FBS concentrations. Luciferase values are presented as fold activation over those observed in 0% FBS-treated samples. D and E, PGLH7 cells were transiently cotransfected with 0.4 µg of the wild-type GalT1-Luc plasmids and increasing amounts of plasmids expressing the constitutively active form of Ras (RAS-DA) or dominant negative Ras (RAS-DN) (D) or ERK1 and JNK1 (E). F, GalT I promoter constructs p-215-luc and p-215M-luc were cotransfected with DA-ras into PGLH7 cells, and luciferase activity was determined as described above.

DISCUSSION

Cancer metastasis is a complex process. It requires the coordinated expression or activation of multiple genes so that cells migrate from the primary site, enter the circulatory system, arrest, and proliferate at a secondary site. In this study, we have provided evidence that E1AF-induced GalT I expression is necessary for lung cancer cell migration and metastasis. The mechanism of how GalT I influences the tumor cell invasive potential is still unknown. In this study, we found that highly metastatic PGBE1 cells had a higher GalT I mRNA level than low metastatic PGLH7 cells. The invasive capacity was significantly reduced by decreasing surface GalT I or introducing dominant negative GalT I in PGBE1 cells. GalT I has been shown to exert various functions other than as a catalytic enzyme (5)(6)(7)(8)(9). We then asked whether cell surface GalT I acts catalytically or in a lectin-like fashion. Using RCA lectin blotting, we found no difference between PGBE1 cells and PGLH7 cells. The β1 subunit-containing integrins are receptors mainly for extracellular matrix proteins such as laminin and fibronectin and are responsible for cell anchorage and motility. We also found that β1 integrin could be modified by galactosyltransferase, but there was no difference in β1 integrin expression between PGBE1 and PGLH7 cells. Notably, the highly metastatic PGBE1 cells could interact with the extracellular matrix protein laminin and induce FAK phosphorylation. Decreasing surface GalT I expression or targeted mutation of surface GalT I in PGBE1 cells resulted in decreased FAK phosphorylation, but the level of FAK expression was not altered. Taken together, these observations raise the intriguing possibility that galactosyltransferase promotes tumor cell invasion by inducing transient tyrosine phosphorylation of focal adhesion kinase. Cell surface GalT I has been implicated in tumor invasion and metastasis, but the mechanisms regulating its expression in highly metastatic cancer cells have not been defined. The 5′-flanking region of the mouse GalT I gene has been studied. We compared the promoters of the human and mouse GalT I genes and did not find high homology (data not shown). In this report, we investigated the involvement of Ets factors in the transcriptional regulation of GalT I in highly metastatic human lung cancer cells. Sequence analysis revealed that the human GalT I promoter is a TATA-less, GC-rich promoter, which is consistent with the notion that GalT I belongs to the family of housekeeping genes. The ets genes, which currently comprise nearly 30 members, encode transcription factors bearing conserved DNA-binding domains (the ETS domain) (42). E1AF is believed to play important roles in tumor invasiveness and metastasis through transcription of metastasis-related genes (43,44). Expression of E1AF is correlated with the metastatic phenotype of breast cancer (45)(46)(47) and the invasive phenotype of neuroblastoma (48), oral squamous cell carcinoma (49,50), and non-small-cell lung cancers (51)(52)(53)(54).
It was found in this study that expression of E1AF was increased in highly metastatic lung cancer cells compared with its low metastatic counterpart cells, which suggested that E1AF might be involved in the lung cancer cell metastasis phenotype. Ets proteins are capable of regulating transcription by binding to the Ets-binding site (EBS) in the promoters of their target genes, and the EBS comprises the highly conserved core sequence 5′-GGA(A/T)-3′ (42). The GalT I promoter region was analyzed by using transient transfection experiments. Cotransfection with E1AF resulted in a 7.8-fold increase in luciferase activity as compared with vector alone, whereas transfection with Ets-2, ETV1, ETV5, Elk, and Net, other members of the Ets transcription factors, failed to increase luciferase activities, indicating a specific effect of E1AF on the GalT I promoter. It was found by deletion analysis that the region between nt −215 and −139 in the GalT I promoter is critical for activation by E1AF. Mutation of the consensus EBS in this region (position −205 to −200) led to a complete loss of responsiveness to E1AF. EMSA analysis showed specific binding of E1AF to this EBS in PGLH7 cells and COS1 cells. Nuclear extract from PGBE1 cells formed stronger bands with the GalT I promoter than PGLH7 cells. All these results suggested that E1AF bound to DNA with specificity and activated transcription of the GalT I promoter bearing the Ets-responsive element, accounting for the increased GalT I mRNA levels found in highly metastatic PGBE1 cells. To the best of our knowledge, this is the first evidence associating Ets transcription factors and galactosyltransferase in human tumor metastasis. There are several potential Sp1 sites near the putative Ets site in the GalT I promoter. The involvement of juxtaposed PEA3/Sp-1 sites has been reported for other genes, such as the HTLV-1 long terminal repeat, caspase-8, and parathyroid hormone-related protein (55,56). Additionally, Sp1 plays an essential role in the transcriptional activity of the GalT V gene in cancer cells (57). In this study, mutation of the Sp1 sites adjacent to the EBS site (−205 to −200) did not affect the binding capacity of E1AF to the GalT I promoter. Thus, the possible involvement of Sp1 in up-regulation of GalT I in metastatic cells is excluded by EMSA analysis. To elucidate the role of the Ets protein in lung cancer cells, we stably transfected E1AF into the PGLH7 cells. It is important to emphasize that in the present study the expression levels of GalT I mRNA in the E1AF-transfected cell lines were higher than that in control cells. RCA-I staining intensities of membrane glycoproteins in the E1AF-transfected cells changed, suggesting E1AF enhanced expression of Galβ1→4GlcNAc on N-glycans. At the same time, these cells migrated faster than control PGLH7 cells. Indeed, all these results suggest that E1AF induces GalT I expression in stably transfected PGLH7 cells, which may contribute to the highly metastatic potential of lung cancer cells. The activity of E1AF has been reported to be activated by Ras-MAP kinase signaling (20,38). The importance of Ets factor activity for Ras function has been shown by the finding that dominant negative Ets blocks Ras-mediated cell transformation (58). It was found in this study that constitutively activated Ras is capable of enhancing the promoter activity of GalT I by 8.3-fold, whereas dominant negative Ras decreased GalT I promoter activity.
Additionally, transient overexpression of ERK1 or JNK1 in PGLH7 cells led to a significant increase in GalT I promoter activity, whereas site-directed mutagenesis of the putative Ets site at position −205 to −200 abolished activation of the GalT I promoter by Ras. All these results indicated the involvement of MAPK and E1AF in GalT I activation in highly metastatic lung cancer cells. GalT I is one of the seven known β1,4-galactosyltransferase polypeptides (35). The expression of β1,4-galactosyltransferases II-VII was also analyzed in PGLH7 and PGBE1 cells. It was found that β1,4-galactosyltransferase IV was increased in highly metastatic PGBE1 cells, whereas the other family members remained unchanged.² The possible involvement of β1,4-galactosyltransferase IV has yet to be investigated. Additional studies on the relationship between glycosylation and metastasis should provide important insights into mechanisms of cell-cell interactions and tumor progression to the metastatic stage. It is likely that rapid progress will be made toward understanding the connections between GalT and tumor metastasis.
Nexmifa Regulates Axon Morphogenesis in Motor Neurons in Zebrafish

Nexmif is mainly expressed in the central nervous system (CNS) and plays important roles in cell migration and in cell-to-cell and cell-to-matrix adhesion, and it maintains normal synaptic formation and function. Nevertheless, it is unclear how nexmif is linked to motor neuron morphogenesis. Here, we provide in situ hybridization evidence that nexmifa (the zebrafish paralog) is localized to the brain and spinal cord and acts as a vital regulator of motor neuron morphogenesis. Nexmifa deficiency in zebrafish larvae generated abnormal primary motor neuron (PMN) development, including truncated Cap axons and decreased branching of Cap axons. Importantly, RNA sequencing showed that nexmifa depletion in zebrafish embryos caused considerable alterations in CNS-related gene expression. Differentially expressed genes (DEGs) were mainly involved in axon guidance and several synaptic pathways, including glutamatergic, GABAergic, dopaminergic, cholinergic, and serotonergic synapse pathways, according to Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway annotation. In particular, when compared with other pathways, the number of DEGs was highest (84) in the axon guidance pathway within the Organismal Systems category. Efna5b, bmpr2b, and sema6ba were markedly decreased in nexmifa-depleted zebrafish embryos. Moreover, overexpression of either efna5b mRNA or sema6ba mRNA could partially rescue motor neuron morphogenesis. These observations support nexmifa as a regulator of axon morphogenesis in zebrafish motor neurons. Taken together, nexmifa plays crucial roles during motor neuron development by regulating the morphology of neuronal axons.

INTRODUCTION

Motor neuron diseases (MNDs) are characterized by muscle weakness and/or spastic paralysis and are an etiologically heterogeneous group of disorders resulting from motor neuron degeneration (Babin et al., 2014). Thus, exploring the mechanisms underpinning motor neuron development may support and advance therapeutic strategies for MND. The zebrafish is a highly practical in vivo research tool for studying developmental mechanisms, as its transparent embryos are easy to image and manipulate at all developmental stages (Nozawa et al., 2017). In particular, the motor neurons of the spinal cord are an excellent in vivo system for studying the mechanisms controlling axon extension and synaptic formation (Babin et al., 2014). Growth cones at axon tips navigate using environmental cues; therefore, axons constantly follow stereotypical pathways to their targets and rarely deviate (Hilario et al., 2010). Zebrafish contain two different types of spinal motor neurons, i.e., primary motor neurons (PMNs) and secondary motor neurons (SMNs), distinguished by several morphological features: soma shape, size, position, and axon diameter (Myers, 1985; Myers et al., 1986). PMNs are further divided into three groups, caudal primary motor neurons (Cap), middle primary motor neurons (Mip), and rostral primary motor neurons (Rop), in accordance with their specific axonal pathways and soma positions within the spinal cord (Westerfield et al., 1986). Although the somata of the three identifiable PMNs are localized at different positions in the spinal cord, their axons travel to the myoseptum via a common exit point; after leaving the spinal cord, PMNs extend their axons via a common pathway to the horizontal myoseptum.
Finally, Cap, Mip, and Rop neurons extend their axons along specific pathways to innervate the dorsal, middle, and ventral trunk musculature, respectively (Myers, 1985; Moreno and Ribera, 2009). Compared with PMNs, SMNs are localized more ventrally in the motor column, with typically smaller somata and thinner axons, and they are born 5-6 h later than PMNs. Because of this unique stratification, PMNs are excellent cell systems for elucidating motor axon guidance mechanisms (Beattie, 2000; Beattie et al., 2002). In vertebrates, motor neuron development is regulated by many genes. For example, mecp2 knockdown in zebrafish increases abnormal axonal branching of Caps and decreases motor activity (Nozawa et al., 2017). lncrps25 is co-expressed with mnx1 in the spinal cord and is essential for motor neuron development; lacking lncrps25 results in Cap axon truncation and abnormal branching, although these defects are rescued by olig2 overexpression (Gao et al., 2020). Similarly, HuD mutants exhibit decreased motor axon branches, dramatically fewer dendrites, and movement defects (Hao le et al., 2017). Deletion or overexpression of colXIX causes stumpy-like Cap axon defects (Hilario et al., 2010), whereas colXVIII knockdown causes Cap axon stalling soon after exiting the spinal cord (Schneider and Granato, 2006). Ccdc80-l1 is implicated in motor neuron axonal path-finding; its loss of function does not prevent PMN formation and axon projection but leads to PMN disorganization (Brusegan et al., 2012). In our previous studies, we reported that insm1a, kinesin-12, and sox2 have key roles in motor neuron development (Xu et al., 2014; Gong et al., 2017, 2020). For example, the zebrafish insm1a mutant showed motor neuron loss and defects in PMN axons, including truncated length, excessive Cap branches, and disorganized distances between adjacent Caps, which were caused by the ectopic departure of motor axons from the spinal cord. In sox2 mutant zebrafish, besides truncated length and excessive Cap branches, the defective PMNs also showed changes in axon morphology and reductions in Mips and Rops. Nexmif (also called KIDLIA, KIAA2022, or Xpn) is a novel gene localized to Xq13.2 (Gilbert et al., 2020). Cantagrel et al. (2004) first reported the gene in two males with intellectual disability. However, very little is known about nexmif. Previous studies reported that nexmif mRNA is strongly expressed in the cortex, hippocampus, cerebellum, and olfactory bulb (Allen Institute for Brain Science, 2004; Cantagrel et al., 2009). At the protein level, nexmif is specifically distributed in post-mitotic neuron nuclei but not in glia, and strong protein expression is detected from the E17 developmental stage through adulthood in mice (Gilbert and Man, 2016). Thus, nexmif may have key roles in brain development. In zebrafish, nexmif has two paralogs, nexmifa and nexmifb; protein homology indicates 34 and 42% identity to the human protein, respectively. In humans, patients with nexmif mutations present with moderate to severe intellectual impairment, autism spectrum disorder (ASD), dystonia, intellectual disability, epilepsy, microcephaly, and facial deformities (Van Maldergem et al., 2013; de Lange et al., 2016). In animal models, nexmif was shown to participate in neurite morphological development, regulate cell migration and cell-to-cell and cell-to-matrix adhesion, and maintain normal synaptic formation and function (Ishikawa et al., 2012; Magome et al., 2013; Gilbert et al., 2020).
For example, in the nexmif knockdown mouse, synapse density, spine density, and the expression of synaptic-related proteins, such as AMPAR, PSD-95, and gephyrin, were decreased; immature spines were increased, synaptic transmission was defective, and the mice exhibited ASD behaviors (Gilbert et al., 2020). In other work, KIAA2022 (a nexmif alias) knockdown markedly suppressed neurite growth, including both dendrites and axons, in cultured rat hippocampal neurons (Van Maldergem et al., 2013). Moreover, KIDLIA (a nexmif alias) knockdown altered in vivo neuron migration, reduced dendritic growth, and disorganized apical dendrite projections in mouse layer II/III cortical neurons (Gilbert and Man, 2016). Magome et al. (2013) found that nexmif knockout inhibited cell migration by enhancing cell-to-cell and cell-to-matrix adhesion mediated by N-cadherin and β1-integrin in PC12 cells. However, no study has yet focused on the effects of nexmif on spinal motor neurons. Evidently, nexmif deficiency leads to ASD behaviors, and 50-80% of patients with ASD show motor dysfunction (Kaur et al., 2018); we therefore hypothesized that nexmif exerts effects on the development of spinal motor neurons. To verify our hypothesis, we assessed nexmifa expression using whole-mount in situ hybridization (WISH) and reverse transcription-polymerase chain reaction (RT-PCR) in zebrafish. We then investigated nexmifa function during PMN morphogenesis via knockdown and knockout strategies in the Tg(mnx1:GFP)ml2 transgenic zebrafish line and investigated possible molecular mechanisms.

Zebrafish Lines and Breeding

Zebrafish embryos and adults were maintained at the Zebrafish Center of Nantong University in accordance with guidelines outlined in previous studies (Xu et al., 2014; Gong et al., 2017, 2020). The transgenic zebrafish lines Tg(mnx1:GFP)ml2 and Tg(kdrl:EGFP) have been described in previous work (Flanagan-Steet et al., 2005; Jin et al., 2005).

Cell Separation, RNA Isolation, Reverse Transcription, Quantitative RT-PCR and RT-PCR

At 72 h post-fertilization (hpf), we collected 300-400 Tg(mnx1:GFP) zebrafish embryos and washed them three times in phosphate-buffered saline with Tween 20 and then in calcium-free Ringer's solution. Embryos were digested in 0.25% trypsin, and 10% fetal bovine serum was added to terminate the reaction. The suspension was filtered through 100- and 40-µm filter membranes, and samples were then analyzed by flow cytometry (BD, Franklin Lakes, NJ, United States). Cells expressing GFP were identified as positive cells. Total RNA was extracted from zebrafish embryos and from cells separated by flow cytometry using TRIzol reagent according to the manufacturer's instructions (Invitrogen, Waltham, MA, United States). DNA contamination was removed with DNase I (Roche, Basel, Switzerland), and 2 µg of total RNA was reverse transcribed using a first-strand cDNA synthesis kit (Fermentas, Waltham, MA, United States) and stored at −20 °C. Quantitative RT-PCR was performed using the corresponding primers (Supplementary Table S1) in a 20 µL final reaction volume with 10 µL SYBR premix (Takara, Kyoto, Japan). Elongation factor 1a was used as the internal control. All samples were analyzed in triplicate. RT-PCR was performed using the corresponding primers (Supplementary Table S1) in a 50 µL final reaction volume with 25 µL 2× Taq enzyme mix (Vazyme, Nanjing, China). After amplification, 20 µL was taken for gel electrophoresis and sequencing.
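The paper does not spell out the relative-quantification formula used with the elongation factor 1a (ef1a) control, so the sketch below assumes the standard 2^-ΔΔCt method; the Ct values are illustrative only.

```python
# Assumed 2^-ddCt relative quantification with ef1a as internal control.
def rel_expression(ct_gene, ct_ef1a, ct_gene_ctrl, ct_ef1a_ctrl):
    d_ct_sample = ct_gene - ct_ef1a          # normalize sample to ef1a
    d_ct_control = ct_gene_ctrl - ct_ef1a_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# e.g., efna5b in nexmifa morphants vs. wild type (hypothetical Cts)
print(f"{rel_expression(26.8, 18.1, 24.9, 18.0):.2f}")  # < 1 = down-regulated
```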
Whole-Mount in situ Hybridization

A 424-base pair (bp) cDNA fragment from a wild-type embryo was amplified using the nexmifa F1 and R1 primers (Supplementary Table S1). Digoxigenin (DIG)-labeled sense and antisense probes were synthesized from a linearized pGEM-T Easy vector containing the nexmifa fragment by in vitro transcription with a DIG-RNA labeling kit (Roche, Basel, Switzerland). We collected zebrafish embryos at different developmental stages (20, 48, 72, and 96 hpf) and fixed them in 4% paraformaldehyde for 2 h at room temperature or overnight at 4 °C. They were then dehydrated through a series of increasing methanol concentrations and finally stored in 100% methanol at −20 °C. WISH was performed as previously described (Gong et al., 2020).

sgRNA/Cas9 mRNA Synthesis and Injection

Cas9 mRNA was generated by in vitro transcription with the linearized pXT7-Cas9 plasmid as previously described (Gong et al., 2017). sgRNAs were transcribed from DNA templates amplified by PCR with a pT7 plasmid as the template, a specific forward primer, and a universal reverse primer (Supplementary Table S1; Gong et al., 2017, 2020). The transgenic zebrafish line Tg(mnx1:GFP)ml2 was naturally mated to obtain embryos for microinjection. Zebrafish embryos at the 1-2 cell stage were then injected with a 2-3 µL solution containing 250 ng/µL Cas9 mRNA and 15 ng/µL sgRNA. At 24 hpf, embryos were randomly sampled for genomic DNA extraction according to previous methods to identify a founder. Mutant sites were verified by comparison with the unaffected wild-type sequences (chimerism). Chimeric zebrafish were mated with wild-type fish to obtain F1 fish. After genotyping by sequencing, heterozygous mutants were mated with Tg(mnx1:GFP) transgenic fish to breed the F2 generation. Finally, nexmifa+/+ and nexmifa−/− littermates were obtained by F2 in-crosses, followed by fluorescence selection and PCR genotyping, for the subsequent experiments (Gong et al., 2017).

Morpholino, mRNA Synthesis, and Microinjection

The nexmifa splice-blocking Morpholino (MO) and the standard control MO (Std MO) were synthesized by Gene Tools. The sequences are 5′-AAAATGGTAGGAGTTATAAATGAGT-3′ and 5′-CCTCTTACCTCAGTTACAATTTATA-3′, respectively. MOs were diluted to 0.3 mM in RNase-free water and injected into one-cell stage embryos, which were then raised in E3 medium at 28.5 °C to generate nexmifa knockdown embryos (morphants). To perform rescue experiments, we generated nexmifa, efna5b, and sema6ba mRNAs in vitro. Briefly, we cloned zebrafish nexmifa, efna5b, and sema6ba separately into pCS2+ vectors. Next, we linearized the plasmids and synthesized mRNA in vitro using the mMESSAGE mMACHINE kit (Ambion, Austin, Texas, United States) according to the manufacturer's instructions. Finally, we purified the capped mRNAs using the RNeasy Mini Kit (Qiagen, Hilden, Germany). MOs or mRNAs were injected into the yolk of one-cell stage embryos using borosilicate glass capillaries (Sarasota, Florida, United States) and a PV830 pneumatic picopump (Sarasota, Florida, United States).

cDNA Library Preparation and RNA Sequencing

We extracted total RNA from nexmifa morphants and wild-type zebrafish at 72 hpf using TRIzol reagent (Invitrogen) and assessed RNA integrity and purity with a NanoDrop 2000 (Thermo Fisher Scientific Inc., Waltham, MA, United States). Only high-quality RNA samples (OD 260/280 = 1.8-2.2, RNA Integrity Number ≥ 8.0) were used to construct the sequencing library.
We next quantified and sequenced the final cDNA libraries using the Illumina NovaSeq 6000 platform with 2 × 150-bp paired-end reads (Illumina, San Diego, CA, United States).

Locomotion Analysis of Zebrafish Larvae

To determine whether nexmifa deficiency impaired motility and whether any impairment could be rescued by overexpressing the candidate downstream genes, larval zebrafish at 7 days post-fertilization (dpf) from the different groups were placed into 24-well culture plates (one larva/well) and transferred to the ZebraLab video-tracking system (Zebrabox, Lyon, France). The unit was equipped with a sealed opaque plastic box insulated from the environment, an infrared filter, and a monochrome camera. After 30 min of adaptation, larval swimming distances and average speeds were recorded for 30 min.

Microscopy

Zebrafish embryos were anesthetized with tricaine (Sigma, Saint Louis, Missouri, United States), embedded in 0.8% low-melting agarose, and examined using a Leica TCS-SP5 LSM confocal imaging system. The criteria for zebrafish embryos with abnormal PMNs were as follows: first, Cap length or the number of Cap branches per 1 mm was less than 70% of the average of normal wild-type zebrafish; second, PMNs were abnormal in more than two hemisegments of the spinal cord in one fish. Otherwise, the embryo was scored as normal. In situ hybridization images were captured on an Olympus MVX10 stereomicroscope.

Statistical Analysis

Statistical comparisons were performed using Student's t-test or one-way analysis of variance when the data followed a normal distribution and the variance between groups was uniform.

Nexmifa Is Expressed in the Spinal Cord and PMNs of Zebrafish

To analyze the temporal and spatial expression patterns of nexmifa, we performed WISH using a DIG-labeled nexmifa probe at different stages. Nexmifa was strongly expressed in the central nervous system (CNS), including the brain and spinal cord, at 20, 48, 72, and 96 hpf. Expression in the brain and spinal cord was highest at 48 hpf and then decreased gradually (Figures 1A-D, A′-D′, A″-D″). To further assess whether nexmifa was expressed in motor neurons in the spinal cord, we separated motor neurons from Tg(mnx1:GFP)ml2 embryos and extracted RNA, as Tg(mnx1:GFP)ml2 motor neurons are GFP-labeled. RT-PCR demonstrated that both mnx1 and nexmifa were present in the sorted neurons (Figure 1E), suggesting that nexmifa is expressed in zebrafish motor neurons. Moreover, we performed RT-PCR on nexmifa-negative tissue: no nexmifa or mnx1 signal was detected in the GFP-positive cells sorted from the Tg(kdrl:EGFP) line (Figure 1F), in which endothelial cells are labeled with GFP.

Nexmifa Loss Causes Motor Neuron Defects

To explore whether nexmifa regulates motor neuron morphogenesis in the spinal cord, we established a nexmifa knockout in Tg(mnx1:GFP)ml2 transgenic zebrafish (nexmifa mutant) to characterize PMN morphology. The selected sgRNA-Cas9 system effectively introduced a 159-bp frameshift mutation that prematurely terminated protein translation and produced a truncated protein (Figure 2). There was no obvious difference in appearance between the two groups of zebrafish under bright-field imaging (Supplementary Figure S1). To better characterize the morphological changes of motor neurons, we first drew a schematic of the three different PMNs in one hemisegment of the spinal cord (Figure 3A). Second, we examined abnormal PMNs at 48 and 72 hpf under a fluorescence microscope.
We found that the abnormalities in nexmifa mutants included loss of Cap and/or Mip neurons, motor neuron loss, reduced Cap length, and abnormal Cap branches (Figure 3B). Statistical analyses revealed that the percentage of embryos with normal PMNs was lower than in controls (55.5% ± 3.7% vs. 97% ± 1.4% at 48 hpf and 57.5% ± 5.1% vs. 96% ± 1.9% at 72 hpf) (Figure 3C). Cap development was also restricted; for example, Cap axon length in nexmifa mutants was shorter than in controls (103.8 ± 29.3 µm vs. 173.5 ± 10.6 µm) at 48 hpf. When embryos developed to 72 hpf, mutant Cap axons had grown but remained shorter than controls (130.8 ± 28.8 µm vs. 203.8 ± 13.7 µm) (Figure 3D). These observations indicated that the truncated axons had not completely recovered. In addition, Cap branching differed between controls and mutants: branch numbers were significantly lower in mutants than in controls (47 ± 11 vs. 160 ± 17) at 48 hpf, and at 72 hpf branches remained fewer than in controls (95 ± 21 vs. 175 ± 19) and were more disordered (Figure 3E). [Figure legend fragment: (D) Cap length in control and nexmifa mutants at 48 hpf (n = 20 and 31, respectively) and 72 hpf (n = 15 and 28, respectively). (E) Number of branches per 1 mm of Cap axon in control and nexmifa mutants at 48 hpf (n = 8 and 10, respectively) and 72 hpf (n = 9 and 12, respectively). Bars represent the mean ± standard deviation (SD). **p < 0.01.] To specifically confirm that the motor neuron defects were caused by nexmifa loss, we established a nexmifa knockdown fish model by injecting a splice-blocking MO into one-cell stage zebrafish embryos. At 72 hpf post-injection, the splice-blocking effect was checked and quantified by RT-PCR and then confirmed by sequencing. Nexmifa-MO injection generated an aberrant, larger transcript (Supplementary Figure S2B). After sequencing, we confirmed that nexmifa-MO injection caused intron 2 (181 bp) to be retained in the nexmifa mRNA (Supplementary Figure S2C), resulting in a reading-frame shift and thus a successful nexmifa knockdown. We also investigated PMN morphology in nexmifa-MO fish at 48 and 72 hpf; the results were similar to those in nexmifa mutants. We performed rescue experiments by co-injecting nexmifa mRNA with nexmifa-MO to confirm that the phenotypic changes were induced by nexmifa-MO injection, and we observed that this strategy partly rescued the abnormal motor neurons (Supplementary Figure S3A). For example, the percentage of normal embryos recovered from 62.3% ± 3.5% to 75.5% ± 4.0% at 48 hpf and from 60.5% ± 6.0% to 73.4% ± 5.8% at 72 hpf (Supplementary Figure S3B). Cap length recovered from 101.5 ± 16.2 µm to 133.2 ± 21.5 µm at 48 hpf and from 126.1 ± 34.4 µm to 175.6 ± 25.6 µm at 72 hpf (Supplementary Figure S3C). The number of Cap branches also recovered from 57 ± 12 to 126 ± 29 at 48 hpf and from 88 ± 21 to 119 ± 34 at 72 hpf, and the branches were less disordered than in the nexmifa morphant group (Supplementary Figure S3D).

Nexmifa Knockout Mutants Display Impaired Motility

To investigate whether the motor neuron defects affected motor ability, swimming activity was video-tracked for 30 min at 7 dpf. As shown in Figure 4, movement trajectories in nexmifa mutants were significantly reduced compared with controls (Figure 4A). The swimming distance per 5 min was decreased in nexmifa mutants compared with controls (Figure 4B), consistent with the movement trajectories.
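The embryo-scoring rule given under "Microscopy" above (a hemisegment counts as abnormal when Cap length or branch count per 1 mm falls below 70% of the wild-type mean, and an embryo is abnormal when more than two hemisegments are affected) can be written out as a short sketch. The field layout and the wild-type reference values, taken from the 48 hpf means quoted in this section, are our assumptions, not the authors' code.

```python
# Sketch of the PMN scoring criteria; thresholds follow the Microscopy
# section, reference means are the 48 hpf wild-type values in the text.
WT_CAP_LEN_UM = 173.5
WT_BRANCHES_PER_MM = 160

def hemisegment_abnormal(cap_len_um, branches_per_mm):
    return (cap_len_um < 0.7 * WT_CAP_LEN_UM
            or branches_per_mm < 0.7 * WT_BRANCHES_PER_MM)

def embryo_abnormal(hemisegments):
    # hemisegments: (Cap length in um, branches per 1 mm) per hemisegment
    flagged = sum(hemisegment_abnormal(l, b) for l, b in hemisegments)
    return flagged > 2  # abnormal in more than two hemisegments

print(embryo_abnormal([(103.8, 47), (110.0, 52), (98.5, 44), (170.0, 150)]))
```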
Transcriptomic Profiling of Nexmifa Morphants and Control Zebrafish

To identify mechanisms by which nexmifa may affect motor neuron morphogenesis, we performed RNA sequencing on RNA samples from control and nexmifa morphant zebrafish at 72 hpf. We identified 6,556 differentially expressed genes (DEGs), with 3,770 up-regulated and 2,786 down-regulated DEGs between the two groups (fold change > 2 or < 0.5, p < 0.05) (Figure 5A and Supplementary Table S2). According to Kyoto Encyclopedia of Genes and Genomes (KEGG) annotations, many DEGs were involved in axon guidance pathways and various synaptic pathways. In particular, within the Organismal Systems category, the number of DEGs involved in axon guidance (84) was the highest among the groups (Figure 5B). We also observed 45 down-regulated DEGs in the axon guidance pathway (Figure 5C). Among these 45 DEGs, the three genes with the largest fold changes were efna5b, bmpr2b, and sema6ba. To verify the reliability of the RNA-seq data, we tested by qRT-PCR at 72 hpf not only the expression of efna5b, bmpr2b, and sema6ba but also that of 17 other randomly selected down-regulated DEGs. The expression of most genes, including efna5b and sema6ba, was consistent with the RNA-seq results (Figure 6A). Moreover, we compared the expression of these 20 genes between control and nexmifa mutant fish by RT-PCR and obtained similar trends (Figure 6B).

Efna5b and Sema6ba Overexpression Rescues the Motor Neuron Defects and Impaired Motility in Nexmifa Mutant Embryos

Given the down-regulation of efna5b and sema6ba in nexmifa loss-of-function embryos, we hypothesized that nexmifa regulates motor neurons in zebrafish through efna5b and sema6ba. To test this, we synthesized efna5b and sema6ba mRNAs in vitro and injected them into the yolk of one-cell stage nexmifa mutant embryos. qRT-PCR showed that the relative mRNA expression of efna5b and sema6ba was significantly up-regulated compared with uninjected nexmifa mutants, indicating successful overexpression (Figures 7A,B). We observed that the injected nexmifa mutant embryos had significantly fewer of the motor neuron defects caused by nexmifa loss (Figure 7C). Only 54.5% ± 4.6% of nexmifa mutant embryos presented normal PMNs at 48 hpf, whereas this percentage increased to 73.3% ± 7.9% after efna5b mRNA injection and 76.7% ± 7.9% after sema6ba mRNA injection (Figure 7D). As shown in Figure 8A, the movement trajectory at 7 dpf was dramatically increased when the mutants were injected with efna5b mRNA or sema6ba mRNA compared with the nexmifa mutant. Consistent with the movement trajectories, the swimming distance per 5 min was also dramatically increased when the mutants were injected with efna5b mRNA or sema6ba mRNA compared with the nexmifa mutant (Figure 8B). These results demonstrated that efna5b and sema6ba overexpression can rescue the motor neuron defects and impaired motility caused by loss of nexmifa.

DISCUSSION

Previous studies demonstrated that nexmif is involved in neurite morphological development, cell-to-cell and cell-to-matrix adhesion, and cell migration, and that it maintains normal synaptic formation in neurons (Ishikawa et al., 2012; Magome et al., 2013; Gilbert et al., 2020); however, little was known about nexmif function in spinal motor neuron development. Additionally, most studies have explored the in vitro effects of nexmif on neurite morphology (Van Maldergem et al., 2013), but none have done so in vivo.
Here, our in vivo nexmifa expression and deficiency phenotype data provide new insights into nexmifa functions in regulating the morphogenesis of spinal motor neurons in zebrafish. In mouse brains, nexmif mRNA expression commences as early as E10.5, increases throughout development, peaks at P3, and continues at lower levels into adulthood (Cantagrel et al., 2009; Ishikawa et al., 2012). Our WISH data showed that almost all nexmifa was expressed in the brain and spinal cord; expression was observed from 20 hpf, peaked at 48 hpf, and then gradually decreased. These spatiotemporal expression patterns in zebrafish are similar to those in mice. Furthermore, using flow cytometry, we sorted motor neurons from the Tg(mnx1:GFP)ml2 transgenic zebrafish line, in which motor neurons are labeled by GFP. RT-PCR data showed that nexmifa was highly expressed in the GFP-positive cells, indicating that nexmifa may directly regulate motor neuron development in the spinal cord. Embryonic and larval motor neurons are similar in morphology and projection patterns to adult motor neurons. All primary motoneurons are born between 9 and 16 hpf. During PMN development, these cells extend their axons along stereotyped pathways and develop branches that invade the myotome to form distributed neuromuscular synapses innervating the musculature. At 48 hpf, the Cap somata are located within a short distance of the ventral root, with axons following a stereotyped pathway down the middle of the segment, making a collateral or varicosity at the horizontal septum. At the ventral edge of the musculature, each axon turns dorsally and grows laterally along the rostral myoseptum. At 72 hpf, exuberant branches are formed and further invade the myotome to form distributed neuromuscular synapses (Liu and Westerfield, 1990; Downes and Granato, 2004). To explore whether nexmifa is involved in the morphogenesis of motor neurons in the spinal cord, we established knockout and knockdown fish models. Our data showed that both models exhibited obvious motor neuron loss and defects in PMN axons. Moreover, after co-injecting nexmifa mRNA with nexmifa-MO, the truncated Cap axons and disordered branches were partly rescued. Thus, nexmifa helps regulate axon morphology. Motoneurons establish important connections between the CNS and muscle; if they develop incorrectly, they cannot form the required connections, resulting in movement defects or paralysis (Hao le et al., 2017). Previous studies showed that several motor defects are related to abnormal PMN development in zebrafish (Brusegan et al., 2012; Gong et al., 2017). In our study, the impaired motility was consistent with the motor neuron defects seen in nexmifa knockout zebrafish. Previous studies also showed that when motor neuron dendrites are reduced, motoneurons receive less innervation, leading to decreased activity (Gao et al., 2020; Zhu et al., 2021). As swimming involves alternating side-muscle contractions caused by alternating motor neuron activation, less active motor neurons could lead to a reduction in alternating muscle contractions and less distance moved (Hao le et al., 2017). Thus, we hypothesize that this impaired motility is due to the decreased number of branches induced by nexmifa loss. Both the musculature and motor neurons contribute to embryonic motility (Menelaou et al., 2008); however, whether nexmifa affects muscle development warrants further study. [Figure legend fragment: (n = 150, 242, 250, and 257, respectively). *p < 0.05 and **p < 0.01; ns, non-significant.]
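For the motility readout discussed above, per-5-min swimming distance can be recovered from tracked coordinates; since the paper does not describe the Zebrabox export format, the input structure in this sketch is assumed.

```python
# Assumed input: (time_s, x_mm, y_mm) samples for one larva; distances
# are summed into 5-min (300 s) bins, matching the reported readout.
import math

def distance_per_bin(track, bin_s=300):
    bins = {}
    for (t0, x0, y0), (_, x1, y1) in zip(track, track[1:]):
        b = int(t0 // bin_s)
        bins[b] = bins.get(b, 0.0) + math.hypot(x1 - x0, y1 - y0)
    return [bins.get(i, 0.0) for i in range(max(bins) + 1)]

track = [(0, 0, 0), (1, 1, 1), (301, 4, 5), (302, 4, 6)]  # toy samples
print(distance_per_bin(track))  # -> [6.414..., 1.0]
```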
Many genes are involved in motor neuron development via morphogenesis regulation (Dong et al., 2019; Koh et al., 2020). In this study, RNA sequencing was performed on control and nexmifa morphant embryos to explore nexmifa-mediated morphogenesis mechanisms. Several DEGs were related to CNS development; for example, DEGs were involved in the axon guidance pathway and various synapse pathways, consistent with mouse data (Ishikawa et al., 2012; Gilbert et al., 2020). This consistency not only indicated successful model establishment (knockdown) but also demonstrated the conserved function of nexmif. Of the 45 down-regulated DEGs related to axon guidance, efna5b, bmpr2b, and sema6ba were the three genes with the largest fold changes. Using qRT-PCR at 72 hpf, we found that the expression of most DEGs, including efna5b and sema6ba, was consistent with the RNA-sequencing results, supporting the reliability of the RNA-sequencing data. Efna5b (ephrin-A5b) belongs to the ephrin family, the cognate ligands of the Eph receptor tyrosine kinases. Ephrins are an important class of axon guidance molecules (Lisabeth et al., 2013; Cayuso et al., 2015). EphrinA6 drastically reduces BDNF-induced axon branching (Poopalasundaram et al., 2011), whereas the Caenorhabditis elegans ephrin EFN-4 promotes primary neurite outgrowth in AIY interneurons and D-class motor neurons (Schwieterman et al., 2016). Sema6ba belongs to the semaphorins (Semas), another large class of proteins that function throughout the nervous system to guide axons. In Sema-2b loss-of-function embryos, specific motor neuron and interneuron axon pathways display guidance defects (Emerson et al., 2013). Sema5A is also expressed in the myotome during the period of motor axon outgrowth, and the lack of sema5A in zebrafish results in delayed motor axon extension into the ventral myotome and aberrant branching of these motor axons (Hilario et al., 2009). We showed that nexmifa deficiency caused a significant decrease in efna5b and sema6ba expression levels. Furthermore, efna5b and sema6ba overexpression rescued the motor neuron defects and inactive swimming behavior in nexmifa mutant embryos. These data suggest that nexmifa regulates motor neuron development, at least in part, by regulating efna5b and sema6ba expression. In the future, we will perform dual-luciferase reporter gene assays to confirm interactions between nexmifa and efna5b and sema6ba.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI BioProject PRJNA797475.

ETHICS STATEMENT

The animal study was reviewed and approved by the Administration Committee of Experimental Animals, Jiangsu Province, China.

AUTHOR CONTRIBUTIONS

HN and Y-JW designed the study. Y-QZ and G-HS performed the experiments. DL and H-YL analyzed the data and are responsible for the statistical analysis. Y-QZ wrote the manuscript. All the authors have reviewed and approved this version of the manuscript.

[Figure legend fragment: The percentage of embryos with normal PMNs in the three groups at 48 hpf (n = 113, 231, and 218) and 72 hpf (n = 104, 214, and 221, respectively). (C) Cap axon lengths in the three groups at 48 hpf (n = 18, 29, and 21, respectively) and 72 hpf (n = 17, 23, and 25, respectively). (D) The number of branches per 1 mm of Cap axon in the three groups at 48 hpf (n = 7, 9, and 10, respectively) and 72 hpf (n = 8, 10, and 10, respectively). Bars represent the mean ± standard deviation (SD). **p < 0.01.]
2022-04-01T13:32:30.529Z
2022-03-31T00:00:00.000
{ "year": 2022, "sha1": "5d8edddac6ee56e130258147d4d0dcf3ec943edf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "5d8edddac6ee56e130258147d4d0dcf3ec943edf", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
247812399
pes2o/s2orc
v3-fos-license
A Case Report: Acute Symptomatic Seizure Associated with Brain Metastasis in a Pregnant Woman Pregnant women can present with a wide variety of neurological conditions. Conditions including epilepsy, eclampsia, facial nerve palsy, pituitary tumor, cerebrovascular disorders, myasthenia gravis, multiple sclerosis, and nonpituitary intracranial tumors may be encountered (Cheung, 1997). We describe a patient who presented with an acute symptomatic seizure associated with suspected brain metastasis during pregnancy. She came to the hospital because of a fit and sudden left hemiparalysis and was initially diagnosed as having a stroke, but further examinations revealed unexpected results. Brain imaging suggested brain metastasis, but histopathological examination did not support this finding. More investigations are needed to determine the true etiology of her fit attacks. This patient's diagnosis and management were all the more challenging since she was at nine weeks of gestation. A 32-year-old nulliparous woman at nine weeks of gestation presented with sudden left hemiparalysis and a secondary generalized tonic-clonic seizure. There was no history of attacks before. She denied having previous severe headaches and visual disturbances. Initial physical examination revealed a hemodynamically stable woman (blood pressure 140/90, pulse 99/min) who was alert and oriented with slight slurring of speech, a mild left-sided facial droop, and bilateral sixth nerve palsies. Her motor examination revealed 0 strength of the left upper and lower extremities. The remainder of her physical exam was within normal limits. The occurrence of brain tumors during pregnancy is unusual; when it happens, it jeopardizes the lives of both mother and infant. II. Case Illustration We report a patient who presented with an acute symptomatic seizure in pregnancy. She came to the hospital because of a fit and sudden left hemiparalysis and was initially diagnosed as having a stroke, but further examinations revealed unexpected results. Imaging examinations revealed brain metastasis. This patient's diagnosis and management were all the more challenging since she was at nine weeks of gestation. A thirty-two-year-old G3P2 at nine weeks of gestation was admitted to the Neurology department of Dr. M. Djamil Hospital Padang, West Sumatera, with sudden paralysis of the left limbs starting 6 hours before hospitalization. It occurred suddenly during activity. The patient suddenly fell to the left and could not move her left limbs at all. She remained conscious. The weakness of the left arm was as severe as that of the left leg. These complaints were also accompanied by asymmetrical lips, slurred speech, and secondary generalized tonic-clonic seizures. Each seizure began with rigidity of the left lower limb spreading to the upper limb for 30 seconds, followed by a clonic phase lasting 1-2 minutes. She was unconscious during the fits, with urinary incontinence. She looked sleepy and tired after each seizure attack but regained consciousness. The fits occurred six times with the same pattern, at intervals of one hour. She remained aware between episodes. She also complained of headaches before the attacks and vomited twice. There was no previous history of seizures, hypertension, diabetes, stroke, head trauma, or central nervous system infection. No family member suffered from hypertension, heart disease, diabetes, or stroke. The patient is a nurse who lives with her husband and one daughter. She was nine weeks pregnant. She did not smoke but drank coffee.
There was no history of using any hormonal birth control. She was born at 38 weeks of gestation after spontaneous delivery, with no record of delayed development or physical problems. Initial physical examination revealed a hemodynamically stable woman (blood pressure 140/90, pulse 99/min) who was alert and oriented with slight slurring of speech, a mild left-sided facial droop, and bilateral sixth nerve palsies. Her motor examination revealed 0 strength of the left upper and lower extremities. The remainder of her physical exam was within normal limits. Laboratory findings and ECG recordings were normal. She was treated in the emergency room with oxygen, Ringer's lactate infusion over 12 hours, tranexamic acid injection q6h, citicoline injection q2h, and carbamazepine 200 mg. The patient was diagnosed with brain metastasis after a brain MRI had been conducted. Neurosurgical consultation also supported the diagnosis, and a craniotomy was planned. The histopathological examination was astonishing since it revealed only chronic inflammation; there were no tumor cells. The patient was discharged after rehabilitation therapy had been conducted three times, but her muscle strength was still zero. Two weeks after discharge, she returned with improvement: her muscle strength was 333 in the left lower limbs and 444 in the left upper limbs. III. Discussion A 32-year-old nulliparous woman at nine weeks of gestation presented with sudden left hemiparalysis and a secondary generalized tonic-clonic seizure. There was no history of attacks before. She denied having previous severe headaches and visual disturbances. Initial physical examination revealed a hemodynamically stable woman (blood pressure 140/90, pulse 99/min) who was alert and oriented with slight slurring of speech, a mild left-sided facial droop, and bilateral sixth nerve palsies. Her motor examination revealed 0 strength of the left upper and lower extremities. The remainder of her physical exam was within normal limits. The patient was admitted for further evaluation with the underlying suspicion of a cerebrovascular event. Her hemoglobin, hematocrit, platelet count, coagulation profile, liver, and renal function tests were normal. Brain CT scan revealed an intracranial mass with a hemorrhagic lesion. Magnetic resonance imaging (MRI) of the head with contrast revealed multiple hyperintense lesions in the temporoparietooccipital region of the right cerebral hemisphere with extensive edema, and in the left parietooccipital region with unclear borders. After contrast administration, the lesions became more hyperintense, with midline shift to the right. The ventricular system, pons, cerebellopontine angle (CPA), and cerebellum were normal. The conclusion was intracranial metastasis with subfalcine herniation (hemorrhagic metastasis). The patient's clinical course was complicated by moderate progression of the neurologic symptoms on the third day of hospitalization. Intravenous dexamethasone therapy was given to reduce brain edema (dexamethasone 10 mg intravenously every 6 hours). Obstetrically, fetomaternal ultrasound showed a normal fetus.
The neurosurgeon suggested craniotomy for biopsy and planned radiotherapy or chemotherapy based on the biopsy results. After the patient underwent craniotomy, the brain tissues were sent for histopathological examination. The result, however, was quite astonishing, since it revealed only chronic inflammation. The occurrence of brain tumors during pregnancy is unusual and may jeopardize the lives of both mother and infant. Tumors are characterized by preferential involvement of white matter with sparing of cortical gray matter, a round or infiltrating shape, and are not confined to a specific vascular distribution. MRI is more sensitive than CT in detecting intracranial mass lesions because of the intrinsically higher soft-tissue contrast resolution and because the associated edema is easily observed on FLAIR and T2-WI. The peritumoral edema is more extensive than the tumor itself (Parizel et al., 2010). This is in accordance with our case. The imaging of this patient revealed extensive vasogenic edema and multiple lesions, which are common in brain metastasis. Vasogenic edema is caused by a breakdown of the blood-brain barrier (BBB), which allows excess fluid to pass from the capillaries into the extracellular space. Vasogenic edema extends along white matter tracts and generally spares the cortical gray matter (Parizel et al., 2010; Sawaya et al., 2011). During pregnancy, the plasma volume increases from the sixth week to reach a maximum of approximately 3600 ml by the 32nd-34th week. The cardiac output increases by 20% (5.5 l/min at conception to 6 l/min) during the first trimester and remains the same to term. These changes depend on increased production of estrogen and progesterone by the trophoblast in anticipation of the fetal needs (Naidoo & Bhigjee, 1998). During pregnancy, brain tumors may increase in size, leading to clinical symptoms. To explain this phenomenon, early authors focused on theories such as accelerated growth rates, vascular engorgement, and increased fluid content of the tumor. These intracranial neoplasms have at times demonstrated hormone-related growth, with progression of neurologic symptoms during pregnancy and remission postpartum. Steroid receptors have been identified primarily in meningiomas, with progesterone receptors notably more prominent than estrogen receptors. Ultimately, multiple factors probably play a role in this process (Elwatidy et al., 2011). Clinical presentation during pregnancy may be mild in onset or may emerge rapidly, as in our case. Typical presenting symptoms include headache, nausea and vomiting, motor dysfunction, visual disturbances, seizures, loss of consciousness, and incontinence. In the acute presentation, prompt care is imperative to stabilize the mother and ensure fetal well-being before the onset of any further complications such as intracranial hemorrhage or cerebral herniation (McKenzie et al., 2005). The symptoms of brain tumors during pregnancy in this patient have been attributed to water retention, engorgement of the vessels, and the effect of progesterone on the appearance and development of brain tumors. Sex hormones during gestation may have a profound effect on tumor growth, recurrence, time of recurrence, and dedifferentiation, worsening the neurological symptoms and harming the mother (Lynch et al., 2011). Patients with brain tumors tend to have a subacute progressive syndrome and occasionally present with sudden severe neurological signs or symptoms.
These cases usually result from hemorrhage into the lesions, nonconvulsive status, and stroke syndromes caused by tumor emboli, endocarditis, or inherent coagulopathies (Dorai et al., 2010; Tsemenzis, 2000; Yeung, 2012). The sudden onset in this patient was initially thought to represent a cerebrovascular event, probably because of the hemorrhagic lesion seen on brain CT scan and MRI. Factors to consider in the initial treatment include the severity of the maternal symptoms, the extent and location of the tumor, the gestational age of the fetus and, ultimately, the wishes of the patient. Corticosteroids are usually administered to reduce intracranial inflammation while also protecting against complications associated with fetal prematurity. Dexamethasone has traditionally been used to reduce brain edema. It is safe to use in an acute setting, but its chronic use may be harmful to the fetus as it may cause hypoadrenalism (El Saayed et al., 2013). The patient also presented with acute symptomatic seizures. Seizures are a common symptom of a brain tumor, with estimates ranging from 30-50% of patients (Yuen et al., 2011). Weighing the risks and benefits of treating seizures with anticonvulsants, it is recommended to use them in this setting to avoid seizures that may lead to maternal and fetal hypoxia and acidosis (El Saayed et al., 2013). The benefit of anticonvulsants outweighs the risks of teratogenicity, especially when the patient is beyond the first trimester. This patient was treated with carbamazepine 100 mg every eight hours. More definitive treatments such as craniotomy, radiation, and chemotherapy must be chosen individually. The benefits and risks to the mother and fetus should be assessed before making treatment decisions (McKenzie et al., 2005). Treatment should always primarily focus on preserving the mother's life and, secondarily, the life of the embryo. The surgical decision should be tailored to each patient according to the circumstances. Still, delivery should be performed whenever possible once the fetus weighs 1 kg, calculated by ultrasound, which corresponds to the gestational period between the 26th and 30th weeks. Following this period, there is a 90% or greater chance of the infant being born healthy. If delivery is performed around the 25th week, the fetus has less than a 50% chance of surviving, and before the 22nd week, only 5% (Lynch et al., 2011). The timing of choice for neurosurgical intervention and delivery will depend on three factors: the severity of neurological symptoms, the gestational age of the embryo, and the presumed histology of the tumor (Lynch et al., 2011). In 2000, Tewari et al. suggested a management algorithm for symptomatic brain tumors in pregnancy. The algorithm was based on the information published on this topic during the previous 50 years. A woman presenting during the first or early second trimester of pregnancy whose condition is stable should continue with the pregnancy if she wishes. Neurosurgery or radiotherapy may be considered in the early second trimester. Similarly, a woman who presents in the late second or early third trimester and is stable should continue with the pregnancy. Antepartum fetal surveillance must be initiated; once fetal maturity is documented, the woman should be delivered and appropriate treatment given (tumor resection, radiation). A woman who has worsening symptoms, manifests new deficits, or has evidence of increased tumor growth or metastases may be offered radiotherapy.
Delivery should occur once fetal maturity is established, followed by tumor resection. An urgent cesarean delivery under general anesthesia should be accomplished for the woman with acute deterioration or mental status changes, followed by cerebral decompression and tumor resection (McKenzie et al., 2005). When the diagnosis is made at term, women should be delivered expeditiously and, in the presence of a mass effect, preferably by cesarean under general anesthesia. This would theoretically decrease the risk of cerebral herniation. Vaginal delivery should be reserved for clinically stable women. Avoiding regional anesthesia is prudent due to the dangers of cerebral herniation with the placement of an epidural catheter. Tumor resection may be performed at delivery or in the postpartum period (McKenzie et al., 2005). The patient was discharged after rehabilitation therapy had been conducted three times, but her muscle strength was still zero. Two weeks after discharge, she returned with improvement: her muscle strength was 333 in the left lower limbs and 444 in the left upper limbs. The remarkable improvement in this patient was attributed to corticosteroid therapy. With advances in the medical and surgical treatment of metastatic brain tumors, it is sometimes difficult to predict the life expectancy of patients with the disease. The following factors have been determined to be prognostically favorable in patients with brain metastasis: a high Karnofsky Performance Scale score, solitary brain metastases, an absence of systemic metastases, a controlled primary tumor, and a younger age (<60-65 years) (Dorai et al., 2010). The prognosis of this patient remained unclear since the true etiology of her fits had not been established, and more examinations are needed. Repeated brain MRIs should be conducted for evaluation, along with a search for the primary tumor if the lesions are brain metastases. These investigations face challenges, too, since she is still pregnant, and examinations such as chest X-ray and abdominal imaging should be postponed. Brain MRI is scheduled for her sixth month of pregnancy. IV. Conclusion 1. Brain tumor during pregnancy is extremely rare. 2. Regular physiologic changes associated with pregnancy, combined with pathophysiologic processes unique to pregnancy, predispose women to developing brain tumors during pregnancy. 3. Corticosteroids had to be given in this patient to reduce brain edema. 4. The patient had an unclear prognosis since the true etiology of the early-onset seizures in this patient has not been established.
2022-03-31T16:29:12.726Z
2022-03-16T00:00:00.000
{ "year": 2022, "sha1": "07435a5d45bdbf9e5eb869b4c16014096a451e24", "oa_license": "CCBY", "oa_url": "https://valleyinternational.net/index.php/ijmsci/article/download/3408/2320", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ca8acb3fb2ee121f9fd539c8763e1bdc72a5e27f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
260908651
pes2o/s2orc
v3-fos-license
Fatty acids composition and profiling of nine abundant marine Macroalgae, Egypt This study analyzed, qualitatively and quantitatively, the fatty acid composition and profiles of nine abundant macroalgal specimens collected from the Egyptian coasts. GC-MS analysis identified 23 types, including 13 saturated fatty acids (SFAs) and 10 monounsaturated fatty acids (MUSFAs). SFAs dominated at 78%, MUSFAs accounted for 22%, and other UFAs were negligible at 0.01%. The MUSFA oleic acid (omega-9) was present in all species except the green macroalgae Galaxura rugosa and Ulva fasciata, where it was replaced by linoleic acid (omega-6). Oleic acid methyl ester (omega-9) was registered in all the studied species except the red Hypnea cornuta and Jania rubens, and the brown Hormophysa cuneiformis. Chlorophyta registered 35% of the fatty acid composition, followed by Rhodophyta (33%) and Phaeophyta (32%). The major fatty acids were palmitic acid glycidyl ester, oleic acid glycidyl ester and palmitic acid methyl ester, together comprising over half of the total fatty acids. Red and brown macroalgae were richer in palmitic and oleic glycidyl esters, while green macroalgae had more palmitic acid methyl ester. Linoleic acid, nonadecylic acid, elaidic acid methyl ester, linoleic acid methyl ester, behenic acid, pentacosylic acid, palmitic acid, and trans-palmitoleic acid were identified exclusively in Chlorophyta. Lacceroic acid was distinctive of Rhodophyta, whereas pelargonic acid appeared only in the brown alga Turbinaria turbinata. The maximum values of fatty acids were recorded in the green macroalga Caulerpa racemosa, while the red macroalga Hypnea cornuta had the minimum. The research sheds light on the fatty acid composition of these species and its potential implications for human health and nutrition. Introduction Macroalgae play a vital role in marine ecosystems, serving as essential biological resources. As primary producers, they contribute significantly to the diversity and productivity of marine communities. Moreover, they offer food and shelter to various marine organisms across different life stages [1]. In a study conducted by Sohrabipour et al. [2], the significance of macroalgae from three divisions, namely green (Chlorophyta), brown (Phaeophyta), and red (Rhodophyta), was evaluated in terms of their fatty acid content and potential therapeutic effects for treating certain human diseases. The research explored how these algae species could potentially serve as valuable resources for medical applications.
Macroalgae possess specific characteristics in their fatty acid composition. Typically, their fatty acids have linear chains, an even number of carbon atoms, and one or more double bonds [4]. Of particular importance is eicosapentaenoic acid (EPA, C20:5n-3), an essential fatty acid found abundantly in macroalgae. Red and brown algae are notably rich in both eicosapentaenoic acid (EPA) and arachidonic acid (AA). Conversely, green macroalgae, like Ulva, predominantly contain hexadecatetraenoic, oleic, and palmitic acids, along with significant levels of polyunsaturated fatty acids (PUFAs), such as linoleic acid (18:2n-6) and α-linolenic acid (18:3n-3) [5]. Notably, the ratios of ω-6 to ω-3 and of PUFAs to SFAs found in red and brown algae are more favorable for human health compared to those present in green algae [3]. These findings suggest that incorporating red and brown macroalgae into the diet might have additional health benefits due to their desirable fatty acid profiles. The coastlines of both the Mediterranean Sea and the Red Sea, particularly around the Suez Canal, hold significant importance. Migration of marine organisms is a biological necessity, occurring in both space and time. Biota of Indo-Pacific origin have exhibited changes over time and space, emphasizing the importance of conserving and developing these resources for future generations [6]. Researchers have conducted studies on the seasonal and spatial changes in macroalgal vegetation and their nutritional composition in the Red Sea [7][8]. The nutritional composition of seaweed varies with seasonal fluctuations in environmental conditions. El-Manawy et al. [8] revealed that the Egyptian Red Sea coast serves as a valuable source of fiber, minerals, carbohydrates, proteins, and fatty acids. This study aims to assess the total fatty acid composition, qualitatively and quantitatively, as well as the fatty acid profiles of nine abundant marine macroalgae collected from the Egyptian coasts. Area of study Marine macroalgae were harvested in the intertidal zone during low tide in November-December 2022 from three sites. The collection sites, areas and their coordinates (Figure 1) were as follows: The collected samples were first washed with seawater to remove epiphytes and other marine organisms. After that, the macroalgal species were transported to the laboratory in sterile polythene bags and identified using references [9][10][11][12]. Next, the samples were rinsed with tap water and then thoroughly washed with distilled water to eliminate salt, epiphytes, and sand particles. The samples were air-dried at room temperature in a shaded area. Once completely dry, the samples were cut into small pieces and further processed into powder using a mixer grinder. This preparation ensured the removal of impurities and the transformation of the macroalgae into a powdered form for subsequent analyses. Extraction of Fatty acids The extraction of fatty acids from the samples followed the Folch method [13]. For each sample, one gram was extracted using a solvent mixture of chloroform:methanol in a ratio of 2:1 (v/v). To induce phase separation, an equal volume of chloroform and water (1:1 v/v) was added. The lower phase was collected and subsequently dried under nitrogen [14] for further fatty acid analysis [15]. This method facilitated the efficient extraction of fatty acids from the samples, making them ready for subsequent analysis.
Gas chromatography-mass spectrometry (GC-MS) analysis For fatty acid composition analysis, a Trace GC1310-ISQ mass spectrometer (Thermo Scientific, Austin, TX, USA) equipped with a direct capillary column TG-5MS (30 m x 0.25 mm x 0.25 µm film thickness) was used. The column oven temperature was initially set at 35°C and then increased at a rate of 3°C/min until reaching 200°C, where it was held for 3 minutes. Subsequently, the temperature was further increased to the final value of 280°C at a rate of 3°C/min and held for 10 minutes. The injector and MS transfer line temperatures were maintained at 250°C and 260°C, respectively. Helium was used as the carrier gas at a constant flow rate of 1 ml/min. For analysis, 1 µl of each diluted sample was automatically injected using an Autosampler AS1300 coupled with the GC in split mode, with a solvent delay of 3 minutes. Electron impact (EI) mass spectra were collected at 70 eV ionization voltage over the range of m/z 40-1000 in full scan mode. The ion source temperature was set at 200°C. Identification of components was performed by comparing their retention times and mass spectra with those in the WILEY 09 and NIST 11 mass spectral databases. Results The fatty acid composition of the nine investigated macroalgal samples, together with the lipid number (No), structural formula, retention time (RT) and molecular weight (MW) of each fatty acid, is summarized in Table 1. GC-MS analysis recorded 23 types of fatty acids, including 13 SFAs and 10 MUSFAs. Lipid numbers ranged from enanthic acid (C7:0) to lacceroic acid (C32:0). Figure 2 illustrates the fatty acid profile, qualitatively and quantitatively, for each of the nine specimens. The SFAs consisted of enanthic acid, pelargonic acid, myristic acid, palmitic acid, palmitic acid methyl ester, palmitic acid ethyl ester, stearic acid, palmitic acid glycidyl ester, glycerol 1-palmitate, arachidic acid, behenic acid, pentacosylic acid and lacceroic acid. The unsaturated fatty acids comprised elaidic acid methyl ester and nine MUSFAs, namely trans-palmitoleic acid, palmitoleic acid methyl ester, oleic acid, oleic acid methyl ester, nonadecylic acid, linoleic acid methyl ester, stearic acid methyl ester, linoleic acid and oleic acid glycidyl ester. The percentage of fatty acid composition across the studied macroalgae divisions (Figure 3) was highest in Chlorophyta (35%), followed by Rhodophyta (33%), and finally Phaeophyta (32%). In general, the amounts of fatty acid types varied notably among the tested species. SFAs constituted the most abundant fatty acids in the studied species, representing 78%, while MUSFAs showed lower values (22%). Other UFAs registered a negligible content of 0.01% (Figure 4). The first major fatty acids found in all the selected species were palmitic acid glycidyl ester (180.94%) > oleic acid glycidyl ester (48.27%) > palmitic acid methyl ester (43.69%), as shown in Figure 5; together they constituted more than half of the total fatty acid content. Palmitic acid glycidyl ester and oleic acid glycidyl ester were mostly increased in red and brown macroalgae, while palmitic acid methyl ester was concentrated in green macroalgae, as recorded in Table 1. The second major fatty acids, namely the SFA stearic acid and the MUSFA stearic acid methyl ester, fluctuated among the studied species, accounting for 29% and 27.03% of the total fatty acid composition, respectively. These two fatty acids were found in all samples except the red macroalga H. cornuta, which lacked stearic acid, and the brown alga H.
cuneiformis, which lacked stearic acid methyl ester. The third major fatty acids ranked as follows: oleic acid (17.73%) > oleic acid methyl ester (8.85%) > arachidic acid (8.07%) > lacceroic acid (5.79%) > palmitic acid ethyl ester (5.04%) > glycerol 1-palmitate (3.31%). In addition to the major fatty acids in the different samples, linoleic acid, nonadecylic acid, elaidic acid methyl ester, linoleic acid methyl ester, behenic acid, pentacosylic acid, palmitic acid and trans-palmitoleic acid were characterized only in Chlorophyta. Lacceroic acid (C32:0) was distinguished in Rhodophyta, while pelargonic acid (C9:0) appeared only in Phaeophyta, in T. turbinata. Discussion The fatty acid content in the studied macroalgae divisions was almost the same, with Chlorophyta at a slightly higher level, followed by Rhodophyta and finally Phaeophyta. The selection of these species aims to encompass a wide range of macroalgal functional groups and provide valuable insights into their ecological diversity, distribution and importance along the Egyptian coasts. In general, the amounts of fatty acid types varied remarkably among the tested species. The fatty acid composition of algal lipids varies widely with species, habitat, light, salinity, pollution and environmental conditions [16]. In this study, SFAs constituted the most abundant fatty acids in the studied samples, representing 78%, exceeding the MUSFAs, whereas other UFAs registered a negligible content. The total values of fatty acids were recorded as follows: C. racemosa > G. rugosa > J. rubens > T. turbinata > P. myrica > U. fasciata > H. tuna > H. cuneiformis > H. cornuta. The present data reflect clearly distinguishable fatty acid profiles with high levels of the SFA palmitic acid glycidyl ester, the MUSFA oleic acid glycidyl ester and the SFA palmitic acid methyl ester, which were found in all the selected species and together accounted for more than half of the total fatty acid content. Palmitic acid glycidyl ester represented the first major SFA. Palmitic acid glycidyl ester and oleic acid glycidyl ester were mostly increased in red and brown macroalgae, while palmitic acid methyl ester was concentrated in green macroalgae. In most previous studies, palmitic acid is predominant in seaweeds [17][18]. The SFA stearic acid and the MUSFA stearic acid methyl ester were considered the second major fatty acids, found in the nine species except the red macroalga H. cornuta and the brown alga H. cuneiformis, respectively. The third major fatty acids were established in this order: oleic acid > oleic acid methyl ester > arachidic acid > lacceroic acid > palmitic acid ethyl ester > glycerol 1-palmitate. Linoleic acid, nonadecylic acid, elaidic acid methyl ester, linoleic acid methyl ester, behenic acid, pentacosylic acid, palmitic acid and trans-palmitoleic acid were characterized only in Chlorophyta. Lacceroic acid was distinguished in Rhodophyta, while pelargonic acid appeared only in Phaeophyta, in T. turbinata. This is in agreement with many studies which have demonstrated that fatty acid profiles are specific to taxonomic groups [19][20][21][22]. Oleic acid and oleic acid methyl ester in the studied species were regarded as a biosource of omega-9 (ω-9). Linoleic acid gives the green U. fasciata particular potential as a source of omega-6 (ω-6) [23]. El Shoubaky et al. [18] mentioned that Ulva fasciata is characterized by containing high levels of the most biologically active fatty acids, such as oleic acid and linoleic acid.
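As a worked illustration of how the class-level shares reported above (78% SFA, 22% MUSFA, and a negligible share of other UFAs) can be derived from GC-MS output, the sketch below sums area% values by fatty-acid class and normalizes them. A few of the per-acid totals quoted above are reused; the rest of the small table (and its completeness) is an illustrative placeholder, not the paper's Table 1 data.

```python
# Illustrative sketch (partly placeholder values, not the full Table 1):
# derive class-level percentage shares by summing GC-MS area% per class.
from collections import defaultdict

# (fatty acid, class, summed area% across the nine species)
area_percent = [
    ("palmitic acid glycidyl ester", "SFA",   180.94),
    ("stearic acid",                 "SFA",    29.00),
    ("oleic acid glycidyl ester",    "MUSFA",  48.27),
    ("oleic acid",                   "MUSFA",  17.73),
    ("elaidic acid methyl ester",    "UFA",     0.04),  # assumed value
]

totals = defaultdict(float)
for _name, fa_class, area in area_percent:
    totals[fa_class] += area

grand_total = sum(totals.values())
for fa_class, subtotal in totals.items():
    print(f"{fa_class}: {100 * subtotal / grand_total:.2f}%")
```

With the complete Table 1 values in place of these placeholders, the same normalization would reproduce the SFA/MUSFA/UFA breakdown shown in Figure 4.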
Palmitic acid, palmitoleic acid and oleic acid, as well as their esters, in the macroalgae samples might represent a useful source of food or food supplements. Moreover, they exhibit strong antimicrobial activity against oral microorganisms such as Streptococcus mutans, Candida albicans, Aggregatibacter actinomycetemcomitans, Fusobacterium nucleatum, and Porphyromonas gingivalis. MUFA derivatives of C18 and C16 fatty acids may aid in resisting many pathological conditions such as cardiovascular diseases and cancer [24][25]. Moustafa and Batran [26] mentioned that C18-PUFAs acquire special importance in the nutrition of humans and other vertebrates, which are not able to synthesize them [27][28]. Erkkila et al. [29] stated that several studies have established an inverse correlation between PUFA/SFA ratios and cardiovascular diseases and suggested that replacement of SFA with PUFA in the human diet would decrease such health problems. In this study, the ratio of SFA to MUSFA was found to be higher in all analyzed species and may be relevant to cardiovascular diseases. These findings highlight the importance of fatty acid composition in macroalgae and its potential implications for human health and nutrition. Disclosure of conflict of interest No conflict of interest to be disclosed.
Figure 1: Map of Egypt showing the study area and the selected sites.
Figure 2: GC-MS analysis of fatty acids in macroalga samples (1-9) showing relative abundance and retention time.
Figure 3: Percentage of fatty acids in the studied macroalgae divisions.
Figure 5: The area % of each fatty acid type in all the selected species.
Table 1: Area % of the fatty acids in the selected species and their lipid number (No), structural formula, retention time (RT) and molecular weight (MW).
2023-08-16T15:23:01.085Z
2023-08-30T00:00:00.000
{ "year": 2023, "sha1": "c62f212c32df5cd7460046a6a9a0553e3dfd81f3", "oa_license": "CCBY", "oa_url": "https://gsconlinepress.com/journals/gscbps/sites/default/files/GSCBPS-2023-0311.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fb5f95c1edd81c3ae04f1e850704e908d41db46f", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
81529598
pes2o/s2orc
v3-fos-license
Dengue, bleeding and non-steroidal anti-inflammatory drugs Dengue, which was serologically first confirmed in 1962 in Sri Lanka, has become endemic since 1989 3 . The magnitude of Dengue epidemics in Sri Lanka has continued to increase stepwise with regular epidemics 4 and currently it has become a major health hazard with high morbidity 5 . Since 2009, the annual incidence rate has been more than 150 per 100,000 population 6 . In 2017, this increased to very high levels, with 75,000 cases recorded within the first six months of the year. Introduction Dengue is the most rapidly spreading mosquito-borne viral disease in the world 1 . It is estimated that between 50 and 100 million cases of DF and several hundred thousand cases of DHF occur each year, depending on the epidemic activity 2 . In the last 50 years, incidence has increased 30-fold with increasing geographic expansion to new countries and from urban to rural settings 1 . More than 100 tropical countries have endemic Dengue virus infections. This includes most Asian countries 2 , including Sri Lanka. Situation in Sri Lanka Dengue, which was serologically first confirmed in 1962 in Sri Lanka, has become endemic since 1989 3 . The magnitude of Dengue epidemics in Sri Lanka has continued to increase stepwise with regular epidemics 4 and currently it has become a major health hazard with high morbidity 5 . Since 2009, the annual incidence rate has been more than 150 per 100,000 population 6 . In 2017, this increased to very high levels, with 75,000 cases recorded within the first six months of the year. 1 Consultant Physician, National Institute of Infectious Diseases, Colombo, Sri Lanka. Dengue classifications The widely used classification of Dengue, introduced by the WHO in 1997, divided Dengue into two main groups, i.e. Dengue Fever (DF) and Dengue Haemorrhagic Fever (DHF). However, another classification was proposed by the WHO in 2009 (usually called the WHO/TDR classification), where Dengue was divided into three categories: 1. Dengue without warning signs; 2. Dengue with warning signs; 3. Severe dengue. The need for reclassification was due to the strict nature of the definition of DHF, where four essential criteria need to be fulfilled. In a study of 106 clinically diagnosed Dengue patients, published in the Journal of the Ceylon College of Physicians in 2011, we showed that of 28 clinical DHF cases only five (17.9%) fulfilled all four WHO (1997) criteria for DHF, whilst 23 (82.1%) did not fulfil all four criteria though they developed detectable effusions/ascites 7 . The new WHO classification of 2009 introduced the concept of warning signs. Our study showed that this is very useful, as all but one patient with DHF had at least one warning sign (Table 1). According to the WHO/TDR classification, patients with warning signs require strict observation. Since these patients form the majority, compliance with the WHO/TDR guidance would probably overburden clinical facilities. We pointed this out in an article written for the WHO South-East Asia Journal of Public Health in 2014 8 . It was against this backdrop that, in 2011, the WHO SEARO group modified the 1997 criteria. This was, in fact, an improvement; the main changes were the diagnosis of plasma leakage by evidence of pleural effusion and/or ascites, and the inclusion of two groups, namely unusual manifestations and Dengue with bleeding.
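As an aside on why the 1997 definition is strict, the sketch below encodes the four DHF criteria as they are commonly summarized (fever, haemorrhagic manifestations or a positive tourniquet test, platelet count ≤100,000/mm³, and evidence of plasma leakage). The function and field names are ours, and the thresholds should be checked against the full guideline rather than taken from this sketch.

```python
# Minimal sketch of the four 1997 WHO DHF criteria as commonly summarized;
# names and thresholds are illustrative, not a clinical implementation.
def meets_dhf_1997(fever: bool,
                   haemorrhage_or_tourniquet: bool,
                   platelets_per_mm3: int,
                   plasma_leakage: bool) -> bool:
    """DHF requires ALL four criteria; failing any one rules it out."""
    return (fever
            and haemorrhage_or_tourniquet
            and platelets_per_mm3 <= 100_000
            and plasma_leakage)   # e.g., HCT rise or effusions/ascites

# The strict conjunction illustrates why only 5 of 28 clinical DHF cases
# in the 2011 study satisfied the full definition.
print(meets_dhf_1997(True, True, 85_000, False))  # -> False
```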
In "Expediency of dengue illness classification: the Sri Lankan perspective" published in WHO South-East Asia Journal of Public Health in 2014 8 we pointed out that the 1997/2011 WHO classification identifies DF and DHF as two distinct entities; by contrast, the 2009/12 W HO/TDR classification considers that "dengue is one disease entity with different clinical presentations and often with unpredictable clinical evolution and outcome" 8 . W e further pointed out that the major advantage of the 1997/2011 WHO classification, which has helped to guide clinicians to manage patients without allowing complications of severe dengue to arise, making DHF a "predictably treatable illness" is the fact that concurrent rise in haematocrit is recommended to differentiate DF from DHF 8 . Therefore, we find, WHO/SEARO 2011 classifi- Changes in the virus serotype Dengue virus has four serotypes. While all were found to be present in Sri Lanka, predominant serotype varied from time to time. However, analysis of serotypes in early 2016 and early 2017 showed a drastic change in circulating serotype. Analysis of 75 randomly selected patients admitted to National Institute of Infectious Diseases in January 2016 showed 92% had serotype 1 whereas a similar analysis done in January 2017 of 48 patients showed complete replacement of serotypes by Dengue serotype 2. We postulate that this is one of the reasons for the massive outbreak of Dengue we are experiencing at present (2017) in Sri Lanka. Demography In South-East Asian countries, where all the serotypes (DENV-1-4) are circulating, DF was typically acknowledged to be a disease of early childhood, while clinical DF in adults was rare. However, there is evidence of increase of dengue incidence in older age groups, and this age shift has been reported in Singapore, Indonesia, Bangladesh and Thailand 9 . Similarly, in Sri Lanka also Dengue had been predominantly a paediatric illness. In 80's and early 90's the age distribution pattern of Dengue patients in Sri Lanka showed that two thirds of cases were children under 15 years with a peak incidence in the 5-9-year age group 3 . However, even then, a significant number of cases were noted in the 15-29-year age group, especially between 15-19 years 3 . By now, the modal age group affected by dengue has shifted from <15 years of age to 15-34 years of age 9 , with more than 75% of cases occurring in the age group above 15 year of age 6 . Differences in children and adults with Dengue Symptoms and risk factors for Dengue haemorrhagic fever (DHF) and severe Dengue differ between children and adults. In a three year study, where the severity of Dengue was investigated in infants, children and adults, it was found that frequency of internal bleeding was significantly higher in adults compared to other two groups 10 . In a study of Dengue deaths in a Malaysian hospital, 80% of deaths were among adults 11 . Another prospective descriptive study of 947 children and 738 adults in Vietnam, found that plasma leakage and shock were more common and severe in children than in adults, while bleeding and organ dysfunction were more frequent in adults 12 . In the 2001 epidemic in Chonburi, Thailand, clinical bleeding was significantly more frequent in adults 13 . In a large retrospective study done in Singapore, 1,035 (14.8%) adults had severe dengue according to WHO 2009 criteria and of these, 40% had severe bleeding 14 . Bleeding is a frequent cause of severe dengue illness, especially in adults 15 . 
Petechiae, epistaxis and menorrhagia have been observed frequently in adults with DF or DHF, although upper gastro-intestinal (GI) bleeding is the most common type of severe haemorrhage. Menorrhagia is common in female adults with DF/DHF. Bleeding into sites such as the subcapsular region of the spleen, and splenic rupture, have also been reported in adults with dengue infection 16 , and such bleeding can be occult. A Taiwanese study showed that massive gastrointestinal bleeding accounted for 40% of dengue haemorrhagic fatalities 17 , and similar findings were reported in a mortality study in Singapore 18 . Severe but occult bleeding can be difficult to recognize. Many patients with severe bleeding have initial or ongoing plasma leakage that keeps the HCT in the normal range despite ongoing bleeding 15 . Therefore, haematocrit may not be a sensitive marker of plasma leakage in dengue with severe bleeding 11,15 . The ability to identify patients at high risk of progression to severe disease, who are likely to benefit from close observation and early intervention with supportive therapy, has become the focus of intense research efforts in recent years 19 . This has become important because the vast majority of symptomatic infections do not progress to severe disease, but, as progression is unpredictable, monitoring of large numbers of patients in seasonal epidemics overwhelms health service capacity in many Dengue-prevalent areas. Most studies have focused on paediatric patients; data for adult Dengue patients are sparse. In addition, most studies used expensive, advanced laboratory methods, which are not suitable for limited-resource settings 21 . Several retrospective studies have attempted to identify the predictors of bleeding in adults with Dengue; however, prospective studies are few and involved only small numbers of patients 20,22 . Hence we conducted a prospective study to determine clinical parameters which could easily be used to identify Dengue patients who are likely to develop bleeding. The results of this study were presented at the Annual Conference of the American Society of Tropical Medicine and Hygiene in 2016. In this study, all patients admitted to the Dengue Management Unit at the National Institute of Infectious Diseases, Colombo, over four months from 1st July 2014 were included. Dengue was confirmed by NS1 antigen or Dengue-specific IgM antibodies. These patients were followed up to observe the development of bleeding, the possible effects of bleeding and the need for blood transfusion. The association of various parameters with bleeding is shown in Table 3. Our study showed that female gender and obesity (BMI >27.5) are characteristics significantly associated with bleeding. Female sex has been identified as a risk factor for bleeding in other studies as well 23 . Though it is an established fact that obese patients are more prone to develop plasma leakage, the association of bleeding with obesity has not been described before. Severe or persistent vomiting, abdominal pain, postural dizziness and the use of NSAIDs during the illness were also significantly associated with bleeding. Vomiting and abdominal pain 24 were identified as predictors of bleeding in previous studies, but postural dizziness and NSAID use were not. Postural dizziness would indicate intravascular volume depletion and should alert the medical staff. NSAID use is an avoidable risk factor. Hence the identification of these factors is very important and useful.
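To make the identified predictors concrete, the sketch below collects them into a simple screening helper. The field names, the BMI cut-off handling and the list-of-flags output are our own illustration; this is not a validated clinical score from the study.

```python
# Hypothetical screening helper based on the factors this study associated
# with bleeding in adult dengue; an illustration, not a validated score.
def bleeding_risk_flags(female: bool, bmi: float, severe_vomiting: bool,
                        abdominal_pain: bool, postural_dizziness: bool,
                        nsaid_use: bool) -> list:
    """Return the study-associated risk factors present for one patient."""
    flags = []
    if female:
        flags.append("female sex")
    if bmi > 27.5:                    # obesity cut-off used in the study
        flags.append("obesity (BMI > 27.5)")
    if severe_vomiting:
        flags.append("severe/persistent vomiting")
    if abdominal_pain:
        flags.append("abdominal pain")
    if postural_dizziness:
        flags.append("postural dizziness")    # suggests volume depletion
    if nsaid_use:
        flags.append("NSAID use")             # the one avoidable factor
    return flags

print(bleeding_risk_flags(True, 29.1, False, True, False, True))
```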
In Sri Lanka, the National Guidelines on the management of Dengue Fever and Dengue Haemorrhagic Fever recommend against using Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) in patients with Dengue. NSAIDs are, however, available 'over the counter', and therefore the use of NSAIDs as a remedy for fever and body aches has become a practice among the general population. The gastrointestinal consequences of nonsteroidal anti-inflammatory drugs (NSAIDs) are the best recognized iatrogenic problem in clinical medicine 27 . In the majority of patients, NSAID-induced gastroduodenal mucosal injury is superficial and self-limited. However, peptic ulcers develop in some patients, and they may lead to gastroduodenal hemorrhage, perforation, and death 28 . In addition, NSAIDs can cause damage to the more distal parts of the small intestine, and there are many case reports suggesting that they can affect the large intestine, causing colitis and colonic perforation 28,29 . NSAIDs are estimated to cause 3500 hospitalizations and 400 deaths among the 1.5 million NSAID-taking people above the age of 60 years in the United Kingdom 29 . There is clear evidence that severe complications of peptic ulcer disease are often associated with recent NSAID consumption 29 . While the concomitant use of normal doses of H-2 receptor blockers does not effectively prevent NSAID-induced peptic ulcer disease, proton pump inhibitors are unlikely to reduce lower GI complications 27 . Although 'serious systemic illnesses' is listed as one of the risk factors for the development of NSAID-induced peptic ulcers, 28 Dengue illness has not been identified as one. Hepatotoxicity is another uncommon but potentially lethal complication of NSAIDs, and it can occur with all NSAIDs. They exhibit a broad spectrum of liver damage, ranging from asymptomatic, transient hypertransaminasemia to fulminant hepatic failure 30 . However, the risk of developing acute liver injury is very low in epidemiological studies, though diclofenac is known to carry an excess risk 31,32,33 . Though hepatotoxicity usually occurs 12 weeks after initiation of therapy, it can occur at any time after drug administration 30 . Several risk factors that make people more prone to developing NSAID-induced hepatotoxicity have been identified, but the relationship with Dengue has not been assessed. Therefore, it is prudent to determine the actual role of NSAIDs in bleeding manifestations and liver damage in Dengue infection. This represents, to our knowledge, the first study done on the effects of NSAIDs on bleeding and the liver in Dengue infection. A careful literature search on Medscape did not reveal any previous studies done on this subject. Even the WHO guidelines on the management of Dengue do not give any supporting evidence for the recommendation not to use NSAIDs in Dengue infection. Findings of this study were presented at the 17th International Congress on Infectious Diseases (ICID) in 2016 and received the ProMED award. A prospective case-control study design was selected for this study, as a randomized controlled trial could raise ethical issues since treatment with NSAIDs is, in effect, not recommended, even though there is no evidence for the recommendation. The study population comprised Dengue patients aged above 12 years, admitted to the hospital from 1st June 2014 to 30th September 2014.
Patients who were admitted to the Dengue Management Unit (DMU) with fever of acute onset with myalgia and headache, and who were confirmed as having Dengue infection, were included in the study. Dengue was confirmed by positive NS1 antigen or Dengue-specific IgM antibodies, or both. Patients who were on aspirin, clopidogrel or heparin were excluded, since those drugs are known to increase the risk of bleeding manifestations. Patients who had been treated (or self-treated) with NSAIDs during the febrile phase before admission to hospital were taken as cases, and those who did not take NSAIDs were taken as controls. Data were collected using an interviewer-administered questionnaire. Patients were questioned about the medicines they took prior to admission to the hospital. The prescriptions given by general practitioners were reviewed and the medicines the patients brought were inspected. If there was definitive evidence of NSAID intake, they were categorized as 'cases', and if there was definitive evidence of not taking NSAIDs, they were categorized as 'controls'. When it was uncertain whether the patients had taken NSAIDs or not, they were not taken into the study analysis. NS1 and/or Dengue IgM antibody tests were done in all patients. All patients were managed in a single unit according to the National Guidelines of Sri Lanka, and therefore the management was uniform. The patients were regularly reviewed with monitoring of vital signs. Full blood counts were done daily and haematocrit was done 6-hourly. Regular ultrasound scans of the chest and abdomen were done to detect plasma leakage. If plasma leakage or bleeding was detected, monitoring was intensified for the next 48 hours or more. Serum aspartate transaminase (AST) and alanine transaminase (ALT) levels were measured daily. Rises in AST and ALT levels and the presence of bleeding manifestations were taken into consideration in the analysis, as those are the commonest recognized effects of NSAIDs relevant to Dengue infection. Minor bleeding manifestations such as a positive tourniquet test, petechial bleeding, gum bleeding, occasional epistaxis, haemoptysis and haematuria, which are common in both DF and DHF, were not considered for analysis. Major bleeding such as hypermenorrhea, severe epistaxis, intermenstrual bleeding, haematemesis, melaena, and occult bleeding was considered for analysis. A reduction in haematocrit with unstable vital signs necessitating blood transfusion was taken as evidence of occult bleeding. Results All patients serologically confirmed as having Dengue and admitted to the Dengue Management Unit (DMU) from 1st June to 30th September were prospectively studied. The total study population was 1000 patients, with 546 males and 456 females. Age ranged from 12 to 86 years; the mean age was 31 years. 30.1% were between 21-30 years of age while 28.3% were between 12-20 years. None were on warfarin, heparin or clopidogrel. In a random sample of 18 cases, 15 were due to DENV-1 serotype infection and 2 were due to DENV-4 serotype (one patient could not be serotyped). 562 (56.2%) had DF; 438 (43.8%) had DHF. The higher number of DHF cases was due to the fact that DF patients were admitted to other wards and only those who needed close monitoring were admitted to the Dengue Management Unit.
Out of 1000 patients, 6.5% (n = 65) had definitely been treated with NSAIDs prior to admission, while 57.7% (n = 577) had definitely not used NSAIDs; for the rest (35.8%), NSAID use was uncertain. The mean age of the NSAID group was 33.57 years (SD 17.124), while it was 31.34 years (SD 14.419) in the non-NSAID (control) group. The male:female ratio was 1:1 in the NSAID group and 1:0.92 in the non-NSAID group. Major bleeding occurred in 44.6% of patients who had taken NSAIDs and in 30.32% of patients who had not (P<0.024; odds ratio 1.818, 95% CI 1.075-3.076). When this was analyzed separately in the DF and DHF groups, the results were still significant: among DF patients, 28% developed bleeding in the non-NSAID group while 36.7% had bleeding in the NSAID group (p<0.05); among DHF patients, 33.92% had bleeding in the non-NSAID group while 51.42% had bleeding in the NSAID group (p<0.05). A higher percentage of NSAID users (9.2%) required blood transfusions compared to non-NSAID users (6.2%), though this was not statistically significant. NSAID use is associated with a wide array of alterations in gastrointestinal tract integrity and function; among the most common of these are hemorrhagic gastric erosions. It has also been recognized that NSAIDs can damage more distal regions of the small intestine and also the colon 29 . Clinically significant gastric bleeding from erosions is very rare in patients without clotting or platelet impairment 31 . Thrombocytopenia is universal in DHF and is also seen in many patients with DF. Furthermore, platelet function is abnormal in dengue infections, with impaired platelet aggregation and markedly shortened platelet survival. In a study of 170 children with Dengue Shock Syndrome, abnormalities were seen in all the major pathways of the coagulation cascade (i.e., low levels of the natural anticoagulant proteins and increased levels of the major procoagulant and antifibrinolytic agents), even though serious bleeding manifestations were relatively infrequent 32 . In this context, it is clear that NSAIDs can lead to increased bleeding in patients with dengue. Our study confirmed this. In addition to the effect on bleeding, our study showed that the degree of hepatitis, as evidenced by the rise in liver enzymes, was significantly higher in those who had taken NSAIDs. In our study, 24.6% of NSAID users and 14.7% of non-NSAID users had ALT above 300 U/L (p<0.05; odds ratio 2.105, 95% CI 0.883-5.014). 1.538% of NSAID users and no non-NSAID users had ALT greater than 1000 U/L. 36.92% of the NSAID group and 23.74% of the non-NSAID group had AST greater than 300 U/L (p<0.05; odds ratio 2.195, 95% CI 1.025-4.700). 3.07% of the NSAID group and none of the non-NSAID group had AST greater than 1000 U/L. In a study of 1585 patients with dengue, the mean elevations of AST and ALT were 93.3 U/L and 86 U/L, respectively. Only 1.8% had an ALT rise of more than 10 times the upper limit of normal (ULN) and 3.4% had an AST rise of more than 10 times the ULN. Rises of more than 3 times the ULN in AST and ALT were seen in 16.1% and 11.1%, respectively 27 . The above findings make our study more significant. In the control group of our study, 14.8% had a rise of ALT above 300 U/L while 23.8% had a rise of AST above 300 U/L. These figures are higher than in Souza's study, probably because our study had a higher number of DHF patients. However, in NSAID users these figures were 25.0% and 37.5%, respectively. This was statistically significant. Furthermore, all patients who had an AST rise above 1000 U/L were those who had taken NSAIDs.
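As a worked example of the statistics quoted above, the sketch below computes an odds ratio with a Woolf (log-method) 95% confidence interval from a 2×2 table. The counts are reconstructed from the reported percentages (about 29/65 NSAID users versus about 175/577 non-users with major bleeding), so the output only approximates the published 1.818 (1.075-3.076).

```python
# Woolf (log) method for an odds ratio and its 95% CI from a 2x2 table.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed with/without event; c/d: unexposed with/without event."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Counts reconstructed from the reported percentages (approximate):
# 65 NSAID users, 44.6% bled -> ~29; 577 non-users, 30.32% bled -> ~175.
print(odds_ratio_ci(29, 65 - 29, 175, 577 - 175))
# -> roughly (1.85, 1.10, 3.11), close to the published 1.818 (1.075-3.076)
```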
This study provides hitherto unavailable evidence on the deleterious effects of NSAIDs in patients with dengue infection. Dengue patients who took NSAIDs during the course of the illness had an increased incidence of bleeding and also higher elevations of liver enzymes, indicating a more severe effect on the liver. Since bleeding and liver involvement are two complications which can lead to serious outcomes in dengue, the use of NSAIDs can make patients more liable to worse outcomes. Therefore, we recommend that NSAIDs should not be used in febrile patients when dengue is a possibility, and that during dengue outbreaks NSAIDs should not be used at all in any febrile patient. Next to blood and blood vessels, the liver is the commonest organ to be involved in Dengue. Liver injury itself can give rise to bleeding. Many factors are thought to contribute to liver dysfunction, including hypoxic injury due to decreased perfusion, direct damage by the virus and immune-mediated injury. We studied the patterns and causes (other than NSAIDs) of liver injury in acute dengue infection, and the results of this study were published in the journal BMC Infectious Diseases in 2016 33 . In this study, we sought to identify the pattern of change in liver enzymes throughout the illness and its association with the degree of viraemia, the onset and extent of plasma leakage, and inflammatory mediators. Serial daily blood samples were obtained from 55 adult patients with acute dengue from the time of admission to discharge, and liver function tests, viral loads and cytokines were assessed. The onset and extent of fluid leakage were measured by daily ultrasound examinations, and all clinical and laboratory features were serially recorded. The results of this study showed that aspartate transaminase (AST), alanine transaminase (ALT) and gamma-glutamyl transferase (GGT) levels were elevated in patients with dengue infection throughout the illness. The highest AST levels were seen on day 6 of illness, and both AST and GGT levels were significantly higher in patients with severe dengue (SD) when compared to those with non-severe dengue (NSD) on days 5 and 6 of illness. Three patients with SD had AST and ALT values of >1000 IU in the absence of any fluid leakage or a rise in the haematocrit (20%). The peak of the AST levels and the lowest serum albumin levels were seen 24 h before the maximum fluid leakage and 24 h after the peak in viraemia. Both serum IL-10 and IL-17 levels were elevated during early illness and were significantly higher in those with SD when compared to NSD. We found that dengue-associated liver injury appears to peak around days 6 and 7. Therefore, liver function tests done at earlier dates might not reflect the extent of liver involvement in acute infection. Since severe liver involvement can occur in the absence of fluid leakage, after the peak viraemia, and since it is associated with high IL-17 and IL-10 levels, an immune mechanism leading to hepatic damage is a possibility. Conclusion Dengue has reached epidemic proportions in many tropical countries in the world. A classification which is useful in clinical management would be the most appropriate, as it would reduce the burden on hospitals as well as enable clinicians to identify patients with complications early.
Our studies showed that the DF/DHF classification (WHO 1997/2011) is more useful than the Non-Severe Dengue/Severe Dengue (WHO/TDR 2009) classification. With changing demography, bleeding associated with Dengue has become more common. We have identified predisposing factors which could be used to identify possible bleeders in dengue. This would enable clinicians to monitor this group of patients more closely, so that bleeding can be detected early and necessary interventions taken promptly. We have also shown that liver damage may have an immune mechanism, opening doors for further studies into this area. More importantly, our studies showed conclusive evidence of the deleterious effects of NSAIDs in dengue. This gives enough strength to the available recommendations to prohibit the use of this easily avoidable risk factor.
2019-03-18T14:03:32.514Z
2017-12-27T00:00:00.000
{ "year": 2017, "sha1": "634f859b06479710d7301c24b04cce3f7bc8cf3a", "oa_license": "CCBY", "oa_url": "http://jccp.sljol.info/articles/10.4038/jccp.v48i2.7824/galley/5977/download/", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "dee20a4bedd7fef0ee7accbdd0dadca4c7486a80", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
202732361
pes2o/s2orc
v3-fos-license
Molecular characterization of nephron progenitors and their early epithelial derivative structures in the nephrogenic zone of the canine fetal kidney

Nephron progenitors (NPs) and nephrogenesis have been extensively studied in mice and humans and have provided insights into the mechanisms of renal development, disease and the possibility of NP-based therapies. However, molecular features of NPs and their derivatives in the canine fetal kidney (CFK) remain unknown. This study was focused on characterizing the expression of potential markers of canine NPs and their derivatives by immunofluorescence and western blot analysis. The transcription factors (TFs) SIX1 and SIX2, well-characterized human NP markers, were expressed in NPs surrounding the ureteric bud in the CFK. Canine NPs also expressed ITGA8 and NCAM1, surface markers previously used to isolate NPs from the mouse and human fetal kidneys. The TF PAX2 was detected in the ureteric bud, NPs and their derivative structures such as the renal vesicle and S-shaped body. This study highlights the similarities in dog, mouse and human renal development and characterizes markers to identify canine NPs and their derivatives. These results will facilitate the isolation of canine NPs and their functional characterization to develop NP-based therapies for canine renal diseases.

Introduction

Understanding of the cellular and molecular basis of renal development can provide insights into the mechanisms of renal disease and facilitate the development of novel regenerative medicine-based therapies. Renal development is a complex process that entails communication among multiple cell types. [1][2][3] At the initial stage of renal development, the metanephric mesenchyme (MM) containing nephron and stromal progenitors surrounds the ureteric bud. [1][2][3] The MM secretes factors that stimulate branching of the ureteric bud, and cells of the ureteric bud induce the MM and transform it into cap mesenchyme, a condensed group of NP cells that surround the ureteric bud tips. Both the ureteric bud and stromal progenitors contribute to the NP niche that balances self-renewal and differentiation of NPs. 4,5 Cells of the ureteric bud give rise to collecting ducts, whereas stromal progenitors generate vascular and interstitial cells. NPs give rise to all cell types of the nephron. 3 NPs, a multipotent cell population possessing full nephron-forming potential, can be identified by SIX2 and/or Cited1 expression. 6,7 SIX2 plays an indispensable role in renal development and controls NP cell fate by regulating self-renewal vs. differentiation decisions, whereas deletion of Cited1 does not impact the NP compartment or renal development. [6][7][8] During nephrogenesis, mesenchymal NPs undergo mesenchymal to epithelial transition (MET) and generate the renal vesicle that develops first into a comma-shaped body and then into an S-shaped body. [1][2][3] The S-shaped body consists of proximal, intermediate, distal and connecting segments made up of distinct precursor cell types that give rise to different nephron segments such as the glomerulus, proximal convoluted tubule, loop of Henle, distal convoluted tubule and connecting segment. 2,9 In-depth analyses using genetic, molecular and cellular approaches in mouse models have defined the molecular signatures of various specialized cell types in different stages of nephron development and the molecular signals that regulate nephrogenesis. 10,11
Furthermore, multiple groups have explicated methodology to isolate and expand mouse NP populations in synthetic niches, as well as elucidated strategies to coax these in vitro-maintained NPs to differentiate into glomerular and tubular cell types and assemble into kidney organoids. 4,12,13 With the fundamental knowledge obtained from mouse models, NPs from human developing kidneys have been isolated and maintained in culture. 13,14 Furthermore, human NPs have been generated from induced pluripotent stem cells and developed into differentiated kidney cells and kidney organoids. [15][16][17] Recent studies reported detailed molecular characterization of human NPs, their nephrogenic niche and nephron patterning in the human embryonic and fetal kidneys in various stages of development. [18][19][20][21][22][23][24] These studies observed strong conservation in the molecular markers of NPs and their derivatives, as well as in the pathways regulating nephrogenesis, between the developing mouse and human kidneys, although a few distinct features for each species were determined as well. [19][20][21][22][23] Together, these experimental studies on mouse and human NPs and their development into nephrons have strong implications for the development and translation of regenerative medicine-based approaches to treat diseases requiring nephron repair and regeneration, such as chronic kidney disease (CKD), to increase the efficiency of de novo nephrogenesis and to model various other renal diseases. 25,26 In dogs, CKD has a reported prevalence of up to 25% and is one of the major causes of morbidity and mortality. [27][28][29] Nephron loss is one of the main underlying causes of the progressive decline in renal function in CKD and ultimately leads to renal failure. 30 Transplantation of NPs isolated from the embryonic kidney has been shown to repair and enhance renal function in rodent models of renal injury, and such NP-based therapeutic strategies have the potential to address nephron loss in CKD in companion animals. 13,31,32 However, markers to identify and isolate canine NPs, and the mechanisms regulating their maintenance and differentiation into nephrons, are completely unknown. This knowledge can not only help in developing novel strategies to treat CKD in dogs but also help in understanding the molecular basis of congenital canine diseases such as renal and cystic dysplasia. [33][34][35] Furthermore, dogs with CKD can also serve as a large animal model for preclinical studies to test the therapeutic potential of these NP-based regenerative approaches once the fundamental knowledge about the molecular mechanisms regulating canine NPs and nephrogenesis is elucidated. In the current study, we performed molecular characterization of canine NPs and their derivatives in the canine fetal kidney (CFK) and found strong conservation in their expression pattern between the canine, mouse and human developing kidney. This study provides a framework to identify and isolate canine NPs and will facilitate future studies for functional characterization of canine NPs.

Fetal kidney tissue isolation

CFKs were collected from 11 fetuses obtained from the gravid uteri of three pregnant mixed-breed dogs of more than 2 years of age presented for elective spay at the Alabama Animal Alliance Spay Neuter Clinic, Montgomery. All the females were estimated to be in the third trimester of pregnancy based upon the development of the reproductive organs and claws of the fetuses. 36,37
The collection and use of the tissue material for the current study was reviewed and approved by the Tuskegee University Institutional Animal Care and Use Committee (TUIACUC) and performed according to institutional guidelines.

Hematoxylin & Eosin staining and immunofluorescence studies

Histological and immunofluorescence protocols used in the current study have been described previously. [38][39][40][41][42] Briefly, isolated CFKs were fixed in 4% paraformaldehyde and embedded in paraffin; 5-μm thick sections were prepared, deparaffinized, rehydrated and subjected to Hematoxylin & Eosin (H&E) staining with the routine protocol. For immunofluorescence studies, rehydrated sections were subjected to boiling for antigen retrieval in sodium citrate buffer (10 mM sodium citrate, 0.1% Tween-20, pH 6.0) at 95°C for 30 min and then kept at room temperature for 30 min. The sections, following antigen retrieval, were blocked in donkey serum for at least 1 h at room temperature and incubated overnight with primary antibodies, including E-cadherin (ECAD) and the other markers examined below (SIX2, SIX1, NCAM1, PAX2, ITGA8 and NCAD).

H&E staining of the CFK

First, CFK sections were stained with H&E to detect and assess NPs and other developing structures in the nephrogenic zone. Low-magnification analysis revealed the nephrogenic zone in the outer renal cortex, constituted predominantly by heavily hematoxylin-stained cells (Figure 1A). High-magnification analysis revealed condensed cap mesenchymal cells, known to have nephron-forming potential, surrounding the ureteric bud tips (Figure 1 B-D). A subpopulation of NPs becomes committed and gives rise to pretubular aggregates (PTA) that further differentiate into renal vesicles. Both PTA and renal vesicles could be seen on the sides of and/or underneath the ureteric bud tips in the H&E-stained sections (Figure 1 B-D). Analysis also revealed comma- and S-shaped bodies developing underneath the ureteric bud tips (Figure 1 B-D). Below the nephrogenic zone, developing renal corpuscles and tubular segments in different stages of maturation could be visualized (Figure 1 B-D). These results reveal similar progression and structural features of nephrogenesis in the CFK as previously reported in the mouse and human fetal kidney. 3,22

SIX2/ECAD co-immunofluorescence analysis

SIX2 is a well-established marker of mesenchymal NPs that is highly conserved from mouse to human. 6,21,43 ECAD is expressed by the ureteric bud and by NP derivatives that are in the process of acquiring epithelial identity. To determine whether SIX2 and ECAD are expressed in the CFK, we performed western blot analysis. Both SIX2 and ECAD proteins were detected in the CFK by immunoblotting (Figure 2 A,B). To determine whether SIX2 is expressed in mesenchymal nephron progenitors and how its expression changes in NP derivatives in the embryonic dog kidney, we performed co-immunofluorescence studies using an antibody against SIX2 along with ECAD antiserum. For a negative control, staining with isotype control antibodies was performed and did not show any signal (Figure 2C). In sections co-stained with SIX2 and ECAD, a strong SIX2 signal was specifically detected in the few layers of condensed mesenchymal cells surrounding the ureteric bud; as expected, the cells with strong SIX2 signal did not show ECAD expression (Figure 2 D-F). The SIX2 signal was specifically localized to the nucleus of NPs.
In both pretubular aggregates and renal vesicles in the embryonic dog kidney, SIX2 expression decreased and an ECAD signal could be detected, indicating the advent of epithelialization (Figure 2 D-F). The SIX2 signal was undetectable beyond the renal vesicle stage. Ureteric bud cells that strongly expressed ECAD did not express SIX2. These results show that SIX2 marks mesenchymal canine NPs and has a similar protein distribution in the CFK as previously reported in the mouse and human fetal kidney. 6,43,44

Figure 2. Expression of SIX2 and ECAD in the CFK. A,B) Whole kidney extracts were prepared from the CFK and subjected to immunoblot analysis with SIX2 and ECAD antisera. C-F) Representative co-immunofluorescence images of a CFK section stained with isotype control antibodies (C) or SIX2 (red) and ECAD (green) antisera (D-F); nuclei were stained with DAPI. A strong SIX2 signal (red) (arrow) (D) was observed in the NP population that did not express ECAD (green) (arrow) (E); shown also on the merged image (arrow) (F); a weaker SIX2 signal was observed in pretubular aggregates (arrowhead) (D) that showed a weak ECAD signal (arrowhead) (E); also shown on the merged image (arrowhead) (F); ECAD expression (concave arrowhead) (E) was observed in the ureteric bud, which did not express SIX2 (concave arrowhead) (D); also shown on the merged image (concave arrowhead) (F). Scale bars: 20 μm.

SIX2/NCAM1 co-immunofluorescence analysis

NCAM1 is known to be expressed in NPs and their early epithelial derivatives; furthermore, its membrane localization makes it a suitable NP marker to sort cells with nephron-forming potential. 32,41,45 First, we determined by immunoblot analysis that NCAM1 is expressed in the CFK (Figure 3A). To determine whether SIX2-positive NPs also express NCAM1, and to study the expression of NCAM1 in the early epithelial structures developing in the nephrogenic zone of the CFK, co-immunofluorescence analysis with SIX2 and NCAM1 antisera was performed. Co-staining analysis revealed that a subpopulation of high-expressing SIX2-positive cells expresses NCAM1 at their membrane (Figure 3 B-D). As NPs differentiate into a renal vesicle, SIX2 expression was dramatically reduced, whereas a strong membrane NCAM1 signal was detected in these cells. NCAM1 expression was also observed in the proximal, intermediate and distal segments of the S-shaped bodies (Figure 3 B-D). In S-shaped bodies, the NCAM1 signal was undetectable in podocyte precursors. The cell-surface localization of NCAM1 in SIX2-positive NPs indicates that NCAM1 could be exploited to sort canine NPs. However, gentle dissociation of NPs present on the cortical surface of the kidney will be required to avoid contamination from the relatively deeper layers of NCAM1-positive early epithelial structures during sorting.
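As a purely illustrative aside (not part of the original study), the marker combinations reported so far amount to a simple decision rule over the structures of the nephrogenic zone. The sketch below encodes one reading of those staining results in Python; the qualitative categories and thresholds are made up for illustration, not real image measurements.

```python
def classify_structure(six2: str, ecad: bool, ncam1: bool) -> str:
    """Map a qualitative marker profile onto a nephrogenic-zone structure.

    six2: 'strong', 'weak' or 'absent' (nuclear SIX2 signal)
    ecad, ncam1: membrane signal detected or not
    """
    if six2 == "strong" and not ecad:
        # Condensed cap mesenchyme; a subpopulation is also NCAM1-positive.
        return "nephron progenitor (cap mesenchyme)"
    if six2 == "weak" and (ecad or ncam1):
        # SIX2 falls as ECAD/NCAM1 appear: early epithelialization.
        return "pretubular aggregate / renal vesicle"
    if six2 == "absent" and ecad and not ncam1:
        return "ureteric bud"
    if six2 == "absent" and ncam1:
        return "later epithelial derivative (e.g., S-shaped body segment)"
    return "unclassified"


print(classify_structure("strong", ecad=False, ncam1=True))  # nephron progenitor (cap mesenchyme)
print(classify_structure("absent", ecad=True, ncam1=False))  # ureteric bud
```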
SIX1/ECAD and SIX1/NCAM1 co-immunofluorescence analysis

SIX1 is expressed in the MM of both the mouse and human kidney. Notably, SIX1 expression continues in the cap mesenchyme (NPs) of the human kidney; however, its expression becomes undetectable in the cap mesenchyme (NPs) of the mouse kidney. 46 Western blot analysis with SIX1 antiserum showed that SIX1 protein is expressed in the CFK (Figure 4A). To determine the expression of SIX1 in NPs and their early epithelial derivative structures in the nephrogenic zone of the dog fetal kidney, co-immunofluorescence analyses of the SIX1 antibody with ECAD or NCAM1 antisera were performed. A strong nuclear SIX1 signal was detected in cap mesenchymal cells surrounding the ECAD-positive ureteric bud cells (Figure 4 B-D). SIX1 expression continued in the pretubular aggregate and renal vesicle, structures identified by the expression of NCAM1 (Figure 4 E-G). In the distal segment of the S-shaped body, a weak SIX1 signal could be detected. These results indicate that the SIX1 distribution in the CFK is similar to its reported distribution in the human fetal kidney but is strikingly different from that in the mouse fetal kidney, which does not express SIX1 in NPs. 46

PAX2/ECAD and PAX2/NCAM1 co-immunofluorescence analyses

PAX2 expression has been reported in NPs and their derivatives as well as in the ureteric bud, and PAX2 expression is essential for renal development in both lineages. [47][48][49] To determine whether PAX2 protein is expressed in the CFK, we performed immunoblotting analysis on whole cell extracts prepared from the dog fetal kidney. PAX2 protein expression could be detected by western blot analysis (Figure 5A). PAX2 expression was detected in the ECAD-positive ureteric bud and in cap mesenchymal cells surrounding the ureteric bud in the CFK (Figure 5 B-D). Strong PAX2 expression was also detected in the NP derivatives pretubular aggregates, renal vesicles and S-shaped bodies, structures identified by the expression of NCAM1 (Figure 5 E-G). PAX2 was localized to the nucleus in NPs and their derivatives and in the cells of the ureteric bud. PAX2 expression became undetectable in proximal tubules and distal tubules (not shown); however, PAX2 expression was detectable in differentiated collecting ducts in the medulla (not shown). Together, these results indicate that the expression and localization of PAX2 in the CFK is similar to the human and mouse kidney. 47

ITGA8/ECAD and ITGA8/NCAM1 co-immunofluorescence analyses

ITGA8 is a surface marker expressed in the human and mouse kidney and used to purify cap mesenchymal cells with nephron-forming potential. 46,50,51 Immunoblot analysis with ITGA8 antiserum showed ITGA8 protein expression in the CFK (Figure 6A). Co-immunofluorescence analysis with the ITGA8 antibody and ECAD or NCAM1 antisera was performed to determine its expression and distribution in the nephrogenic zone of the CFK. As a negative control, staining with isotype control (mouse and goat) antibodies was performed and did not show any signal (Figure 6B). The ITGA8 signal was absent in the ECAD-positive ureteric bud (Figure 6 C-E). A strong ITGA8 signal was found at the membrane of NP cells that expressed NCAM1 (Figure 6 F-H). ITGA8 was also expressed in pretubular aggregates and renal vesicles (identified by ECAD and NCAM1 expression), albeit at a lower level (Figure 6 C-H). Very faint ITGA8 expression could also be detected in the distal segment of the S-shaped body.

Figure 3. Expression of SIX2 and NCAM1 in the CFK. A) Whole kidney extracts were prepared from the CFK and subjected to immunoblot analysis with antiserum against NCAM1. B-D) Representative co-immunofluorescence images of CFK sections with SIX2 (red) and NCAM1 (green) antisera; nuclei were stained with DAPI. A subpopulation of SIX2-positive cells (red) (arrow) (B) expressed NCAM1 (green) (arrow) (C); also shown on the merged image (arrow) (D); a weaker SIX2 signal (arrowhead) (B) was observed in the renal vesicle cells that expressed a strong NCAM1 signal (arrowhead) (C); also shown on the merged image (arrowhead) (D); NCAM1 expression (concave arrowhead) (C) was observed in the NP epithelial derivative structure, the S-shaped body, which did not express SIX2 (concave arrowhead) (B); also shown on the merged image (concave arrowhead) (D). Scale bars: 20 μm.
ITGA8 expression was found in the mesangial cells of the developing and developed renal corpuscles (not shown). Given that ITGA8 is highly expressed and localized at the membrane in canine NPs, and that its expression is relatively faint/weak in the early epithelial structures, ITGA8 is very likely a suitable marker for the isolation of NPs from the CFK.

NCAD/ECAD and NCAD/NCAM1 co-immunofluorescence analyses

NCAD, a mesenchymal stem cell marker, is known to be expressed in mouse NPs; however, its functional significance in renal development remains unclear. 52,53 Western blot analysis with NCAD antiserum showed that NCAD protein is expressed in the CFK (Figure 7A). Co-immunofluorescence analysis of the CFK with the NCAD antibody and ECAD or NCAM1 antisera was performed. The NCAD signal was absent in the ECAD-positive ureteric bud (Figure 7 B-D). A strong NCAD signal was found at the membrane of cap mesenchymal cells and pretubular aggregates identified by NCAM1 (Figure 7 E-G). A weak NCAD signal could be detected in the NCAM1-positive S-shaped body.

Figure 4. Expression of SIX1 in the CFK. A) Whole kidney extracts were prepared from the CFK and subjected to immunoblot analysis with SIX1 antiserum. B-D) Representative co-immunofluorescence images of CFK sections with SIX1 (red) and ECAD (green) antisera; nuclei were stained with DAPI; a strong SIX1 signal (red) (arrow) (B) was observed in the NP population that did not express ECAD (green) (arrow) (C); shown also on the merged image (arrow) (D); SIX1 expression was observed in the renal vesicle (arrowhead) (B) cells that showed a strong ECAD signal (arrowhead) (C); also shown on the merged image (arrowhead) (D); a weak and patchy SIX1 expression (concave arrowhead) (B) was observed in the S-shaped body that also expressed ECAD (concave arrowhead) (C); also shown on the merged image (concave arrowhead) (D). E-G) Representative co-immunofluorescence images of CFK sections with SIX1 (red) and NCAM1 (green) antisera. SIX1-positive cells (red) (arrow) (E) expressed NCAM1 (green) (arrow) (F); also shown on the merged image (arrow) (G); SIX1 expression was observed in the renal vesicle cells (arrowhead) (E) that expressed a strong NCAM1 signal (arrowhead) (F); shown also on the merged image (arrowhead) (G); a few SIX1-positive cells (concave arrowhead) (E) were observed in the S-shaped body identified by NCAM1 expression (concave arrowhead) (F); also shown on the merged image (concave arrowhead) (G). Scale bars: 20 μm.

Discussion

In this study, molecular characterization of NPs and their early epithelial derivatives in the nephrogenic zone of the CFK was performed with histological and immunofluorescence-based studies. Specifically, this study describes the expression pattern of multiple transcription factors in the fetal kidney of dogs that have been found to be important for the maintenance of NPs in mouse and human. This study also describes the expression pattern of various surface markers in canine NPs that have been used to isolate and enrich murine and human NPs. SIX2 is a well-characterized and specific NP marker and has a similar pattern of protein distribution and localization in the developing mouse and human kidney. 6,21,44,46 The expression pattern of SIX2 in canine NPs and their early derivatives was found to be highly similar to its expression reported in the mouse and human developing kidney. 54
The SIX2 expression level has been used as a benchmark to gauge the purity and maintenance of NP identity in various in vitro culture conditions. 4,7,13,14 Notably, a SIX2 RNA-probe has been successfully used to sort NPs from the human kidney. 14 Given the conserved expression pattern of SIX2 in the canine, mouse and human fetal kidney, the results of this study indicate that canine SIX2-specific RNA-probes could be used to isolate canine NPs, and its NP-specific marker status could be exploited to gauge the maintenance of NP identity while testing various in vitro culture conditions for NPs. In contrast to SIX2, the expression pattern of SIX1, a SIX2-related family member, in the human and mouse developing kidney is interestingly different. 46,54 SIX1 expression in mice is restricted to the MM in the early stages of renal development and becomes undetectable in NPs. 54 In the human kidney, SIX1 is expressed in the MM and its expression is maintained in the NP population. 46,54 In the human developing kidney, the combined expression of SIX1 and SIX2 has been proposed to support NP self-renewal and maintain nephrogenesis over the relatively longer gestation length in humans. 46 The combined expression of SIX1 and SIX2 has also been proposed to contribute to the generation of a higher nephron number (average 1 million) in humans as compared to mice (average 20,000). 54 Similar to the human, in the canine fetal kidney both SIX1 and SIX2 were expressed at high levels in NPs and may contribute to the maintenance of NPs and the generation of a higher number of nephrons. The average number of nephrons in the adult canine kidney has been estimated at approximately 475,000, which develop in approximately 50-60 days. 55 It would be interesting to determine the role of SIX1 and SIX2 in the self-renewal of NPs in an in vitro canine NP culture model. The PAX2 expression pattern in the CFK has strong similarities to its expression pattern in the mouse and human developing kidney. 47 PAX2 plays indispensable roles in various stages of renal development in multiple cell types. 56 Global deletion of Pax2 results in renal agenesis due to degeneration of the nephric ducts. [56][57][58] PAX2 regulates the outgrowth of the ureteric bud by regulating glial cell line-derived neurotrophic factor (GDNF) and c-Ret expression. 49,56,59 PAX2 also regulates branching morphogenesis of the ureteric bud; mice expressing one mutant allele of Pax2 (Pax2 1Neu) show reduced branching of the ureteric bud, which contributes to renal hypoplasia. 56,60 PAX2 regulates the differentiation of NPs by regulating the mesenchymal to epithelial transition. 49,56,61 Recently, it has been shown that PAX2 in NPs is essential to maintain their identity, and PAX2-deficient NPs transdifferentiate into interstitial-like cells. 47 In the CFK, PAX2 is localized in NPs and their early epithelial derivatives as well as in the ureteric bud. The role of PAX2 in the renal development of dogs remains to be determined. Notably, PAX2 mutations cause renal coloboma syndrome in humans. 62,63 SIX2 mutations cause renal hypodysplasia, and SIX1 mutations are known to cause branchio-oto-renal syndrome in humans. 43,54 Knockout mouse models have also identified indispensable roles of these three genes in renal development. 6,47,64 In dogs, renal dysplasia cases are known to occur frequently; however, their molecular basis is not well understood. 28,33,35
This study reports the expression pattern of the transcription factors SIX2, SIX1 and PAX2 in the CFK and highlights the conserved and distinct features in relation to the human and mouse kidney. These genes could be carefully analyzed in dogs while screening cases of renal maldevelopment to determine candidate genes for renal dysplasia. Development and standardization of strategies for the isolation and expansion of canine NPs are essential to develop NP-based therapeutic approaches for canine chronic kidney disease. This study reports the expression and localization of three surface markers, ITGA8, NCAM1 and NCAD, in the CFK. ITGA8 was found to be expressed highly on the cell surface of canine NPs and expressed at very low levels only in a subpopulation of cells in NP-derived early epithelial structures. A similar expression pattern of ITGA8 has been reported in the mouse and human fetal kidney. 50 ITGA8 has been successfully used to sort and enrich NPs from the mouse and human kidney. 46,51 Following the removal of the renal capsule, the underlying NP population is preferentially dissociated by partial/incomplete enzymatic digestion for a short period of time. By this method, fewer cells from early epithelial derivatives are likely to be dissociated, leading to NP enrichment. Furthermore, to reduce contaminating interstitial cells, negative selection for PDGFRA has been found to be useful. 4,51,65,66 Single cell RNA-sequencing profiling has revealed that NPs from human and mouse kidneys are ITGA8+/PDGFRA−, and sorting cells based on the expression of these two markers results in high enrichment of NPs. 51,66 Recently, it has been shown that most of the ITGA8+/PDGFRA− population from iPSC-derived NPs were SIX2-positive NPs that robustly generated epithelial cells of glomeruli and renal tubules following stimulation with differentiating signals. 65 Future studies to determine the expression of PDGFRA or other interstitial cell surface markers in the CFK will be useful to select marker(s) for negative selection to minimize contaminating interstitial cells. NCAM1 has also been utilized to enrich the human NP population by magnetic-activated cell sorting, and NCAM1-positive cells have been shown to have nephron-forming potential and the ability to cause renal repair following chronic renal failure. 32,67 However, cells sorted based solely on NCAM1 expression contained both SIX2-positive and SIX2-negative populations. 32 Further studies reported that an NCAM1+/CD133−-based strategy led to better enrichment of multipotent renal stem cells, as it removes NCAM1+/CD133+ cells of early epithelial derivative structures and immature tubules. 68 To remove cells from mature renal tubules, negative selection for a third marker, EPCAM, was found to be useful. 4,69 The expression of CD133 and EPCAM in the embryonic and fetal canine kidney remains to be elucidated. Canine NPs also expressed high levels of NCAD, and a weak NCAD signal was also detected in early epithelial derivatives. NCAD is known to be expressed in the NP population of the fetal mouse kidney. 52 Together, the results of this study suggest that the high expression levels of these markers at the canine NP cell surface make them suitable candidates for the sorting and enrichment of the NP population from developing canine kidneys.
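As a purely illustrative aside (not part of the original study), the enrichment strategy just outlined, positive selection for ITGA8 combined with negative selection against PDGFRA-expressing interstitial cells, reduces to a simple two-marker gate. The Python sketch below shows that logic; the thresholds and per-cell intensity values are hypothetical.

```python
from typing import Dict, List

# Illustrative gate parameters (arbitrary intensity units, made up for this sketch).
ITGA8_MIN = 0.5   # positive selection: NPs are ITGA8-high
PDGFRA_MAX = 0.1  # negative selection: exclude PDGFRA-positive interstitial cells


def enrich_nps(cells: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """Keep only cells that pass the ITGA8+/PDGFRA- gate."""
    return [c for c in cells
            if c["ITGA8"] >= ITGA8_MIN and c["PDGFRA"] <= PDGFRA_MAX]


cells = [
    {"ITGA8": 0.9, "PDGFRA": 0.02},  # NP-like profile: kept
    {"ITGA8": 0.8, "PDGFRA": 0.60},  # interstitial-like: removed by negative selection
    {"ITGA8": 0.1, "PDGFRA": 0.01},  # epithelial derivative: removed (ITGA8-low)
]
print(len(enrich_nps(cells)))  # 1
```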
In summary, to our knowledge, this is the first report describing the molecular features of NPs and their derivatives in the CFK, highlighting the similarities to and distinct features from their mouse and human counterparts. These results provide an important framework that will facilitate the identification, isolation and culture of canine NPs, as well as their functional characterization and development into nephrons.
2019-09-24T13:04:31.100Z
2019-08-06T00:00:00.000
{ "year": 2019, "sha1": "2b4a8ab3c6bb45a1aa74fb87e5e0c60cb2794938", "oa_license": "CCBYNC", "oa_url": "https://www.ejh.it/index.php/ejh/article/download/3049/2914", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1c5af1a9ef24fa5ae2e88e627139249481c2ccc7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
244532923
pes2o/s2orc
v3-fos-license
A qualitative study investigating Australian cancer service outpatients’ experience of distress screening and management: what is the personal relevance, acceptability and improvement opportunities from patient perspectives? Purpose People diagnosed with cancer experience high distress levels throughout diagnosis, treatment, and survivorship. Untreated distress is associated with poor outcomes, including worsened quality of life and higher mortality rates. Distress screening facilitates need-based access to supportive care which can optimize patient outcomes. This qualitative interview study explored outpatients’ perceptions of a distress screening process implemented in an Australian cancer center. Methods Adult, English-speaking cancer outpatients were approached to participate in face-to-face or phone interviews after being screened by a clinic nurse using the distress thermometer (DT). The piloted semi-structured interview guide explored perceptions of the distress screening and management process, overall well-being, psychosocial support networks, and improvement opportunities for distress processes. Thematic analysis was used. Results Four key themes were identified in the 19 interviews conducted. Distress screening was found to be generally acceptable to participants and could be conducted by a variety of health professionals at varied time points. However, some participants found “distress” to be an ambiguous term. Despite many participants experiencing clinical distress (i.e., DT ≥ 4), few actioned referrals; some noted a preference to manage and prevent distress through informal support and well-being activities. Participants’ diverse coping styles, such as positivity, acceptance, and distancing, also factored into the perceived value of screening and referrals. Conclusion and implications Screening models only measuring severity of distress may not be sufficient to direct care referrals, as they do not consider patients’ varying coping strategies, external support networks, understanding of distress terminology, and motivations for accessing supportive care services. Supplementary Information The online version contains supplementary material available at 10.1007/s00520-021-06671-2. Introduction People with a cancer diagnosis experience distress at higher levels than the general population during diagnosis and treatment [1]. The National Comprehensive Cancer Network (NCCN) defines distress as "a multifactorial unpleasant experience of a psychological (i.e., cognitive, behavioral, emotional), social, spiritual, and/or physical nature that may interfere with the ability to cope effectively with cancer, its physical symptoms, and its treatment" [2]. Surveys have indicated that up to half of people with a cancer diagnosis experience significant levels of distress [3][4][5][6]. If left untreated, this distress can lead to poor outcomes including decreased social functioning, increased intensity of physical symptoms, cognitive impairment, poor adherence to treatment, and reduced length of life [7][8][9][10]. As such, distress has been branded and is recognized internationally as the sixth vital sign [11,12]. As distress levels peak during diagnosis and the initial treatment stages [1], it is important to recognize and implement effective distress screening and management processes within treatment facilities, where there is an opportunity for early intervention.
Distress screening and management is beneficial, but implementation challenges remain: Timely and standardized distress screening, if coupled with well-structured psychosocial referral systems, can reduce patients' emotional distress and improve their quality of life [3,13,14]. The benefits also extend to reduced physical symptoms and improved satisfaction with care and communication between patients and professionals [2]. There is also evidence that psychosocial screening reduces the risk of emergency service use and hospitalization [15]. There remain challenges to the routine use of distress screening, especially in time-poor clinical services. Firstly, there remains debate on the utility of single-item distress screening tools, such as the distress thermometer (DT) [16], especially without the use or availability of a well-structured referral pathway [16,17]. Secondly, the implementation of distress screening programs is poorly reported, and it is likely that only select components of evidence-based approaches are being incorporated in health services, such as one-step screening or no rescreening [18,19]. These two factors may have contributed to emerging reports that health professionals and services are unclear on the potential benefits of distress screening programs and thus unable to rationalize both the real and opportunity cost of yet another clinical activity [20]. Discrepancies between patient-reported acceptability and professional-perceived acceptability exist: Australian data suggest the majority of cancer service representatives felt patients did not want to be asked questions about their distress, with 38% of health services reporting that they never, or rarely, screen for distress [21]. However, a quantitative study of 498 patients' experiences with a distress screening program implemented in ten Dutch hospitals found that patients' evaluations of the process were largely positive [22]. Opinions were more favorable in patients who more frequently completed the DT and problem checklist and were exposed to information about the tools and a discussion of potential referral options. In an Australian context, a cross-sectional study surveying callers (n = 100) to a cancer helpline reported that over 74% of callers diagnosed with cancer were comfortable with DT use [23]. Given the potential discrepancy between patient-reported acceptability and professional-reported perceived acceptability, there is a compelling demand for in-depth exploration of how patients experience distress screening within Australian cancer services. Qualitative research provides an opportunity to provide context and further insight into existing and new distress screening processes [24]. This qualitative study explored the lived experiences of distress screening and management from patient perspectives through the use of semi-structured interviews with people with a cancer diagnosis. The research aim of this study was to qualitatively evaluate a rapid distress screening process in its very early implementation phases and to collate patient perspectives on the acceptability and role of standardized screening in managing their overall emotional well-being. The study was conducted in a large tertiary outpatient cancer service during the rollout of a cloud-based distress screening tool. The distress screening procedure was designed by the health service to be rapid, administered at each clinical appointment, and integrated into electronic medical records.
This study recruited an opportunity sample of the patients screened and provides insight into how patients perceive the value and approach of brief screening models in Australian health services as part of their overall cancer journey.

Study design and recruitment

The qualitative interview study is reported in accordance with the consolidated criteria for reporting qualitative research (COREQ) guidelines [25]. Patients were approached by a clinic nurse and invited to participate between May and July 2019. The screening pathway included asking all cancer outpatients to complete the DT and problem checklist on a kiosk before their appointment. The DT has an 11-point scale ranging from 0 (no distress) to 10 (extreme distress) [3]. The problem checklist prompts the patient to identify sources of distress using a problem list. Each item is directly related to one of five domains: practical, relationship, emotional, spiritual, or physical. Inclusion criteria included being at least 18 years of age, being proficient in English, and having received a cancer diagnosis. In line with best practice guidelines that recommend all outpatients be screened for distress regardless of demographic or clinical characteristics, no exclusion criteria were applied to time since diagnosis, treatment status, or cancer type for the interviews. The research team recruited participants using a purposive sampling approach to ensure representation of gender and a broad age range. The project was approved by the Hunter New England Human Research Ethics Committee (2018/ETH00520).

Data collection

A research team member (MC) with experience in qualitative research conducted 19 semi-structured interviews (30-45 min). Interviews were conducted face to face (n = 4) or via telephone (n = 15). The semi-structured interview guide included questions pertaining to perceptions of the distress screening process, emotional well-being, reasons for referral uptake/non-uptake, experiences of accessing new or existing professional and personal psychosocial support networks, and ways the distress screening and management process could be improved. Individuals were asked medical and demographic questions during the interview to contextualize findings. Individuals were also asked to complete the DT at the time of interview, as a way to prompt recall of their in-clinic experience and in case they had been asked other screening questions by other services. Participants' DT scores also provided context when discussing experiences, for example, gaps in referrals for moderately to severely distressed individuals or the perceived value of screening for mildly distressed individuals. The interview guide was pilot tested with three members of a consumer advisory panel. The research team considered data collection to have reached saturation when information became repetitive or no new information was being obtained; this was agreed upon through review of transcripts and field notes and discussion with the research team (MC, KM, EF) [26]. Multiple meetings to review transcripts, the comprehensiveness of the codebook, and the emergence of new themes (if any) were held over a 2-month period. All participants declined the opportunity to review their transcripts.

Data analysis

Audio recordings were transcribed verbatim and coded by two team members (KM, MC) using NVivo 12 qualitative data analysis software. Using an inductive thematic analysis framework [27], a sample of transcripts was open-coded prior to collaboratively developing a codebook. The remaining transcripts were coded in batches using an iterative process of discussion and codebook refinement. Following coding, clear themes were identified and related back to data extracts to ensure coherence.
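As an illustrative aside (not part of the published study), the screening logic described above, a 0-10 distress thermometer score plus a five-domain problem checklist, with DT ≥ 4 treated as clinically significant distress, can be sketched in a few lines of Python; the class and field names below are hypothetical.

```python
from dataclasses import dataclass, field

# Problem-checklist domains named in the screening pathway above.
DOMAINS = {"practical", "relationship", "emotional", "spiritual", "physical"}

CLINICAL_CUTOFF = 4  # DT >= 4 is the cutoff for clinical distress used in this study


@dataclass
class ScreeningRecord:
    """One kiosk screening event: a 0-10 DT score plus any flagged problem domains."""
    dt_score: int                        # 0 (no distress) to 10 (extreme distress)
    problems: set = field(default_factory=set)

    def __post_init__(self):
        if not 0 <= self.dt_score <= 10:
            raise ValueError("DT score must lie on the 0-10 scale")
        if not self.problems <= DOMAINS:
            raise ValueError(f"Unknown problem domains: {self.problems - DOMAINS}")

    @property
    def clinically_distressed(self) -> bool:
        # Severity alone: the paper's point is that this flag, by itself, does not
        # capture coping style, informal supports, or receptivity to referral.
        return self.dt_score >= CLINICAL_CUTOFF


record = ScreeningRecord(dt_score=7, problems={"emotional", "physical"})
print(record.clinically_distressed)  # True: would prompt a follow-up discussion
```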
Results/findings

Of the 39 patients invited to the study, 29 eligible individuals consented to be contacted by the research team. The 10 individuals who declined to be contacted cited a lack of time or the perception that they had little to contribute to the study. Of the 29 participants who consented to contact, 19 consented to interviews prior to saturation being reached. The demographic characteristics of the 19 participants are listed in Table 1. Four overarching themes were derived from the data. In talking about distress screening procedures, many participants expanded their discussion to include their attitudes toward distress, well-being, styles of coping, and understanding of the term "distress."

Attitudes toward formalized screening and logistics

Quotes are provided for each of the subthemes in Table 2.

Acceptability

The majority of participants reported both the mode of in-clinic electronic delivery and the experience of distress screening as being acceptable and appropriate, even if not directly relevant to themselves. The DT was described positively as short, and participants appeared happy to have this process included as part of routine care. The process was perceived to facilitate communication, with participants suggesting it would help them to be more honest about their distress than if they had been asked a more general question about well-being. However, some participants did not appreciate completing the DT on a computer. One participant felt that responses would be inaccurate and would not be acted on if asked using a computerized kiosk, preferring human interaction.

Distress screening logistics

No single health professional was consistently identified as the one who should be responsible for screening for distress; nurses, GPs, oncologists, social workers, and counselors were all suggested by participants. Participants had varying views on the timing and recurrence of distress screening. Some felt that routine screening at repeated time points was appropriate. However, for some patients, distress screening was seen as not useful at initial diagnosis or at the beginning of treatment, when they were feeling overwhelmed with information.

Table 2. Quotations: attitudes toward distress screening and logistics

Acceptability
"Oh it's not a problem" (Male, 65, colon cancer, DT = 10)
"It's good" (Female, 63, leukemia and esophageal cancer, DT = 6)
"No trouble. It only takes a couple of minutes, and you're there." (Male, 70, prostate cancer, DT = 10)
"I think it's a good idea. As I said, (my doctor) wouldn't have known that I was feeling any stress unless (the research assistant) did that computer thing." (Female, 53, paraganglioma cancer, DT = 7)
"It's not that I'm not a big fan. It's just for me, I don't know that there's much value in it. Because I'm not the same, I'm probably going to give the same answers for most of the time. Unless of course, as my diagnosis goes on and things get worse, then maybe I might become more distressed." (Female, 31, colorectal cancer, DT = 0)
"I'd answer it honestly. Instead of trying to make a nervous joke, I can just say, I am an eight on the distress scale today, and I am not thinking well, and things like that." (Male, 30, testicular cancer, DT = 7)
"I don't think sitting down at a computer, putting down… They'll just put down anything they want to put down, on a computer. Who's going to read it? …it would be better if the oncologist asked the questions. I don't want to write anything down on a computer. I would've liked that -the physical intervention, where you're asking me these things, now, why couldn't they-somebody else, or one of the nurses, or even the doctor…" (Male, 75, bladder cancer, DT = 5)
"I'm probably not… I don't know whether a huge fan is the best way to put it. I understand it has to be done, but for my benefit, I was thinking I'm not really distressed." (Female, 31, bowel cancer, DT = 0)

Who to conduct screening
"Well, because we see the nurses and speak to them often, I suppose that would be a good start, yes. Because the surgeons or the team of doctors, we see them after longer periods like after three months or six months or things like that. The nurse widely handles all the treatment which is the critical time as I get the chemo or the radiation, so you will see them nearly every second week or something like that anyway." (Male, 65, colon cancer, DT = 0)
"Well, it depends if my distress is related to what the oncologist is treating, then I think that's… but if my distress is related to something else, then I'm sure the GP would make some recommendations, or talk about that… So, it depends what I'm distressed about, I think, as to who might be the best." (Male, 70, prostate cancer, DT = 0)
"Probably [my social worker]. She asks questions straight away, how are you, and I'll tell her. If the doctor asks, I'll just tell him, but they don't ask. So, I don't know. It probably is the social worker or the counsellor." (Male, 75, bladder cancer, DT = 5)

When to conduct screening
"I still say any time. It's an ongoing sort of thing… Like me, I hide a lot. So, people, they don't really tell you things." (Female, 53, head and neck cancer, DT = 7)
"I think after about a week or two weeks it should be okay. Because at the beginning, you're just taken onboard and you don't really know what's happening…you have to go to this appointment and go to that appointment. But after about a week or two I think it would be good if they start asking how you are feeling about it about more issues like that." (Male, 65, colon cancer, DT = 0)
"I think there should be some sort of screening, maybe when it's first confirmed… It's a big thing to hear…And when you're actually told, it is this, this is what's going to happen… it's a lot to take in when they first tell you. So that's probably when it would be a good point to start doing it." (Male, 30, testicular cancer, DT = 7)

Managing distress and well-being

This theme encompassed discussions around participants' awareness of their own distress and the desire for and access to services to assist. This conversation arose as part of discussing the sequential steps of the distress management pathway: screening, discussion, referral, and service use. Independent of their distress severity or health service use, participants also discussed self-initiated activities of daily living or leisure as a way to improve overall well-being. Quotes are provided for the subthemes in Table 3.
Table 3. Quotations: managing distress and well-being

Discussion, referral and service use within health settings
"I would suggest that people are given the opportunity to speak to somebody, knowing that they can say whatever they like" (Female, 53, head and neck cancer, DT = 7)
"It's almost like they (nurses) were handpicked for us, the way they look after us, welcome us, sit with us before we go in, and take us and make sure our bloods are done. Look, it's only little things that they do, but for us it's very important. And to be able to talk with us, to just ask us how are we going." (Male, 64, prostate cancer, DT = 4)
"I know it's available but I probably wouldn't take it at this point in time … So, I don't feel like at this point, I really need it. I feel like I'm more likely to talk to my family than a counsellor about things if I'm upset. I would talk through it, but if I'm upset, I tend to talk through things with family, I guess." (Female, 75, bowel and liver cancer, DT = 4)
"You're doing your job, asking the questions. I'm answering as much as I can. What happens after this, I don't know. I'm out the door…. ……Here's a list of services. Now, they did this."
"Counselling services are pretty much problem-based, not… I guess the term I'd use is they're not conversational. A lot of people value friends where they can just sit down over a cup of coffee and have a yarn about this and that, but if I regard that on a hierarchy, that's about where things should start. It should start with… A discussion could be had just around this is impacting on my daily life or raising X, Y and Z, but you got to see a doctor. There's got to be some sort of malady they can put a number against."

Other forms of support to manage distress levels
"Oh, I couldn't do it without her. Sometimes I get panic attacks. Like I get a tightness across my shoulders and then I get a tightness…. If it goes to my chest I wake [my wife] up and she massages my back and after a few minutes I just relax again." (Male, 72, bowel cancer, DT = 5)
"Yes, it is, backup is really important, and the ladies at home, they're just as important as my family to me because they see me more than the family does, yes, they work, you know, my family, so they're there every day and they'll come and knock on the door, are you alright, come over and have a cup of coffee, you know, it's just, it makes all the difference in the world." (Female, 78, esophageal cancer, DT = 6)
"Yes. My sons are always here. They're doing things for me. If I'm going out somewhere, they'll… If I need something done, they'll always do it for me. Friends are the same. They ring up to find out and see how I'm going. It's good to have that sort of support." (Male, 63, bowel and liver cancer, DT = 7)
"I have good family that support me, and friends, but I found that it was… Having the treatment… It was really good. I had really good support. But then, as soon as you finished the treatment and everything seems to be fine, that they don't have as much interest." (Female, 63, leukemia and esophageal cancer, DT = 6)
"Well, I've got a spiritual belief, and I think that helps… I pray each time; I suppose that's reaching out. But if I was in trouble, then I could talk to a minister or that sort of thing. But I'm not in trouble, so, I've got no need to do that." (Male, 70, prostate cancer, DT = 0)
"I am a spiritual person. I do go to church, and I believe. I have a strong faith, so that has got me through immensely with this."

Activities for well-being and reducing distress
"I do a fair bit of art and stuff. So, I tend to do that… I find that a form of meditation sometimes." (Female, 31, bowel cancer, DT = 0)
"I'd try to keep busy. So, I'm not thinking about what the problem is… so, get out in the garden… or with the birds." (Male, 70, prostate cancer, DT = 0)
"…it is very important, yes, to get out there and keep yourself busy. Otherwise, you just sit around, and you start to have all sorts of silly thoughts. You can get negative pretty easy." (Male, 67, prostate cancer, DT = 8)
"I guess I'd have to say there's nothing more precious that you can give to somebody other than your time at this stage, when time is of high value. I feel like something beautiful that I'd make for somebody and give it to them, that's more than just a lump of wood, glued together and shaped, and varnished. I'm giving them some of my time." (Male, 64, prostate cancer, DT = 4)
"I help out with the activities and after that we sit down and have a talk or before we sit down, we have a talk and… Because the residents were very good, because they used to go down and they like to see me and see how I was, and that helped too." (Female, 75, bowel and liver cancer, DT = 4)
Discussion, referral, and service use within health settings

When prompted to speak about supports for the management of their distress, many felt that a referral to formal support services such as a psychologist, social worker, or support group was not relevant to them. Although these participants stated that they did not believe that a referral would be useful to them, they nevertheless endorsed distress screening as a potentially important communication tool in their relationship with health professionals. At the same time, participants strongly emphasized the need for open communication with cancer nurses outside of a more formalized pathway, and explanations of the various forms of support were essential when discussing referrals. A small number of participants reported recognizing that they needed support and would have welcomed a referral. One patient suggested that although they did not feel they needed formalized support at this stage of their diagnosis, it could be helpful if their health declined.

Other forms of support to manage distress levels

Participants spoke of the importance of various types of support (social, spiritual, and formal/clinical) when managing distress. For some participants, this was seen as just as important as talking to a professional about their feelings. The need for support was also evident in discussion about practical assistance such as getting to appointments, cooking meals, and house cleaning. One participant suggested that informal social support can wane after treatment has finished.

Activities for well-being and reducing distress

Participants found a variety of strategies and activities helpful in promoting overall well-being and reducing distress. The majority of these were leisure activities which would not necessitate health professional involvement but could be facilitated within community services or groups. Examples included exercise, art, spiritual activities, and volunteering to help others. These activities were largely characterized as providing distraction and keeping busy to keep one's mind off the cancer. One participant found fulfillment in using their hobbies to give back to the people who support them. Experiences of fulfillment and rewarding social interactions were also shared by those who kept busy through volunteer work.

Styles of coping with cancer

Talking about distress led many participants to discuss their attitudes toward their cancer diagnosis more broadly. Representations of different styles of coping emerged from this discussion. Subthemes included being positive, acceptance, and distancing; quotes are provided in Table 4.

Being positive

The first clear style of coping that appeared was being positive. Although centered on positive belief, this attitude was expressed as though it were an active stance, a committed notion of "fighting," with the implication that not doing so (being positive) could lead to worse outcomes.

Acceptance

Another approach that participants described was acceptance. While being a common thread, participants' reasons for this acceptance were different: from knowing that family would be okay to acknowledging the life that they have already had.
Distancing

The final subtheme in styles of coping involved not thinking about the cancer. For some people, this also meant not talking about the cancer, or even naming it.

Understanding of distress

Although we provided participants with the NCCN definition of distress in our interviews, and they had recently been through the process of distress screening, there was some lack of understanding as to the meaning of distress. Participants separated their experiences of anxiety from the term "distress." While distress is defined within research and clinically as encompassing common and normal feelings, participants saw this term as representing the more severe end of this continuum (see Table 5).

Table 4. Styles of coping with cancer

Being positive
"I truly believe the more positive you are, the better off you are." (Female, 53, head and neck cancer, DT = 7)
"I think certainly being positive is a help because it's like a slippery slide if you start thinking about, oh, I've got aches and pains, and it's not going to get any better, and I've got… and you just slide down the slope. It gets worse and worse. If you're positive, you might still go down the slide, but it's not a quick run." (Male, 70, prostate cancer, DT = 0)
"Trying to stay positive, I guess, and not let things get to me too much. If I feel down, I guess I start to think about happy things."

Table 5. Quotations: understanding of distress
"I remember the first time I did the distress, I thought this is a big word. Distress was quite a big word to be using because at that time, I'd just been diagnosed and I obviously hadn't felt the effects of anything really yet, except probably the emotional side. But I wouldn't say I was distressed at the time. Rather than distress I'd say I have anxiety sometimes…" (Female, 31, colorectal cancer, DT = 0)
"That's a bit of a problem because I'm not sure what part of the distress you want me to talk about, the distress about my chemotherapy problems, or my operation problems, or the coming operation problems, which is what I'm doing a pain chart for, now." (Male, 75, bladder cancer, DT = 5)

Discussion

This qualitative exploration revealed important information on the experience of brief distress screening among a group of people with cancer who had been screened as part of their usual appointment in an Australian hospital cancer clinic. This distress screening program is similar to other brief programs internationally [28,29] and uses one of the most common screening tools (the distress thermometer) [21].
When considering the overall acceptability of the screening program, participants contextualized their distress within broader themes of coping and support from their personal networks, and questioned the definition of distress.

Distress screening was generally acceptable to cancer outpatient participants

It is widely recognized that cancer diagnosis and treatment can significantly affect patients' well-being, such that distress during cancer is now considered the sixth vital sign [11,12]. An issue of clinical importance is that, despite the implementation of distress screening and the established evidence base for the effectiveness of psychological interventions to reduce distress, supportive care referral is generally low [30]. This has led to investigations to determine why this is the case. The findings of this study reaffirm that patients generally find distress screening to be acceptable although not always personally applicable [31]; furthermore, some participants did not perceive formalized supportive care services to be relevant or valuable. The logistics of distress screening, particularly the timing and the health professional involved, showed variable preferences. For some participants, if presented too soon or without sufficient explanation, distress screening was seen as not useful. In some cases, it was reported as overwhelming amidst the information and appointments at the start of treatment. This echoes other study findings in which patients and clinicians felt screening was more effective in the middle to late stages of the cancer trajectory rather than early [32]. Participants endorsed distress screening as the role of numerous health professionals; no single consistent clinician type was identified, and our participants' views demonstrate differing preferences for which clinicians should address distress. Clinical guidelines recommend that everyone responsible for the patient's care should be at least aware of how the patient is progressing through the distress screening and management pathway [13]. However, an implementation barrier often cited is confusion as to roles and responsibilities in this process, and the lack of time and confidence to ask about distress and provide follow-up [18,20]. Developing specific roles and responsibilities for the members of the multidisciplinary team, along with training modules, would facilitate the implementation of distress screening models. This principle of allocating responsibility should also extend to referrals, whereby a member of the healthcare team ensures need-based referrals are made and patients are empowered to action the referral.

Patients may not perceive supportive care referrals as personally relevant

Participants identified social supports such as family and friends to be paramount in providing support throughout their cancer diagnosis and treatment. For some participants, this was cited as being more important than formalized support. This may be one reason for low referral uptake, particularly among those who do not perceive themselves as distressed. Conversely, there was another group who felt that they did not want to burden their friends and family but saw value in talking about their experience. Availability and willingness to draw upon family and social support might be an important consideration when considering and presenting referrals to patients. This notion has been explored previously in a study that highlighted that receptivity to referral is a separate issue from distress levels [33].
A further study suggests that referral uptake is driven, in part, by patients' conceptualizing psychological support as preventative of worsening distress, as opposed to reactive when distress is already severe [34]. Knowledge of the benefits of support was also associated with increased referral uptake [30]. Within our study, participants often did not feel supportive care services were required because they did not feel distressed "enough." It is possible that this brief distress screening model did not provide patients with information on the support services nor the motivational coaching required to action supportive care referrals. In order to maximize the utility of screening, health professionals must confidently action screening results and empower the distressed patient to recognize the personal benefit of supportive care. This may require a paradigm shift, as many health professionals are focused on the biomedical model of care, along with dedicated resources to provide timely access to embedded supportive care services [35]. The lack of training to confidently identify and manage distress is a common barrier reported by cancer professionals [36].
The utility of distress screening across different patient coping styles
Interview participants noted oppositional coping styles: acceptance and positivity versus distancing. This finding aligns with previous studies. For example, a qualitative study emphasized cognitive distancing as a coping strategy among cancer patients [37]. These different approaches have implications for the method in which distress is introduced, measured, and then discussed by health professionals [37]. Individuals with distancing coping strategies may choose to opt out of distress screening and provide a non-representative answer (i.e., provide a lower score), or health professionals may be reluctant to continue discussions about emotional well-being. A qualitative study exploring general practitioners' perceptions of assessing distress in cancer patients identified "denial" as a barrier to implementing further psychosocial assessment [38]. A study of communication distancing with women with breast cancer suggests that coaching distant patients and their loved ones to have difficult conversations about emotional well-being may be an important psychosocial intervention to enhance coping capacity [39]. Acknowledging patients' coping strategies is a complex component of providing person-centered care which requires patients' preferences to be respected. For example, previous studies have demonstrated that distant coping styles were positively associated with quality of life, whereas emotion-focused styles were negatively associated with quality of life [40]. Other studies have suggested distant thinking was related to worse long-term outcomes [41]. While this long-standing debate on the utility of distant coping continues [41], there is little guidance specifically on how to coach patients who are highly distressed and distant to utilize supportive care interventions. Furthermore, it would be valuable to explore the acceptability of psychosocial screening and uptake of subsequent referrals across coping styles in a larger sample using validated measures, such as the Mini-Mental Adjustment to Cancer or Brief-COPE Inventory.
Also, it is important to acknowledge that while the coping styles have been presented as different approaches, it is likely that they are not mutually exclusive; patients may utilize different strategies depending on their evolving needs and experiences.
Concept and definition of distress
Nomenclature surrounding mental health is challenging, and the term "distress" was selected by tool developers as it was perceived to be less stigmatizing than "psychological" or "emotional" and therefore more acceptable [3]. However, the term distress can have different meanings across different disciplines and areas of life [42]. As a one-item tool, the DT is perceived as efficient in quickly capturing individuals who are potentially experiencing distress. Nevertheless, it is paramount that those completing the tool have a clear understanding of the definition of distress. So that tools remain brief, it may be ideal to have a quick orientation to the concept of distress before the first administration or to provide an abbreviated patient resource with a more extensive definition [43]. Future studies could explore the effect and patient experience of these suggestions.
Limitations
This research may have been affected by selection bias. As part of the recruitment process, patients in the clinic were not invited to participate if they declined completing the DT. Other patients declined to participate, citing not feeling they could contribute to a discussion about distress. It is possible that patients who declined the screening process or study participation may have differed from the study participants in their perceptions of being asked about distress, managing their diagnosis, and ways of coping. The study also included individuals who were newly diagnosed and those who were more than 2 years post diagnosis. Although this aligns with universal screening of all individuals in contact with the health service, and the majority of our participants reported some level of distress, it is possible that individuals reflected differently on their experience of distress screening. We also did not ask participants if they had completed other emotional well-being screening questions or had previously completed the DT in other settings. As patient-reported outcome and experience measures become more integrated into health services, it is possible that patients, particularly those who had been diagnosed more than 2 years ago, had become more comfortable and familiar with completing these exercises. Additionally, none of the participants in this study identified as being of Aboriginal or Torres Strait Islander descent. Although the DT is a validated tool, acceptability of screening tools can vary across and within cultures. Distress in Aboriginal and Torres Strait Islander populations remains under-researched [44], and further studies should consider acceptability and cultural safety in these populations. It is important to note that this study discussed only one form of available distress screening methods, which was not supported by a formalized referral pathway at the time of screening, though the cancer service did have access to a psycho-oncology team.
Conclusions
This study found patients are generally accepting of in-clinic distress screening, and brief screening tools are important triggers or "red flags" for subsequent discussions. However, just as our study participants expanded discussions of distress screening to broader concepts of coping and support, so must health professionals.
Our results suggest that in order for patients' distress to be accurately captured and supportive care to be provided, clinicians and our systems must consider patients' varying coping strategies, external support networks, understanding of terms, and motivations for accessing supportive care services. More research is needed to elucidate how we gather this information and how it impacts the distress screening process.
Declarations
Ethics approval This project was approved by Hunter New England Human Research Ethics Committee (2018/ETH00520).
Consent to participate All participants provided written consent to partake in this research.
Consent for publication All participants provided written consent for the research to be published.
Conflict of interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-11-25T14:43:03.506Z
2021-11-25T00:00:00.000
{ "year": 2021, "sha1": "2fc7a49cba9590dc986c8dfb2cf10c6e2718cac5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00520-021-06671-2.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "2fc7a49cba9590dc986c8dfb2cf10c6e2718cac5", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
255050074
pes2o/s2orc
v3-fos-license
Incorporating personalized learning in a role-playing game environment via SID model: a pilot study of impact on learning performance and cognitive load
While role-playing games and personalized learning have been regarded as effective tools to improve students' learning, incorporating personalized learning into role-playing games is challenging, and existing approaches are limited to cognitive and motivational variables. Aiming at expanding approaches to incorporate personalization into role-playing games, this study included affective and cognitive variables to develop a personalized role-playing game, guided by the situational instructional design model. A pilot study was conducted to examine the effectiveness of the game on students' learning performance and cognitive load. Results showed that the personalized role-playing game environment was effective in improving students' performance, reducing extraneous load, and promoting germane load. This study also found that although decreased extraneous load could leave students more germane load capacity for efficient learning, this would not necessarily lead to performance improvement. Students need to be motivated to invest sufficient germane load to actively process the learning materials and thus improve performance. The findings have several implications for future research designing personalized educational games aimed to promote efficient learning.
Introduction
With technological advances in educational games, there is increasing research interest in exploiting digital role-playing games (RPGs) in educational practice, as RPGs are more likely to support a wide range of educational potential than many other games by allowing students to experience a series of educational scenarios and operate an avatar character in an imaginary environment (Daniau, 2016; Deterding & Zagal, 2018). Being designed for interaction, educational RPGs have given students the choice of determining their actions and provided an engaging approach for students to test their knowledge and reflect on the effect their choices made on the game (Rahman & Angraeni, 2020). Personalized learning (PL) has been regarded as an effective pedagogical strategy to improve the scholastic path, learning process, and learner satisfaction in different learning environments by providing a unique learning experience through accommodating individual differences in learning (Martin et al., 2020; Zhong & Xu, 2019; Zhong, 2022a). Perceiving the importance of addressing individual learners' needs in gaming environments, game designers began to incorporate PL into games by adaptively tailoring game content (Zualkernan et al., 2010), game structure (Lin et al., 2013), and/or presentation of game materials to the individual game player (Soflano et al., 2015). Previous studies have proved the effectiveness of personalized games in improving learning. For example, Plass et al. (2019) compared the effectiveness of an adaptive and a non-adaptive version of a game on training shifting skills, and results showed that the adaptive version was more effective than the non-adaptive version in promoting students' performance. Clark et al. (2016) reported similar results: students who received adaptive self-explanation prompts significantly outperformed students in the non-adaptive version on the post-test. However, approaches to incorporate PL into educational games have been limited to cognitive and motivation variables, such as prior knowledge and motivation (Plass & Pawar, 2020).
Those limited variables resulted in narrow approaches to personalize learning in educational games. More learner attributes, especially from affective and sociocultural domains, should be taken into consideration in the design of personalization in educational games. Additionally, PL has rarely been applied in the context of RPGs. Discussions of incorporating personalized learning specifically into RPGs are quite limited in the extant literature. Despite the pedagogical benefits of personalized RPGs, how PL could be best incorporated into the RPG environment is still unclear. For instance, will the incorporation of PL into RPGs impact students' performance? Will it require extra cognitive load from students? Little information can be drawn from the extant literature to answer those questions. There is a need for further work in this research direction. This study aims to fill this gap by exploring how PL could be incorporated into RPGs and whether the incorporation would affect students' learning, such as performance and cognitive load. The purpose of this study was to expand the approaches to incorporate PL into educational games by including affective variables in the design of personalized games. This study would provide empirical evidence of the effectiveness of personalized educational games on students' performance and cognitive load in the context of a role-playing game. Findings of this study would assist researchers, especially international scholars who are interested in personalized game design as well as inclusive game design. The following questions guided this study:
1. How does a personalized RPG environment developed via the situational instructional design model affect students' learning performance?
2. How does a personalized RPG environment developed via the situational instructional design model affect students' cognitive load?
3. Is cognitive load related to learning performance in a personalized RPG environment?
The structure of this article is as follows. First, related concepts and frameworks are reviewed in the Literature section, including cognitive load, digital RPGs, PL and RPGs, and the situational instructional design (SID) model. Second, the six steps of personalized RPG development are detailed in the Development of a Personalized RPG section. Third, the Pilot Study section provides the details of the pilot study, including research design, context and participants, data collection, and data analysis. The next section presents the results of the pilot study. Interpretations of the results are then provided in the Discussion section. The article concludes with implications and limitations.
Cognitive load
Cognitive load theory indicates that information is processed in working memory, which is affected by intrinsic load (IL), extraneous load (EL), and germane load (GL) (Sweller et al., 1998). IL is the intrinsic nature of the learning materials and cannot be altered by instructional design (Sweller et al., 1998). When learning materials have high element interactivity, students are expected to have high IL to process several elements simultaneously (Sweller, 2010). Low-element-interactivity materials do not impose high IL because those materials have fewer elements that can be processed serially rather than simultaneously (Sweller, 2020). EL refers to the load that does not contribute to learning (Sweller et al., 1998). It can be altered by instructional interventions and is determined by the instructional design (Sweller, 2010).
If the instruction is poorly designed, high EL will occur among students. Thus, a good instructional design should be able to decrease EL (Sweller, 2020). GL is the invested cognitive effort that facilitates efficient learning (Sweller et al., 1998). If IL and EL are low, students can be directed to procedures that are relevant to learning (Sweller, 2010). Caution is needed not to exceed the limits of total working memory (Sweller, 2020). In a gaming environment, cognitive expenditure is expected to be higher than in a traditional e-learning environment because the gaming environment has rich multimedia that needs a considerable amount of cognitive capacity to process while simultaneously interacting with game components (Mayer, 2010). However, research regarding cognitive load in gaming environments has yielded mixed results. Chang et al. (2018) compared differences in cognitive load between a gaming environment and a traditional computer-based environment, and students in the gaming environment displayed lower cognitive load and better performance. Studies conducted by Huang (2011) and Schrader and Bastiaens (2012) found an increased amount of cognitive load among students. The incorporation of PL into games will increase the complexity of the learning environment. How students' cognitive load would be affected in this complex personalized gaming environment needs more research.
Digital RPGs
RPGs are games that allow the players to assume the roles of fictional characters and operate those characters in a fictional game environment (Deterding & Zagal, 2018). Among different forms of RPGs, digital RPGs are the most used form in educational settings. Digital RPGs are digitized tabletop RPGs in which all the game character operations occur on a computer rather than on paper. Many RPG features could be used for educational purposes. For instance, portraying game characters gives students the opportunity to control game characters, which could stimulate perspective-taking and experience-taking (known as immersion). Taking other people's perspectives allows players to practice social-emotional skills and deepen understanding of learning materials. Experiencing or immersing in a game story will assist players to adjust their behaviors in the real world. Digital RPGs have been successfully implemented in various educational settings, such as engineering education (McConville et al., 2017), language learning (Ng et al., 2021; Peterson, 2016), and physical science (Garneli et al., 2019). Previous studies have proved RPGs' potential as an effective approach for promoting knowledge acquisition and cognitive construction (Chen et al., 2021; Daniau, 2016; Liao et al., 2019; O'Brien et al., 2010; Yang & Quadir, 2018). For example, Kusuma et al. (2021) developed an RPG to support students' historical learning, and results showed significant improvement in students' performance. Zhong (2022b) examined the effectiveness of an RPG and also found significant improvement in performance.
PL and RPGs
PL has been demonstrated as superior to traditional one-size-fits-all instructional approaches as it allows students to customize learning based on their own interests and abilities (Martin et al., 2020). Researchers have further indicated that PL is able to engage students in critical thinking and help them achieve a higher level of learning (Zhang et al., 2020).
Thus, PL has been applied to a variety of educational contexts, and the findings of those studies have supported the effectiveness of PL on students' learning outcomes, motivation, and metacognitive skills (Arroyo et al., 2014). In recent years, researchers have attempted to incorporate PL into educational games. From the experimental results, they found that personalized games have the potential to improve students' learning performance and reduce cognitive load (Soflano et al., 2015; Yang & Quadir, 2018). For example, Zualkernan et al. (2010) used students' prior knowledge to determine the subsequent questions in an adaptive RPG environment. Their follow-up case study, using a mixed method, showed positive influence on students' performance. Troussas et al. (2020) used a knowledge assessment module to assess students' knowledge level in the programming language Visual C# and generate personalized quiz questions based on each student's assessment result. Results of their study also yielded positive influence on students' performance. In Ku et al.'s (2016) study, students' cognitive preferences (Holists or Serialists) were utilized to personalize the content layout and navigation support. Results showed the personalized educational game was useful to enhance students' learning performance and reduce cognitive load. Students' facial emotions collected via webcam were used in Tsai et al.'s (2012) study to personalize the game difficulty and the learning materials' difficulty. Increased motivation and satisfaction were reported in their study. Krouska et al. (2020) developed a personalized brain-based quiz game that was able to adapt the quiz content based on students' motivational state. Their study showed that students in the personalized learning group outperformed the non-personalized group. Meanwhile, researchers have pointed out the limitations of incorporating PL into digital RPGs. Plass and Pawar (2020) reviewed various implementation approaches and noticed that the variables considered for personalization were limited to cognitive and motivational variables (e.g., prior knowledge and motivation). This resulted in narrow approaches to personalize educational games. More learner attributes, especially from affective and sociocultural domains (e.g., emotional state), should be taken into consideration to personalize educational games.
SID model
The situational instructional design (SID) model was developed by Zhong and Xu (2019), aiming at addressing individual differences in instruction. The SID model consists of two parts: learning readiness (LR) status and the situational design model. LR status is the core concept of the SID model that assists with identifying students' individual differences in recurrent skills, non-recurrent skills, and willingness (refer to Zhong and Xu (2019) for definitions). The situational design model provides guidelines for designing instructional styles that match each LR status. Each instructional style is a combination of procedural learning activity, supportive activity, and relationship activity (refer to Zhong and Xu (2019) for definitions).
Development of a personalized RPG
The SID model was utilized in this study to guide the development of a personalized RPG because this model has consolidated both cognitive variables (prior knowledge and cognitive skills) and affective variables (emotional state) to design personalized learning. Additionally, the SID model has a sound theoretical basis that informs us how the learning environment should respond to students' differences along the identified variables.
The development of the personalized RPG used RPG Maker MV and contained six steps: (1) identify recurrent and non-recurrent skills; (2) develop the player diagnosis survey; (3) develop personalized responses; (4) determine game flow and interactions; (5) present game content; (6) test and launch.
Identify recurrent and non-recurrent skills
The first step is to identify related domain recurrent and non-recurrent skills. According to Zhong and Xu (2019), recurrent skills refer to students' proficiency in performing routine aspects of the problem, such as explaining the definitions and conducting related procedures. Non-recurrent skills represent students' ability in performing non-routine aspects of the problem, such as evaluating, abstracting, and reasoning. In this study, students will be introduced to situational leadership theories and complete multiple case studies to practice applying the theories. Related recurrent skills include understanding basic concepts of situational leadership (e.g., ability, willingness, and behavioral indicators of ability and willingness), defining the four leadership styles, and describing what each leadership style looks like. Examples of non-recurrent skills include assessing performance readiness level, identifying the matching leadership style, and explaining the results of a mismatch between readiness and leadership style in the specific case context. Recurrent and non-recurrent skills identified in this study are summarized in Table 1.
Develop player diagnosis survey
The player diagnosis survey is used to identify each player's LR status. In this study, we adopted the LR survey used in Zhong and Xu's (2019) study (validated by Delahaye & Smith, 1995; Cronbach's alpha reliability = .73). The six eight-point Likert survey questions were slightly revised to cover the topic in this class (see "Appendix 1" for the player diagnosis survey). The first two questions identify students' recurrent skills and are marked as the r score. The third and fourth questions evaluate students' non-recurrent skills and are marked as the nr score. The last two questions assess students' willingness and are marked as the w score. Each score is the sum of the responses to the two questions, ranging from 2 to 16. If a score falls between 2 and 8, it is stored as 0 in the game; if it falls between 9 and 16, it is stored as 1 in the game. The LR survey was implemented in the game via an NPC when players enter the game (see Fig. 1). Players could not skip the LR questionnaire, as completing it was required to move forward. The matching map and classification of LR status and instruction are provided in Table 2.
Develop personalized responses
Development of personalized responses is to generate the appropriate instructional style for each LR status. In this study, a total of ten case study materials, which were course materials (details of each case study can be found in "Appendix 2"), were utilized to develop the eight instructional styles as described in Table 3. Case studies one, two, and three aimed to develop recurrent skills. Case studies four, five, and six were to develop non-recurrent skills. Case studies seven, eight, nine, and ten focused on applying the theories to solve problems. Relationship activities were provided to the instructor as a separate instruction guide (see "Appendix 3" for a sample guide, adopted from Zhong, 2022c). Examples of high relationship activities included explaining why, emphasizing how to, sharing the responsibility of decision-making, and encouraging questions. Examples of low relationship activities included keeping the emotional level in check, directly explaining specific facts, and encouraging autonomy and freedom for risk-taking.
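Pulling together the survey-scoring and style-assignment steps above, the sketch below shows one way the six Likert responses could be reduced to the binary r, nr, and w flags and combined into one of the eight LR statuses. This is a minimal illustration written for this article, not the game's actual code: the function names are hypothetical, and the flag-to-status encoding is an assumption standing in for the correspondence the paper defines in Table 2.

```python
# A minimal sketch of the LR-status scoring logic described above.
# Names and the flag-to-status encoding are illustrative assumptions.

def binarize(score: int) -> int:
    """Map a two-question Likert sum (2-16) to a binary flag: 0 for 2-8, 1 for 9-16."""
    return 0 if score <= 8 else 1

def lr_status(responses: list[int]) -> int:
    """Compute an LR status (1-8) from six eight-point Likert responses.

    responses[0:2] -> recurrent skills (r), responses[2:4] -> non-recurrent
    skills (nr), responses[4:6] -> willingness (w).
    """
    r = binarize(responses[0] + responses[1])
    nr = binarize(responses[2] + responses[3])
    w = binarize(responses[4] + responses[5])
    # Encode the three binary flags as one of 2**3 = 8 statuses (R1..R8).
    # The actual flag-to-status correspondence is defined by the paper's Table 2.
    return 1 + (r << 2 | nr << 1 | w)

# Example: a player strong on recurrent skills but low elsewhere.
print(lr_status([7, 8, 3, 4, 2, 2]))  # -> a status index in 1..8
```

In the game itself, the stored 0/1 flags would then select which of the eight instructional styles, and hence which sequence of case-study maps, the player is routed to.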
Explanations of instructional styles are summarized in Table 3 (Explanations of instructional styles, adopted from Zhong, 2022c).
Determine game flow and interactions
Game flow depicts the overall flow of the game, from the moment it is launched to the end of the game. It visually shows how the game works and what game players will experience. Figure 2 shows the flow of the personalized RPG in this study. As shown in Fig. 2, players could access the game via different interfaces, such as mobile devices, laptops, and desktops. After launching the game, players would be directed to the player diagnosis component.
Present game content
Game content presentation is to present the personalized responses. Each response consisted of four case studies and was presented as four maps in the game. Interactions were implemented via different NPCs in the maps. Figure 3 is an example of a map presented to the player. The player was directed to the first map and interacted with the NPCs in the map once s/he completed the LR survey. When the first map was completed, the player could use the doors to navigate to the next map, where s/he would complete the next case study. This process would repeat until all four maps were successfully completed. The player would then be directed by an NPC to the exit map, where s/he would end the game (see Fig. 4).
Test and launch
Test and launch are to identify defects and bugs in the game to improve stability and performance. In this study, the game was finally launched as a web-based game (https://bessiezhonglin.itch.io/listo-system-leadership), and a pilot study was conducted to (a) test the quality of the game and (b) examine its effectiveness on students' learning performance and cognitive load. The following section provides detailed descriptions of the pilot study.
Pilot study
Research design
The pilot study utilized a one-group repeated measurement design to investigate the effect of the personalized RPG on students' performance and cognitive load. The rationale is that only one group of participants was available to the study (Gall et al., 2007). Before the intervention, participants' academic performance was assessed by a pretest, and cognitive load was assessed by a survey, to establish the baselines. After the intervention, participants' academic performance was assessed again using a posttest, which was slightly modified from the pretest. Cognitive load was assessed again using the same tool. Differences before and after the intervention were calculated and compared to identify the effectiveness.
Context and participants
This study was conducted in a workforce education program at a large university in the midwestern United States. The game was implemented in the fifth learning module of a 16-week online course. This module introduces situational leadership theories to students. The instructor of the course participated in the development of the module. On the first day of the class, the recruitment letter was emailed to all students who enrolled in this course. A consent form that explained the details of this study was attached in the recruitment email. Forty-one students signed consent forms and indicated their participation in the research, including six males and twenty-two females. All participants were between 24 and 35 years old.
Data collection
Participants' academic performance was evaluated by a pre-/posttest of their knowledge of the respective content. The two tests were created by faculty who had previously taught the content. The two tests were revised and refined by specialists with expertise in assessment and loaded into the learning management system. The full score of each test was 100 points. The pretest was delivered in week four, the week before the intervention. The posttest was delivered in week six, the week after the intervention. Participants' cognitive load was measured by a survey developed and validated by Leppink et al. (2013). This survey consisted of nine Likert scale questions that measured the three types of cognitive load devoted in the learning process. This survey was also delivered to participants before and after the intervention.
Data analysis
Participants' performance on the pre/posttest was graded by two trained raters independently, and the average scores between the two raters were used for analysis. Interrater reliability analysis using Cohen's Kappa was conducted to determine the internal consistency among the raters. Cronbach's alpha was computed to determine the reliability of the cognitive load survey. The Shapiro-Wilk test showed that the data significantly deviated from a normal distribution (W = .95, p = .011 < .05). Thus, the Wilcoxon signed-rank test was performed to determine the impact of the personalized RPG on participants' academic performance and cognitive load. Spearman's rank-order test was conducted to determine the relationship between learning performance and cognitive load.
Results
The interrater reliability for the raters on the pre/posttest was found to be Kappa = .87 > .75, 95% CI (.55-1.00). The cognitive load survey was found to be reliable (Cronbach's α = .897). Demographic characteristics of the participants are summarized in Table 4. Descriptive results for learning performance and cognitive load are summarized in Table 5. To determine the impact on learning performance and cognitive load, the Wilcoxon signed-rank test was conducted. Results (see Table 6) showed significant differences in learning performance (Z = 5.582, p < .001), EL (Z = 5.655, p < .001), and GL (Z = 5.654, p < .001) but not in IL (Z = 1.897, p = .058 > .05). To determine the relationship between learning performance and cognitive load, Spearman's rank-order test was conducted. Results (see Table 7) showed that there was a weak negative correlation between performance and IL (r_s = −.349, p = .025 < .05) and a moderate negative correlation between performance and EL (r_s = −.423, p = .006 < .05). There was a strong positive correlation between IL and EL (r_s = .644, p < .001).
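For readers who want to reproduce this style of analysis, the sketch below walks through the same test sequence (normality check, Wilcoxon signed-rank test, Spearman rank-order correlation) on synthetic data. The arrays generated here are placeholders, not the study's data, and the variable names are illustrative.

```python
# A sketch of the analysis pipeline described above, using synthetic scores.
import numpy as np
from scipy.stats import shapiro, wilcoxon, spearmanr

rng = np.random.default_rng(0)
pre = rng.normal(60, 10, 41)            # hypothetical pretest scores (0-100)
post = pre + rng.normal(12, 5, 41)      # hypothetical posttest scores after the intervention
germane_load = rng.normal(6, 1.5, 41)   # hypothetical survey-subscale means

# Normality check: a significant result motivates the non-parametric tests below.
stat, p = shapiro(post - pre)
print(f"Shapiro-Wilk: W={stat:.3f}, p={p:.3f}")

# Wilcoxon signed-rank test on the paired pre/post performance scores.
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon: statistic={stat:.3f}, p={p:.4f}")

# Spearman rank-order correlation between performance gain and germane load.
rho, p = spearmanr(post - pre, germane_load)
print(f"Spearman: rho={rho:.3f}, p={p:.3f}")
```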
Discussion
This study developed a personalized RPG via the SID model and piloted its effectiveness on students' performance and cognitive load. Results showed that students' performance was significantly improved after training in the personalized RPG environment. This study also observed significantly decreased EL and increased GL. These findings indicate that the personalized RPG environment is effective in improving learning performance, reducing EL, and promoting GL. This is consistent with the studies of Chang et al. (2017), Toth and Kayler (2015), Hwang et al. (2012), Zualkernan et al. (2010), and Zhong (2022d), which observed significant improvement in learning performance and cognitive load among students who were trained in personalized gaming environments. Additionally, this study found a moderate negative correlation between performance and EL and a strong positive correlation between IL and EL. Performance was correlated with IL, but the correlation was weak and negative. A correlation between GL and the other variables was not found in this study. These findings imply that although decreased EL could leave students more GL capacity for efficient learning, this would not necessarily lead to performance improvement. Students need to be motivated to invest sufficient GL to actively process the learning materials and thus improve performance. In this study, students' EL decreased and GL increased, but a correlation between EL and GL was not observed, indicating that the personalized RPG environment constructed in this study has the potential to motivate students and promote sufficient GL investment in efficient learning. This echoes previous empirical studies (e.g., Fadda et al., 2022; Westera, 2019; Yu et al., 2021) as well as related theoretical discussions regarding motivational effects on cognitive load proposed by Paas et al. (2005). These findings also align with cognitive load theory (Kalyuga, 2009; Sweller, 2010, 2020) in that IL is determined by the learning content and cannot be altered by instructional design, whereas EL and GL can be changed by instructional interventions, and decreased EL would improve students' performance.
Implications and limitations
Findings of this study have three implications. First, the SID model could be an effective approach to include cognitive and affective variables when incorporating personalization into educational games. The SID model is a practical approach that not only provides approaches to identify students' cognitive and affective differences but also supplies recommendations on how the learning environment should respond to students' differences via different instructional styles. Second, educational game designers need to consider not only reducing EL but also promoting GL investment in the gaming environment. Reducing EL could increase GL capacity but not the actual GL investment. Third, motivational design is necessary in game design to direct students to procedures that are relevant to learning and to exert sufficient GL. Keller's (1983) ARCS (Attention, Relevance, Confidence, Satisfaction) model is a good approach to incorporate motivational components in educational games. This study implies that motivation theories or models need to be included in current design models to promote efficient cognitive processing in educational games. This study also has two limitations. First, a control group was not included in this study, so it is difficult to determine which game features contributed to students' improvement. Future studies are suggested to include a control group to study how game features impact students' performance. Second, participants' demographic characteristics limit the generalization of this study. Most participants were female students who had limited gameplay experience. Other populations, such as male students with more gameplay experience, may produce different results. Future research is recommended to study more diverse student populations.
Conclusion
This study developed a personalized RPG via the SID model and piloted its effectiveness on students' performance and cognitive load. Results of the pilot study demonstrated the effectiveness of the personalized RPG on students' performance and cognitive load.
This study also found that increased GL capacity would not necessarily lead to performance improvement. Students need to be motivated to invest sufficient GL to actively process the learning materials, and thus, improve performance. Findings of this study revealed the important roles that motivation plays in facilitating efficient learning in gaming environment. Researchers and practitioners are suggested to use findings of this study to guide future game designs. Case seven Case seven describes behavioral interactions between Kelly Fontane (a training specialist for new office staff training) and Julene Garfield (an employee in Kelly's team) regarding encoding tasks and depositing forms. Students will assess Julene's Performance Readiness level for the task of encoding. Students will also assess Kelly's leadership style and the reasons for Kelly to use this particular leadership style Appendix 1: Player diagnosis survey Case eight Case eight describes Yuki Tanaka's (supervisor of the sales representative team at Listo Systems Company) behavioral interactions with her team members when dealing with performance problems in the team. Students will identify and assess Yuki's actions that indicate task behavior and relationship behavior. Based on the assessment results, students will identify Yuki's leadership style and assess the appropriateness of this leadership style Case nine Case nine describes Raul Martinez's, (a graphic design supervisor for Listo Systems Company) behavioral interactions with his group regarding the production in the graphic design department. Students will identify the group's Performance Readiness level for the task of production in the graphic design department and leadership style that best matches the group's Performance Readiness level. Students will also assess leadership style that Raul is using and the indicators that Raul is using the appropriate role. Additionally, students will explain potential hindering roles that Raul could be using if there is a performance readiness/style mismatch Case ten Case ten describes Michelle Hoffman's (supervisor of the market research group) behavioral interactions with her team when doing market research. Students will identify the group's Performance Readiness level for the task of doing market research and leadership style that best matches the group's Performance Readiness level. Students will also assess leadership style that Michelle is using and explain the group's likely response to the mismatch of leadership style Appendix 3: Sample relationship activity instruction guide For R1 students, the instructor is suggested to provide direct explanations of the game tasks; provide game task information in digestible amounts; help the student step by step and avoid overwhelming; instruction focuses on task completion; reinforce small improvements; explain consequences of nonperformance, such as not completing the game; check emotional level regularly. For R2 students, the instructor is suggested to explain consequences of nonperformance, such as not completing the game; encourage trying; support risk-taking; praise and build confidence; ask students question to clarify their understandings of the game tasks; discuss details of the game tasks; explore related non-recurrent skills; compliment students when they finish the game tasks. 
For R3 students, the instructor is suggested to provide direct explanations of the game tasks; support risk-taking; praise and build confidence; discuss details of game tasks; ask students questions to clarify their understanding of the game tasks; encourage students to ask questions; and compliment students when they finish the game tasks.
For R4 students, the instructor is suggested to explain consequences of nonperformance, such as not completing the game; seek "buy-in" through persuading; discuss details of game tasks with students; praise and build confidence; and compliment students when they finish the game tasks.
For R5 students, the instructor is suggested to provide game task information in digestible amounts; help the student step by step; ask students questions to clarify their understanding of the game tasks; discuss details of the game tasks; encourage students to ask questions; and explore related non-recurrent skills.
For R6 students, the instructor is suggested to ask students questions to clarify their understanding of the game tasks; discuss details of the game tasks; explore related non-recurrent skills; and reinforce small improvements.
For R7 students, the instructor is suggested to provide direct explanations of the game tasks; help the student step by step; focus instruction on task completion; and reinforce small improvements.
For R8 students, the instructor is suggested to monitor gameplay activities; provide relatively light supervision regarding game completion; give freedom for risk-taking; and encourage autonomy of gameplay, such as exploring other maps in the game.
Abbreviations
RPGs: Role-playing games
PL: Personalized learning
IL: Intrinsic load
EL: Extraneous load
GL: Germane load
SID: Situational instructional design
LR: Learning readiness
2022-12-24T16:38:53.502Z
2022-12-21T00:00:00.000
{ "year": 2022, "sha1": "ab23f36bc3ff11d3840bd4ab796f919301fb9648", "oa_license": "CCBY", "oa_url": "https://slejournal.springeropen.com/counter/pdf/10.1186/s40561-022-00219-5", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b1c0491143d7c67be665d576f659d549ad49e93a", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
203251290
pes2o/s2orc
v3-fos-license
A Study on Marketing Behaviour of Tomato Growers in Shivpuri District M.P, India
Tomato, known scientifically as Solanum lycopersicum, is one of the important and popular vegetables and plays an important role in balanced nutrition. Tomatoes are a major source of the antioxidant lycopene, which has been linked to many health benefits, including reduced risk of heart disease and cancer. They are also a great source of vitamin C, potassium, folate, and vitamin K. The water content of tomatoes is around 95 per cent; the other 5 per cent consists mainly of carbohydrates and fiber. The annual production of tomatoes is 20 million tonnes in the country, and Madhya Pradesh is the leader in tomato production in India, with 15.75% of the total production.
Introduction
There is huge potential for the processing industry in Madhya Pradesh due to the high volume of the harvest, which becomes financially attractive from an investment point of view. Besides the volume cost benefits, the state government is also giving attractive incentives. The present study was carried out in Shivpuri district (M.P.) due to the maximum area (8145 ha) and production (252495 MT) of tomato in this district. There are 8 blocks in Shivpuri district, out of which 3 blocks were selected on the basis of maximum area and production. 120 respondents were selected with the help of simple random sampling. The objective of this study is to study the marketing behaviour of tomato growers. Marketing behaviour is the behaviour that consumers display in searching for, purchasing, selling, using, evaluating, and disposing of products and services that they expect will satisfy their needs. The study revealed that the majority of the respondents (75%) had a medium level of overall marketing behaviour, followed by low (18.3%), and only 6.7 per cent had a high level of marketing behaviour.
Materials and Methods
This study was carried out in Shivpuri district of Madhya Pradesh. Shivpuri district is situated in the central Indian state of M.P. and was chosen due to the maximum area (8145 ha) and production (252495 MT) of tomato in this district. 8 blocks come under Shivpuri district, out of which 3 blocks (Pohari, Kolaras, and Shivpuri) were selected on the basis of maximum area and production. 120 respondents were selected with the help of simple random sampling without replacement. Primary data were gathered from the respondents by using a semi-structured interview schedule, which was pretested before actual application. So that the farmers could understand the questions well and answer them, Hindi was used in the interview schedule. For knowledge, a score of 3 for complete, 2 for partial, and 1 for low knowledge of each practice was assigned.
Results and Discussion
The study of the dependent variable was made with reference to the marketing behaviour of tomato growers.
Table 1 shows that, regarding the reasons for selling at any time, the majority of the respondents (93.33%) cited lack of cold storage, followed by domestic financial requirements (90.83%), repaying loans (68.33%), lack of quality (44.17%), and the produce being highly perishable (40 per cent). Regarding where growers want to sell tomato, the majority of the respondents (80%) expressed that they sold their produce directly to the wholesaler, followed by directly to the cooperative committee (38.33%), to the middleman (30.83%) through the commission agent, and to the retail salesperson (7.5%). Finally, regarding the reason for selling tomato at a certain place, the majority of the respondents (95.83%) cited good sales, followed by good transport facilities (87.5%), getting good value (79.17%), lack of mediators (47.5%), and availability of business facilities (29.17%).
Overall marketing behaviour of tomato growers
Marketing behaviour is the behaviour that consumers display in searching for, purchasing, selling, using, evaluating, and disposing of products and services that they expect will satisfy their needs. It is obvious from Table 2 that the majority of the respondents (75%) had a medium level of overall marketing behaviour, followed by low (18.3%), and only 6.7 per cent had a high level of marketing behaviour.
2019-09-27T07:34:35.984Z
2020-06-20T00:00:00.000
{ "year": 2020, "sha1": "c68bc81f7a0698ed691b954291d2d68562512231", "oa_license": null, "oa_url": "https://www.ijcmas.com/9-6-2020/Sonare%20Rashmita,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "183bfdcfc136587be2eb64dc2fdb9b3cf5bf615e", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Business" ] }
230557770
pes2o/s2orc
v3-fos-license
Research on Operation Characteristics and Safety Risk Forecast of Bus Driven by Multisource Forewarning Data
To prevent and control public transport safety accidents in advance and guide the safety management and decision-making optimization of public transport vehicles, based on the forewarning and other multisource data of public transport vehicles in Zhenjiang, holographic portraits of public transport safety operation characteristics are constructed from the perspectives of time, space, and driver factors, and a prediction model of fatigue driving and driving risk of bus drivers based on a BP neural network is constructed. Finally, model checking and virtual simulation experiments are carried out. The results of the research show that the driver's fatigue risk during the period of 7:00-8:00 am is much higher than in other periods. When the bus speed is about 30 km/h, driver fatigue forewarning events occur the most. Drivers aged 30-34 years have the largest proportion of vehicle abnormal forewarnings, drivers aged 40-44 years have the largest proportion of fatigue forewarning events, and drivers with a driving experience of 15-19 years have the largest overall proportion of various forewarning events. When the vehicle speed range is (18, 20) km/h and (42, 45) km/h, the probability of fatigue driving risk and driving risk forewarning increases sharply; and when the vehicle speed is lower than 17 km/h or 41 km/h, the probability of fatigue driving risk and driving risk forewarning, respectively, is almost zero. The probability of fatigue forewarning during low peak hours on rainy days is about 30% lower than that during peak hours. The probability of driving forewarning during flat peak hours is 15% higher than that during low peak hours and about 10% lower than that during peak hours. This paper realized for the first time the use of real forewarning data of buses covering the full time, the whole region, and the full cycle to carry out research. Related results have important theoretical value and practical significance for scientifically guiding the safe operation and emergency management strategies of buses, improving the service level of bus passenger transportation capacity and safe operation, and promoting the safe, healthy, and sustainable development of the public transportation industry.
Background Introduction
At present, China mainly evaluates the safety of buses based on the incidence of traffic accidents. The evaluation indicators and analysis methods are relatively single, and there is still a lack of accurate control, effective prevention, and emergency management countermeasures. Since 2019, with the integration and system development of BDS, video, radar, and other technologies, buses in some Chinese cities have installed vehicle driving safety forewarning systems, enabling holographic perception, dynamic monitoring, and risk reminders during the bus operation process [1]. By acquiring the historical vehicle forewarning data of Zhenjiang Public Transport Company in Jiangsu Province of China, this paper realized for the first time the use of real forewarning data of buses covering the full time, the whole region, and the full cycle to carry out research. This paper mines the general rules and main hidden dangers of vehicle forewarning events and carries out objective analysis and situation prediction of bus operation risks [2].
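The paper's own BP-network details (inputs, architecture, training procedure) are not reproduced here, so the sketch below is only a minimal illustration of what such a risk classifier could look like, using scikit-learn's MLPClassifier (a feed-forward network trained by backpropagation). The feature set and the synthetic labels are hypothetical, loosely echoing the patterns the abstract reports (e.g., elevated fatigue forewarnings around 7:00-8:00 am and ~30 km/h).

```python
# A minimal sketch of a BP neural network risk classifier; all data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 2000
# Hypothetical features: speed (km/h), hour of day, weather code, driver age, years of experience.
X = np.column_stack([
    rng.uniform(0, 60, n),    # vehicle speed
    rng.integers(5, 23, n),   # hour of day
    rng.integers(0, 4, n),    # weather: 0 sunny, 1 rain, 2 fog, 3 snow
    rng.integers(25, 55, n),  # driver age
    rng.integers(1, 25, n),   # driving experience (years)
])
# Toy label: higher fatigue-forewarning risk near 30 km/h during the 7 am hour.
risk = (np.abs(X[:, 0] - 30) < 8) & (X[:, 1] == 7)
y = (risk | (rng.random(n) < 0.05)).astype(int)

# Scale inputs, then train a small backpropagation-trained network.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0),
)
model.fit(X, y)
# Predicted forewarning probability for one hypothetical observation.
print(model.predict_proba([[30, 7, 1, 42, 16]])[:, 1])
```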
Relevant research conclusions have important practical significance for improving the safe operation of buses, carrying out corresponding optimized dispatching [3] and emergency management, eliminating hidden dangers of bus operation, and improving and promoting the convenience, safety, and sustainable development of public transportation [4]. Many studies believe that driver fatigue and the driver's driving state are the most important factors affecting urban public transport safety, and the driver's state is affected by the driver's attributes, the external environment, and other aspects. Relevant scholars have researched factors affecting the safe operation of buses, mainly as shown in Table 1. By studying the factors that affect the severity of bus collisions [5], it can be seen that factors such as start inhibition, automatic door opening, bus materials, and internal structure are related to bus safety. Research on perception and driving behavior [6] has shown that drivers who have experienced accidents are more likely to have collision accidents in the future. By studying the factors affecting road traffic accidents, it is known that advanced driver assistance systems (ADAS) [7] can provide drivers with safety support and help avoid distractions. Besides, vehicle anticollision forewarning strategies [8,9] have been formulated through the study of the driver's reaction time when a collision occurs. The Palm probability distribution method [10] has been used to study road accident risk under different weather conditions; the results show that the accident risk probability in snow is higher than in rain, and the greater the precipitation intensity, the higher the relative accident risk probability. Secondly, the logistic regression model [11] has been used to study the correlation between the driver's age, gender, vehicle, road environment, and other factors and the severity of traffic accidents. The results show that road infrastructure conditions and the driver's age have a significant impact on the severity of road traffic accidents. By using a logarithmic linear model [12] to study the impact of time factors on the severity of bus driver collision injuries, the results show that driving late at night or in the early morning will increase the risk of serious injury to bus drivers. Many scholars have researched vehicle safety characteristics and management requirements, mainly as follows. Firstly, utilizing historical traffic data in the USA from 2005 to 2009, the potential risk factors of public transportation safety accidents were summarized [13], and it was found that the driver is the main factor in the occurrence of public transportation safety accidents. Studies have shown that as the driver's attention changes, there are significant differences in eye movement and gear operation [14]. At the same time, the characteristics of steering wheel operation are related to the characteristics of vehicle movement during lane changes [15]. Secondly, algorithms and models have been used to analyze the causes of traffic accidents and make predictions. By using a decision tree algorithm [16] to study the causes of vehicle collision accidents, the results show that human factors are the most important factor causing traffic accidents. According to research results on drivers' steering characteristics, an evaluation model used to improve the steering stability of the car was established [17].
In [18], a backpropagation neural network model and a generalized linear mixed model were used to analyze multisource traffic data, which showed that flow plays an important role in vehicle collision prediction. Thirdly, a variety of models have been built to better predict vehicle safety. A traffic accident model based on collaboration theory [19] was proposed to analyze accident scene data by combining driving comfort thresholds. The dynamic prediction model of vehicle operation trajectory based on vehicle trajectory data [20] can calculate the suspicious collision position of the vehicle. The perceived safety of self-driving cars and their application value in transportation and road safety [21] were derived from the analysis of the driving habits of 1,205 regular vehicle drivers. A hidden Markov model [22] was proposed by analyzing a large amount of traffic trajectory data, and it was verified that the model can better predict the occurrence of traffic conflicts. Related scholars have also carried out many studies on the safe operation and management of buses. Firstly, preventive measures [23] have been proposed through the identification and risk analysis of bus drivers' dangerous behaviors. Risk assessment and analysis of hazard sources of road traffic safety risks have been carried out through the application of the road traffic safety risk index evaluation method [24], and a corresponding road traffic safety risk monitoring index system has been constructed. Besides, aiming at the main problems of safety management, traffic safety management countermeasures [25] have been proposed to reduce drivers' unsafe behavior, improve the vehicle safety level, reduce the accident rate, and ensure the safe operation of buses. Secondly, the safety of the driver's visual perception of dangerous areas has been studied by analyzing the eye movement data of lane changing, cornering, and straight driving [26], and an evaluation method based on the driver's visual perception of safety indicators has been established. In [27], a psychological fatigue evaluation system for bus drivers was constructed, and the authors proposed targeted suggestions to reduce driving fatigue. A method [28] that can evaluate the driver's potential danger prediction ability and the rationality of the system was designed. In [29], a method to analyze the safe operation of buses was proposed based on big trajectory data. In particular, research on the clustering characteristics of road safety factors [30] such as driver, vehicle, road, and environment under different accident types has been conducted. According to the actual situation of vehicle safety prediction, different research methods have been proposed to make the prediction results more accurate. Thirdly, in the establishment of a bus speed model, parameters such as bus flow and bus ratio [31] were introduced, and a bus speed control system was designed, which realized dynamic monitoring of the vehicle running speed. A driver evaluation system based on the principal component analysis method [32] was established through the analysis of questionnaire information from bus drivers. The research results show that the driver's driving habits and individual characteristics have a significant impact on driving behavior. Finally, by introducing the practice of traffic congestion charging in Singapore and London [33], it is concluded that public transportation congestion charging should be based on scientific planning and support the sustainable development of public transportation.
Besides, the analysis method and test method of an index system [34] have been used to classify the sustainable development of urban transportation. An evaluation index system and evaluation model of urban transportation sustainable development, based on the theory of urban transportation sustainable development, have been established. This research provides a new perspective on urban sustainable development research. In summary, objective, real, comprehensive, and effective historical operation data are a prerequisite for accurately studying the bus safety operation situation and for risk management. Existing studies mainly use vehicle accident data, vehicle trajectory data, laboratory data, and questionnaire survey data to research vehicle safety characteristics, dangerous driving behavior, or risk situations. Because of the contingency of vehicle accidents and the incompleteness of data collection, it is difficult to achieve a comprehensive analysis of the bus operation state and an accurate prediction of safety risks [35]. This paper overcomes the shortcomings of the existing research by making full use of the safety forewarning system installed on public vehicles, obtaining real mass historical data on all kinds of public transport forewarnings, carrying out model construction and simulation analysis, providing auxiliary decision-making for bus operation, dispatching, and safety management, and promoting the healthy, green, and sustainable development of urban public transportation [36]. Data Acquisition Process The bus forewarning system installed by Zhenjiang Public Transport Company integrates technologies such as ADAS yaw forewarning [37], fatigue driving video analysis, and a BDS terminal [38]. It realizes the real-time upload of vehicle operating data and ensures the accuracy and reliability of the data. The forewarning equipment is shown in Figure 1. This research makes full use of the vehicle forewarning equipment and vehicle forewarning data platform of Zhenjiang Public Transport Company to obtain historical bus operating data. Forewarning Equipment 2.1.1. ADAS Vehicle Yaw Forewarning System. ADAS stands for the Advanced Driving Assistance System. The system uses a camera located on the windshield to monitor the lane markings on the road ahead. When the system detects that the vehicle has deviated from the lane [39], it issues a forewarning to the driver. Fatigue Driving Analysis Equipment. The fatigue driving analysis equipment uses advanced AI video analysis technology [40] to accurately recognize the driver's facial characteristics. At the same time, it can record and warn of the driver's fatigue characteristics. BDS. The system receives the multifrequency positioning signals of artificial satellites to achieve precise positioning. It can calculate the distance to the vehicle ahead, consider the relative speed of the vehicles, determine the possible collision time, and issue a forewarning to the driver. Forewarning System and Data Platform. The forewarning system realizes real-time monitoring and summarizing of vehicle information [41], covering 7 forewarning types: eyes closed, yawn, glance about, lane departure, rapid acceleration, rapid deceleration, and forward collision. This paper classifies these forewarning types, grouping eyes closed, yawn, and glance about as driver fatigue forewarnings, and rapid acceleration, rapid deceleration, forward collision, and lane departure as vehicle abnormal state forewarnings, as shown in Figure 2.
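As a minimal sketch of this two-category grouping, the following Python fragment shows how raw platform events could be mapped to the two categories used throughout the paper; the field names and record layout here are hypothetical, not taken from the actual platform:

import collections

# Hypothetical mapping of the 7 platform forewarning types to the two
# categories used in this paper.
FATIGUE = {"eyes closed", "yawn", "glance about"}
ABNORMAL = {"rapid acceleration", "rapid deceleration",
            "forward collision", "lane departure"}

def categorize(forewarning_type: str) -> str:
    """Map a raw forewarning type to 'driver fatigue' or 'vehicle abnormal state'."""
    if forewarning_type in FATIGUE:
        return "driver fatigue"
    if forewarning_type in ABNORMAL:
        return "vehicle abnormal state"
    raise ValueError(f"unknown forewarning type: {forewarning_type}")

def count_by_category(records):
    """Count records per category; each record is assumed to carry a 'type' field."""
    return dict(collections.Counter(categorize(r["type"]) for r in records))

A grouping of this kind is all that is needed to turn the platform's raw event stream into the two category counts analyzed in the following sections.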
Data platforms mainly include current online status, forward forewarning, driver forewarning, the total number of abnormalities, vehicle distribution, forewarning type distribution, forewarning occurrence trend, and other data. This paper obtained 297,189 forewarning records from November 2019 to March 2020 through the forewarning platform system of Zhenjiang Public Transport Company. The original forewarning data mainly include the license plate number, forewarning time, forewarning type, forewarning level, forewarning speed, latitude and longitude coordinates of the forewarning point, location of the forewarning point, driver name, and other information. Research Period and Weather Conditions. Since the system started trial operation at Zhenjiang Public Transport Company in October 2019, this paper selected a 27-day official operation period from November 2019 to March 2020 for the research, as shown in Table 2. Data Cleaning. Since the actual data obtained may suffer from missing values, disordered formats, abnormal records, and other problems, cleaning the data is an indispensable step. The principles of data cleaning [42] should ensure the accuracy, completeness, consistency, uniqueness, and timeliness of the data. The cleaning process consists of: (1) Supplementing incomplete data (2) Detecting and resolving error values or abnormal values (3) Detecting and eliminating duplicate records (4) Detecting and resolving data inconsistencies The obtained forewarning data set and the driver information data set were associated and fused through their data tables [44]. Finally, 297,189 forewarning samples were associated with 1,435 driver data samples. The fused research data samples are shown in Table 3. Analysis of the Bus Forewarning Characteristics from Multiple Perspectives There are many factors related to safe bus operation, and different factors have different effects on bus forewarnings [45,46]. To study the bus forewarning characteristics under different weather conditions, road sections, driver characteristics, and periods, this paper builds a holographic portrait of bus operation from multiple perspectives such as weather, time, space, speed, and driver characteristics [47], as shown in Figure 3. Multisource forewarning data are used to study the influence mechanism of the various factors on bus forewarnings. Weather Distribution. Vehicle operation safety is closely related to bad weather conditions [48]. This paper compares and analyzes the bus forewarning data under four weather conditions: sunny, rain, fog, and snow. The results are shown in Table 4. From the analysis of Figure 4, it can be seen that the proportions of forewarnings on sunny and foggy days are relatively large. The total proportion of forewarnings on sunny days reaches 31.29%, mainly because of sun glare and dizziness: drivers easily become sleepy and fatigued. The total proportion of forewarnings on foggy days reaches 31.27%, mainly because of low air visibility, an obstructed line of sight, and a low road adhesion coefficient: drivers need to maintain a high degree of attention for a long time and are prone to fatigue. The number of vehicle abnormal state forewarnings is greater than the number of driver fatigue forewarnings under all weather conditions.
The vehicle abnormal state forewarnings mainly refer to situations such as rapid acceleration, rapid deceleration, forward collision, and lane departure, which are not only closely related to the driver's bad driving behavior but are also affected by road facilities, the traffic environment, and other restrictive factors, resulting in a higher proportion. Time Distribution. The bus forewarning data are summarized by time, and the result is shown in Figure 5. The analysis shows the following: (1) Both the driver fatigue forewarnings and the vehicle abnormal state forewarnings show a three-stage pattern of rising, local fluctuation, and falling over time. (2) The first stage is a rapid rise period. Spatial Distribution. The driver fatigue forewarning and vehicle abnormal state forewarning data are imported into the electronic map of Zhenjiang, and the locations of the relevant forewarning samples are mapped onto the map, as shown in Figure 6. Using the kernel density analysis method [49], the density distribution of the driver fatigue forewarnings is obtained. From Figure 7, the following holds: (1) The two forewarning types are generally consistent in their spatial distribution over the urban road network. However, Beigu Mountain is an AAAAA-level tourist scenic spot in Zhenjiang, Jiangsu, where the traffic volume of buses, private cars, tourist buses, and walking tourists is large and congestion on the surrounding road network is serious. In this traffic environment, drivers are prone to emergency maneuvers, which induce forewarnings of abnormal vehicle status. Speed Distribution. The period from December 2019 to January 2020 was selected as the research period, and the vehicle speed at which each forewarning event occurred was analyzed; a total of 297,189 data samples were obtained and summarized by speed, as shown in Table 5. The speed characteristics under different forewarning types are studied in Figure 8. It can be seen from Figure 8 that as speed increases, the numbers of driver fatigue forewarnings and vehicle abnormal state forewarnings both fluctuate to a certain extent. The number of driver fatigue forewarnings peaks at 30 km/h, and there are fewer driver forewarnings when the speed is below 15 km/h or above 70 km/h. Since most buses operate in urban areas, speeds on urban roads are not high; when the vehicle speed exceeds 60 km/h, the numbers of driver fatigue forewarnings and vehicle abnormal state forewarnings remain at a low level. Both forewarning types show some outliers. The speed at which driver fatigue state forewarnings occur is generally greater than the speed at which vehicle abnormal state forewarnings occur. To facilitate the analysis of the vehicle speed distribution under different forewarning densities, this paper divides the forewarning occurrence areas into three types: low, medium, and high. A correlation analysis of the speed at which forewarnings occur in the three types of regions is carried out, and the speed characteristics are studied, as shown in Figures 9 and 10. It can be seen from Figure 9 that in the low forewarning density areas the largest proportion of forewarnings occurs at speeds of 30 km/h-39 km/h, and the smallest proportion at speeds below 20 km/h.
In areas with low forewarning density, such as suburban areas, there are fewer vehicles, roads are smooth, and the number of forewarnings is small; the number of forewarnings is highest when the bus speed reaches about 35 km/h. In the medium forewarning density areas, the largest proportion occurs at speeds of 30 km/h-39 km/h and the smallest proportion at speeds above 60 km/h; in the high forewarning density areas, the largest proportion occurs at speeds of 20 km/h-29 km/h and the smallest proportion at speeds above 60 km/h. In high forewarning density areas such as the city center, buses travel slowly due to traffic congestion and a large number of forewarnings are generated; the number of forewarnings peaks at speeds of 20 km/h-29 km/h. To analyze the number of forewarnings occurring per unit area in each region more reasonably, this paper proposes the definition of unit forewarning density, that is, unit forewarning density = number of forewarnings / forewarning area. It can be seen from the analysis of Figure 10 that the forewarning frequency at every speed in the low forewarning density areas is low. In the medium forewarning density areas, the forewarning frequency peaks at speeds of 30 km/h-39 km/h. In the high forewarning density areas, the forewarning frequency is highest at speeds of 20 km/h-29 km/h. Driver Characteristics. This paper takes 324 drivers of Zhenjiang Public Transport Company as the research object, analyzes the influence of the drivers' age, driving years, gender, and educational background on bus forewarnings [50], and studies the distribution of forewarnings under each of these factors. Driving Years. The statistical analysis of the forewarning data of bus drivers with different driving years is shown in Table 6. In terms of the total number of forewarnings, among the four driving-experience groups, drivers with 15 to 19 years of driving experience have the most forewarnings, accounting for about 35.66% of the total, and drivers with fewer than 5 years of driving experience have the fewest, accounting for about 0.68% of the total. As shown in Figure 11, drivers with 15 to 19 years of driving experience have the largest numbers of both driver fatigue forewarnings and vehicle abnormal state forewarnings. Drivers in this group are more daring after having accumulated driving experience, have more aggressive driving styles, and are prone to aggressive maneuvers, so they more readily trigger forewarnings of abnormal vehicle conditions. Age. The statistical analysis of the forewarning data of bus drivers of different ages is shown in Table 7. In terms of the total number of forewarnings, among the eight age groups, drivers in the 40-44 age group have the most forewarnings, accounting for about 23.13% of the total. Drivers under the age of 25 have the smallest proportion, about 0.58%. As shown in Figure 12, by forewarning type, drivers in the 40-44 age group have the largest number of fatigue driving forewarnings, and drivers in the 30-34 age group have the largest number of vehicle abnormal state forewarnings. This is because drivers aged 30-34 have a more aggressive driving style, are more daring after having accumulated some driving experience, and are more likely to make aggressive maneuvers. Drivers of all ages are generally more likely to trigger vehicle abnormal state forewarnings than driver fatigue forewarnings.
This is related to the fact that drivers drive more aggressively when they judge conditions to be safe. Gender. The statistical analysis of the forewarning data of bus drivers of different genders is shown in Table 8. In terms of the total number of forewarnings, male drivers are far more likely to trigger forewarnings than female drivers: males account for approximately 91.44% of all forewarnings and about 91.26% of driver fatigue forewarnings, which is partly explained by the high proportion of male drivers in public transportation companies. As shown in Figure 13, comparing the numbers of forewarnings for male and female drivers, for male drivers the number of vehicle abnormal state forewarnings is higher than the number of fatigue forewarnings, while for female drivers the numbers of fatigue forewarnings and abnormal state forewarnings are very similar. Educational Background. The statistical analysis of the forewarning data of bus drivers with different educational backgrounds is shown in Table 9. Drivers with junior high school education and below have the most forewarnings, accounting for about 64.28% of the total; drivers with high school education account for the fewest, about 12.98% of the total. As shown in Figure 14, as the driver's educational background changes, the number of driver fatigue forewarnings and the number of vehicle abnormal state forewarnings show roughly the same pattern. Research on Risk Prediction of Public Transportation Safety Based on BP Neural Network Model The BP neural network is a concept proposed by Rumelhart, McClelland, and colleagues. It is a multilayer feedforward neural network with error backpropagation. The model has the ability to classify arbitrarily complex patterns and excellent multidimensional function mapping ability, and it is suitable for complex nonlinear systems such as bus safety risk prediction. Structure of BP Neural Network. The topological structure of the BP neural network is shown in Figure 15: x_1, x_2, ..., x_n is the input vector of the BP neural network, y_1, y_2, ..., y_m is the output vector, and ω_ij and ω_jk are the weights of the BP neural network. The BP neural network can be regarded as a nonlinear function, with the network input values and predicted values as the independent and dependent variables of the function. When the number of input nodes is n and the number of output nodes is m, the BP neural network expresses a functional mapping from n independent variables to m dependent variables [51]. Error Backpropagation Algorithm. As a multilayer feedforward neural network, the BP neural network is characterized by forward signal transmission and error backpropagation. In forward transmission, the input signal is processed layer by layer from the input layer through the hidden layers to the output layer, and the neuron states of each layer only affect the neuron states of the next layer. If the output layer does not produce the expected output, the network switches to backpropagation and continuously adjusts the network weights according to the prediction error so that the predicted value of the model converges gradually. The error backpropagation algorithm of the BP neural network [52] can be expressed as follows: δ_L = t − y, δ_l = (W_l δ_{l+1}) ⊙ f′(X_l). (2) In formula (2), δ_l is the learning signal of layer l, δ_L is the learning signal of the output layer, t is the label value, y is the predicted value, X_l is the output signal of layer l, X_L is the output signal of the penultimate layer, W_l is the weight vector between layers l and l + 1, and W_L is the weight vector between the penultimate layer and the last layer.
The weight adjustment function of the BP neural network [53] is as follows: ΔW_l = −η (∂E/∂W_l), ΔW_L = −η (∂E/∂W_L). (3) In formula (3), ΔW_L is the adjustment value of the weight vector between the penultimate layer and the last layer of the BP neural network, ΔW_l is the adjustment value of the weight vector between layers l and l + 1, η is the learning rate, and E is the cost function. Establish a Network Structure. Determining the network structure is an important part of constructing a BP neural network and directly determines the training speed and prediction accuracy of the model. Generally speaking, the more hidden layers and nodes in the network topology, the stronger the generalization ability of the model and the higher its accuracy. However, an excessively complex network leads to a slow training rate and is more prone to overfitting, while too simple a network topology makes it difficult to establish the complex mapping between feature variables and predictions and to achieve good prediction results. Based on experience and repeated attempts, this paper adopts a double hidden layer structure with 100 and 50 nodes, respectively, which gives good prediction results [54]. Selection of Learning Rate. The learning rate is an important parameter in model optimization, determining the speed of model learning and the convergence of the model. Too large a learning rate causes the model accuracy to oscillate and makes convergence difficult; too small a learning rate leads to slow model adjustment. In this paper, 0.01 is selected as the learning rate of the model; at this value the model converges quickly and the oscillation amplitude is small [55]. Activation Function Selection. The activation function in a BP neural network increases the nonlinearity of the network so that the model has sufficiently complex function mapping capability; different activation functions suit different uses [56]. (1) tanh Function. In this paper, the tanh function is selected as the transfer function of the model. The tanh function is the hyperbolic tangent function. It maintains a nonlinear, monotonic relationship between output and input, which conforms to the gradient-based training of the BP network, and it has good fault tolerance and is bounded. Besides, compared with the sigmoid activation function, the tanh function alleviates the vanishing gradient problem to a certain extent. Its formula is as follows: tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)). (4) In formula (4), tanh(x) is the value of the hyperbolic tangent function, x is the input variable, and e is the natural constant. (2) Softmax Function. In this paper, the softmax function is selected as the classifier of the model output. The softmax function is the normalized exponential function, which normalizes the gradient logarithm of a finite discrete probability distribution. It normalizes the vector, highlights the maximum value, suppresses the other components far below the maximum, and intuitively expresses the confidence that a sample belongs to a certain class. The formula is as follows [57]: softmax(X)_i = e^(x_i) / Σ_{j=1}^{n} e^(x_j), i = 1, 2, ..., n. (5) In formula (5), X is the input vector; softmax(X)_i is the i-th value of the softmax function applied to X; x_i and x_j are the i-th and j-th components of the vector X, respectively; n is the length of the vector X; and e has the same meaning as above.
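To make these pieces concrete, the following is a minimal NumPy sketch of a 13-100-50-2 BP network with tanh hidden layers, a softmax output, and gradient descent at learning rate 0.01. It is an illustration of the standard algorithm under the choices stated above (and the cross-entropy cost adopted in the next subsection), not the authors' implementation; all names are hypothetical:

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract the row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class BPNet:
    """13-100-50-2 feedforward network: tanh hidden layers, softmax output."""
    def __init__(self, sizes=(13, 100, 50, 2), lr=0.01):
        self.lr = lr
        self.W = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, X):
        """Return the activations of every layer, input included."""
        acts = [X]
        for W, b in zip(self.W[:-1], self.b[:-1]):
            acts.append(np.tanh(acts[-1] @ W + b))               # hidden layers: tanh
        acts.append(softmax(acts[-1] @ self.W[-1] + self.b[-1]))  # output layer: softmax
        return acts

    def train_step(self, X, T):
        """One gradient-descent step on a batch; T holds one-hot labels."""
        acts = self.forward(X)
        delta = (acts[-1] - T) / len(X)  # softmax + cross-entropy output error signal
        for l in range(len(self.W) - 1, -1, -1):
            gW, gb = acts[l].T @ delta, delta.sum(axis=0)
            if l > 0:
                # propagate the error signal through tanh: tanh'(u) = 1 - tanh(u)^2
                delta = (delta @ self.W[l].T) * (1.0 - acts[l] ** 2)
            self.W[l] -= self.lr * gW
            self.b[l] -= self.lr * gb
        # return the cross-entropy cost for monitoring the learning curve
        return -np.mean(np.sum(T * np.log(acts[-1] + 1e-12), axis=1))

In use, X would hold the normalized feature rows and T the one-hot labels described in the next subsection; calling train_step repeatedly over several hundred cycles mirrors the iterative training procedure reported in the experiments.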
Cost Function Selection. The cost function is mainly of two types: the quadratic cost function and the cross-entropy cost function [58]. The quadratic cost function is mainly used for regression problems. For the classification problem addressed in this paper, the cross-entropy cost function is generally selected (with labels processed by one-hot encoding), and the formula is as follows: E = −Σ_i t_i ln(y_i). (6) In formula (6), E is the cost function value, t is the true label value, and y is the predicted value of the model. Besides, the cross-entropy cost function avoids a drawback of the quadratic cost function, namely that when the error is larger, the gradient of the activation function can be smaller, resulting in slow convergence. Data Preprocessing (1) Normalization. To reduce the influence of the initialization values and accelerate the convergence of the BP neural network, normalization is generally adopted as preprocessing. In this paper, the maximum-minimum method is used to normalize the continuous feature variables [59], with the following formula: x_k = (x − x_min) / (x_max − x_min). (7) In formula (7), x_min is the minimum value of the feature over all samples, x_max is the maximum value of the feature over all samples, and x_k is the feature value after normalization. (2) One-Hot Encoding. To feed categorical and discrete variables into the model numerically, such features must be mapped to a Euclidean space, and one-hot encoding is one of the most effective ways to achieve this. One-hot encoding, also known as one-bit effective encoding, uses multibit status registers to encode multiple states: if a feature has m values, it becomes m binary features after one-hot encoding. Construction and Application of the Prediction Model. According to the BP neural network model constructed in Section 4.2, 13 features such as weather conditions, driver data, driving period, and driving speed are taken as the input of the model, and the driver's forewarning state is taken as the output. The network topology "13-100-50-2" is adopted, with the tanh function as the transfer function and the softmax function as the output classifier. The cross-entropy cost function is selected as the cost function of the model, and after repeated attempts the learning rate is set to 0.01, which ensures stable convergence of the model. The specific form of the model is shown in Figure 16 [60]. Fatigue Driving Prediction Model (1) Investigation of Convergence and Dispersion. Two-thirds of the samples are randomly selected as the training set and the remaining third as the test set, and the BP neural network is trained for 500 iteration cycles. The learning curve of the fatigue driving prediction model is shown in Figure 17 [61]. As shown in Figure 17, the analysis yields the following: the fatigue driving prediction model converges well, and the learning curve flattens around the 100th training cycle; during the whole 500-cycle iteration process there is no large-scale oscillation, and the fluctuation amplitude gradually decreases over the training cycles; the model performs well on the test set, still reaching an accuracy of 79% while relying on a large number of static features; and the model shows no obvious overfitting during training, with only a 0.0034 accuracy difference between the training set and the test set. (2) Sample Inspection. The sample labels use one-hot encoding, with position 0 representing forewarning and position 1 representing no forewarning.
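A compact sketch of the two preprocessing steps described above follows; the feature values and labels here are illustrative only:

import numpy as np

def min_max_normalize(col):
    """Formula (7): scale a continuous feature column to [0, 1]."""
    return (col - col.min()) / (col.max() - col.min())

def one_hot(labels, m):
    """One-hot encode integer labels 0..m-1 into m binary columns.
    Here label 0 = forewarning and label 1 = no forewarning, as in the paper."""
    out = np.zeros((len(labels), m))
    out[np.arange(len(labels)), labels] = 1.0
    return out

# Illustrative use: one continuous feature (speed) and binary labels.
speed = np.array([12.0, 30.0, 55.0, 70.0])
X_speed = min_max_normalize(speed)      # -> [0.0, 0.31, 0.74, 1.0]
T = one_hot(np.array([0, 1, 0, 1]), 2)  # rows like [1, 0] (forewarning)

With the features normalized and the labels one-hot encoded this way, prediction errors can be read off directly, which is how the sample inspection below proceeds.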
A single-sample error can thus be obtained by randomly selecting 200 samples and subtracting the true values from the model's predicted values, as shown in Figure 18. It can be seen from Figure 18 that, among the randomly selected prediction samples, 79% are forewarned correctly, 18% are falsely reported as forewarnings, and only 3% are falsely reported as no forewarning, which shows that the model as a whole errs on the side of safety and has high prediction accuracy when applied to state prediction. Driving Risk Prediction Model (1) Investigation of Convergence and Dispersion. As with the fatigue driving prediction model, 2/3 of the samples are randomly selected as the training set and the remaining 1/3 as the test set, and the BP neural network is trained iteratively for 300 cycles. The learning curve of the driving risk prediction model is shown in Figure 19. From Figure 19, the following is obtained: the driving risk prediction model converges well. The learning curve flattens around the 120th training cycle but fluctuates strongly from the 170th to the 210th cycle, which may be caused by the model moving from a local optimum toward the global optimum. The model reaches its best state after the 300-cycle iterative process, and its convergence is faster than that of the fatigue driving prediction model. The model is slightly weaker than the previous model on the test set but still achieves a high accuracy of 78%. The model shows no overfitting during training, and its performance on the test set is even better than on the training set. (2) Sample Inspection. The predicted values of 200 randomly selected samples are taken from the model, and the true values are subtracted to obtain the single-sample errors, as shown in Figure 20. As can be seen from Figure 20, among the randomly selected prediction samples, 78% are forewarned correctly, 14.5% are falsely reported as forewarnings, and 7.5% are falsely reported as no forewarning. The model is generally biased toward safety and has high prediction accuracy. Research on Simulation of Risk Probability Prediction Based on the BP Model 4.4.1. Typical Driver Selection. This paper conducts a statistical analysis of 1,565 drivers of the Zhenjiang Public Transport Company. For continuous features such as driving years and age, the mean values (16 and 39, respectively) are used as the feature values of the virtual driver; for categorical features such as educational background and gender, the modes (high school, male) are taken as the characteristic values of the typical driver. An example of normalized virtual driver sample data is shown in Table 10. Risk Probability Analysis during Peak Hours. The fatigue driving prediction model constructed in this paper is used to calculate the fatigue confidence of the virtual driver's sample data under different weather conditions, periods, and speeds. The simulation results are shown in Figure 21.
It can be seen from the graph analysis that, for both fatigue driving forewarning and driving risk forewarning, the probability of occurrence increases with driving speed; when the vehicle speed is in the ranges (18, 20) km/h and (42, 45) km/h, the probabilities of fatigue driving risk forewarning and of driving risk forewarning, respectively, rise sharply; when the vehicle speed is below 17 km/h or 41 km/h, the probabilities of fatigue driving risk forewarning and of driving risk forewarning, respectively, are almost zero; and under the same speed conditions, the probability of fatigue forewarning on snowy days is greater than on foggy, rainy, and sunny days, while the probability of driving forewarning on foggy days is greater than on snowy, rainy, and sunny days. Risk Probability Analysis during Low Peak Hours. According to Figure 22, across the different speed conditions the variation of the fatigue driving risk forewarning and driving risk forewarning probabilities is generally consistent with that in peak hours, indicating that close attention should still be paid to safe driving in low peak hours; under the same speed conditions, the probability of fatigue forewarning on rainy days is about 30% lower than in peak hours, while the differences under the other weather conditions are small. Risk Probability Analysis during Flat Peak Hours. According to the analysis in Figure 23, the variation of the fatigue driving risk forewarning and driving risk forewarning probabilities is generally consistent with that in peak and low peak hours; under the same speed conditions, the probability of driving forewarning under the four weather conditions is about 15% higher than in low peak hours and about 10% lower than in peak hours; at the same driving speed, the descending order of driving risk probability is foggy, snowy, rainy, and sunny, indicating that driving risk is significantly related to weather conditions. Research Results. This paper selects 297,189 forewarning records of various types from Zhenjiang buses to analyze hidden risks and characteristic laws. The distribution characteristics of bus forewarnings under different weather conditions, speeds, periods, spaces, and driver characteristics are studied. We reach the following conclusions. Firstly, on sunny days the probability of driver fatigue forewarning is greatest from 7:00 to 8:00 am, and on foggy days the probability of vehicle abnormal state forewarning is greatest from 11:00 am to 12:00 noon. Secondly, when the vehicle is running at 30 km/h, the proportion of driver fatigue forewarnings is largest. Urban core areas tend to trigger driver fatigue forewarnings, while tourist attractions tend to trigger vehicle abnormal state forewarnings. Fatigue driving and driving risk prediction models based on the BP neural network are constructed, and simulation analysis is performed. The results show that, at the same driving speed, the descending order of driving risk probability is foggy, snowy, rainy, and sunny days. During peak hours, the probability of fatigue forewarning on snowy days is greater than on foggy, rainy, and sunny days, and the probability of driving forewarning on foggy days is greater than on snowy, rainy, and sunny days.
When the vehicle speed is in the ranges (18, 20) km/h and (42, 45) km/h, the probabilities of fatigue driving risk forewarning and driving risk forewarning increase sharply; when the vehicle speed is below 17 km/h or 41 km/h, the probabilities of fatigue driving risk forewarning and driving risk forewarning, respectively, are almost zero. The probability of fatigue forewarning during low peak hours on rainy days is about 30% lower than during peak hours. The probability of driving forewarning during flat peak hours is about 15% higher than during low peak hours and about 10% lower than during peak hours. Practical Implications. The research conclusions of this paper are of great practical significance for improving the passenger transportation capacity of buses and enhancing the management level. At the same time, they can support auxiliary decision-making for the safe operation and emergency management of buses, promoting the sustainable and healthy development of urban public transport safety. Limitation and Future Research Scope. This study was not free from limitations. Firstly, we selected 297,189 samples covering a total of 27 days from November 2019 to March 2020, so the sample size is relatively small. Secondly, when studying the forewarning characteristics of bus drivers of different genders, male drivers accounted for a large proportion of the 324 selected drivers; therefore, the conclusion that male drivers have a much higher forewarning rate than female drivers needs further verification. Finally, the prediction of public transportation safety risk probability is a multifactor problem, and the current prediction is based only on the actual data obtained by the forewarning equipment. Although the constructed model achieves a certain accuracy, more influencing factors, such as the type of road facilities, road traffic conditions, and types of bus stations, need to be fully considered in follow-up research. Although the sample size is insufficient and the above shortcomings exist, this paper is the first to use real, full-time, whole-region, full-cycle bus forewarning data to carry out such research and to use real data for objective evaluation, which is representative and innovative. Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that there are no conflicts of interest. Authors' Contributions S. D. and H. Y. conceptualized the study and prepared the methodology. C. L. and H. Y. performed the software analysis and investigated and visualized the study. S. D., C. L., and H. Y. validated the study. C. L. and S. D. performed data curation and wrote the original draft. S. D. supervised the study, administrated the project, performed formal analysis, managed resources, obtained funding, and reviewed and edited the article.
2020-12-24T09:08:15.944Z
2020-12-18T00:00:00.000
{ "year": 2020, "sha1": "3bd8a8b206925c08ab3554e62b9b06a35baef950", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jat/2020/6623739.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c6b7e1ae5bf090126953ff9a886b01716d4c97d9", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
259902337
pes2o/s2orc
v3-fos-license
A culminating senior design capstone project on a residential building in engineering education A study was conducted based on the theoretical construction of a three-story residential house from beginning to end that included architectural, structural, and environmental considerations. Introduction The future of building stewardship lies in sustainability. Tomorrow advocates for more environmentally friendly innovations and designs, but a question confronting engineers is how to continually promote and innovate such sustainability features in all their construction efforts. The team of student researchers at California State University, Northridge began this project by creating a design of a home within a city in Los Angeles County. Due to the seismic activity within this region, the project displays characteristics and designs pertaining to this region's building code. The project was completed through the following steps: site plan, floor plan, elevation views, and design of the beams, columns, beam-to-column connections, and foundation. Once designed with their respective structural plan, seismic calculations were implemented following AISC 341 and then modeled through the structural analysis software RAM. Upon completion, the researchers continued with the implementation of Leadership in Energy and Environmental Design (LEED) certification, followed by the final step, a cost analysis for this project. Students of the University of Oklahoma as well as California State University, Northridge have similarly translated their senior studies into research in order to examine their findings alongside other researchers. Through such research, students who have already completed their undergraduate studies are able to further develop their knowledge of real-world applications within engineering. Lemley et al. [1] emphasized the importance of recognizing design aspects within research projects and provided examples of research projects that can incorporate engineering design. Incorporating LEED features and conducting a detailed cost analysis gave the student researchers a view into the benefits of proposing a green building design versus a more traditional approach. Hopkins [2] reiterated this through the study of LEED on college campuses, comparing costs against building life cycles. The main components of this undertaking focused on the architectural, structural, and environmental aspects of the proposed building design. The researchers worked to execute a fully functional home designed from start to finish by replicating the real-world processes vying for the attention of professional engineers on a daily basis. Under the auspices of the yearlong senior design capstone course, students from the civil engineering major were able to aggregate all that they had learned throughout their many years of study. Along with structural and architectural considerations, the home to be designed by the team of student researchers was to intentionally incorporate LEED aspects to allow for a more environmentally friendly and energy-efficient house. Barshilia [3] conducted a study on LEED (and Green Rating for Integrated Habitat Assessment, GRIHA) innovation in design, which analyzed buildings with LEED certification and their qualifying sustainable features. Selected environmentally friendly features mentioned by Barshilia [3] were incorporated into the house's design to achieve LEED certification alongside the architectural goal of being aesthetically pleasing.
Architectural Features The project consists of a three-story residential home, situated on a 0.4-acre lot, and located in the Beverly Hills area of Southern California. Composed of steel, the building was stipulated to have a 2,000 square foot area per floor. In terms of aesthetics, the home was designed with a completely symmetrical exterior, with a 45' × 45' footprint and a 15' × 13' interior opening. This 15' × 13' interior opening holds a garden atrium with an opening in the roof able to capture both morning dew and rainwater. To enhance the environmental synergy of the house and garden, the exterior of the home utilizes Sage Glass to maximize the amount of natural sunlight exposure. The home was configured with the main entrance facing west, placing the back entrance to the east and allowing longer durations of sunlight throughout the day. Regarding lighting, using Sage Glass also allows for automatic and remote dimming whenever desired or deemed necessary. Solar panels installed on the roof further enhance green energy efficiency. The home consists of nine bedrooms, six bathrooms, and an atrium visible from each story. Figure 1 depicts the layout of this residential home with suggested uses of the rooms and floor space. The kitchen has entrances from the east and south, with the south opening giving easier access to the garage. Opposite the kitchen's east entrance, the living room has an entrance on the other end, making for two backyard openings. Figure 2 depicts the second- and third-story layouts. The home is designed with a stately grand entrance rising 32' in height, which is key to allowing the garden its needed ingress of light that then permeates to the remainder of the home. Structural Design The building was designed using standard structural codes such as ASCE 7-16 and the International Building Code. In this process, the team of student researchers gained knowledge of structural design methods conforming to code while being able to express and incorporate innovative ideas through a variety of energy-saving LEED features in the building's performance. Materially, this building utilized structural steel girders and columns in its construction, with concrete used for the foundation and decking system. The building design commenced by first manually calculating the required sizes of the girders and columns before using computer software such as commercially available finite element packages. To this end, the codebooks of the American Institute of Steel Construction (AISC 341 & 360) as well as the American Concrete Institute (ACI 318) were used in determining the adequate sizes of members for withstanding the anticipated loads borne by the structure. The design proceeded top-down, in contrast to the actual construction, which begins and progresses bottom-up: first the roof loads and the beams needed for their support, then the columns, followed by the beam-to-column connections, the lateral bracing system, and, finally, the foundation. Shown below in Table 1 are the beam, column, and foundation sizes chosen once calculations were cross-checked and updated. The calculated sizes of all structural members were determined using the Load and Resistance Factor Design (LRFD) method, in which all loads are increased by multiplicative factors in order to utilize the full strength of the structural member up to the point of yielding.
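For reference, the LRFD criterion described above can be written in its standard general form (a textbook statement of the method, not the authors' project-specific calculation): Σ γ_i Q_i ≤ φ R_n, e.g. 1.2D + 1.6L ≤ φ R_n, where the γ_i are load factors applied to the nominal load effects Q_i (such as dead load D and live load L), R_n is the nominal member strength, and φ < 1 is the resistance factor; the 1.2D + 1.6L combination is one of the basic gravity-load combinations given in ASCE 7.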
As stated by Galambos [5], the LRFD design approach ensures that the strength of structural members is utilized to its full capacity rather than relying on the smaller, elastic-based factors used in methods such as its predecessor, the Allowable Stress Design (ASD) method. Design for Sustainability and LEED Certification As Abair [6] mentioned, nearly 30% of the United States' carbon dioxide emissions, 40% of its ozone pollution, and 35% of its municipal solid waste come from construction-related activities. For such reasons, it is absolutely crucial to incorporate LEED and environmentally friendly methods into modern-day construction projects. Designing a three-story, fully glass curtain wall building with sustainability and LEED features requires consideration of various factors such as material selection, indoor air and environment quality, and the overall cost savings to be gained from an energy-efficient structure. Some of the LEED features incorporated into the design of this building include:  Sage Glass: The exterior of the building is to be designed with glass using electrochromic technology that allows the glass to automatically tint or clear in response to changing external weather conditions, such as varying intensities of sunlight and heat. Based on information from the Sage Glass website [7], Sage Glass reduces the need for artificial lighting and heating by dynamically adjusting the amount of sunlight entering the building, resulting in lower energy consumption.  Solar Panels: Operating solar panels for the electrical system of the building has great advantages, such as utilizing renewable energy, reducing energy costs, promoting environmental sustainability, and contributing to LEED certification. Solar panels produce clean electricity without emitting greenhouse gases or air pollutants, which leads to a greener and more sustainable future. Utilizing a solar panel system also helps fulfill the renewable energy goals of LEED and contributes to earning credits in the Energy and Atmosphere category.  Atrium: The proposed indoor atrium allows natural light to penetrate deep into the building, reducing the need for artificial lighting during the day. It also helps to sustain the building's vegetation. Incorporating plants and greenery within the indoor atrium, which absorb sunlight and release moisture through a process known as transpiration, helps to offset localized pockets of higher temperature known as the heat island effect.  Porous (a.k.a. Permeable) Pavement: Sorvig [8] explained the benefits of using porous pavements in designs and pointed out that such a permeable medium allows rainwater to infiltrate through the surface into the underlying layers, reducing stormwater runoff. This earns the required LEED credits in the Sustainable Sites category, specifically for Stormwater Management. Moreover, a permeable pavement supports the water efficiency goal by allowing rainwater to infiltrate into the ground, replenishing the groundwater and reducing the demand for irrigation water, in line with LEED's objectives in the Water Efficiency category. Once again, the urban heat island effect is also reduced by allowing rainwater to infiltrate into the ground, which cools the pavement surface and contributes to earning LEED credits in the Sustainable Sites category for Heat Island Reduction.
Integrating the aforementioned LEED features into the building's design, and considering other LEED grading factors, the building is expected to meet LEED requirements with a minimum of 50 points, which classifies the house's design at a LEED rating of Silver. Cost Analysis An estimated cost analysis was performed to evaluate the expense associated with the design, construction, operation, and maintenance of this three-story building. To compare the cost of a conventional building with that of a LEED-featured one, all additional expenses related to incorporating LEED features, as well as the potential cost savings or benefits associated with those features, were taken into consideration. Table 2 shows the associated cost estimates and details of the structural members, e.g., beams, columns, and concrete footings. Based on the collected data and estimates from two local contractors, the estimated design and construction cost of this building was determined to be ~$3.2M, averaging ~$550 per square foot. As demonstrated in Table 3, incorporating all the proposed LEED features and certification costs an additional 23% in associated design and construction fees, rendering a total cost of ~$3.9M. Educational Objectives In addition to the aforementioned design, cost analyses, and LEED considerations for the building, another significant objective fulfilled by this research undertaking was for the team of student researchers to gain educational insight from their various interactions with faculty and industry professionals and from collaborations with their fellow classmates. Evan et al. [9] assert the importance of faculty guiding students, as in a senior design class, on research undertakings with an educational bent so as to produce more well-rounded engineers in the future. The research was conducted through a series of presentations prepared by student teams to teach their classmates about the various aspects of design: from architectural to structural plans, to LEED considerations and their effects on the overall longevity of such buildings. As per a preset schedule, the team of student researchers would prepare and learn as much as they could about a particular aspect of the design and construction of their building. Exercises such as these simulated real-world design situations in which deadlines, dealings with contractors, and stringent working regulations and conditions could be appreciated. Another benefit gained by the teams of student researchers throughout this process was an appreciation of the importance of teamwork and of collaborating with people of different backgrounds and abilities. This method of researching and presenting thus enables student researchers to translate what they have learned in the classroom and apply it to real-world applications before, or in some cases in tandem with, commencing their own industry-related work experience after graduation. Conclusion This student-led research effort was initiated to simulate the diverse processes involved in the design of an actual structure, from start to end. The property chosen was located in the city of Beverly Hills within the county of Los Angeles, having a 0.4-acre footprint and involving a three-story residential home using structural steel.
Cost was an important factor of consideration throughout the project, even though the savings from the LEED features, which begin with higher front-end costs, are to be realized over the course of time. The most notable LEED features in the design involved using Sage Glass, solar panels, porous pavements, and a high-ceiling atrium with indoor vegetation and landscaping. The design methodology utilized ASCE 7 and the International Building Code, in conjunction with tips from practitioners in the field. In the design process, the student researchers determined the ideal dimensions of the structural members to withstand the anticipated loads in accordance with the aforementioned codes, together with the steel and concrete specifications published in the handbooks of the American Institute of Steel Construction (AISC 341 & 360) and the American Concrete Institute (ACI 318), respectively. The total cost of this project came to an estimated ~$3.9M, which included LEED feature costs of ~$750k. With these LEED additions, the building earned a minimum of 50 points, which would grant the building a Silver classification. In addition to the benefits gained by the team of student researchers regarding the design component of this project, they also made great strides educationally and academically through the well-rounded learning experience afforded them by the unique setup of this capstone senior design class, culminating in the publication of this present journal article.
2023-07-15T15:32:11.639Z
2023-07-30T00:00:00.000
{ "year": 2023, "sha1": "9578e8f8f40f2bc5b67dd4ad4892c477c1af2b5d", "oa_license": "CCBY", "oa_url": "https://wjaets.com/sites/default/files/WJAETS-2023-0192.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e07d9e8c00a4a0029e531fc4ff12bc791845c7c2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
53627519
pes2o/s2orc
v3-fos-license
Innervation of partially inserted human tendons used to reconstruct anterior cruciate ligament Introduction Anterior cruciate ligament (ACL) 1 rupture is the most common surgically treated ligament injury, and historically many efforts have been made to reconstruct the ACL as anatomically as possible in order to restore knee biomechanics and prevent knee osteoarthritis. 2,3 In addition to their dynamic and static mechanical roles, the cruciate ligaments provide sensory information to the spinal cord that regulates the activity of the periarticular knee muscles. [4][5][6][7][8] Consistently with this sensory function, the knee joint capsule and ligaments, including the ACL, contain several types of sensory nerve endings in which mechanotransduction takes place, i.e. mechanoreceptors. 9,10 In particular, within the ACL, nerve fibres that are free, associated with the blood vessels, or lying among the collagen bundles, as well as different types of sensory corpuscles (Ruffini-like corpuscles, Golgi tendon organs, and Pacinian-like corpuscles), have been found. [11][12][13] Recently, Kim & co-workers 14 mapped the sensory innervation of the human ACL. Both nerve fibres and corpuscles were found mainly near the bony femoral attachment sites. One interesting open question is whether or not the tendon-graft reconstructed ACL reinnervates, and whether or not ACL-stump remnants are important for this reinnervation. In fact, it has been suggested that maintaining the remnant of the ruptured ACL accelerates the "ligamentization" of the graft, contributing to faster graft vascularization and innervation and therefore leading to better proprioception. Nevertheless, this hypothesis is still not supported by clinical findings. 15 In patients with cruciate-retaining total knee arthroplasty, mechanoreceptors are still present 5 to 12 years later, 16 and residual remnants of the ruptured ACL contain morphologically normal mechanoreceptors and proprioceptive fibres evenly distributed at both the tibial and femoral attachments in one third of cases. [17][18][19][20] Conversely, other researchers have reported the absence of mechanoreceptors in biopsy samples of ACL reconstructed with Achilles allografts (1 to 10 years), and the absence of sensory function after ACL reconstruction, presumably because the ACL grafts were not reinnervated. 7,21 To avoid nerve defects in ACL reconstruction, alternative surgical procedures have been proposed.
This is the case for the use of the semitendinosus and gracilis muscle tendons to reconstruct the ACL while maintaining their tibial insertions (Guillen, personal communication). In this way the tendon-reconstructed ACL continues to receive its vascular and nerve supply from the parent muscles. The availability of surgical pieces from long-term tendon-graft reconstructed ACL is rare, and few studies have analyzed the innervation of these pieces. [14][15][16] No data have been reported for the ACL reconstruction method indicated above. Here, we used immunohistochemistry for nerve markers and mechanoproteins to analyze the long-term (12 years) innervation of a tendon-graft reconstructed ACL using partially disinserted semitendinosus-gracilis muscle tendons. The study aimed to investigate whether these pieces contain mechanoreceptors and have a pattern of innervation similar to that of the native ACL. Case report A 44-year-old man was admitted to the Traumatology and Orthopedics Department, Clínica CEMTRO (Madrid, Spain) due to right knee instability. Arthroscopic exploration showed partial disruption of the ACL, and ligament replacement was recommended. The ACL was surgically removed using arthroscopy, and in July 2002 it was reconstructed using the tendons of the semitendinosus and gracilis muscles disinserted proximally while maintaining the tibial insertion. Twelve years later the patient again presented with knee instability and underwent arthroscopic replacement of the ACL using a frozen Achilles tendon allograft. Materials The removed surgical piece was divided longitudinally into two halves, fixed in 10% formaldehyde in 0.1 M phosphate buffered saline (PBS) at pH 7.4 for 48 h at 4˚C, dehydrated, and embedded in paraffin. Sections of 10 μm were obtained, sampled every 20 sections, mounted on gelatine-coated microscope slides, and processed for immunohistochemistry. One half was sectioned longitudinally, and the other transversally. As controls, three normal ACL specimens were taken from healthy knees amputated at thigh level due to trauma (provided by Dr. A. Maestro) and processed identically as described above. Immunohistochemistry Indirect peroxidase-antiperoxidase immunohistochemistry was performed as follows: sections were deparaffinized and rehydrated, then rinsed in 0.05 M HCl Tris buffer (pH 7.5) containing 0.1% bovine serum albumin and 0.1% Triton X-100. Thereafter, the endogenous peroxidase activity (3% H2O2) and non-specific binding were blocked with 10% foetal calf serum. The sections were incubated overnight in a humid chamber at 4˚C with the primary antibodies (Table 1). The antibodies against neuron-specific enolase (NSE) and neurofilament protein (NFP) were used as specific axonal markers; antibodies against S100 protein were used to immunolabel Schwann cells and Schwann-related cells. 22,23 Antibodies against ASIC2 (acid-sensing ion channel 2) and TRPV4 (transient receptor potential vanilloid 4 ion channel) were also used to detect these two putative mechanoproteins. 24 After incubation with the primary antibodies, sections were rinsed in the same buffer and incubated with Dako EnVision System labelled polymer-HRP anti-rabbit IgG or anti-mouse IgG (DakoCytomation, Denmark) for 30 minutes at room temperature. Finally, sections were washed and the immune reaction was visualized using 3-3'-diaminobenzidine as the chromogen. To ascertain structural details, sections were slightly counterstained with hematoxylin & eosin.
To test the specificity of the immune reactivity, representative sections were processed in the same way as described above using non-immune rabbit or mouse sera instead of the primary antibodies, omitting the primary antibodies from the incubation, or using pre-absorbed antibodies for ASIC2 and TRPV4 (5 µg of the blocking peptide in 1 ml of the antibody working solution). Quantitative study The assessment of the density of innervation in both the normal and the tendon-grafted ACL was performed on transversal sections of each segment of the ACL. In transversal sections the density of free nerve endings was quantified following the method proposed by Kim and co-workers. 14 Briefly, the piece was divided into 6 segments identified as TI (tibial insertion), S2, S3, S4, S5, and FI (femoral insertion), and 5 sections per segment were selected, 300 µm apart, to avoid measuring the same structure twice. In each section five randomly selected fields were measured (2.5 mm 2 ) using an automatic image analysis system (Quantimet 550, Leica, QWIN Program). The number of nerve profiles with immune reactivity for S100 protein was determined in the subsynovial layer, collagen fascicles, and perivascular plexuses; moreover, identified sensory corpuscles were counted. Because counts were performed on one half of the pieces, the results were doubled. No statistical comparative analysis was carried out since a unique case of tendon-grafted ACL was evaluated. Normal ACL We first tested the occurrence of nerve bundles, isolated nerve fibres, and morphologically differentiated mechanoreceptors in samples of normal, non-injured ACL. Small nerve bundles were regularly found in the vicinity of the blood vessels, in the subsynovial layer, but also among the collagen fascicles (Figure 1A). They displayed immune reactivity for both axonal and Schwann-cell markers. In the same locations, isolated nerve axons (presumably ending as free nerve endings; data not shown) and several morphotypes of mechanoreceptors were found, mostly identified as simple sensory corpuscles or Pacini-like corpuscles (Figure 1B) (Figure 1C) (Figure 1E). They consisted of one to three NFP- or NSE-positive axons, each covered by an independent continuous inner core displaying strong S100 protein immune reactivity. Outside the inner core there was a more or less wide corpuscular space and a capsule (Figure 1E). Furthermore, Ruffini-like sensory corpuscles were rarely observed (Figure 1D), as well as some other kinds of capsular corpuscles, apparently multiafferented, but which surely correspond to multiple sections of an axon arranged like a ball of wool (Figure 1F). The density of morphologically defined sensory corpuscles in the normal ACL is summarized in Table 2. As can be seen, the maximal densities are in the vicinity of the tibial and femoral insertions and progressively decrease toward the central segments. Figure 1 Immunohistochemical localization of general nerve markers (neuron-specific enolase for axons; S100 protein for Schwann cells and Schwann-related cells in sensory corpuscles) in sections of normal anterior cruciate ligament. Perivascular nerves (A) and different kinds of sensory corpuscles were found (B,C), especially Ruffini-like corpuscles (D), Pacini-like corpuscles (E), and globular sensory corpuscles (F). Scale bar, 80 µm for A; 40 µm for B and C; 20 µm for D,E,F.
BV, blood vessels; C, capsule; IC, inner core

Long-term partially disinserted grafted tendons

Nerve profiles displaying immunoreactivity for NSE, NFP and S100 protein were observed among the collagen fascicles in the removed tendon-grafted surgical piece (Figure 2A-2C). The most noticeable difference in the pattern of innervation with respect to the normal ACL was the severe reduction in the density of nerve profiles, and especially of sensory corpuscles (Table 2). In the entire piece only four structures resembling sensory corpuscles were found, and they could not be identified as any of the canonical morphotypes present in joints, although they were provided with a thick capsule. They were localized in the segments TI and S2; in all segments, however, sparse and isolated nerve fibres were also observed (Figure 2D-2G).

Mechanoproteins

The putative functional role of nerve fibres supplying the peripheral tissues, including joint tissues, can be determined on the basis of the expression of some proteins specific for mechanoreceptors, nociceptors, thermoreceptors, and so on. Nerve fibres in the ACL can be regarded a priori as nociceptive or mechano-proprioceptive in nature. To investigate whether or not they work as mechanoreceptors, we investigated the expression of two putative mechanoproteins within them, i.e. ASIC2 and TRPV4. In normal ACL, Ruffini-like corpuscles were found to display ASIC2 and TRPV4 immunoreactivity (Figures 3A and 3C), as did the capsulated multiafferented corpuscles (Figure 3B) and the free nerve fibres (Figure 3D). In the tendon-grafted surgical piece, ASIC2 and TRPV4 immunoreactivity was detected in small nerve bundles (Figures 3E and 3F) and perivascular free nerve fibres (Figure 3G), but never in structures resembling sensory corpuscles. The density of mechanoceptive nerve fibres was reduced in these pieces compared with normal ACL.

Figure 2 Immunohistochemical localization of general nerve markers (neurofilaments for axons; S100 protein for Schwann cells and Schwann-related cells in sensory corpuscles) in sections of the semitendinosus and gracilis muscle tendons used to reconstruct the anterior cruciate ligament 12 years earlier. Scarce capsulated sensory corpuscles (A-D) and isolated nerve fibres (E-G) were observed. Scale bar, 50 µm for A and C; 40 µm for E; 20 µm for B, D, F and G; arrows indicate nerve fibres. BV, blood vessels; C, capsule

Figure 3 Immunohistochemical localization of the putative mechanoproteins ASIC2 and TRPV4 in sections of normal ACL (A-D) and in sections of the semitendinosus and gracilis muscle tendons used to reconstruct the anterior cruciate ligament 12 years earlier (E-G). Ruffini-like corpuscles display ASIC2 and TRPV4 immunoreactivity in normal ACL (A, C), as do the capsulated multiafferented corpuscles (B) and the free nerve fibres (D). In the tendon-grafted surgical piece, ASIC2 and TRPV4 immunoreactivity was detected in small nerve bundles (E, F) and perivascular free nerve fibres (G). Scale bar, 20 µm for A and C; 40 µm for B and D; 100 µm for E, F, G. BV, blood vessels.

Table 2 Density of sensory corpuscles and free nerve endings in the normal anterior cruciate ligament and in the ACL reconstructed using the tendons of the semitendinosus and gracilis muscles maintaining the tibial insertion

Discussion

Surgical ACL reconstruction using tendon grafts has become the standard treatment for the functionally unstable ACL-deficient knee.
Although tendons clearly differ biologically from ligaments, multiple experimental studies have shown that implanted tendons indeed seem to remodel into a ligamentous "anterior cruciate ligament-like" structure, in a biological process known as "ligamentization". 25,26 This study was designed to analyze the innervation of the partially disinserted tendons of the semitendinosus and gracilis muscles used to reconstruct the ACL. It was a single case operated on 12 years before. The surgical procedure included disinsertion of these tendons only proximally, whereas the distal (tibial) insertion was maintained, therefore retaining the blood vessel and nerve supply. It is well known that the ACL provides information to the central nervous system to control the dynamic and static activity of the periarticular knee muscles. 8 Thus, to preserve or to ensure the innervation of the reconstructed ACL should be a main scope of ACL reconstructive surgery. 4-6 Sensory innervation is absent from reconstructed ACL 7,21 while innervation has been demonstrated in grafts inserted in ACL remnants. [17][18][19][20] The pattern of innervation of the normal ACL we have observed here basically agrees with previous studies using gold impregnation or immunohistochemical techniques, 11,12,14 and was also quite similar to that reported in ACL remnants. 17,20 Perivascular nerves, free nerve endings and two types of identifiable mechanoreceptors, i.e. Pacini-like and Ruffini-like corpuscles, were regularly observed. In our hands no other kinds of sensory corpuscles were found. On the other hand, the density of innervation varied along the ACL, being denser in the segments near the femoral and tibial insertions and minimal in the central segments, especially as regards the occurrence of mechanoreceptors. Regarding the innervation of the reconstructed ACL, we found that the density of innervation was markedly reduced, especially at the proximal (femoral) stump, where scarce nerve fibres and no sensory corpuscles were found. Free nerve endings and atypical morphotypes of sensory corpuscles were found; they were capsulated and resembled Ruffini-like corpuscles; in no case were proper Golgi tendon organs observed. 27 Thus, an ACL reconstructed using partially inserted tendons retains innervation even over long periods of time, but it was restricted to the segments close to the distal (tibial) insertion. Presumably the nervous apparatus of these ACLs comes from the tibial end of the tendon rather than from the femoral bone-anchored stump. On the other hand, whether the sensory corpuscles we found are survivors from the original tendons or were newly formed after grafting cannot be known. In other territories, like skin, denervated sensory corpuscles degenerate, 28 but whether or not this also occurs in tendons has not been investigated. Interestingly, abundant nerve profiles were found in the subsynovial tissue of normal ACL while they were scarce on the surface, not covered by typical synovium, of the grafted tendons. Altogether these data suggest that long-term tendon-graft reconstructed ACLs are severely denervated. To determine which modality of sensitivity is affected, nociceptive or proprioceptive, we investigated the occurrence of two putative mechanoproteins, ASIC2 and TRPV4, in the ACL nerves. Previous studies in humans and monkeys have demonstrated the occurrence of these proteins in cutaneous mechanoreceptors and mechanosensory neurons. 29,30
In recent years evidence has emerged that at the basis of the sensory processes is the activation of ion channels for the conversion of a stimulus into an electrical signal. Thus, different ion channels are currently being considered as candidates for nociception or mechanoception. 24,31 The present results provide new data about the occurrence of putative mechanoproteins in sensory corpuscles, and are the first evidence of the presence of those proteins in the nerves of the ACL and of tendon-reconstructed ACL. Recently, several mechanoproteins were detected in the human periodontal ligament, associated and non-associated with neural elements, in particular Ruffini-like corpuscles. [32][33][34] Altogether, the present results demonstrate that the normal human ACL has a rich innervation that is strongly reduced after tendon-graft ACL reconstruction, even when the graft remains partially inserted and after long-term survival. A main component of the lost nervous apparatus can be regarded as mechanoceptive on the basis of its immunohistochemical profile. However, because ASIC2 and TRPV4 are polymodal channels that respond to nociceptive in addition to mechanical stimuli, 35,36 an impairment of nociception cannot be ruled out. Further studies are necessary to elucidate the role of putative mechanoproteins in human ligaments, including the ACL.
A Maximum Entropy Approach to Natural Language Processing

The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper we describe a method for statistical modeling based on maximum entropy. We present a maximum-likelihood approach for automatically constructing maximum entropy models and describe how to implement this approach efficiently, using as examples several problems in natural language processing.

Introduction

Statistical modeling addresses the problem of constructing a stochastic model to predict the behavior of a random process. In constructing this model, we typically have at our disposal a sample of output from the process. Given this sample, which represents an incomplete state of knowledge about the process, the modeling problem is to parlay this knowledge into a representation of the process. We can then use this representation to make predictions about the future behavior of the process. Baseball managers (who rank among the better paid statistical modelers) employ batting averages, compiled from a history of at-bats, to gauge the likelihood that a player will succeed in his next appearance at the plate. Thus informed, they manipulate their lineups accordingly. Wall Street speculators (who rank among the best paid statistical modelers) build models based on past stock price movements to predict tomorrow's fluctuations and alter their portfolios to capitalize on the predicted future. At the other end of the pay scale reside natural language researchers, who design language and acoustic models for use in speech recognition systems and related applications. The past few decades have witnessed significant progress toward increasing the predictive capacity of statistical models of natural language. In language modeling, for instance, Bahl et al. (1989) have used decision tree models, and Della Pietra et al. have used automatically inferred link grammars, to model long range correlations in language. In parsing, Black et al. (1992) have described how to extract grammatical rules from annotated text automatically and incorporate these rules into statistical models of grammar. In speech recognition, Lucassen and Mercer (1984) have introduced a technique for automatically discovering relevant features for the translation of word spelling to word pronunciation. These efforts, while varied in specifics, all confront two essential tasks of statistical modeling. The first task is to determine a set of statistics that captures the behavior of a random process. Given a set of statistics, the second task is to corral these facts into an accurate model of the process--a model capable of predicting the future output of the process. The first task is one of feature selection; the second is one of model selection. In the following pages we present a unified approach to these two tasks based on the maximum entropy philosophy. In Section 2 we give an overview of the maximum entropy philosophy and work through a motivating example. In Section 3 we describe the mathematical structure of maximum entropy models and give an efficient algorithm for estimating the parameters of such models.
In Section 4 we discuss feature selection, and present an automatic method for discovering facts about a process from a sample of output from the process. We then present a series of refinements to the method to make it practical to implement. Finally, in Section 5 we describe the application of maximum entropy ideas to several tasks in stochastic language processing: bilingual sense disambiguation, word reordering, and sentence segmentation.

A Maximum Entropy Overview

We introduce the concept of maximum entropy through a simple example. Suppose we wish to model an expert translator's decisions concerning the proper French rendering of the English word in. Our model p of the expert's decisions assigns to each French word or phrase f an estimate, p(f), of the probability that the expert would choose f as a translation of in. To guide us in developing p, we collect a large sample of instances of the expert's decisions. Our goal is to extract a set of facts about the decision-making process from the sample (the first task of modeling) that will aid us in constructing a model of this process (the second task). One obvious clue we might glean from the sample is the list of allowed translations. For example, we might discover that the expert translator always chooses among the following five French phrases: {dans, en, à, au cours de, pendant}. With this information in hand, we can impose our first constraint on our model p:

p(dans) + p(en) + p(à) + p(au cours de) + p(pendant) = 1

This equation represents our first statistic of the process; we can now proceed to search for a suitable model that obeys this equation. Of course, there are an infinite number of models p for which this identity holds. One model satisfying the above equation is p(dans) = 1; in other words, the model always predicts dans. Another model obeying this constraint predicts pendant with a probability of 1/2, and à with a probability of 1/2. But both of these models offend our sensibilities: knowing only that the expert always chose from among these five French phrases, how can we justify either of these probability distributions? Each seems to be making rather bold assumptions, with no empirical justification. Put another way, these two models assume more than we actually know about the expert's decision-making process. All we know is that the expert chose exclusively from among these five French phrases; given this knowledge, the most intuitively appealing model allots the total probability evenly among the five possibilities. But two questions remain: what exactly does "most uniform" mean, and how do we go about finding the most uniform model subject to a set of constraints like those we have described? The maximum entropy method answers both of these questions, as we will demonstrate in the next few pages. Intuitively, the principle is simple: model all that is known and assume nothing about that which is unknown. In other words, given a collection of facts, choose a model consistent with all the facts, but otherwise as uniform as possible. This is precisely the approach we took in selecting our model p at each step in the above example.
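The uniformity intuition is easy to verify numerically. The following minimal sketch (ours, not the paper's; all names are illustrative) computes the Shannon entropy of the three candidate models above and confirms that the uniform model attains the maximum:

```python
import math

# Candidate translations of "in" from the running example.
PHRASES = ["dans", "en", "à", "au cours de", "pendant"]

def entropy(p):
    """Shannon entropy H(p) = -sum_f p(f) log2 p(f), in bits."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# Three models, all satisfying the single constraint sum_f p(f) = 1.
always_dans = {f: (1.0 if f == "dans" else 0.0) for f in PHRASES}
half_half = {f: (0.5 if f in ("pendant", "à") else 0.0) for f in PHRASES}
uniform = {f: 1.0 / len(PHRASES) for f in PHRASES}

for name, model in [("always dans", always_dans),
                    ("pendant/à 50-50", half_half),
                    ("uniform", uniform)]:
    print(f"{name:16s} H = {entropy(model):.3f} bits")
# The uniform model attains the maximum, log2(5) ≈ 2.322 bits.
```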
The maximum entropy concept has a long history. Adopting the least complex hypothesis possible is embodied in Occam's razor ("Nunquam ponenda est pluralitas sine necesitate.") and even appears earlier, in the Bible and the writings of Herodotus (Jaynes 1990). Laplace might justly be considered the father of maximum entropy, having enunciated the underlying theme 200 years ago in his "Principle of Insufficient Reason": when one has no information to distinguish between the probability of two events, the best strategy is to consider them equally likely (Guiasu and Shenitzer 1985). As E. T. Jaynes, a more recent pioneer of maximum entropy, put it (Jaynes 1990): "... the fact that a certain probability distribution maximizes entropy subject to certain constraints representing our incomplete information, is the fundamental property which justifies use of that distribution for inference; it agrees with everything that is known, but carefully avoids assuming anything that is not known. It is a transcription into mathematics of an ancient principle of wisdom ..."

Maximum Entropy Modeling

We consider a random process that produces an output value y, a member of a finite set Y. For the translation example just considered, the process generates a translation of the word in, and the output y can be any word in the set {dans, en, à, au cours de, pendant}. In generating y, the process may be influenced by some contextual information x, a member of a finite set X. In the present example, this information could include the words in the English sentence surrounding in. Our task is to construct a stochastic model that accurately represents the behavior of the random process. Such a model is a method of estimating the conditional probability that, given a context x, the process will output y. We will denote by p(y|x) the probability that the model assigns to y in context x. With a slight abuse of notation, we will also use p(y|x) to denote the entire conditional probability distribution provided by the model, with the interpretation that y and x are placeholders rather than specific instantiations. The proper interpretation should be clear from the context. We will denote by P the set of all conditional probability distributions. Thus a model p(y|x) is, by definition, just an element of P.

Training Data

To study the process, we observe the behavior of the random process for some time, collecting a large number of samples $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$. In the example we have been considering, each sample would consist of a phrase x containing the words surrounding in, together with the translation y of in that the process produced. For now, we can imagine that these training samples have been generated by a human expert who was presented with a number of random phrases containing in and asked to choose a good translation for each. When we discuss real-world applications in Section 5, we will show how such samples arise naturally. We can summarize the training sample in terms of its empirical probability distribution $\tilde{p}$, defined by $\tilde{p}(x, y) \equiv \frac{1}{N} \times$ (number of times that $(x, y)$ occurs in the sample). To express statistics of the sample we use binary-valued indicator functions $f(x, y)$, called features. The expected value of $f$ with respect to the empirical distribution is $\tilde{p}(f) \equiv \sum_{x,y} \tilde{p}(x, y) f(x, y)$ (1); the expected value of $f$ with respect to the model $p(y|x)$ is $p(f) \equiv \sum_{x,y} \tilde{p}(x)\, p(y|x)\, f(x, y)$ (2); and we require that the two agree: $p(f) = \tilde{p}(f)$ (3). Combining (1), (2) and (3) yields the more explicit equation

$$\sum_{x,y} \tilde{p}(x)\, p(y|x)\, f(x, y) = \sum_{x,y} \tilde{p}(x, y)\, f(x, y)$$

We call the requirement (3) a constraint equation or simply a constraint. By restricting attention to those models p(y|x) for which (3) holds, we are eliminating from consideration those models that do not agree with the training sample on how often the output of the process should exhibit the feature f. To sum up so far, we now have a means of representing statistical phenomena inherent in a sample of data (namely, $\tilde{p}(f)$), and also a means of requiring that our model of the process exhibit these phenomena (namely, $p(f) = \tilde{p}(f)$). One final note about features and constraints bears repeating: although the words "feature" and "constraint" are often used interchangeably in discussions of maximum entropy, we will be vigilant in distinguishing the two and urge the reader to do likewise. A feature is a binary-valued function of (x, y); a constraint is an equation between the expected value of the feature function in the model and its expected value in the training data.
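Equations (1) and (2) are mechanical to compute from data. A minimal sketch (ours, with a toy sample; none of this is the paper's code):

```python
from collections import Counter

# A toy training sample of (context, translation) events; purely illustrative.
sample = [(("in", "April"), "en"), (("in", "April"), "en"),
          (("in", "the"), "dans"), (("in", "several"), "pendant")]

def f(x, y):
    """Binary feature: fires when 'April' is in the context and y == 'en'."""
    return 1 if ("April" in x and y == "en") else 0

N = len(sample)
p_tilde = {ev: c / N for ev, c in Counter(sample).items()}   # p~(x, y)

# Empirical expectation p~(f) = sum_{x,y} p~(x,y) f(x,y)     -- equation (1)
p_tilde_f = sum(prob * f(x, y) for (x, y), prob in p_tilde.items())

# Model expectation p(f) = sum_{x,y} p~(x) p(y|x) f(x,y)     -- equation (2)
def model_expectation(p_y_given_x, p_tilde_x, outputs):
    return sum(px * p_y_given_x(y, x) * f(x, y)
               for x, px in p_tilde_x.items() for y in outputs)

print(p_tilde_f)   # 0.5: half of the sample events fire the feature
```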
The Maximum Entropy Principle

Suppose that we are given n feature functions $f_i$, which determine statistics we feel are important in modeling the process. We would like our model to accord with these statistics. That is, we would like p to lie in the subset C of P defined by

$$C \equiv \left\{\, p \in P \;:\; p(f_i) = \tilde{p}(f_i) \text{ for } i \in \{1, 2, \ldots, n\} \,\right\}$$

Figure 1 provides a geometric interpretation of this setup. Here P is the space of all (unconditional) probability distributions on three points, sometimes called a simplex. If we impose no constraints (depicted in (a)), then all probability models are allowable. Imposing one linear constraint C1 restricts us to those p ∈ P that lie on the region defined by C1, as shown in (b). A second linear constraint could determine p exactly, if the two constraints are satisfiable; this is the case in (c), where the intersection of C1 and C2 is non-empty. Alternatively, a second linear constraint could be inconsistent with the first--for instance, the first might require that the probability of the first point is 1/3 and the second that the probability of the third point is 3/4--this is shown in (d). In the present setting, however, the linear constraints are extracted from the training sample and cannot, by construction, be inconsistent. Furthermore, the linear constraints in our applications will not even come close to determining p ∈ P uniquely as they do in (c); instead, the set C = C1 ∩ C2 ∩ ... ∩ Cn of allowable models will be infinite. Among the models p ∈ C, the maximum entropy philosophy dictates that we select the most uniform distribution. But now we face a question left open in Section 2: what does "uniform" mean? A mathematical measure of the uniformity of a conditional distribution p(y|x) is provided by the conditional entropy (1)

$$H(p) \equiv -\sum_{x,y} \tilde{p}(x)\, p(y|x)\, \log p(y|x)$$

(1) A more common notation for the conditional entropy is H(Y | X), where Y and X are random variables with joint distribution $\tilde{p}(x)\,p(y|x)$. To emphasize the dependence of the entropy on the probability distribution p, we have adopted the alternate notation H(p).

Figure 1 Four different scenarios in constrained optimization. P represents the space of all probability distributions. In (a), no constraints are applied, and all p ∈ P are allowable. In (b), the constraint C1 narrows the set of allowable models to those that lie on the line defined by the linear constraint. In (c), two consistent constraints C1 and C2 define a single model p ∈ C1 ∩ C2. In (d), the two constraints are inconsistent (i.e., C1 ∩ C2 = ∅); no p ∈ P can satisfy them both.

The entropy is bounded from below by zero, the entropy of a model with no uncertainty at all, and from above by log |Y|, the entropy of the uniform distribution over all |Y| possible values of y. With this definition in hand, we are ready to present the principle of maximum entropy.

Maximum Entropy Principle: to select a model from a set C of allowed probability distributions, choose the model $p_* \in C$ with maximum entropy H(p):

$$p_* = \operatorname*{argmax}_{p \in C} H(p) \qquad (6)$$

It can be shown that $p_*$ is always well-defined; that is, there is always a unique model $p_*$ with maximum entropy in any constrained set C.
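The conditional entropy just defined is also easy to compute. A small sketch of ours (the data-structure choices are assumptions):

```python
import math

def conditional_entropy(p_tilde_x, p_y_given_x):
    """H(p) = -sum_{x,y} p~(x) p(y|x) log2 p(y|x), in bits.

    p_tilde_x:    dict mapping context x -> empirical probability p~(x)
    p_y_given_x:  dict mapping x -> dict mapping y -> model probability p(y|x)
    """
    h = 0.0
    for x, px in p_tilde_x.items():
        for y, pyx in p_y_given_x[x].items():
            if pyx > 0:
                h -= px * pyx * math.log2(pyx)
    return h

# Two contexts, uniform model over three outputs: H = log2(3) ≈ 1.585 bits.
p_tilde_x = {"ctx1": 0.5, "ctx2": 0.5}
uniform = {x: {y: 1 / 3 for y in "abc"} for x in p_tilde_x}
print(conditional_entropy(p_tilde_x, uniform))
```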
Parametric Form

The maximum entropy principle presents us with a problem in constrained optimization: find the $p_* \in C$ that maximizes H(p). In simple cases, we can find the solution to this problem analytically. This was true for the example presented in Section 2 when we imposed the first two constraints on p. Unfortunately, the solution to the general problem of maximum entropy cannot be written explicitly, and we need a more indirect approach. (The reader is invited to try to calculate the solution for the same example when the third constraint is imposed.) To address the general problem, we apply the method of Lagrange multipliers from the theory of constrained optimization. The relevant steps are outlined here; the reader is referred to Della Pietra et al. (1995) for a more thorough discussion of constrained optimization as applied to maximum entropy. We will refer to the original constrained optimization problem,

$$\text{find } p_* = \operatorname*{argmax}_{p \in C} H(p),$$

as the primal problem. For each feature $f_i$ we introduce a parameter $\lambda_i$ (a Lagrange multiplier). We call $\Psi(\lambda)$ the dual function. The functions $p_\lambda$ and $\Psi(\lambda)$ may be calculated explicitly using simple calculus. We find

$$p_\lambda(y|x) = \frac{1}{Z_\lambda(x)} \exp\left(\sum_i \lambda_i f_i(x, y)\right), \qquad Z_\lambda(x) = \sum_y \exp\left(\sum_i \lambda_i f_i(x, y)\right) \qquad (10)$$

At first glance it is not clear what these machinations achieve. However, a fundamental principle in the theory of Lagrange multipliers, called generically the Kuhn-Tucker theorem, asserts that under suitable assumptions, the primal and dual problems are, in fact, closely related. This is the case in the present situation. Although a detailed account of this relationship is beyond the scope of this paper, it is easy to state the final result: suppose that $\lambda^*$ is the solution of the dual problem; then $p_{\lambda^*}$ is the solution of the primal problem, that is, $p_{\lambda^*} = p_*$. In other words, the maximum entropy model subject to the constraints C has the parametric form (2) $p_{\lambda^*}$ of (10), where the parameter values $\lambda^*$ can be determined by maximizing the dual function $\Psi(\lambda)$. The most important practical consequence of this result is that any algorithm for finding the maximum $\lambda^*$ of $\Psi(\lambda)$ can be used to find the maximum $p_*$ of H(p) for $p \in C$.

Relation to Maximum Likelihood

The log-likelihood $L_{\tilde{p}}(p)$ of the empirical distribution $\tilde{p}$ as predicted by a model p is defined by

$$L_{\tilde{p}}(p) \equiv \log \prod_{x,y} p(y|x)^{\tilde{p}(x,y)} = \sum_{x,y} \tilde{p}(x, y) \log p(y|x)$$

It is easy to check that the dual function $\Psi(\lambda)$ of the previous section is, in fact, just the log-likelihood for the exponential model $p_\lambda$; that is, $\Psi(\lambda) = L_{\tilde{p}}(p_\lambda)$. With this interpretation, the result of the previous section can be rephrased as: the model $p_* \in C$ with maximum entropy is the model in the parametric family $p_\lambda(y|x)$ that maximizes the likelihood of the training sample $\tilde{p}$. This result provides an added justification for the maximum entropy principle: if the notion of selecting a model $p_*$ on the basis of maximum entropy isn't compelling enough, it so happens that this same $p_*$ is also the model that can best account for the training sample, from among all models of the same parametric form (10). Table 1 summarizes the primal-dual framework we have established.

(2) It might be that the dual function $\Psi(\lambda)$ does not achieve its maximum at any finite $\lambda^*$. In this case, the maximum entropy model will not have the form $p_\lambda$ for any $\lambda$. However, it will be the limit of models of this form, as indicated by the following result, whose proof we omit: suppose $\lambda_n$ is any sequence such that $\Psi(\lambda_n)$ converges to the maximum of $\Psi(\lambda)$; then $p_{\lambda_n}$ converges to $p_*$.

Table 1 The duality of maximum entropy and maximum likelihood is an example of the more general phenomenon of duality in constrained optimization.

Computing the Parameters

For all but the most simple problems, the $\lambda^*$ that maximize $\Psi(\lambda)$ cannot be found analytically. Instead, we must resort to numerical methods.
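Before surveying those methods, a sketch of the exponential model (10) and of a single scaling update may help. This is our illustration, not the paper's code; the one-pass update assumes, for simplicity, the Darroch-Ratcliff condition that $\sum_i f_i(x, y) = M$ for all (x, y), under which the update has the closed form $\Delta\lambda_i = (1/M)\log(\tilde{p}(f_i)/p_\lambda(f_i))$ (when the condition fails, this same update is only a common heuristic):

```python
import math

def p_lambda(x, Y, features, lam):
    """Conditional exponential model (10): p(y|x) proportional to exp(sum_i lam_i f_i(x,y))."""
    scores = {y: math.exp(sum(l * f(x, y) for l, f in zip(lam, features)))
              for y in Y}
    z = sum(scores.values())                  # normalizing constant Z_lambda(x)
    return {y: s / z for y, s in scores.items()}

def scaling_pass(p_tilde, X, Y, features, lam, M):
    """One scaling update under the constant-M assumption described above."""
    p_tilde_x = {x: sum(p for (xx, _), p in p_tilde.items() if xx == x) for x in X}
    emp = [sum(p * f(x, y) for (x, y), p in p_tilde.items()) for f in features]
    mod = [0.0] * len(features)
    for x in X:
        model = p_lambda(x, Y, features, lam)
        for i, f in enumerate(features):
            mod[i] += sum(p_tilde_x[x] * model[y] * f(x, y) for y in Y)
    return [l + math.log(e / m) / M if e > 0 and m > 0 else l
            for l, e, m in zip(lam, emp, mod)]

# Toy run with one binary feature. Here the constraint p_lambda(f) = p~(f) is
# attainable only in the limit lambda -> infinity (cf. footnote (2)); the
# update climbs steadily toward it.
fs = [lambda x, y: 1 if (y == "en" and "April" in x) else 0]
p_tilde = {(("in", "April"), "en"): 0.5, (("in", "the"), "dans"): 0.5}
lam = [0.0]
for _ in range(100):
    lam = scaling_pass(p_tilde, {x for x, _ in p_tilde}, {"en", "dans"}, fs, lam, M=1)
print(lam)
```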
From the perspective of numerical optimization, the function $\Psi(\lambda)$ is well behaved, since it is smooth and convex in $\lambda$. Consequently, a variety of numerical methods can be used to calculate $\lambda^*$. One simple method is coordinate-wise ascent, in which $\lambda^*$ is computed by iteratively maximizing $\Psi(\lambda)$ one coordinate at a time. When applied to the maximum entropy problem, this technique yields the popular Brown algorithm (Brown 1959). Other general purpose methods that can be used to maximize $\Psi(\lambda)$ include gradient ascent and conjugate gradient. An optimization method specifically tailored to the maximum entropy problem is the iterative scaling algorithm of Darroch and Ratcliff (1972). We present here a version of this algorithm specifically designed for the problem at hand; a proof of the monotonicity and convergence of the algorithm is given in Della Pietra et al. (1995). The algorithm is applicable whenever the feature functions $f_i(x, y)$ are nonnegative:

$$f_i(x, y) \ge 0 \quad \text{for all } i, x, \text{ and } y$$

This is, of course, true for the binary-valued feature functions we are considering here. The algorithm generalizes the Darroch-Ratcliff procedure, which requires, in addition to the nonnegativity, that the feature functions satisfy $\sum_i f_i(x, y) = 1$ for all x, y.

Algorithm 1: Improved Iterative Scaling
Input: feature functions $f_1, f_2, \ldots, f_n$; empirical distribution $\tilde{p}(x, y)$
Output: optimal parameter values $\lambda_i^*$; optimal model $p_{\lambda^*}$
Let $\Delta\lambda_i$ be the solution to (18), with an appropriate choice for $a_0$ and suitable attention paid to the domain of g.

Feature Selection

Earlier we divided the statistical modeling problem into two steps: finding appropriate facts about the data, and incorporating these facts into the model. Up to this point we have proceeded by assuming that the first task was somehow performed for us. Even in the simple example of Section 2, we did not explicitly state how we selected those particular constraints. That is, why is the fact that dans or à was chosen by the expert translator 50% of the time any more important than countless other facts contained in the data? In fact, the principle of maximum entropy does not directly concern itself with the issue of feature selection: it merely provides a recipe for combining constraints into a model. But the feature selection problem is critical, since the universe of possible constraints is typically in the thousands or even millions. In this section we introduce a method for automatically selecting the features to be included in a maximum entropy model, and then offer a series of refinements to ease the computational burden.

Motivation

We begin by specifying a large collection F of candidate features. We do not require a priori that these features are actually relevant or useful. Instead, we let the pool be as large as practically possible. Only a small subset of this collection of features will eventually be employed in our final model. If we had a training sample of infinite size, we could determine the "true" expected value for a candidate feature f ∈ F simply by computing the fraction of events in the sample for which f(x, y) = 1. In real-life applications, however, we are provided with only a small sample of N events, which cannot be trusted to represent the process fully and accurately. Specifically, we cannot expect that for every feature f ∈ F, the estimate of $\tilde{p}(f)$ we derive from this sample will be close to its value in the limit as N grows large. The toy experiment below makes this concrete.
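A small resampling sketch (ours; the population and feature are invented for illustration) shows how much the estimate of $\tilde{p}(f)$ scatters across different samples drawn from the same process, which is exactly the concern developed next:

```python
import random

def feature_rate(events, f):
    """Fraction of sampled events for which f(x, y) = 1, i.e. the estimate p~(f)."""
    return sum(f(x, y) for x, y in events) / len(events)

# A process where the feature truly fires 3% of the time.
random.seed(0)
population = [(("in", "April"), "en")] * 3 + [(("in", "the"), "dans")] * 97
f = lambda x, y: 1 if ("April" in x and y == "en") else 0

for _ in range(3):
    draw = random.choices(population, k=50)   # a different sample of N = 50 events
    print(feature_rate(draw, f))              # estimates scatter widely around 0.03
```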
Employing a larger (or even just a different) sample of data from the same process might result in different estimates of $\tilde{p}(f)$ for many candidate features. We would like to include in the model only a subset S of the full set of candidate features F. We will call S the set of active features. The choice of S must capture as much information about the random process as possible, yet only include features whose expected values can be reliably estimated. By adding feature f to S, we obtain a new set of active features S ∪ f. Following (19), this set of features determines a set of models

$$C(S \cup f) \equiv \left\{\, p \in P \;:\; p(g) = \tilde{p}(g) \text{ for all } g \in S \cup f \,\right\}$$

The optimal model in this space of models is

$$p_{S \cup f} \equiv \operatorname*{argmax}_{p \in C(S \cup f)} H(p)$$

Adding the feature f allows the model $p_{S \cup f}$ to better account for the training sample; this results in a gain

$$\Delta L(S, f) \equiv L_{\tilde{p}}(p_{S \cup f}) - L_{\tilde{p}}(p_S)$$

in the log-likelihood of the training data. At each stage of the model-construction process, our goal is to select the candidate feature f ∈ F which maximizes the gain ΔL(S, f); that is, we select the candidate feature which, when adjoined to the set of active features S, produces the greatest increase in likelihood of the training sample. This strategy is implemented in the algorithm below. One issue left unaddressed by this algorithm is the termination condition. Obviously, we would like a condition which applies exactly when all the "useful" features have been selected. One reasonable stopping criterion is to subject each proposed feature to cross-validation on a sample of data withheld from the initial data set. If the feature does not lead to an increase in likelihood of the withheld sample of data, the feature is discarded. We will have more to say about the stopping criterion in Section 5.3.

Approximate Gains

Algorithm 2 is not a practical method for incremental feature selection. For each candidate feature f ∈ F considered in step 2, we must compute the maximum entropy model $p_{S \cup f}$, a task that is computationally costly even with the efficient iterative scaling algorithm introduced earlier. We therefore introduce a modification to the algorithm, making it greedy but much more feasible. We replace the computation of the gain ΔL(S, f) of a feature f with an approximation, which we will denote by ~ΔL(S, f). Recall that the model $p_S$ has a set of parameters $\lambda$, one for each feature in S. The model $p_{S \cup f}$ contains this set of parameters, plus a single new parameter $\alpha$, corresponding to f. (4) Given this structure, we might hope that the optimal values for $\lambda$ do not change as the feature f is adjoined to S. Were this the case, imposing an additional constraint would require only optimizing the single parameter $\alpha$ to maximize the likelihood. Unfortunately, when a new constraint is imposed, the optimal values of all parameters change. However, to make the feature-ranking computation tractable, we make the approximation that the addition of a feature f affects only $\alpha$, leaving the $\lambda$-values associated with other features unchanged. That is, when determining the gain of f over the model $p_S$, we pretend that the best model containing features S ∪ f has the form

$$p^{\alpha}_{S,f}(y|x) \equiv \frac{p_S(y|x)\, e^{\alpha f(x,y)}}{Z_\alpha(x)} \qquad (24)$$

The only parameter distinguishing models of the form (24) is $\alpha$. Among these models, we are interested in the one that maximizes the approximate gain. We will denote the gain of this model by

$$\sim\!\Delta L(S, f) \equiv \max_\alpha G_{S,f}(\alpha) \qquad (27)$$

and the optimal model by $p^{\alpha^*}_{S,f}$. Despite the rather unwieldy notation, the idea is simple.
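Concretely, a minimal sketch of this one-parameter search (ours, not the paper's; `p_S` is assumed to be a callable returning $p_S(y|x)$, and a coarse grid search stands in for Newton's method):

```python
import math

def approximate_gain(p_S, f, p_tilde, Y):
    """~dL(S, f): hold the old parameters fixed inside p_S and line-search the
    single new weight alpha of model (24). Illustrative sketch only.
    """
    def loglik(alpha):
        ll = 0.0
        for (x, y), prob in p_tilde.items():
            z = sum(p_S(yy, x) * math.exp(alpha * f(x, yy)) for yy in Y)
            ll += prob * math.log(p_S(y, x) * math.exp(alpha * f(x, y)) / z)
        return ll

    base = loglik(0.0)                          # log-likelihood of p_S itself
    alphas = [i / 10.0 for i in range(-50, 51)]
    best = max(alphas, key=loglik)
    return best, loglik(best) - base            # (alpha*, approximate gain)
```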
Computing the approximate gain in likelihood from adding feature f to $p_S$ has been reduced to a simple one-dimensional optimization problem over the single parameter $\alpha$, which can be solved by any popular line-search technique, such as Newton's method. This yields a great savings in computational complexity over computing the exact gain, an n-dimensional optimization problem requiring more sophisticated methods such as conjugate gradient. But the savings comes at a price: for any particular feature f, we are probably underestimating its gain, and there is a reasonable chance that we will select a feature f whose approximate gain ~ΔL(S, f) was highest and pass over the feature with maximal gain ΔL(S, f). A graphical representation of this approximation is provided in Figure 3. Here the log-likelihood is represented as an arbitrary convex function over two parameters: $\lambda$ corresponds to the "old" parameter, and $\alpha$ to the "new" parameter. Holding $\lambda$ fixed and adjusting $\alpha$ to maximize the log-likelihood involves a search over the darkened line, rather than a search over the entire space of $(\lambda, \alpha)$. The actual algorithms, along with the appropriate mathematical framework, are presented in the appendix.

(4) Another way to think of this is that the models $p_{S \cup f}$ and $p_S$ have the same number of parameters, but $\alpha = 0$ for $p_S$.

Figure 3 The likelihood L(p) is a convex function of its parameters. If we start from a one-constraint model whose optimal parameter value is $\lambda = \lambda_0$ and consider the increase in $L_{\tilde{p}}(p)$ from adjoining a second constraint with the parameter $\alpha$, the exact answer requires a search over $(\lambda, \alpha)$. We can simplify this task by holding $\lambda = \lambda_0$ constant and performing a line search over the possible values of the new parameter $\alpha$. In (a), the darkened line represents the search space we restrict attention to. In (b), we show the reduced problem: a line search over $\alpha$.

Case Studies

In the next few pages we discuss several applications of maximum entropy modeling within Candide, a fully automatic French-to-English machine translation system under development at IBM. Over the past few years, we have used Candide as a test bed for exploring the efficacy of various techniques in modeling problems arising in machine translation. We begin in Section 5.1 with a review of the general theory of statistical translation, describing in some detail the models employed in Candide. In Section 5.2 we describe how we have applied maximum entropy modeling to predict the French translation of an English word in context. In Section 5.3 we describe a maximum entropy model that predicts how to divide a French sentence into short segments that can be translated sequentially. In Section 5.4 we describe maximum entropy models that predict differences between French word order and English word order.

Figure 4 Alignment of a French-English sentence pair. The subscripts give the position of each word in its sentence. Here $a_1 = 1$, $a_2 = 2$, $a_3 = a_4 = 3$, $a_5 = 4$, and $a_6 = 5$.

Review of Statistical Translation

When presented with a French sentence F, Candide's task is to find the English sentence E which is most likely given F:

$$\hat{E} = \operatorname*{argmax}_{E} p(E \mid F) = \operatorname*{argmax}_{E} p(E)\, p(F \mid E) \qquad (30)$$

Candide estimates p(E), the probability that a string E of English words is a well-formed English sentence, using a parametric model of the English language, commonly referred to as a language model. The system estimates p(F|E), the probability that a French sentence F is a translation of E, using a parametric model of the process of English-to-French translation known as a translation model.
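Equation (30) is the standard noisy-channel decomposition. Schematically (our sketch, not Candide's actual search, which explores an enormous hypothesis space rather than a fixed candidate list; the two scoring callables are assumptions):

```python
def decode(french, candidate_english, lm_logprob, tm_logprob):
    """Pick the English sentence E maximizing log p(E) + log p(F|E), per (30).

    lm_logprob(e) and tm_logprob(f, e) stand in for the language model and
    translation model described in the text.
    """
    return max(candidate_english,
               key=lambda e: lm_logprob(e) + tm_logprob(french, e))
```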
These two models, plus a search strategy for finding the $\hat{E}$ that maximizes (30) for some F, comprise the engine of the translation system. We now briefly describe the translation model for the probability p(F|E); a more thorough account is provided in Brown et al. (1991). We imagine that an English sentence E generates a French sentence F in two steps. First, each word in E independently generates zero or more French words. These words are then ordered to give a French sentence F. We denote the ith word of E by $e_i$ and the jth word of F by $y_j$. (We employ $y_j$ rather than the more intuitive $f_j$ to avoid confusion with the feature function notation.) We denote the number of words in the sentence E by |E| and the number of words in the sentence F by |F|. The generative process yields not only the French sentence F but also an association of the words of F with the words of E. We call this association an alignment, and denote it by A. An alignment A is parametrized by a sequence of |F| numbers $a_j$, with $1 \le a_j \le |E|$. For every word position j in F, $a_j$ is the word position in E of the English word that generates $y_j$. Figure 4 depicts a typical alignment. The probability p(F|E) that F is the translation of E is expressed as the sum over all possible alignments A between E and F of the probability of F and A given E:

$$p(F \mid E) = \sum_{A} p(F, A \mid E) \qquad (31)$$

The sum in equation (31) is computationally unwieldy; it involves a sum over all $|E|^{|F|}$ possible alignments between the words in the two sentences. We sometimes make the simplifying assumption that there exists one extremely probable alignment $\hat{A}$, called the "Viterbi alignment," for which

$$p(F \mid E) \approx p(F, \hat{A} \mid E) \qquad (33)$$

We call the model described by equations (31) and (33) the basic translation model. We take the probabilities p(n|e) and p(y|e) as the fundamental parameters of the model, and parametrize the distortion probability in terms of simpler distributions. Brown et al. (1991) describe a method of estimating these parameters to maximize the likelihood of a large bilingual corpus of English and French sentences. Their method is based on the Expectation-Maximization (EM) algorithm, a well-known iterative technique for maximum likelihood training of a model involving hidden statistics. For the basic translation model, the hidden information is the alignment A between E and F. We employed the EM algorithm to estimate the parameters of the basic translation model so as to maximize the likelihood of a bilingual corpus obtained from the proceedings of the Canadian Parliament. For historical reasons, these proceedings are sometimes called "Hansards." Our Hansard corpus contains 3.6 million English-French sentence pairs, for a total of a little under 100 million words in each language. Table 2 shows our parameter estimates for the translation probabilities p(y|in). The basic translation model has worked admirably: given only the bilingual corpus, with no additional knowledge of the languages or any relation between them, it has uncovered some highly plausible translations. Nevertheless, the basic translation model has one major shortcoming: it does not take the English context into account. That is, the model does not account for surrounding English words when predicting the appropriate French rendering of an English word. As we pointed out in Section 3, this is not how successful translation works.
The best French translation of in is a function of the surrounding English words: if a month's time are the subsequent words, pendant might be more likely, but if the fiscal year 1992 is what follows, then dans is more likely. The basic model is blind to context, always assigning a probability of 0.3004 to dans and 0.0044 to pendant. This can yield errors when Candide is called upon to translate a French sentence. Examples of two such errors are shown in Figure 5. In the first example, the system has chosen an English sentence in which the French word supérieures has been rendered as superior when greater or higher is a preferable translation. With no knowledge of context, an expert translator is also quite likely to select superior as the English word generating supérieures. But an expert privy to the fact that 50% was among the next few words might be more inclined to select greater or higher. Similarly, in the second example, the incorrect rendering of Il as He might have been avoided had the translation model used the fact that the word following it is appears.

Figure 5 Typical errors encountered in using the EM-based model of Brown et al. in a French-to-English translation system, e.g. "He appears that Bank of Boston has almost completed its review of Shawmut."

Context-Dependent Word Models

In the hope of rectifying these errors, we consider the problem of context-sensitive modeling of word translation. We envision, in practice, a separate maximum entropy model, $p_e(y|x)$, for each English word e, where $p_e(y|x)$ represents the probability that an expert translator would choose y as the French rendering of e, given the surrounding English context x. This is just a slightly recast version of a longstanding problem in computational linguistics, namely, sense disambiguation--the determination of a word's sense from its context. We begin with a training sample of English-French sentence pairs (E, F) randomly extracted from the Hansard corpus, such that E contains the English word in. For each sentence pair, we use the basic translation model to compute the Viterbi alignment between E and F. Using this alignment, we then construct an (x, y) training event. The event consists of a context x containing the six words in E surrounding in, and a future y, the French rendering of in (sample events are shown in Table 3). Next we define the set of candidate features. For this application, we employ features that are indicator functions of simply described sets. Specifically, we consider functions f(x, y) that are one if y is some particular French word and the context x contains a given English word, and are zero otherwise. We employ the following notation to represent these features:

$$f_1(x, y) = \begin{cases} 1 & \text{if } y = \textit{en} \text{ and April is one of the words following in} \\ 0 & \text{otherwise} \end{cases}$$

$$f_2(x, y) = \begin{cases} 1 & \text{if } y = \textit{pendant} \text{ and weeks is one of the three words following in} \\ 0 & \text{otherwise} \end{cases}$$

Here $f_1$ = 1 when April follows in and en is the translation of in; $f_2$ = 1 when weeks is one of the three words following in and pendant is the translation. The set of features under consideration is vast, but may be expressed in abbreviated form as the templates shown in Table 4. A maximum entropy model that uses only template 1 features predicts each French translation y with the probability $\tilde{p}(y)$ determined by the empirical data. This is exactly the distribution employed by the basic translation model. Since template 1 features are independent of x, the maximum entropy model that employs only constraints derived from template 1 features takes no account of contextual information in assigning a probability to y.
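Features of this window-indicator form are mechanical to generate from the templates. A small sketch of ours follows (the encoding of the context as a relative-position dictionary is an assumption, not the paper's representation):

```python
def make_window_feature(target_y, trigger_word, positions):
    """Build an indicator feature in the style of templates 2-5: fires when the
    candidate translation equals target_y and trigger_word occupies one of the
    given context positions relative to the word being translated."""
    def f(x, y):
        return 1 if (y == target_y and
                     any(x.get(p) == trigger_word for p in positions)) else 0
    return f

f1 = make_window_feature("en", "April", (1, 2, 3))
f2 = make_window_feature("pendant", "weeks", (1, 2, 3))

x = {-1: "held", 1: "several", 2: "weeks", 3: "of"}   # window around "in"
print(f1(x, "en"), f2(x, "pendant"))                   # prints: 0 1
```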
When we include constraints derived from template 2 features, we take our first step towards a context-dependent model. Rather than simply constraining the expected probability of a French word y to equal its empirical probability, these constraints require that the expected joint probability of the English word immediately following in and the French rendering of in be equal to its empirical probability. An example of a template 2 constraint is

$$p(y = \textit{pendant},\; e_{+1} = \textit{several}) = \tilde{p}(y = \textit{pendant},\; e_{+1} = \textit{several})$$

A maximum entropy model that incorporates this constraint will predict the translations of in in a manner consistent with whether or not the following word is several. In particular, if in the empirical sample the presence of several led to a greater probability for pendant, this will be reflected in a maximum entropy model incorporating this constraint. We have thus taken our first step toward context-sensitive translation modeling. Templates 3, 4, and 5 consider, each in a different way, various parts of the context. For instance, template 5 constraints allow us to model how an expert translator is biased by the appearance of a word somewhere in the three words following the word being translated. If house appears within the next three words (e.g., the phrases in the house and in the red house), then dans might be a more likely translation. On the other hand, if year appears within the same window of words (as in in the year 1941 or in that fateful year), then au cours de might be more likely. Together, the five constraint templates allow the model to condition its assignment of probabilities on a window of six words around $e_0$, the word in question. We constructed a maximum entropy model $p_{in}(y|x)$ by the iterative model-growing method described in Section 4. The automatic feature selection algorithm first selected a template 1 constraint for each of the translations of in seen in the sample (12 in all), thus constraining the model's expected probability of each of these translations to their empirical probabilities. The next few constraints selected by the algorithm are shown in Table 5. The first column gives the identity of the feature whose expected value is constrained; the second column gives ~ΔL(S, f), the approximate increase in the model's log-likelihood on the data as a result of imposing this constraint; the third column gives L(p), the log-likelihood after adjoining the feature and recomputing the model. Let us consider the fifth row in the table. This constraint requires that the model's expected probability of dans, if one of the three words to the right of in is the word speech, is equal to that in the empirical sample. Before imposing this constraint on the model during the iterative model-growing process, the log-likelihood of the current model on the empirical sample was -2.8703 bits. The feature selection algorithm described in Section 4 calculated that if this constraint were imposed on the model, the log-likelihood would rise by approximately 0.019059 bits; since this value was higher than for any other constraint considered, the constraint was selected. After applying iterative scaling to recompute the parameters of the new model, the likelihood of the empirical sample rose to -2.8525 bits, an increase of 0.0178 bits.

Table 6 Maximum entropy model to predict the French translation of to run: top-ranked features not from template 1.

Table 6 lists the first few selected features for the model for translating the English word run.
The "Hansard flavor"--the rather specific domain of parliamentary discourse related to Canadian affairs--is easy to detect in many of the features in this table. It is not hard to incorporate the maximum entropy word translation models into a translation model p(F|E) for a French sentence given an English sentence. We merely replace the simple context-independent models p(y|e) used in the basic translation model (33) with the more general context-dependent models $p_{e_{a_j}}(y_j | x_{a_j})$, where $x_{a_j}$ denotes the context of the English word $e_{a_j}$. Figure 6 illustrates how using this improved translation model in the Candide system led to improved translations for the two sample sentences given earlier.

Figure 6 Improved French-to-English translations resulting from the maximum entropy-based system, e.g. "It appears that Bank of Boston has almost completed its review of Shawmut."

Segmentation

Though an ideal machine translation system could devour input sentences of unrestricted length, a typical stochastic system must cut the French sentences into polite lengths before digesting them. If the processing time is exponential in the length of the input passage (as is the case with the Candide system), then failing to split the French sentences into reasonably-sized segments would result in an exponential slowdown in translation. Thus, a common task in machine translation is to find safe positions at which to split input sentences in order to speed the translation process. "Safe" is a vague term; one might, for instance, reasonably define a safe segmentation as one which results in coherent blocks of words. For our purposes, however, a safe segmentation is dependent on the Viterbi alignment A between the input French sentence F and its English translation E. We define a rift as a position j in F such that for all k < j, $a_k \le a_j$, and for all k > j, $a_k > a_j$. In other words, the words to the left of the French word $y_j$ are generated by words to the left of the English word $e_{a_j}$, and the words to the right of $y_j$ are generated by words to the right of $e_{a_j}$. In the alignment of Figure 4, for example, there are rifts at positions j = 1, 2, 4, 5 in the French sentence. One visual method of determining whether a rift occurs after the French word j is to try to trace a line from the last letter of $y_j$ up to the last letter of $e_{a_j}$; if the line can be drawn without intersecting any alignment lines, position j is a rift. Using our definition of rifts, we can redefine a safe segmentation as one in which the segment boundaries are located only at rifts. Figure 7 illustrates an unsafe segmentation, in which a segment boundary (denoted by the ‖ symbol) lies between a and mangé, where there is no rift.

Figure 7 Example of an unsafe segmentation. A word in the translated sentence (e3) is aligned to words (y3 and y4) in two different segments of the input sentence.

Figure 8, on the other hand, illustrates a safe segmentation. The reader will notice that a safe segmentation does not necessarily result in semantically coherent segments: mes and devoirs are certainly part of one logical unit, yet are separated in this safe segmentation.

Figure 8 Example of a safe segmentation.

Once such a safe segmentation has been applied to the French sentence, we can make the assumption while searching for the appropriate English translation that no word in the translated English sentence will have to account for French words located in multiple segments.
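The rift test is easy to operationalize. Below is a small sketch of ours (not from the paper) that returns the rift positions of an alignment; we exclude the sentence-final position, which satisfies the definition vacuously but is not a useful split point:

```python
def rifts(a):
    """Rift positions j (1-based): every a_k with k <= j is <= a_j, and every
    a_k with k > j is > a_j. The argument a is the alignment a_1 .. a_|F|."""
    return [j + 1 for j in range(len(a) - 1)
            if max(a[:j + 1]) <= a[j] and min(a[j + 1:]) > a[j]]

# Alignment from Figure 4: a1=1, a2=2, a3=a4=3, a5=4, a6=5.
print(rifts([1, 2, 3, 3, 4, 5]))  # [1, 2, 4, 5], matching the text
```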
Disallowing inter-segment alignments dramatically reduces the scale of the computation involved in generating a translation, particularly for large sentences. We can consider each segment sequentially while generating the translation, working from left to right in the French sentence. We now describe a maximum entropy model that assigns to each location in a French sentence a score that is a measure of the safety of cutting the sentence at that location. We begin as in the word translation problem, with a training sample of English-French sentence pairs (E, F) randomly extracted from the Hansard corpus. For each sentence pair we use the basic translation model to compute the Viterbi alignment between E and F. We also use a stochastic part-of-speech tagger as described in Merialdo (1990) to label each word in F with its part of speech. For each position j in F we then construct an (x, y) training event. The value y is rift if a rift belongs at position j and is no-rift otherwise. The context information x is reminiscent of that employed in the word translation application described earlier. It includes a six-word window of French words: three to the left of $y_j$ and three to the right of $y_j$. It also includes the part-of-speech tags for these words, and the classes of these words as derived from a mutual-information clustering scheme. The complete (x, y) pair is illustrated in Figure 9.

Figure 9 The (x, y) pair for sentence segmentation: the six-word window around position j, the part-of-speech tags and word classes of those words, and the rift/no-rift label y.

In creating p(rift|x), we are (at least in principle) modeling the decisions of an expert French segmenter. We have a sample of his work in the training sample $\tilde{p}(x, y)$, and we measure the worth of a model by the log-likelihood $L_{\tilde{p}}(p)$. During the iterative model-growing procedure, the algorithm selects constraints on the basis of how much they increase this objective function. As the algorithm proceeds, more and more constraints are imposed on the model p, bringing it into ever-stricter compliance with the empirical data $\tilde{p}(x, y)$. This is useful to a point; insofar as the empirical data embodies the expert knowledge of the French segmenter, we would like to incorporate this knowledge into the model. But the data contains only so much expert knowledge; the algorithm should terminate when it has extracted this knowledge. Otherwise, the model p(y|x) will begin to fit itself to quirks in the empirical data. A standard approach in statistical modeling, to avoid the problem of overfitting the training data, is to employ cross-validation techniques. Separate the training data $\tilde{p}(x, y)$ into a training portion, $\tilde{p}_r$, and a withheld portion, $\tilde{p}_h$. Use only $\tilde{p}_r$ in the model-growing process; that is, select features based on how much they increase the likelihood $L_{\tilde{p}_r}(p)$. As the algorithm progresses, $L_{\tilde{p}_r}(p)$ thus increases monotonically. As long as each new constraint imposed allows p to better account for the random process that generated both $\tilde{p}_r$ and $\tilde{p}_h$, the quantity $L_{\tilde{p}_h}(p)$ also increases. At the point when overfitting begins, however, the new constraints no longer help p model the random process, but instead require p to model the noise in the sample $\tilde{p}_r$ itself. At this point, $L_{\tilde{p}_r}(p)$ continues to rise, but $L_{\tilde{p}_h}(p)$ no longer does. It is at this point that the algorithm should terminate. Figure 10 illustrates the change in log-likelihood of the training data $L_{\tilde{p}_r}(p)$ and the withheld data $L_{\tilde{p}_h}(p)$.
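This early-stopping recipe can be sketched as follows (our illustration; `approx_gain`, `heldout_ll`, and `adjoin` are assumed helpers wrapping the machinery of Section 4, not the paper's code):

```python
def grow_model(candidates, approx_gain, heldout_ll, adjoin):
    """Greedy feature selection with cross-validated early stopping: rank
    candidates by approximate gain on the training portion, adjoin the best,
    and stop once held-out log-likelihood stops rising."""
    active = []
    best = heldout_ll()
    while candidates:
        f = max(candidates, key=approx_gain)
        adjoin(f)                   # impose the constraint and retrain weights
        cur = heldout_ll()
        if cur <= best:             # L_ph(p) stopped increasing: overfitting
            break                   # (a full implementation would also discard f)
        best = cur
        active.append(f)
        candidates.remove(f)
    return active
```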
Had the algorithm terminated when the log-likelihood of the withheld data stopped increasing, the final model p would contain slightly fewer than 40 features. We have employed this segmenting model as a component in a French-English machine translation system in the following manner: the model assigns to each position in the French sentence a score, p(rift | x), which is a measure of how appropriate a split would be at that location. A dynamic programming algorithm then selects, given the "appropriateness" score at each position and the requirement that no segment may contain more than 10 words, an optimal (or, at least, reasonable) splitting of the sentence. Figure 11 shows the system's segmentation of four sentences selected at random from the Hansard data. We remind the reader to keep in mind when evaluating Figure 11 that the segmenter's task is not to produce logically coherent blocks of words, but to divide the sentence into blocks which can be translated sequentially from left to right.

Figure 11 Maximum entropy segmenter behavior on four sentences selected at random from the Hansard data.
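The dynamic programming selection just described can be sketched as follows (ours; the paper does not give its exact objective, so maximizing the sum of boundary scores is an assumption, with the stated 10-word cap):

```python
def best_split(scores, max_len=10):
    """Choose cut positions maximizing the total p(rift|x) score at the chosen
    boundaries, with no segment longer than max_len words. scores[j] is the
    model's score for a cut after word j+1 (so len(scores) = n_words - 1)."""
    n = len(scores) + 1                         # number of words
    best = [float("-inf")] * (n + 1)
    best[0] = 0.0
    back = [0] * (n + 1)
    for i in range(1, n + 1):                   # boundary after word i
        for j in range(max(0, i - max_len), i):
            bonus = scores[i - 1] if i < n else 0.0   # no score at sentence end
            if best[j] + bonus > best[i]:
                best[i], back[i] = best[j] + bonus, j
    cuts, i = [], n
    while i > 0:                                # recover the chosen boundaries
        i = back[i]
        if i > 0:
            cuts.append(i)
    return sorted(cuts)

print(best_split([0.1, 0.9, 0.2, 0.8], max_len=3))  # [2, 4]
```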
Word Reordering

Translating a French sentence into English involves not only selecting appropriate English renderings of the words in the French sentence, but also selecting an ordering for the English words. This order is often very different from the French word order. One way Candide captures word-order differences between the two languages is to allow for alignments with crossing lines. In addition, Candide performs, during a preprocessing stage, a reordering step that shuffles the words in the input French sentence into an order more closely resembling English word order. One component of this word reordering step deals with French phrases which have the NOUN de NOUN form. For some NOUN de NOUN phrases, the best English translation is nearly word for word: conflit d'intérêt, for example, is almost always rendered as conflict of interest. For other phrases, however, the best translation is obtained by interchanging the two nouns and dropping the de. The French phrase taux d'intérêt, for example, is best rendered as interest rate. Table 7 gives several examples of NOUN de NOUN phrases together with their most appropriate English translations. In this section we describe a maximum entropy model that, given a French NOUN de NOUN phrase, estimates the probability that the best English translation involves an interchange of the two nouns. We begin with a sample of English-French sentence pairs (E, F) randomly extracted from the Hansard corpus, such that F contains a NOUN de NOUN phrase. For each sentence pair we use the basic translation model to compute the Viterbi alignment $\hat{A}$ between the words in E and F. Using $\hat{A}$ we construct an (x, y) training event as follows: we let the context x be the pair of French nouns (NOUN_L, NOUN_R), and we let y be no-interchange if the English translation is a word-for-word translation of the French phrase and interchange if the order of the nouns in the English and French phrases is interchanged. We define candidate features based upon the template features shown in Table 8. We used the feature-selection algorithm of Section 4 to construct a maximum entropy model from candidate features derived from templates 1, 2, and 3. The model was grown on 10,000 training events randomly selected from the Hansard corpus. The final model contained 358 constraints. To test the model, we constructed a NOUN de NOUN word-reordering module which interchanges the order of the nouns if p(interchange|x) > 0.5 and keeps the order the same otherwise. Table 9 compares performance on a suite of test data against a baseline NOUN de NOUN reordering module that never swaps the word order.

Figure 12 Predictions of the NOUN de NOUN interchange model on phrases selected from a corpus unseen during the training process, arranged from smaller to larger p(interchange).

Conclusion

We began by introducing the building blocks of maximum entropy modeling: real-valued features and constraints built from these features. We then discussed the maximum entropy principle, which instructs us to choose, among all the models consistent with the constraints, the model with the greatest entropy. We observed that this model is a member of an exponential family with one adjustable parameter for each constraint. The optimal values of these parameters are obtained by maximizing the likelihood of the training data. Thus two different philosophical approaches, maximum entropy and maximum likelihood, yield the same result: the model with the greatest entropy consistent with the constraints is the same as the exponential model which best predicts the sample of data. We next discussed algorithms for constructing maximum entropy models, concentrating our attention on the two main problems facing would-be modelers: selecting a set of features to include in a model, and computing the parameters of a model containing these features. The general feature-selection process is too slow in practice, and we presented several techniques for making the algorithm feasible. In the second part of this paper we described several applications of our algorithms, concerning modeling tasks arising in Candide, an automatic machine translation system under development at IBM. These applications demonstrate the efficacy of maximum entropy techniques for performing context-sensitive modeling.

The final steps of Algorithm 3 are: 2. compute $\alpha_{n+1}$ from $\alpha_n$ using (35); 3. compute $G_{S,f}(\alpha_{n+1})$ using (26); 4. set ~ΔL(S, f) ← $G_{S,f}(\alpha_n)$.

Computing Approximate Gains in Parallel

For the purpose of incremental model growing as outlined in Algorithm 2, we need to compute the maximum approximate gain ~ΔL(S, f) for each candidate feature f ∈ F. One obvious approach is to cycle through all candidate features and apply Algorithm 3 for each one sequentially. Since Algorithm 3 requires one pass through every event in the training sample per iteration, this could entail millions of passes through the training sample. Because a significant cost often exists for reading the training data--if the data cannot be stored in memory but must be accessed from disk, for example--an algorithm that passes a minimal number of times through the data may be of some utility. We now give a parallel algorithm specifically tailored to this scenario. Convergence for this algorithm is guaranteed just as it was for Algorithm 3: after each iteration of step 5, the value of $\alpha(f)$ for each candidate feature f is closer to its optimal value $\alpha^*(f)$ and, more importantly, the gain $G_{S,f}$ is closer to the maximal gain ~ΔL(S, f).
2014-10-01T00:00:00.000Z
1996-03-01T00:00:00.000
{ "year": 1996, "sha1": "6ef1fa25a6bc519b78d339b618199fec99566ae9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "4af182338ee63754d4569c26cb6a5c3bbdd8cf2a", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Engineering" ] }
15046740
pes2o/s2orc
v3-fos-license
Phytochelatin modified electrode surface as a sensitive heavy metal ion biosensor.

Electrochemical biosensors have superior properties over other existing measurement systems because they can provide rapid, simple and low-cost on-field determination of many biologically active species and a number of dangerous pollutants. In our work, we suggested a new heavy metal biosensor based on the interaction of heavy metal ions (Cd2+ and Zn2+) with phytochelatin, which was adsorbed on the surface of the hanging mercury drop electrode, using adsorptive transfer stripping differential pulse voltammetry. In addition, we applied the suggested technique to the determination of heavy metals in a biological sample (human urine) and of platinum in a pharmaceutical drug. The detection limits (3 S/N) of Cd(II), Zn(II) and cis-platin were about 1.0, 13.3 and 1.9 pmole in 5 µl, respectively. On the basis of the obtained results, we propose that the suggested technique offers simple, rapid, and low-cost detection of heavy metals in environmental, biological and medical samples.

Introduction

Industries produce a number of undesirable species such as pesticides, toxic organic compounds, heavy metals and so on [1-5]. An increasing concentration of heavy metals in the environment is a serious problem for human and animal health protection and for the production of foodstuffs in many countries around the world [6-8]. That is why easy and quick detection of heavy metals at very low concentration levels in environmental and biological samples is necessary as an assurance against acute intoxication and, above all, against long-term exposure that may lead to many diseases and death [9,10]. Several analytical methods, such as atomic absorption spectrometry [11-13] and inductively coupled plasma with mass spectrometry [14-16], as well as electrochemistry [17-21], have been developed for these purposes. Electrochemical biosensors have superior properties over the other existing measurement systems because they can provide rapid, simple and low-cost on-field determination of many biologically active species and a number of dangerous pollutants [22-29]. In addition, biosensor technology is a powerful alternative to conventional analytical techniques, combining the specificity and sensitivity of biological systems in small devices. A number of recently published papers describe the determination of heavy metals using electrochemical biosensors based on their interactions with DNA [26,29-33], enzymes (first of all urease) [34-38], bacteria [39-41] and proteins [42,43]. Besides high-molecular species, i.e. proteins such as metallothionein, it is possible to use low-molecular heavy-metal-binding compounds such as phytochelatins (PCs) for the construction of biosensors. PCs, cysteine-rich small peptides consisting of 4-23 amino acids, abound in plants as a response to heavy metal stress [44-47] and participate in the detoxification of heavy metals, because they have the ability to transport heavy metal ions to the vacuole [45,48], where the ions do not yet pose an immediate toxic threat. Phytochelatins have the basic formula (γ-Glu-Cys)n-Gly (n = 2 to 11) and form M-PC complexes with the heavy metals (M) present, in which the metal is bound via the SH group of a cysteine unit [48,49]; see Figure 1A.
PCs are synthesized from glutathione; the synthesis is catalysed by PC synthase (γ-glutamylcysteine dipeptidyltranspeptidase, EC 2.3.2.15), which is activated by an increased concentration of heavy metals (Cd, Cu, Hg, As or Pb) in the plant cytoplasm [47]. Reduced glutathione (GSH) itself plays an important role in cell protection against heavy metals and against reactive oxygen species (ROS), which are able to oxidize GSH to GSSG (oxidized glutathione; glutathione disulfide) [50]. The GSH:GSSG ratio has been found to be an indicator of cell damage and of some diseases [50,51]. The aim of this paper was to suggest a new heavy metal biosensor based on the interaction of heavy metals (cadmium and zinc) with phytochelatin using adsorptive transfer stripping (AdTS) differential pulse voltammetry (DPV). The basic scheme of the proposed heavy metal biosensor is shown in Figure 1B.

Chemicals

Phytochelatin (γ-Glu-Cys)2-Gly (PC2) was synthesized by Clonestar Biotech (Brno, Czech Republic); purity over 90%. Tris(2-carboxyethyl)phosphine was produced by Molecular Probes (Eugene, Oregon, USA). Sodium chloride, cadmium nitrate, zinc nitrate and the other chemicals used were purchased from Sigma-Aldrich. The stock standard solutions of PC2 at 10 µg·ml⁻¹ were prepared in ACS water (Sigma-Aldrich, USA) and stored in the dark at -20 °C. Working standard solutions were prepared daily by dilution of the stock solutions. The pH value was measured using a WTW inoLab Level 3 with terminal Level 3 (Weilheim, Germany), controlled by a personal computer program (MultiLab Pilot; Weilheim, Germany). The pH electrode (SenTix-H, pH 0-14/3M KCl) was regularly calibrated with a set of WTW buffers (Weilheim, Germany).

Electrochemical measurements

Electrochemical measurements were performed with an AUTOLAB Analyser (EcoChemie, Netherlands) connected to a VA-Stand 663 (Metrohm, Switzerland), using a standard cell with three electrodes. The working electrode was a hanging mercury drop electrode (HMDE) with a drop area of 0.4 mm². The reference electrode was an Ag/AgCl/3M KCl electrode and the auxiliary electrode was a graphite electrode. The supporting electrolyte was prepared by mixing buffer components. The analyzed samples were deoxygenated prior to measurements by purging with argon (99.999%) saturated with water for 240 s.

Adsorptive transfer stripping (AdTS) differential pulse voltammetry (DPV) of phytochelatin: The amount of PC2 was measured using AdTS DPV. The samples of PC2 were reduced before each measurement by addition of 1 mM tris(2-carboxyethyl)phosphine according to [52]. The supporting electrolyte (0.5 M NaCl, pH 6.4) was purchased from Sigma-Aldrich in ACS purity. DPV parameters were as follows: an initial potential of -1.2 V, an end potential of -0.3 V, a modulation time of 0.057 s, a time interval of 0.2 s, a step potential of 1.05 mV/s, and a modulation amplitude of 250 mV. All experiments were carried out at room temperature. For smoothing and baseline correction, the software GPES 4.4 supplied by EcoChemie was employed.

Preparation of cis-platin (pharmaceutical drug): cis-Platin was synthesized and provided by Pliva-Lachema (Brno, Czech Republic) [53]. The stock standard solutions of cis-platin at 10 µg·ml⁻¹ were prepared in sodium chloride solution (0.5 M, pH 6.4) and stored in the dark at -20 °C. Working standard solutions were prepared daily by dilution of the stock solutions.

Statistical analysis

STATGRAPHICS® (Statistical Graphics Corp®, USA) was used for statistical analyses. Results are expressed as mean ± S.D. unless noted otherwise.
A value of p < 0.05 was considered significant.

Adsorptive transfer stripping technique as a base of the electrochemical biosensor

The adsorptive transfer stripping technique (AdTS) was developed as a suitable tool for the electrochemical detection of biomolecules such as proteins, peptides and/or DNA [29,71-79]. The principle of the technique is the adsorption of the studied analyte onto the surface of the working electrode, in our case the HMDE, in an open electrode system (Figure 2A). After the adsorption, the electrode is removed from the solution and the excess analyte is washed from the surface of the working electrode in buffer (Figure 2A3). The adsorbed analyte is finally detected in the presence of an indifferent electrolyte (Figure 2A4). It has been shown that, during the described process on the surface of the HMDE, only one assembled layer of the adsorbed analyte (bio-macromolecular species and/or other compounds capable of adsorbing on the electrode surface) can form [80]. On the basis of the above description of the transfer technique, we investigated the possibility of using a peptide (PC2)-modified HMDE surface for heavy metal determination. First, we focused on optimising the modification of the electrode surface by phytochelatin.

Using the adsorptive transfer stripping technique for the determination of phytochelatin

The electrochemical behaviour of phytochelatin 2 (PC2, a key heavy-metal-binding plant peptide) was studied on the surface of the HMDE by differential pulse voltammetry (DPV) in combination with the adsorptive transfer stripping technique (AdTS). The voltammogram of 500 µM PC2 accumulated on the HMDE surface for 120 s and analysed in 0.5 M NaCl (pH 6.4) is shown in Figure 2B. In the obtained record, we observed a signal at a potential of -0.57 V, which probably corresponds to an adduct of PC2 with mercury on the surface of the HMDE (HS-peptide + Hg = HgS-peptide) [81,82]. It was necessary to understand how the peptide interacts with the working electrode surface in order to use the modified HMDE as a suitable tool for the detection of heavy metals. The most important indicator of the status of the electrode double-layer is the dependence of the current response on the accumulation time [80,83]. The influence of the accumulation time of PC2, at 1 mM and 10 µM concentrations, on the electrochemical response (the height of the PC2 signal) was studied. The dependence observed at the 1 mM PC2 concentration increased steeply up to 240 s and resembled a Langmuir isotherm (Figure 2C(a)). From the obtained results it follows that the signal of PC2 increased up to a peptide accumulation time of 240 s, which is probably connected with the progressive filling of the electrode surface. The maximum of the presented curve at 240 s probably corresponds to the time needed to cover the HMDE surface with one layer of PC2, a surface-assembled monolayer (SAM) [84]. After 240 s, the signal of the adsorbed peptide did not increase but instead decreased, which probably relates to the formation of a poly-layer of PC2 on the HMDE surface, reducing the detectability of the adsorbed molecules. At the lower tested PC2 concentration (10 µM), the PC2 peak height increased with accumulation time over all tested values (Figure 2C(a)), although only a very small increase (about 4%) was observed beyond an accumulation time of 360 s.
Because PC2 at a 1 mM concentration was used for the determination of heavy metals, we chose an accumulation time of 240 s, by which time the surface-assembled monolayer is formed. The next important index of the behaviour of phytochelatin on the HMDE surface was the dependence of the PC2 current response on its concentration. At an accumulation time of 240 s, PC2 concentrations varying from 2.5 to 1000 µM were tested (Figure 2C(b)). The dependence of the PC2 current response on concentration was linear in the range 0-40 µM (y = 0.0894x + 0.0621; R² = 0.9969; inset in Figure 2C(b)). In addition, we observed a decrease of the current responses from a PC2 concentration of 100 µM upward. This phenomenon probably relates to the formation of a poly-layer on the electrode surface [80,83,85].

Modification of the HMDE surface by phytochelatin

For our purposes, we used the HMDE as the physical-chemical part and phytochelatin 2, which is able to bind heavy metals [68,70,86-93], as the biological part of the suggested heavy metal biosensor. We therefore designed the following experiment: (i) adsorb PC2 on the HMDE surface; (ii) remove excess PC2; (iii) expose the adsorbed PC2 to interaction with a heavy metal; (iv) detect changes in the PC2 signals (Figure 2D). We selected two heavy metals for our purposes: cadmium and zinc. We were therefore interested in whether free ions of the selected heavy metals are able to adsorb and transfer on the HMDE surface. When we accumulated (120 s) only free ions of Cd(II) and/or Zn(II), without the peptide, on the surface of the mercury working electrode, we did not observe any signal corresponding to heavy metal species (not shown). This effect proves that free heavy metal ions cannot be transferred in this way and consequently cannot be detected.

Electrochemical behaviour of phytochelatin-modified HMDE in the presence of Cd(II) and Zn(II)

Phytochelatin 2 (1 mM) was adsorbed on the HMDE surface for 240 s. Then the modified electrode was washed in the basic electrolyte solution and subsequently allowed to interact with 500 µM Cd(II) and/or Zn(II) for 300 s, a duration established as the most effective. The obtained voltammograms are shown in Figure 3A (Cd) and 3B (Zn). In the presence of Cd(II) we recorded, in addition to the original PC2 signal, two further signals, which we named CdPC2 (-0.76 V) and PC2(Cd) (-0.45 V). We also observed a linear decrease of the PC2 signal and linear increases of the CdPC2 and PC2(Cd) signals with rising added Cd(II) concentration. The equations of the rising linear curves were, for CdPC2, y = 0.0128x + 0.2085 (R² = 0.9918), and for PC2(Cd), y = 0.0999x - 4.8325 (R² = 0.9997). The detection limit (3 S/N) of Cd(II) calculated from the increase of the PC2(Cd) peak was about 1.055 pmole in 5 µl (0.211 µM); see Figure 3C. In the case of Zn(II) determination by the PC2-modified HMDE, we observed only one additional signal, named ZnPC2 (-1.09 V), in comparison with the control detection of PC2 without interaction with Zn(II). The PC2 signal decreased linearly and the ZnPC2 signal increased linearly (y = 0.8496x - 18.598; R² = 0.9961) with rising Zn(II) concentration. The detection limit (3 S/N) of Zn(II) calculated from the increase of the ZnPC2 peak was about 13.30 pmole in 5 µl (2.66 µM); see Figure 3D.
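The 3·S/N detection limits quoted above follow directly from the calibration slope and the baseline noise, together with a pmol-to-µM conversion for the 5 µl drop; the sketch below illustrates the arithmetic (the slope and noise inputs are illustrative placeholders, not values from this work).

```python
def detection_limit_uM(slope, noise):
    """Concentration (uM) at which the signal reaches 3x the baseline
    noise, for a linear calibration y = slope * c (signal units per uM)."""
    return 3.0 * noise / slope

def pmol_in_drop(conc_uM, volume_ul=5.0):
    """Amount (pmol) in a drop: 1 uM x 1 ul = 1e-6 mol/l x 1e-6 l = 1 pmol."""
    return conc_uM * volume_ul

# hypothetical slope/noise, then the conversion used in the text:
print(detection_limit_uM(slope=0.85, noise=0.06))  # ~0.21 uM (illustrative)
print(pmol_in_drop(0.211))  # 1.055 pmol in 5 ul, matching the Cd(II) limit
```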
Determination of Cd(II) and Zn(II) by PC2-modified HMDE in a biological matrix

We decided to test our peptide-modified heavy metal biosensor by detecting Cd(II) and/or Zn(II) in the presence of a biological matrix (human urine). Specifically, PC2 (1 mM) was adsorbed (240 s) on the HMDE surface and washed (0.5 M NaCl), and the modified electrode then interacted with Cd(II) and/or Zn(II) in the presence of human urine (diluted 10x) for 300 s. Subsequently, the electrode was washed (0.5 M NaCl) and placed in an electrochemical cell containing the supporting electrolyte (0.5 M NaCl, pH 6.4). The human urine contained additions of Cd(II) and/or Zn(II) at concentrations of 25, 50, 100, 225, 400 and 600 µM and/or 50, 100, 200, 400, 600 and 800 µM, respectively. All studied signals exhibited very similar electrochemical behaviour in both the buffered solution and the biological matrix (not shown). The only two differences we found between analyses in the buffered and non-buffered medium concern the peak heights and their relationship. In the presence of human urine, the heights of all studied signals were 10-15% lower than in the buffered medium (sodium chloride). This effect is probably caused by impurities in the real sample that may complex with the heavy metal ions.

Using the peptide-modified HMDE to study an anticancer drug: cis-platin

We attempted to use our suggested peptide-modified heavy metal biosensor for the determination of the anticancer drug cis-platin ([Pt(II)(NH3)2Cl2]0; MW 303). A prepared solution of the Pt complex interacted with the PC2-modified HMDE for 300 s. A resulting voltammogram is shown in Figure 4A. The phytochelatin-modified HMDE surface formed an adduct with the Pt complex, which we detected as a signal at -0.96 V and named PtPC2. In addition, we studied the influence of different durations of interaction between the PC2-modified electrode surface and Pt on the height of the PtPC2 signal. We selected a duration of 300 s as the most suitable for the interaction of the Pt complex with the PC2-modified electrode surface (not shown). The dependence of the heights of the PC2 and PtPC2 signals on the cis-platin concentration is shown in Figure 4B. The PC2 signal decreased and the PtPC2 signal increased linearly (y = 0.0532x - 5.7079; R² = 0.9946) over the studied cis-platin concentration range of 100-750 µM. The detection limit (3 S/N) of cis-platin ([Pt(II)(NH3)2Cl2]0) calculated from the increase of the PtPC2 peak was about 1.958 pmole in 5 µl (0.392 µM) at an interaction time of 300 s.

Conclusions

The development of easy and rapid renewable sensors for the detection of different species is one of the most important tasks of analytical chemistry and biochemistry. We suggested a simple sensor for the determination of Cd(II) and Zn(II) based on modification of the hanging mercury drop electrode surface by phytochelatin. The main advantage of using the HMDE in comparison with other solid electrodes (carbon, gold and so on) is its sensitivity. On the basis of the obtained results we propose that the suggested technique offers simple, rapid, and low-cost detection of heavy metals in environmental, biological, and medical samples.
2016-05-14T03:07:21.308Z
2005-02-01T00:00:00.000
{ "year": 2005, "sha1": "f215017832b871d2a53173a2ea53a8e9b5e48de1", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/1424-8220/5/1/70/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f215017832b871d2a53173a2ea53a8e9b5e48de1", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
1950279
pes2o/s2orc
v3-fos-license
The BioLink SIG Workshop at ISMB2004

The Special Interest Group (SIG) on Text Mining (or BioLINK: Biological Literature, Information and Knowledge; http://www.pdg.cnb.uam.es/BioLINK/) was created to address the need for communication and interchange of ideas in the field of text mining and information extraction applied to biology and biomedicine. Information extraction (IE) is an outgrowth of work in automated natural language processing, which began in the 1950s with work on transformational grammar by Zellig Harris [5,6] and later Noam Chomsky [3,4]. Information extraction technology made rapid progress starting in the late 1980s, thanks to a series of conferences focused on the evaluation of IE: the Message Understanding Conferences [1]. There is also a long history of research on applications in medicine. Applications to the medical field focus on two distinct sub-problems: improved access to the medical literature and extraction of information from patient records. Despite these successes in other fields, natural language processing (NLP) techniques were not introduced in biology until the late 1990s. Even today, there are two distinct groups: on the one hand, researchers with a background in computer science, and on the other hand, their colleagues with a background in the life sciences, with only limited interaction between the two groups. To improve this situation, the BioLINK group holds regular open meetings to bring together researchers developing text data mining tools and related language processing methods to manage the information explosion in the biomedical field. These meetings include invited and contributed papers, with a focus on developing shared infrastructure (tools, corpora, ontologies) and challenge evaluations, in the style of the KDD Challenge Cups [2]. This year, the BioLINK SIG meeting focused on resources and tools for text mining, with special emphasis on the evaluation of these tools. Speakers from the following areas were invited:
Overview: contributed papers

The contributed papers reflect the importance that is currently given to biological named entity detection in the literature. Four out of the five publications are related to this issue and to the associated issues of resources, infrastructure, and evaluation.

The first BioCreAtIvE Workshop was held in Granada, Spain, 28-31 March 2004. The goal of the workshop was to provide a set of common challenge evaluation tasks to assess the state of the art for text mining applied to biological problems. The assessment focused on two tasks. The first dealt with the extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to provide Gene Ontology (GO) annotations for proteins, given full-text articles. Overall, 27 groups participated in the assessment, including 18 for gene/protein name extraction, and nine for the GO functional annotation task.

Enhancing access to the bibliome: the TREC genomics track - William R. Hersh

The Text Retrieval Conference (TREC) is an annual activity of the information retrieval (IR) research community sponsored by the National Institute for Standards and Technology (NIST). TREC aims to provide a forum for evaluation of IR systems and users. Activity is organized into 'tracks' of common interest, such as question-answering, multi-lingual IR, web searching, interactive retrieval and, as started in 2003, IR in the genomics domain. The genomics track is sustained by a National Science Foundation Information Technology Research grant that provides funding through 2008. Background on the motivation and evolution of the track can be found on the track website (http://medir.ohsu.edu/∼genomics/). The website also contains an overview paper from the 2003 track as well as the protocol for the 2004 track.

BioMinT: a database curator's assistant for biomedical text processing - Anne-Lise Veuthey

The goal of the BioMinT project is to develop a generic text mining tool that assists manual database annotation by: (a) interpreting diverse types of query; (b) retrieving relevant documents from the biological literature; (c) extracting the required information; and (d) providing the result as a database slot filler or as a structured report. The development of the BioMinT system has followed a strictly problem-oriented approach. All decisions relative to prototype design have been based on requirements from those who will use the final product in their daily work, i.e. the curators of Swiss-Prot (the knowledgebase component of the UniProt resource) and PRINTS (the protein family fingerprint database), as well as biological researchers.

CASP: critical assessment of techniques for protein structure prediction - Anna Tramontano

The CASP community-wide experiment critically assesses the state of the art in the prediction of protein structure from sequence; it has been conducted on a 2-year cycle for the last decade, beginning in 1994.
The primary goals are to establish the capabilities and limitations of current methods of modelling protein structure from sequence, to determine where progress is being made, to determine where the field is held back by specific bottlenecks, and to compare the results of automatic prediction servers with manually submitted predictions. Methods are assessed on the basis of the analysis of tens of thousands of blind predictions of protein structure submitted by a large number of prediction teams from around the world. CASP provides a forum in which there is a thorough examination of the outcome of the predictions: what went right, what went wrong and, where possible, an understanding of why. For members of the structural biology community not directly involved in structure prediction, the results provide a reasonable guide to the current state of the art. For the prediction community, the results provide a new and sharper sense of direction. Finally, we can begin to measure progress in the field over time.

EVA: automatic system for the evaluation of structure prediction servers - Burkhard Rost

EVA (http://www.rostlab.org/eva/) is a web server for the evaluation of the accuracy of automated protein structure prediction methods. The evaluation is updated automatically each week, to cope with the large number of existing prediction servers and the constant changes in the prediction methods. EVA currently assesses servers for secondary structure prediction, contact prediction, comparative protein structure modelling, and threading/fold recognition. Every day, sequences of newly available protein structures in the Protein Data Bank are sent to the servers and their predictions are collected. The predictions are then compared to the experimental structures once a week; the results are published on the EVA web pages. Over time, EVA has accumulated prediction results for a large number of proteins, ranging from hundreds to thousands, depending on the prediction method. This large sample ensures that methods are compared reliably. As a result, EVA provides useful information to developers as well as users of prediction methods.
2018-04-03T02:28:06.761Z
2005-02-01T00:00:00.000
{ "year": 2005, "sha1": "52aff7edd0568fe1d0222b01f70e43af5f87842c", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ijg/2005/316253.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "52aff7edd0568fe1d0222b01f70e43af5f87842c", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
225300688
pes2o/s2orc
v3-fos-license
Using onset times from frequent seismic surveys to understand fluid flow at the Peace River Field, Canada

Our limited knowledge of the relationship between changes in the state of an aquifer or reservoir and the corresponding changes in the elastic moduli, that is the rock physics model, hampers the effective use of time-lapse seismic observations for estimating flow properties within the Earth. A central problem is the complicated dependence of the magnitude of time-lapse changes on the saturation, pressure, and temperature changes within an aquifer or reservoir. We describe an inversion methodology for reservoir characterization that uses onset times, the calendar time of the change in seismic attributes, rather than the magnitude of the changes. We find that onset times are much less sensitive than magnitudes to the rock physics model used to relate time-lapse observations to changes in saturation, temperature and fluid pressure. We apply the inversion scheme to observations from daily monitoring of enhanced oil recovery at the Peace River field in Canada. An array of 1492 buried hydrophones records seismic signals from 49 buried sources. Time-shifts for elastic waves traversing the reservoir are extracted from the daily time-lapse cubes. In our analysis 175 images of time-shifts are transformed into a single map of onset times, leading to a substantial reduction in the volume of data. These observations are used in conjunction with bottom hole pressure data to infer the initial conditions prior to the injection, and to update the reservoir permeability model. The combination of a global and local inversion scheme produces a collection of reservoir models that are best described by three clusters. The updated model leads to a nearly 70 per cent reduction in seismic data misfit. The final set of solutions successfully predicts the observed normalized pressure history during the soak and flow-back into the wells between 82 and 175 days into the cyclic steaming operation.
INTRODUCTION

Time-lapse geophysical data, observations gathered from repeated geophysical surveys, are well suited for the monitoring of fluid flow within the Earth (Calvert 2005). As a result, time-lapse seismic data have been used to monitor the injection of carbon dioxide for underground storage (Arts et al. 2000) and geothermal energy production, as well as to image fluid saturation and pressure changes due to oil and gas production (e.g. Eastwood et al. 1994; Johnson et al. 1998; Tura & Lumley 1999; Landro et al. 2001; Behrens et al. 2002). The dynamic nature of time-lapse data, the fact that they are often related to saturation and fluid pressure changes in the reservoir, suggests that they could be used for aquifer and reservoir characterization, as noted in Landa & Horne (1997), Vasco et al. (2004) and Dadashpour et al. (2008, 2009, 2010). The major impediment to successful characterization is the indirect relationship between the observations and the state of an aquifer or reservoir. To address this issue, a rock physics model is invoked to map the current state of the reservoir into seismic properties or attributes. This act introduces additional parameters that are necessary to characterize the poroelastic properties of the in situ rock. These parameters are usually not well constrained, being determined from a few cores or laboratory measurements. Furthermore, the properties almost always vary spatially, particularly between formations. Thus, the introduction of rock physics parameters presents yet another level of non-uniqueness (Chen & Dickens 2009). This is a barrier to aquifer and reservoir characterization that can be difficult to overcome.

As pointed out in Vasco et al. (2014, 2015), with sufficient temporal sampling it is possible to adopt an approach that mitigates some of the issues associated with the intervening rock physics model. In particular, it is possible to define onset times, the calendar time at which a geophysical quantity changes from its background value. Given a weak causality requirement, the onset time can often be related to the time at which saturation, fluid pressure and temperature change within an aquifer or reservoir. Thus, the onset time is typically related to the arrival time of a fluid pressure and/or saturation front, and hence to the propagation time of the fluid front, rather than to the magnitude of such changes. As a result, onset times are sensitive to flow-related properties and relatively insensitive to the parameters of the rock physics model, as demonstrated in Vasco et al. (2014, 2015). In this paper, we illustrate the advantages of onset times for reservoir characterization by examining their use at a cyclic steam stimulation operation at one well from Pad 31, within the Peace River field (Fig. 1) in Alberta, Canada (Lopez et al. 2015; Przybysz-Jarnut et al. 2016).
This is a very complicated setting, with heterogeneity, prior production, and documented changes in pressure, temperature, and saturation. In fact, there were four periods of enhanced oil recovery at Pad 31, which covers the area that we will examine: a pad-wide cyclic steam injection from 2001 to 2011, a horizontal steam drive from 2012 until 2013, a pad-wide top-down steam stimulation starting in 2014 and extending beyond 2016, and a localized cyclic steam injection at just one well pattern (31-08) from August 2015 until February 2016. Fortunately, we have a rich set of seismic monitoring data from a dense surface array to aid in our analysis (Fig. 2). The seismic array gathered daily surveys to monitor the fluid-induced changes. The techniques that we have developed allow for the compression of this multitude of seismic surveys into a single map of onset times, which is used to image the heterogeneity within the reservoir.

Figure 2. Pad 31 horizontal production wells (red) and injection wells (green). The area covered by the production wells is 1.5 km by 1.5 km in the north-south and east-west directions, respectively. Also shown are seismic sources (red spheres) and receivers (blue dots) of the seismic monitoring system. Map modified from Shell Canada (2016).

METHODOLOGY

In this section, we describe our approach for using repeat time-lapse geophysical observations, recorded by a permanently buried seismic system, to monitor fluid flow and to characterize the reservoir. The fact that the region has undergone previous production necessitates a two-stage approach. First, we estimate the values of a set of global parameters, primarily describing the initial pressure, temperature, and saturation of the reservoir and its large-scale permeability structure, as described by low order basis functions. Secondly, the finer-scale permeability variations are determined from both reservoir production data and time-lapse observations of the traveltimes of seismic waves that propagate through the reservoir.

Governing equations

Here we outline the equations governing the conditions within the reservoir and the changes due to fluid injection and production. Such changes lead to temporal and spatial variations in the seismic properties, and we briefly describe Gassmann's (1951) approach for estimating elastic moduli in fluid-saturated rock. Difficulties associated with estimating the appropriate effective fluid moduli lead us to the concept of the onset time of a change in a geophysical observable. We end this section with a brief description of the global and local updating schemes that will be used to estimate the reservoir properties.

Multiphase flow and thermal stimulation

The reservoir operations at the Peace River field, involving cyclic steam stimulation to extract very viscous bitumen, are described and modelled using the equations of non-isothermal multicomponent flow (Lopez et al. 2015; Przybysz-Jarnut et al. 2016). The multicomponent mass and energy balance equations may be written succinctly using index notation (Pruess et al. 2011), where the index κ indicates one of the N_k fluid components and the (N_k + 1)th component signifies the heat that flows within the reservoir. The mass and energy balances in the reservoir are given by

\[ \frac{\partial M^{\kappa}}{\partial t} = - \nabla \cdot \mathbf{F}^{\kappa} + q^{\kappa}, \qquad \kappa = 1, \ldots, N_k + 1, \qquad (1) \]

where the quantities q^κ in eq. (1) represent source or sink terms, often associated with injection or production wells. The dependent variable M^κ is a mass accumulation term for the chemical component κ.
This term is written in the form of a sum,

\[ M^{\kappa} = \phi \sum_{\beta} S_{\beta} \rho_{\beta} X^{\kappa}_{\beta}, \]

given in terms of the porosity φ, the saturation S_β, the density ρ_β and the mass fraction X^κ_β of the fluid phase β. In the equation describing the energy balance, the heat accumulation term for a multiphase system is given by

\[ M^{N_k + 1} = (1 - \phi)\, \rho_r C_r T + \phi \sum_{\beta} S_{\beta} \rho_{\beta} u_{\beta}, \]

where ρ_r is the grain density of the rock, C_r is its specific heat, T is the temperature and u_β is the specific internal energy in phase β. The advective flux vector for component κ, F^κ, is a sum over all of the fluid phases,

\[ \mathbf{F}^{\kappa} = \sum_{\beta} X^{\kappa}_{\beta} \rho_{\beta} \mathbf{w}_{\beta}, \]

given by a multiphase form of Darcy's law,

\[ \mathbf{w}_{\beta} = -k \, \frac{k_{r\beta}}{\mu_{\beta}} \left( \nabla P_{\beta} - \rho_{\beta} \mathbf{g} \right), \]

for a fluid phase travelling with the Darcy velocity w_β. The absolute permeability k is a particularly important quantity, one of the main factors controlling fluid flow within the reservoir. The relative permeability k_{rβ} is usually determined from laboratory experiments on cores from the main formations of the reservoir. The fluid pressure for phase β, P_β, is one of the dependent variables, along with the fluid saturation of the phase, S_β, and the temperature T. The fluid viscosity μ_β is determined from laboratory experiments on a given fluid at the appropriate temperatures and pressures of interest. Finally, g = g z is the gravitational force vector that alters the flow in the presence of fluid density variations. The vector for the heat flux is given by

\[ \mathbf{F}^{N_k + 1} = -\lambda \nabla T + \sum_{\beta} h_{\beta} \rho_{\beta} \mathbf{w}_{\beta}, \]

where λ is the thermal conductivity of the formation and h_β is the specific enthalpy in phase β.

In the forward problem we are given an aquifer or reservoir model and we solve the governing equations and accompanying equations-of-state, initial and boundary conditions for the evolution of the saturation and pressure. The solution is usually constructed using a numerical fluid flow simulator (Peaceman 1977; Datta-Gupta & King 2007). For a realistic model, solving the forward problem requires significant effort and is often computationally intensive because the governing equations are non-linear partial differential equations. Here, we shall tackle the inverse problem, in which we are given observations, both flow-related measurements and geophysical data, and tasked with estimating the characteristics of the aquifer or reservoir. This is typically a much greater challenge than the forward problem, requiring at least an order of magnitude more computation. Next, we develop a relationship between time-varying fluid saturations, pressures and temperatures within the Earth, and changes in the seismic properties at depth.

Relating velocities and elastic moduli to saturation, temperature and pressure changes

It is well known (Tura & Lumley 1999; Landro et al. 2001; Lumley 2001; Calvert 2005) that fluid saturation, pressure and temperature changes within and around an aquifer or reservoir will lead to changes in the elastic moduli of the fluid-filled porous medium and thus change its seismic characteristics. For example, the speed of a compressional wave transiting a saturated porous material, V_p, depends upon the saturated bulk modulus, K_sat, the shear modulus, G_fr, and the density ρ_sat of the fluid-filled rock according to (Mavko et al. 2009)

\[ V_p = \sqrt{ \frac{K_{sat} + \tfrac{4}{3} G_{fr}}{\rho_{sat}} }. \qquad (6) \]

We adopt Gassmann's equations (1951) to model the changes in elastic properties due to variations in fluid saturations, as they are generally accepted and widely used and have been found to agree with observations at seismic frequencies (Landro et al. 2001; Lumley 2001; Calvert 2005; Foster 2007).
Figure 4 (caption fragment): (b) Velocity of a compressional wave as a function of temperature for different gas saturations, with the pressure and water saturation fixed. At temperatures less than 48 °C the model depends upon linear correlations, as discussed in Barker & Xue (2016); the vertical black line denotes a transition from these linear correlations to an equation-of-state. (c) Velocity of a compressional wave as a function of the water saturation. All velocity estimates are computed using Gassmann's approach but with four different combinations of the Reuss and Voigt models for computing the fluid bulk modulus. The labels indicate the weighted fractions of each of these models.

In Gassmann's approach the shear modulus is not influenced by the presence or absence of the fluid. Furthermore, the density of the fluid-infiltrated rock is simply the weighted average of the component densities, and for a multicomponent fluid the composite fluid density is given by the weighted sum

\[ \rho_f = \sum_{\beta} S_{\beta} \rho_{\beta}. \]

The bulk modulus of the fluid-saturated rock has a more complicated dependence on the component properties, given by the function

\[ K_{sat} = K_{fr} + \frac{\left( 1 - K_{fr}/K_g \right)^2}{\phi/K_f + (1 - \phi)/K_g - K_{fr}/K_g^2}, \qquad (9) \]

where K_fr is the bulk modulus of the porous rock frame, φ is the effective porosity of the medium and K_g is the bulk modulus of the mineral, which in the simplest case of a consolidated sandstone can be taken to be the bulk modulus of quartz. The parameter K_f is the bulk modulus of the pore-filling fluids. The original formulation of Gassmann only considered a single fluid saturating a porous rock. In order to generalize the approach, the fluid modulus, K_f, has been extended to cover the case in which the fluid is a mixture of several liquids and possibly gases. This leads to additional complications, with the essential difficulty that the composite modulus can depend upon how the fluids are distributed within the porous medium at length scales that are less than a seismic wavelength. By considering two extreme distributions one can derive upper and lower bounds on the effective fluid bulk modulus for a given fluid saturation, known as the Voigt and Reuss bounds, respectively (Mavko et al. 2009). In Fig. 3, we plot the velocity variation based upon the Voigt and Reuss composite fluid moduli as a function of the water saturation, S_w. In a complex geological setting, including oriented fracture systems, it can be difficult to determine which modulus is most representative. One compromise estimate involves taking the average of the two moduli, the so-called Hill average (Fig. 3). The differences in the calculated values of the three models shown in Fig. 3 are almost as large as the entire variation due to the saturation change. We will use these limiting moduli to illustrate variations in rock physics models and how they can impact calculated changes in seismic properties associated with changes in fluid saturations.

In addition, the cyclic steam stimulation process used at the Peace River field also produces coupled changes in pressure and temperature within the reservoir, leading to complicated rock physics models (Das & Batzle 2010; Kato et al. 2010). The model of Barker & Xue (2016) was used to map the saturation, temperature and pressure changes into corresponding variations in elastic properties. The sensitivities of the seismic velocity variations as functions of the gas and water saturations, pressure, and temperature are presented in Fig. 4, showing the difficulty of interpreting velocity changes, and hence seismic traveltime and amplitude changes, in terms of unique variations in saturation, pressure and temperature within the reservoir.
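The fluid-modulus bounds and the Gassmann substitution are easy to evaluate numerically; the following is a minimal sketch (our illustration, with generic placeholder moduli rather than Peace River properties) of how the choice of fluid-mixing model propagates into the P-wave velocity.

```python
import numpy as np

def fluid_modulus(sat, k, how="reuss"):
    """Effective modulus of a fluid mixture with saturations `sat`
    (summing to 1) and phase moduli `k` (Pa)."""
    sat, k = np.asarray(sat), np.asarray(k)
    reuss = 1.0 / np.sum(sat / k)          # uniform-stress (lower) bound
    voigt = np.sum(sat * k)                # uniform-strain (upper) bound
    return {"reuss": reuss, "voigt": voigt,
            "hill": 0.5 * (reuss + voigt)}[how]

def gassmann_ksat(k_frame, k_mineral, k_fluid, phi):
    """Saturated bulk modulus from Gassmann's equation (eq. 9)."""
    num = (1.0 - k_frame / k_mineral) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_mineral \
          - k_frame / k_mineral ** 2
    return k_frame + num / den

def vp(k_sat, g_frame, rho):
    """Compressional velocity (eq. 6)."""
    return np.sqrt((k_sat + 4.0 * g_frame / 3.0) / rho)

# illustrative values: water-gas mix in a soft sand (Pa, kg/m^3)
k_w, k_g = 2.25e9, 0.04e9
for sw in (0.2, 0.8):
    kf = fluid_modulus([sw, 1 - sw], [k_w, k_g], "hill")
    ks = gassmann_ksat(k_frame=3.0e9, k_mineral=37.0e9,
                       k_fluid=kf, phi=0.30)
    rho = (1 - 0.30) * 2650 + 0.30 * (sw * 1000 + (1 - sw) * 100)
    print(sw, round(float(vp(ks, g_frame=2.0e9, rho=rho)), 1))
```

Swapping "hill" for "reuss" or "voigt" in this sketch changes the predicted velocities noticeably, which is exactly the model dependence discussed above.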
This difficulty is compounded by the fact that this area of the Peace River field has undergone earlier production, including a previous pad-wide cyclic steam stimulation that started in 2001 and lasted until the end of 2011. The cyclic steam injection was followed by a brief implementation of a horizontal steam drive operation from 2012 to the end of 2013. These earlier production efforts resulted in spatially varying temperatures, pressures, and saturations prior to the initiation of the top-down steam drive recovery process that ran from 2014 to the end of 2016, and the subsequent follow-up cyclic steam stimulation on a single well pattern (31-08) that we shall analyse. The extreme heterogeneity in the initial conditions of the reservoir is indicated by the seismic amplitude variations in a regional time-lapse survey (Fig. 5), conducted in March 2009, that was used to diagnose well problems during the cyclic steam stimulation. This legacy seismic reflection survey was conducted prior to the daily seismic monitoring that is the focus of our work. In Fig. 5 one can observe large amplitude anomalies, associated with the appearance of gas that was expelled from the volatilized oil as the pressure was reduced around the production wells, denoted by the black lines in the figure. Thus, among other complexities, there are initial variations in gas saturation, temperature, and fluid pressures to contend with. This fact necessitates making these initial conditions a part of the inverse problem. That is, our workflow will use a global inversion approach as an initial step, in order to estimate the initial reservoir conditions and global properties.

The onset of a time-lapse change and its relationship to reservoir dynamics

The magnitudes of seismic velocity changes are influenced by the nature of the fluid distribution within the reservoir at length-scales that are less than the typical seismic wavelengths. Therefore, it can be difficult to relate changes in the magnitude of seismic velocities to changes in fluid saturation, pressure and temperature in a quantitative sense. To overcome these problems, we use an onset time methodology to relate the time-lapse seismic data to the propagating fluid fronts. We will describe this approach using the Peace River reservoir monitoring program as an illustration (Lopez et al. 2015; Przybysz-Jarnut et al. 2015, 2016). A permanent seismic reservoir monitoring system was installed at the field, consisting of 49 buried sources in a rough grid with 200-220 m spacing, at a depth of 25 m (Fig. 1). The 1492 receivers (hydrophones) are situated in a denser grid with 40 m spacing, in 20 m deep boreholes, and packed in bentonite. The sources consisted of a set of 37.6 s long single-frequency sweeps from 0.4 to 216 Hz. The entire set of 540 sweeps took 6 hr to complete for a single survey. A time-lapse monitoring program was applied to a top-down steam drive oil recovery process that began in 2014, in which six new horizontal steam injection wells were drilled and operated above existing production wells drilled for an earlier cyclic steam stimulation (Lopez et al. 2015; Przybysz-Jarnut et al. 2016). The data are acquired in a continuous fashion and automatically processed to generate four complete time-lapse data cubes every 24 hr (Lopez et al. 2015; Przybysz-Jarnut et al. 2016). These four cubes are stacked to produce a single daily estimate.
Figure 9. Interpretation of time-shift anomalies due to enhanced oil production at the Peace River field. The area covers the same region as that shown in Fig. 8. The black triangle denotes the location of the set of wells (31-08) analysed in this study. This map shows the cumulative time-shifts in milliseconds since the start of top-down injection until August 2015. The north-south trends of positive time-shift, indicating slow-down, are due to the increase in pressure and temperature associated with the overlying steam injectors. The blue anomaly indicating speed-up is thought to be due to water breakthrough filling the area containing gas, seen in Fig. 5.

Figure 10. Normalized bottom hole pressure variation during a steam injection and production cycle that was conducted at the isolated well pattern 31-08 at the southern edge of pad 31. The production in this figure extends from August 2015 until mid-January 2016. The peak pressure attained in the area was 7.5 MPa and the minimum pressure was slightly below 2.5 MPa.

A vertical time section through one such data cube is shown in Fig. 6, along with a density log from a well that is intersected by the cross-section. The pink curve is the top of the reservoir (Bluesky formation) while the blue line indicates the base (Debolt formation). Small but visible traveltime-shifts, for reflections from layers at the bottom of the reservoir, are evident in the two snapshots plotted in Fig. 7. The reservoir appears to be thicker than the dominant wavelength of the seismic traces, so tuning effects (Ghaderi & Landro 2009; Zhang & Castagna 2011), which occur when the top and bottom reflections from the reservoir interfere, are probably not an issue. The daily monitoring allowed for the systematic extraction of small traveltime and amplitude changes for reservoir monitoring. Traveltime-shifts were extracted from the migrated time-lapse cubes using a cross-correlation technique over a 120 ms window that extends beyond the bottom of the reservoir. A triangular-weighting filter was applied to remove edge effects in the cross-correlation estimates. An example of the time-shifts generated by the top-down steam drive, gathered between 14 April 2014 and 30 March 2015, is shown in Fig. 8. There are clear coherent anomalies within the area of interest: generally positive time-shifts are colocated with the overlying steam injection wells, and a large negative time-shift anomaly is associated with the production wells just south of the centre of the well pad. In addition to the time-shifts, we also plot the time-lapse amplitude changes during this time interval. Note the small amplitude decreases associated with the injection wells. However, there are much larger amplitude increases that correlate with the large negative time-shifts. These amplitude increases, and the negative time-shifts, are thought to be due to water from the condensed steam replacing gas that had been generated during the earlier cyclic steam injection. The areas containing this gas are indicated by amplitude anomalies in the legacy seismic survey from 2009 that is plotted in Fig. 5. The time-shifts associated with water encroaching on this region of accumulated gas are plotted in Fig. 9, where we note the major contributing factors to the traveltime-shifts: fluid substitution, temperature and fluid pressure changes.
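The cross-correlation extraction of time-shifts can be sketched compactly; the implementation below is our illustration of one common variant, with parabolic interpolation of the correlation peak for sub-sample precision (the function name and windowing details are assumptions, not taken from the processing described above).

```python
import numpy as np

def time_shift(base, monitor, dt_ms, max_lag=20):
    """Estimate the time-shift (ms) of `monitor` relative to `base`
    over a common window, via cross-correlation with parabolic
    refinement of the peak. Positive output means `monitor` is
    delayed (slower medium). Wrap-around effects of np.roll are
    ignored for brevity."""
    b = base - base.mean()
    m = monitor - monitor.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(b, np.roll(m, -k)) for k in lags])
    i = int(np.argmax(cc))
    # parabolic interpolation around the discrete peak
    if 0 < i < len(cc) - 1:
        denom = cc[i - 1] - 2 * cc[i] + cc[i + 1]
        frac = 0.5 * (cc[i - 1] - cc[i + 1]) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return (lags[i] + frac) * dt_ms

# toy check: a sine delayed by 3 samples at dt = 1 ms
t = np.arange(256)
b = np.sin(2 * np.pi * t / 40.0)
print(round(time_shift(b, np.roll(b, 3), dt_ms=1.0), 2))  # ~3.0 ms
```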
The traveltime-shifts are sensitive to velocity changes and possibly to deformation within the reservoir itself. In the manner of seismic tomography, the time-shifts of waves propagating through the reservoir are a sum of the changes within each grid block of the reservoir model. Thus, if we consider a restricted segment of a seismic wave, propagating from a reference point just above the reservoir to the base of the reservoir, and then reflecting from the base of the reservoir and returning to the reference point, the total traveltime-shift is given by

\[ \delta T(x, y, \tau) = \sum_{n \in B(x, y)} L_n \left[ \frac{1}{V_p(x, y, \tau, n)} - \frac{1}{V_o(x, y, n)} \right], \qquad (10) \]

where B(x, y) denotes the indices of the grid blocks that are traversed by the seismic wave that is observed at location (x, y) of the seismic array, L_n is the propagation length within the specified grid block, V_p(x, y, τ, n) is the seismic velocity within the grid block and V_o(x, y, n) is the baseline velocity. For two surveys that are closely spaced in time we are assuming that the ray paths do not change significantly in eq. (10). As noted above, the velocity is time-dependent due to the changes in fluid saturation, pressure and temperature induced by the injection and production.

For our analysis of onset times we shall focus on an even later redevelopment on the southern-most portion of the production pad, where a single well set (31-08) underwent an additional cyclic steam stimulation (CSS). In this process steam was injected for 82 days, allowed to soak in and heat up the viscous oil, and then pumped out along with the mobilized oil. The normalized pressure response in the well, associated with one complete cycle which lasted for 175 days, is shown in Fig. 10. The seismic data are translated into transit time-shift maps, expressing the traveltime changes for the seismic waves that propagate across the reservoir between a chosen baseline survey (e.g. the start of the cycle) and subsequent monitor surveys. Over the stimulation cycle shown in Fig. 10, a total of 175 time-lapse seismic surveys were available for integration (Fig. 11). Note the temporal and spatial complexity of the time-shifts around the two wells, as shown in Fig. 11. The interpretation of the time-shifts is based upon a rock physics model, using expressions (6), (9) and (10) for δT(x, y, τ) given above, coupled with the relationship between the seismic velocity V_p(x, y, τ, n) and the changes in fluid saturation, pressure and temperature provided by Gassmann's equation and the other rock physics expressions from Barker & Xue (2016) noted above. As an illustration of the complex nature of the time-shifts, consider the temporal variations in the size of the traveltime-shift at a location near the injection well 31-08 (Fig. 12). The complicated spatial variation in the pattern of time-shifts is indicative of the various physical mechanisms at play in the field. For example, pressure-induced velocity changes can arrive at a location much faster than saturation changes (Vasco 2011). Similar considerations apply to thermal fronts, which can take even longer to move through a porous medium (Vasco 2010). Such transient fluid fronts are usually aliased by conventional time-lapse surveys, which are most often taken years apart, but they can be reliably imaged by a daily monitoring program. The fact that the magnitudes of the recorded time-shift data combine several processes involving pressure change, thermal effects and saturation variations makes it extremely challenging to incorporate them directly into a history matching procedure.
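Eq. (10) is inexpensive to evaluate once the per-block path lengths and velocities are known; a minimal sketch, with illustrative inputs:

```python
import numpy as np

def travel_time_shift(lengths_m, v_base_mps, v_mon_mps):
    """Sum of per-block slowness changes along the ray path,
    following eq. (10); returns the time-shift in milliseconds."""
    lengths = np.asarray(lengths_m, dtype=float)
    dt_s = np.sum(lengths * (1.0 / np.asarray(v_mon_mps)
                             - 1.0 / np.asarray(v_base_mps)))
    return 1e3 * dt_s

# toy example: a 2 per cent velocity drop over 30 m of a 200 m path
print(travel_time_shift([170.0, 30.0], [2200.0, 2200.0],
                        [2200.0, 2156.0]))  # small positive shift (ms)
```

In practice, however, the per-block velocity changes entangle pressure, temperature and saturation effects, which is precisely the difficulty just noted.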
For this reason, we utilize the onset time idea to integrate the seismic time-shifts into a reservoir characterization scheme. The onset time is defined as the calendar time at which the traveltime-shift exceeds a chosen threshold value. The first step is to define a threshold value that results in a meaningful definition of an onset time, as illustrated in Fig. 12. This pre-defined threshold has two main roles: (1) to ensure that the magnitude of the seismic observation is above the background noise level, and (2) to define the physical process that is being tracked, which often decides the sign of the threshold value. Time-lapse seismic data are typically noisy due to non-repeatable environmental noise, source and sensor issues, and changes in near-surface propagation due to variations in the water table or in the overlying water column. These variations lead to changes in the seismic characteristics even when there are no dynamic changes within the reservoir; thus we need a threshold value that distinguishes between the noise and a meaningful signal. Based upon the calculated signal-to-noise ratio for data from the array at Peace River, the threshold was defined as a time-shift decrease of 0.1 ms (Fig. 12). To cross-validate the threshold value, we compared the signal with those from locations that are far from the well and where no changes are expected within the reservoir, as shown in Fig. 13.

The use of onset times not only leads to a significant data reduction, collapsing the 175 daily time-shift maps into a single spatial distribution of onset times, but has also been found to be less sensitive to the rock physics model used to interpret the seismic data (Vasco et al. 2014, 2015). As a demonstration of this, four different rock physics models were generated by linearly averaging the Reuss and Voigt estimates of the fluid modulus (Figs 3 and 4) to calculate the P-wave velocity. The variations of four such models with water saturation (S_w) are shown in Fig. 4(c), where we observe that the P-wave velocity is very sensitive to the method used to average the fluid moduli. In Fig. 14, we plot the size of the time-shift changes over the injection period (i.e. the first 82 surveys), calculated using the four models. In Fig. 15, we generate the corresponding onset time maps for the four rock physics models. There are no notable differences between the calculated onset times (Fig. 15) for the different models, and all models display areal propagation of changes, related in this particular case to steam/fluid propagation. The similarity of the onset times stands in sharp contrast to the patterns of the magnitude of the traveltime-shifts, which are strongly influenced by the particular rock physics model used for the calculations (Fig. 14).
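Per map pixel, the onset-time extraction reduces to a first-threshold-crossing search over the stack of daily time-shift maps; the sketch below is our illustration of that definition, using the 0.1 ms decrease threshold quoted above (array names are assumptions).

```python
import numpy as np

def onset_times(shift_stack, survey_days, threshold=-0.1):
    """shift_stack: array (n_surveys, ny, nx) of time-shifts in ms
    relative to the baseline survey. Returns an (ny, nx) map of the
    first calendar day on which the shift crosses the threshold
    (here a decrease of 0.1 ms), or NaN where it never does."""
    crossed = shift_stack <= threshold       # boolean (n, ny, nx)
    first = np.argmax(crossed, axis=0)       # index of first True
    ever = crossed.any(axis=0)
    days = np.asarray(survey_days, dtype=float)
    onset = days[first]
    onset[~ever] = np.nan                    # never crossed
    return onset
```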
Third, we determine both the large-scale properties and the initial conditions that are necessary for the fluid flow simulation and the calculation of the seismic time-shifts, using an evolution algorithm followed by a cluster analysis of the final population. In the last step we adjust the individual grid-block permeabilities in the reservoir model using an efficient tomographic-like approach to match the onset times. Because the focus of this paper is on matching the onset times, we discuss this step in some detail. Our description of the first steps of the inversion procedure is rather brief, with more detail provided in Appendices A and B. Furthermore, an in-depth discussion of the inversion approach is also given in Hetz (2017) and in Hetz et al. (2017b).

Initial determination of the global parameters

The coupled flow model contains a large number of parameters that need to be specified in order to conduct a numerical simulation. Some properties will be more important than others in controlling the simulation results. In order to discern those parameters that are to be included in the initial global inversion we conducted a sensitivity analysis as described in Hetz (2017). For the sensitivity study, the objective function was defined as the summation of misfits in the onset time seismic response, OT_i, and the bottom hole pressure (BHP):

O(x) = \sum_{i=1}^{N_s} \left[ OT_i^{obs} - OT_i^{cal}(x) \right]^2 + \sum_{j=1}^{N_b} \left[ BHP_j^{obs} - BHP_j^{cal}(x) \right]^2.

By perturbing each parameter and examining the changes in the misfit we constructed a tornado diagram that indicates the relative importance of each major class of parameters in the misfit functional. Based on the sensitivity analysis we found that all of the parameters have some influence on the objective function, but the completion interval is the most important parameter, indicating the need to adjust the size of the stimulated zone. Other important parameters include the permeability and the initial gas saturation.

In order to successfully simulate fluid flow and seismic wave propagation in the reservoir, we need to specify the initial state of the reservoir, including the pressure, temperature and saturation fields, and the large-scale properties of the model. A key element of this first step is a judicious representation of the fields in the initial model in order to maintain flexibility and prevent a proliferation of model parameters. In Appendix A, we discuss a representation in terms of the eigenvectors of the Laplacian of the simulation grid (Bhark et al. 2011). That is, we represent the model x as a linear combination of M Laplacian eigenvectors v_i,

x = \sum_{i=1}^{M} \phi_i v_i,

where \phi_i are the weighting factors that are to be found in the inversion. This parametrization has the advantage that it is tied to the fluid flow simulation grid, which may be quite irregular in order to represent a complicated geological model. Furthermore, the representation provides a flexible parametrization that can describe a uniform model, a layered model, and a fully 3-D model, and all models in between these end-members. The lowest order basis functions are constants for each layer, while the second set of functions are composed of linear variations within the given layer. The higher order basis functions contain increasingly rapid spatial variations in properties. The weighted summation of the first ten basis functions gives the fields of initial properties. The updating scheme for the global parameters is described in Appendix B. It is based upon the general notion of a set of Pareto optimal solutions (Lobato & Steffen 2017).
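The one-at-a-time perturbation procedure behind the tornado diagram can be sketched as follows; the misfit function, parameter names and ranges used here are placeholders for illustration, not the actual simulation quantities.

import numpy as np

def tornado_sensitivities(misfit, base_params, spans):
    """One-at-a-time sensitivity ranking for a tornado diagram.

    misfit      : callable mapping a parameter dict to the scalar
                  objective function (onset-time plus BHP misfit)
    base_params : dict of reference parameter values
    spans       : dict mapping each parameter name to (low, high)
    Returns the parameters sorted by the misfit swing they induce.
    """
    swings = {}
    for name, (low, high) in spans.items():
        trial = dict(base_params)
        trial[name] = low
        m_low = misfit(trial)
        trial[name] = high
        m_high = misfit(trial)
        swings[name] = abs(m_high - m_low)
    return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical use with a toy quadratic misfit standing in for the
# flow-simulation objective function:
base = {"completion_interval": 1.0, "permeability": 1.0, "gas_sat": 0.1}
spans = {k: (0.5 * v, 1.5 * v) for k, v in base.items()}
toy = lambda p: 3 * p["completion_interval"]**2 + p["permeability"] + p["gas_sat"]
print(tornado_sensitivities(toy, base, spans))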
We adopt this approach in order to treat the inverse problem as a multi-objective optimization task. That is, we are given two primary classes of observations, namely time-lapse seismic data and bottom hole pressure measurements, leading to two distinct misfit functions, given by (B1) and (B2) in Appendix B. We wish to determine models that minimize the misfit to the N_s observed onset times (OT) and N_b bottom hole pressures (BP), given by

M_s(x) = \sum_{i=1}^{N_s} \left[ OT_i^{obs} - OT_i^{cal}(x) \right]^2

and

M_b(x) = \sum_{i=1}^{N_b} \left[ BP_i^{obs} - BP_i^{cal}(x) \right]^2,

respectively. To some degree the set of Pareto optimal solutions generalizes the notion of a trade-off curve in geophysical linear inverse theory (Menke 2018). In particular, Pareto optimal solutions cannot be improved with respect to a given objective function, such as the fit to the seismic onset times, without increasing the value of at least one of the other objective functions. As noted in Appendix B, Pareto optimal solutions lie on the boundary of the set of feasible solutions, the Pareto front. We generate the set of feasible solutions using a stochastic evolutionary technique, the genetic algorithm (Park et al. 2015). In this approach we represent a model in terms of binary strings. A randomly generated collection of models progressively evolves from one generation to another by mutation (random changes) and recombination (joining of portions of the models). The misfit functions M_s(x) and M_b(x) contribute to the definition of a fitness function that is used to select the models that are retained in the succeeding generation. The Pareto optimal solutions are defined with respect to the population of models in a generation. This set of solutions is further subdivided into groups of solutions using a clustering algorithm (Appendix B).

Local updates of the solution clusters

The second major step in the inversion algorithm involves adjusting the clusters of solutions through an iterative updating scheme. The entire process takes place on a fine-scale reservoir model that may consist of tens of thousands to millions of grid blocks. Therefore, efficiency is a paramount consideration. To this end we adopt a semi-analytic, streamline-based technique for calculating model parameter sensitivities, first presented in Vasco et al. (2004). The general idea, as it relates to the onset of changes in the time-shifts, is that the injected fluids or transient pressure fronts propagate outward from the source well to various points within the reservoir. For the cyclic steam stimulation associated with the wells in the pattern 31-08 in the Peace River field, we will be concerned with injected steam that may condense into water, and associated pressure and temperature changes. The changes in the elastic moduli resulting from the arriving fluid fronts lead to changes in the seismic waves propagating through the reservoir and alter the traveltimes of these waves. In the absence of significant deformation, the onset of a change in the seismic traveltime is directly related to the arrival time of the fluid front. For a 3-D model we can compute trajectories from each grid block where the saturation has changed to a point on the injection well by streamline simulation (Datta-Gupta & King 2007). We can use time-of-flight or streamline simulation methods to relate the arrival or onset time to reservoir properties along the flow paths or streamlines (Vasco et al. 1999, 2004; Rey et al. 2012; Vasco & Datta-Gupta 2016; Watanabe et al. 2017).
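The Pareto optimality criterion described above amounts to a simple dominance filter over a population of models. A minimal sketch, assuming each model is summarized by its two misfit values (M_s, M_b), is given below; the population is hypothetical.

import numpy as np

def pareto_front(misfits):
    """Return indices of the Pareto-optimal models in a population.

    misfits : array of shape (n_models, n_objectives); here each row
              holds (M_s, M_b), the onset-time and pressure misfits.
    A model is Pareto optimal if no other model is at least as good in
    every objective and strictly better in at least one.
    """
    n = misfits.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(misfits[j] <= misfits[i]) \
                      and np.any(misfits[j] < misfits[i]):
                keep[i] = False   # model j dominates model i
                break
    return np.where(keep)[0]

# Hypothetical population of 150 models with two misfits each:
rng = np.random.default_rng(1)
population = rng.random((150, 2))
print(pareto_front(population))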
As an illustration, consider the movement of a thermal front due to the injection of steam or hot water along the streamline trajectories shown in Fig. 16. The traveltime for the injected steam, after condensing to hot water, \tau(r_o, t), from a point on the injector, r_i, to a location in the reservoir where we observe a change in a geophysical observation, r_o, is given by an integral along the flow path \Sigma(r_i, r_o),

\tau(r_o, t) = \int_{\Sigma(r_i, r_o)} \frac{dr}{|q_w(r, t)|},

where q_w(r, t) is the velocity vector of the hot water at the leading edge of the coupled fluid front. This vector follows from the form of the flux vector in eq. (4) and is given in Vasco & Datta-Gupta (2016), where z is a unit vector pointing in the downward direction. The quantity \kappa(S_w, S_g, S_o, P, T) is the total fluid mobility,

\kappa(S_w, S_g, S_o, P, T) = \frac{k_{rw}(S_w)}{\mu_w(P, T)} + \frac{k_{ro}(S_o)}{\mu_o(P, T)} + \frac{k_{rg}(S_g)}{\mu_g(P, T)},

that is usually considered to vary by formation but is most often taken as constant in a given formation. Note that the total fluid mobility will depend upon the reservoir conditions, through its dependence on the formation relative permeability curves and the derivative of the fractional flow function (Peaceman 1977; Vasco & Datta-Gupta 2016) with respect to the water saturation S_w, which is also a function of the reservoir conditions and usually specified for each formation or lithology. The velocity of the thermal front is also a function of the porosity \phi(r) and the absolute permeability k(r). Finally, the front propagation is controlled by the pressure field that is established during the injection. As mentioned above, the transient behaviour of the pressure field can be rapid in comparison to the propagation time of the saturation or thermal front. Therefore, we shall assume that, after the pressure transients have decayed, the average fluid pressure is primarily a function of spatial position r, and will calculate it using a numerical reservoir simulator for a given initial or background reservoir model.

In each iterative step we seek local, or grid-block, updates to the permeability model that further refine the fits to the seismic onset times and the bottom hole pressure data. In computing model parameter sensitivities, we fix the relative permeability functions and the capillary pressure curves for the formation, using values obtained from the initial geological data and the global update, as well as the initial saturation, pressure, and temperature conditions of the reservoir and the large-scale porosity and permeability variations. The sensitivities, relating a perturbation in the permeability at a location in the reservoir to a deviation in the onset time, are obtained from the path integral for the saturation front traveltime, \tau(r_o, t). That is, substituting the perturbed absolute permeability k = k_o + \delta k into the expression for q_w(r, t) and then into the integral gives

\tau(r_o, t) = \int_{\Sigma} \frac{dr}{|q_o(r, t)|} - \int_{\Sigma} \frac{\delta k(r)}{k_o(r)} \frac{dr}{|q_o(r, t)|}, (17)

where q_o signifies the fluid velocity in the background or current reservoir model. The semi-analytic expression for the sensitivity of the onset time is given by

\frac{\partial \tau}{\partial k(r)} = - \frac{1}{k_o(r) \, |q_o(r, t)|} (18)

and provides the basis for an efficient, tomographic approach to refining the local permeability model using onset times (Vasco et al. 2014, 2015). For each column of cells, the trajectory that represents the first arrival of the injected steam to a grid block in the column is the path that determines the onset time for that location. Fig. 16 shows the correlation between the time-shift onset time and the saturation and the time-of-flight in days for a neutral tracer injected with the water. This traveltime is proportional to the propagation time of the injected water.
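The path integral for the front traveltime can be evaluated numerically once a trajectory and the front speeds along it are available. The following sketch uses a midpoint rule on a hypothetical sampled streamline; it illustrates the integral above and is not the simulator actually used in the study.

import numpy as np

def front_traveltime(positions, q_front):
    """Traveltime of a fluid front along one streamline (midpoint rule).

    positions : array of shape (n_points, 3); coordinates sampled along
                the trajectory from the injector r_i to the point r_o
    q_front   : array of shape (n_points,); magnitude |q_w| of the front
                velocity at each sample point
    Implements tau = integral of dr / |q_w| along the flow path.
    """
    seg = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # segment lengths
    q_mid = 0.5 * (q_front[:-1] + q_front[1:])                # midpoint speeds
    return np.sum(seg / q_mid)

# Hypothetical straight trajectory with a decelerating front:
pts = np.column_stack([np.linspace(0, 100, 51), np.zeros(51), np.zeros(51)])
speed = np.linspace(2.0, 0.5, 51)    # front speed (m/day) at each sample
print(front_traveltime(pts, speed))  # traveltime in days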
In order to map the time-of-flight of a neutral tracer to the traveltime of the injected water we must multiply by the derivative of the fractional flow curve and the total mobility, as indicated above. The main purpose of Fig. 16 is to illustrate some of the trajectories that are the basis for the semi-analytic sensitivities, given by eqs (17) and (18), that form the basis for an efficient local inversion algorithm. Using a reservoir model, we may discretize the integral for the perturbed onset time associated with the trajectory of the lth streamline, \tau_l, into a sum over the segment in each grid block of the reservoir model traversed by the path:

\delta\tau_l = - \sum_{n \in B_l} \frac{\Delta L_n}{k_{o,n} \, |q_{o,n}|} \, \delta k_n, (19)

where B_l denotes the grid blocks traversed by the lth trajectory and \Delta L_n is the path length within the nth block. The set of paths to each point in the model where we have estimated an onset time leads to a system of equations, \delta\tau = M\delta k, that may be solved in a least squares sense. That is, we minimize the sum of the squares of the residuals,

R^2 = (\delta\tau - M \delta k)^T (\delta\tau - M \delta k). (20)

The conditions for an extremum of R^2, the vanishing of the gradient with respect to the model parameters, lead to the system of equations

M^T M \, \delta k = M^T \delta\tau (21)

that may be solved for \delta k. The system of equations could be ill-posed if there are effectively fewer equations than unknowns. The usual remedy is to introduce additional regularization requirements, such as specifying that the magnitude of the model updates remains small if it is not constrained by the data, and, because the data cannot resolve small features, the spatial variations of the updates are often assumed to be smooth (Menke 2018). Such considerations, encapsulated in quadratic penalty terms, lead to an augmented system of equations, as discussed in Vasco & Datta-Gupta (2016, p. 212). The matrix M is sparse because individual trajectories only intersect a small percentage of the grid blocks in the model. Therefore, the system of eq. (21) is solved using a least squares QR algorithm (LSQR) designed for large, sparse linear systems (Paige & Saunders 1982). In order to solve the nonlinear inverse problem, we iteratively update the model, adding perturbations and then recomputing the residuals and the quantities used in the linearized inversion, such as the saturation, pressure, and temperature fields. After a sufficient number of iterative updates, the misfit tends to level off, and the algorithm is terminated. In the section below, we illustrate the application of this approach to the data from the Peace River field.

APPLICATION TO TIME-LAPSE MONITORING AT PEACE RIVER

The methodology was applied to the southernmost region of Pad-31 in the Peace River field, focusing on well set 31-08, three wells (31-8E1, 31-8E2 and 31-8W1) forming a 'tuning fork pattern' shown at the bottom of Fig. 4. One positive feature of the area around Pad-31 was that it lacked some of the vertical heterogeneity seen in other parts of the field. In particular, it did not contain shale baffles that had complicated the vertical flow in many other areas of the Peace River field. Our analysis of the monitoring data from the Peace River field begins with the initial global history match in which we determine the initial temperatures, pressures and saturations as well as the large-scale variations in permeability and porosity. The inversion is based upon the initial portion of the cyclic steam stimulation involving the injection of hot steam, the first 82 days of the cycle. We use observations from the final soak and flow back to the well for validation purposes, attempting to predict the bottom hole pressure during this process using the history matched models.
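Returning to the local updating step, the sparse system \delta\tau = M\delta k can be assembled and solved with an off-the-shelf LSQR routine. A minimal sketch follows, with hypothetical path and residual data; the damping argument stands in for the regularization discussed above.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def tomographic_update(paths, d_tau, n_blocks, damp=1.0):
    """Solve the linearized onset-time system d_tau = M dk with LSQR.

    paths    : list of streamlines; each is a list of (block_index,
               sensitivity) pairs, the nonzero entries of one row of M
    d_tau    : onset-time residuals, one per streamline, shape (n_paths,)
    n_blocks : number of grid blocks (unknown permeability updates)
    damp     : Tikhonov damping, keeping updates small where the data
               give no constraint
    """
    M = lil_matrix((len(paths), n_blocks))
    for row, path in enumerate(paths):
        for block, sens in path:
            M[row, block] += sens      # a path may revisit a block
    dk = lsqr(M.tocsr(), d_tau, damp=damp)[0]
    return dk

# Hypothetical three-path, five-block example:
paths = [[(0, -2.0), (1, -1.5)], [(1, -1.0), (2, -2.5)], [(3, -0.5)]]
print(tomographic_update(paths, np.array([0.4, -0.2, 0.1]), n_blocks=5))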
The initial water and gas saturations, porosity and permeability were taken from a geological model provided by the operator, and the initial temperatures were obtained by interpolating the observed tubing head temperatures at the beginning of the cycle. The reservoir simulation model consisted of an irregular grid with 21 layers with variable boundaries. The model representation of the global properties is in terms of the eigenvectors of the grid Laplacian matrix, the adjacency-based parametrization described above. A total of ten basis functions, eigenvectors of the Laplacian, were used in the representation of the porosity, permeability, initial water and gas saturations, and the initial temperature. The genetic algorithm used to approximate the Pareto front and to determine the initial set of global parameters for the first step of our inversion scheme ran over 30 generations with a population of 150 members per generation. The initial 150 models were generated stochastically by uniformly sampling from expected intervals of parameter values.

The values of the model misfit functions associated with the seismic onset times, M_s(x), and the reservoir bottom hole pressure data, M_b(x), are plotted in Fig. 17(a). The initial scatter in the models, due to sampling randomly from the expected ranges of the parameters, provides an indication of the variation in the two misfits expected in the model space for the range of all possible models. After 30 generations the genetic algorithm has reduced the misfit to both the seismic onset time observations and the bottom hole pressure data significantly in comparison to the prior cloud of solutions. The resulting suite of 150 models appears to define a trade-off curve between the two misfit functions, the Pareto front (Fig. 17b). An application of the K-means cluster analysis algorithm generates three clusters that are colour-coded in Fig. 17(b).

By applying a cluster analysis we further investigate the objective space. In particular, Fig. 18 shows the updated onset time maps of selected models in cluster 1, cluster 2 and cluster 3, respectively. For all clusters, we observed some improvement from the initial onset time map calculated using the prior model. The improvements in the match to the bottom hole pressure data are shown in Fig. 19, where we plot the calculated values for 40 models. One notable feature is the consistent pressure match during the soak validation interval, where we used the history matched models to predict the pressure behaviour, indicating that the models are able to adequately represent the saturation changes within the reservoir. By looking at the parameter changes after the global update, as shown in Fig. 20, we can gain some insight into the different physical mechanisms that are associated with the clusters. For example, we observe that cluster 1 contains the greatest permeability decrease around the well. Also, the water saturation at the base of the reservoir increases more in cluster 1 as compared to clusters 2 and 3. This may explain the overestimation of the well pressure associated with cluster 1 (Fig. 19). Furthermore, the change in the temperature and gas saturation around the well in clusters 2 and 3 indicates different spatial flow patterns for these two models. These differences are reflected in the onset time maps as an underestimation of the propagation time.
The next stage of the inversion workflow involves adjustments to the reservoir permeabilities on the fine-scale grid in order to match the onset time observations for the first 82 days of steam injection and the bottom hole pressure. We apply the iterative linearized inversion algorithm to three candidate models which were selected based upon the cluster analysis. In Fig. 21 we plot the normalized misfit as a function of the number of iterations of the algorithm. The misfit is reduced to almost 30 per cent of its original value. Our iterative linearized algorithm is rather simple and uses a fixed step length for each iteration. The convergence is influenced by the weighting of the regularization and the characteristics of the linear solver that is applied at each step of the iteration. The updated onset time responses from the local step significantly improve the results due to the individual grid-block adjustments of reservoir flow properties. The changes made to the permeability field, shown in Fig. 22, reveal that models from both clusters share common characteristics, with similar large-scale increases and decreases. These updates imply that the stimulated zones are located mostly around the vertical part of the well. Fig. 23 displays the improvement in the pressure match and prediction as a result of the local updates for clusters 1 and 2 in Figs 23(a) and (b), respectively. Most notably, after the local update the excess pressures associated with the first cluster from the global update are reduced to values much closer to the observed pressures (Fig. 23a).

The final reservoir model produced by the inversion methodology is not only a useful tool for better matching the observations, but also gives additional insight into the state of the reservoir during the cyclic steaming operation. Fig. 24, a plot of the water saturation changes over the injection period, shows that the distribution of water is much less dispersed in the final clusters than it is in the initial model. The final models also help us to identify steam override during production, a common phenomenon in steam injection processes. Steam override occurs because the mobility of the displaced fluid is much lower than that of the displacing fluid (steam), and because of the density contrast between the steam and the oil and water. Fig. 25 shows the water saturation along the streamlines over the injection period. At the beginning of the cycle (Fig. 25a) the steam starts moving upward as soon as it is injected into the model. This movement is captured by the onset time map. The gravity override phenomenon becomes less severe over time, as the fluid starts to move downward at later times (Fig. 25b). Overall, our hierarchical history matching approach significantly reduces the misfit associated with the time-varying seismic and pressure data, and provides an improved representation of reservoir sweep through the identification of limits on the distribution of water and the detection of steam override.

DISCUSSION

The use of onset times should be viewed as the first step in the construction of a detailed reservoir model, whereby flow properties are obtained from geophysical time-lapse data. Because onset times are chiefly sensitive to the flow properties of a reservoir or aquifer, and much less sensitive to the parameters of the rock physics model, they are well suited for estimating hydraulic conductivity or permeability.
Furthermore, the onset times are related to the traveltimes of fluid fronts, which have a quasi-linear relationship to properties such as hydraulic conductivity (He et al. 2006). As a result, inversions of onset times for properties such as permeability are much less sensitive to the initial or starting model, and inversion algorithms based upon them are much less prone to becoming trapped in a local minimum, similar to seismic traveltime tomographic imaging. The next step would be to use the magnitudes of the time-shifts and reflection amplitudes to further refine the model and to estimate the poroelastic properties of the rock physics model. The final step would be to combine all of the data to construct the final reservoir model.

Like most surface seismic monitoring efforts, our study was hampered by issues related to vertical resolution, due to the averaging of seismic waves and their dominantly vertical propagation. It may be possible to improve the resolution by including broad-band data, larger offsets and utilizing the full seismic waveform. Another option would be to use the pre-stack data directly for a tomographic estimation of the time-shifts or the velocity changes. These enhancements should be topics for future research, as should the development of automated systems for seismic monitoring such as the continuous active seismic source monitoring system (Ajo-Franklin et al. 2011; Vasco et al. 2014). Such systems augment existing permanent arrays for monitoring reservoirs that exist in various fields around the world. While daily monitoring was possible at the Peace River field, the onset time approach is applicable to surveys that are separated by much longer intervals, such as yearly repeats (Vasco et al. 2015).

CONCLUSION

This study demonstrates the advantages of onset times, the recorded times at which a set of time-lapse geophysical data begin to deviate from their initial or background values, for high resolution reservoir characterization. A synthetic test shows that, in comparison to seismic time-shift magnitudes, the onset times are insensitive to the details of the rock physics model used to relate the state of the reservoir to the seismic moduli. The methodology allows for the compression of multiple seismic surveys into a single map of onset times, which are directly related to fluid front propagation times. The compression of the frequent seismic surveys into a single set of onsets assists in the development of an efficient, globally convergent stochastic inversion technique, in this case the genetic algorithm. The Peace River field case treated here displays all of the complexity that one can encounter in enhanced oil recovery, including temperature and pressure variations, saturation changes and complicated reservoir initial conditions. Using a hierarchical workflow, we were able to construct a set of initial models satisfying both the onset times and the well pressure data. The Pareto surface defines a set of feasible solutions, generalizing the concept of a trade-off curve used in linear inverse problems. Using local model updates, where the flow properties were adjusted on a cell-by-cell basis, the algorithm was able to improve upon the global stochastic solution. The final set of reservoir models not only matches the data used in the inversion, but also successfully predicts well pressure data left aside for validation. Finally, the reservoir models provide insight into the processes operating in the reservoir during the cyclic steaming operation.
In particular, the models predict a much sharper water/steam front and reveal steam override due to the influence of gravity. Finally, the estimates of the initial conditions and local permeabilities allow us to construct an improved injectivity profile along the horizontal well, which is crucial for further development considerations.

ACKNOWLEDGEMENTS

The work of G. Hetz and A. Datta-Gupta was supported by the Department of Energy under Award Number DE-FE0031625.

APPENDIX A: MODEL REPRESENTATION

Because the inversion approach contains what we are calling global and local model updates, essentially large-scale and fine-scale spatial variations in the properties of the model, we need a flexible model representation that allows for a seamless transition between spatial scales. In this Appendix we briefly describe one such parametrization, as we will incorporate it into our two-stage inversion scheme. We shall define the grid by its set of vertices V and edges E, characterizing it by a graph G = (V, E). The N vertices, V = {1, 2, . . . , N}, represent the centres of the grid cells at which the reservoir properties are defined. The edges E represent connections between vertices, and one can specify the set of edges using an adjacency matrix a_ij, where the non-zero entries denote a connection between vertices v_i and v_j. Specifically, the entries of the N x N grid adjacency matrix A are given by

a_{ij} = 1 if vertices v_i and v_j are connected by an edge, and a_{ij} = 0 otherwise,

because we are only considering unweighted connections between vertices. Jafarpour & McLaughlin (2009) showed that a low-dimensional approximation may be given by the lowest frequency Fourier components. In order to extend this approach to an irregular mesh, we make use of the association, first noted by Taubin (1995), between the discrete Fourier transform of a function and the decomposition of the function into a linear combination of the eigenvectors of the Laplacian of the grid. The grid Laplacian is a discrete second-order differencing operator given by

L = D - A, with D = diag(d_1, . . . , d_N),

where d_i is the degree of the ith vertex, a measure of the number of edges connected to the vertex. The Laplacian provides a measure of the connectivity of the grid, and for many commonly encountered boundary conditions the discrete operator is a positive semi-definite, symmetric matrix (Bhark et al. 2011). Given these properties we may use the spectral theorem to construct an eigen-decomposition of the Laplacian matrix,

L = \sum_{i=1}^{N} \lambda_i v_i v_i^T,

where the vectors v_i are pairwise orthogonal unit eigenvectors. The eigenvalues \lambda_i are the modal frequencies associated with the Laplacian eigenvectors, a direct consequence of the equivalence between the Laplacian eigenvectors and the basis set of the discrete Fourier transform (Taubin 1995). Here, we will represent the model x as a linear combination of basis vectors that consist of the Laplacian eigenvectors,

x = \sum_{i=1}^{M} \phi_i v_i,

where M is small for a large-scale global specification of properties and equal to N for a full-scale representation of the model. This representation is referred to as an adjacency-based transformation or parametrization. The low frequencies, or small values of M, can be used to represent the global properties of the model, such as a uniform layer velocity, while the highest frequencies account for much more rapid local variations in properties.
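A minimal numerical sketch of this parametrization, assuming a small hypothetical grid, constructs the Laplacian from the adjacency matrix, extracts its leading eigenvectors, and forms a model from M weighting factors.

import numpy as np

def laplacian_basis(adjacency, n_modes):
    """Leading eigenvectors of the grid Laplacian L = D - A.

    adjacency : (N, N) symmetric 0/1 matrix of grid-cell connections
    n_modes   : number of low-frequency basis vectors M to retain
    Returns (eigenvalues, basis) with basis of shape (N, n_modes).
    """
    degree = np.diag(adjacency.sum(axis=1))
    lap = degree - adjacency
    vals, vecs = np.linalg.eigh(lap)     # ascending eigenvalues
    return vals[:n_modes], vecs[:, :n_modes]

def represent(basis, phi):
    """Model x as a linear combination of Laplacian eigenvectors."""
    return basis @ phi

# Hypothetical 1-D chain of six cells; three modes give a smooth,
# large-scale description of a property field such as log-permeability.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1
vals, basis = laplacian_basis(A, n_modes=3)
x = represent(basis, phi=np.array([2.0, 0.5, -0.3]))
print(vals, x)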
By changing the value of M used in our representation we can switch between inversions for local and global properties.

APPENDIX B: DETERMINATION OF THE GLOBAL PARAMETERS

In this Appendix, we describe our approach for determining the global properties of the model, including the initial saturations, pressure and temperature of the reservoir, as well as the large-scale porosity and permeability values at the beginning of the stimulation cycle. As a first step, a sensitivity analysis is conducted in order to identify the parameters to be considered in the global updating scheme. A description of that effort is presented in Hetz et al. (2017a,b) and Hetz (2017) and will not be repeated here. We consider the calibration or inversion procedure to be a multi-objective optimization problem. There are two classes of observations, geophysical measurements and hydrological or reservoir engineering data, leading to the misfit functions

M_s(x) = \sum_{i=1}^{N_s} \left[ OT_i^o - OT_i^c(x) \right]^2 (B1)

and

M_b(x) = \sum_{i=1}^{N_b} \left[ BP_i^o - BP_i^c(x) \right]^2, (B2)

where OT_i^o and OT_i^c are the observed and calculated onset times, and BP_i^o and BP_i^c are the observed and calculated bottom hole pressures for the ith observation point. We can linearly combine the misfit functionals to produce a composite measure of the sum of the squared residuals. However, it can be a challenge to correctly weight the two classes of data in order to produce a meaningful model. The conventional approach in geophysics is to construct a trade-off curve through a series of inversions, and to pick a point that balances the fits to each data class (Menke 2018). While this technique is useful for linear inverse problems, it can encounter difficulties for nonlinear inverse problems, such as our inversion for reservoir properties.

One alternative to the minimization of a composite misfit is to consider multi-objective optimization techniques characterizing the trade-off between different objective functions. A general approach is provided by the notion of Pareto optimal solutions (Lobato & Steffen 2017). These are solutions that cannot be improved with respect to any particular objective function without degrading at least one of the other objective functions. To describe such solutions, consider a multi-objective optimization problem formulated as the minimization of a vector of m objective functions,

\min_{x \in X} F(x) = [f_1(x), f_2(x), . . . , f_m(x)]^T,

where X is the set of feasible solutions. One may also characterize Pareto optimal models using the notion of solution dominance. A feasible solution x_1 is said to Pareto dominate another feasible solution x_2 if

f_i(x_1) <= f_i(x_2) for all indices i in {1, 2, . . . , m}

and

f_j(x_1) < f_j(x_2) for at least one index j in {1, 2, . . . , m}.

A solution is called Pareto optimal if there does not exist another solution that dominates it. The set of optimal solutions constitutes the Pareto front or boundary and characterizes the trade-off between the various objective functions. A class of stochastically driven techniques, known as evolutionary algorithms, provides a means of generating a Pareto front (Deb 2001). The genetic algorithm, perhaps the most widely used of these techniques, was motivated by an analogy with biological evolution. In particular, an initial set of models is constructed using a random number generator. The parameters describing each model are converted to binary strings; the full description of each model is referred to as a genome or chromosome. The family of models is successively updated by recombination and mutation. Recombination involves taking selected pairs of individuals and forming new members by randomly combining various segments from the two models.
The new model will thus be a hybrid model with characteristics of both parent models. In addition, the process of mutation introduces random changes into the genomes of some subset of the new models. The evolution of the population of models is governed by a fitness function of the form exp[-f(x_i)], where f(x_i) is the objective function. That is, the probability of selecting a particular model to take part in the construction of the next generation is given by a function of the general form

P(x_n) = \frac{\exp[-f(x_n)]}{\sum_i \exp[-f(x_i)]}.

Once a new generation of models is produced it is used in the next iteration of the algorithm. The process is repeated until the overall fitness of the population reaches a satisfactory level or some maximum number of generations has been produced. One issue associated with this approach is that it can fail to adequately define non-convex Pareto fronts such as those associated with non-linear inverse problems. We use a stochastically driven technique to address this problem by:

(i) assigning fitness to population members based on non-dominated sorting and ranking;
(ii) preserving diversity among solutions on the same front by examining the distance between solutions (Deb et al. 2002).

The models are first sorted according to their dominance rank (Fig. B1). That is, solutions that are not dominated by any other models with respect to the given objective functions, those lying on or closest to the Pareto front, are considered Rank 1 or belonging to Front 1. A model of Rank 2, or lying in Front 2, is only dominated by those of Rank 1 and no others. Generally, a model in Front k + 1: (1) should be dominated by at least one model in Front k; (2) may or may not dominate solutions in Front k + 2 (Park et al. 2013). Rather than use the expression P(x_n) given above, the fitness is equal to the rank of the model. If two models have equal rank, then the model with the larger crowding distance [cdist_i in Fig. B1] is selected to take part in the construction of the next generation through cross-over and mutation.

Finally, the K-means clustering algorithm (James et al. 2017, p. 386) is used to define clusters of solutions that share similar characteristics. We start by assuming that the solutions may be grouped in some number, say K, of clusters. This number may change as we try to find the optimal set of clusters. The main goal is to partition the data set into internally homogeneous and externally distinct groups. The idea is to minimize the within-cluster difference between solutions, W(C_k), usually defined as the sum of the squares of the distances between each solution in cluster k and the cluster centroid,

W(C_k) = \frac{1}{|C_k|} \sum_{x_i \in C_k} \| x_i - \bar{x}_k \|^2,

where \bar{x}_k is the cluster centroid and |C_k| denotes the number of solutions in cluster k. The approach is initialized by randomly assigning solutions to one of the clusters and computing the centroids of the K clusters. Then the following steps are repeated until the within-cluster distances and cluster assignments stop changing: (1) reassign each solution to the centroid which lies closest to it; (2) after the reassignment, recompute the cluster centroids. This approach is guaranteed to decrease the measure of total within-cluster distances, as explained in James et al. (2017, p. 402). The clusters provide an initial set of solutions which we can update in order to match the seismic onset times and bottom hole pressure observations, as described next.
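For illustration, a bare-bones version of the two repeated K-means steps might look as follows; the model array is hypothetical, and this sketch omits safeguards, such as handling clusters that become empty, that a production implementation would need.

import numpy as np

def kmeans(solutions, k, n_iter=100, seed=0):
    """Minimal K-means, following the two repeated steps described above.

    solutions : array of shape (n_models, n_params); here, the Pareto
                front models to be grouped into clusters
    k         : number of clusters
    """
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(solutions))   # random initial assignment
    for _ in range(n_iter):
        # recompute the centroid of each current cluster
        centroids = np.array([solutions[labels == j].mean(axis=0)
                              for j in range(k)])
        # step (1): reassign each solution to its nearest centroid
        dist = np.linalg.norm(solutions[:, None] - centroids[None], axis=2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):         # assignments stable
            break
        labels = new_labels                            # step (2) follows on next pass
    return labels, centroids

# Hypothetical: group 150 two-misfit models into three clusters.
rng = np.random.default_rng(2)
models = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in ([0, 1], [1, 0], [1, 1])])
labels, cents = kmeans(models, k=3)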
2020-08-27T09:07:07.878Z
2020-08-25T00:00:00.000
{ "year": 2020, "sha1": "97f0c9111d93db687f6c9b5441b6699dd4ed8140", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/gji/article-pdf/223/3/1610/33762104/ggaa396.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "5a4ce32b4c853ad86e43525b6ce4a163c14c1a23", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
73558902
pes2o/s2orc
v3-fos-license
On the Construction Process of the Surname/Ancestral Seat Descent Groups in Korea as seen through Genealogies

MIYAJIMA Hiroshi, Sungkyunkwan University

The uniform application of the current concepts of 'clan' and 'lineage', tracing descent from a common ancestor, in understanding the characteristics of Korean surname/ancestral seat descent groups ignores their respective historical construction processes. This paper highlights some misleading results of research based on such a uniform application and suggests that certain categories be used rather than one single concept to accommodate the highly particular and complicated construction process of Korean surname/ancestral seat descent groups.

Introduction

The surname/ancestral seat descent groups in Korea are known to comprise patrilineal kinsmen tracing their descent from one common first ancestor (sijo). The immense importance descent groups have had in Korean society, past and present, has prompted ample surveys and research on them. Unfortunately, some misconceptions regarding such descent groups continue to hamper research. From a historical point of view, the most problematic understanding of the Korean surname/ancestral seat descent groups stems from ignoring the fact that these descent groups are the outcome of a historical process. This paper points out the problems such a misconception has caused in understanding the Korean surname/ancestral seat descent groups (sŏng'gwan chiptan), and it illustrates the unique features of these descent groups by tracing the historical process of their construction.

Misconceptions about Korean Surname/Ancestral Seat Descent Groups

It is misleading to apply retrospectively such English terms as 'lineage' and 'clan' to the earlier forms of Korean surname/ancestral seat (hereafter S/A) descent groups based on later developments. John B. Duncan in his The Origins of the Chosŏn Dynasty, for example, fails to redefine the term lineage when analyzing the central yangban descent groups that showed a great deal of continuity during the Koryŏ-Chosŏn dynastic transition.1

1 Email of the author: miyajamah@skku.edu
2 For the publication of the genealogies, see Miyajima Hiroshi, "Andong Kwŏn ssi Sŏnghwa po rŭl t'onghaesŏ pon Han'guk chokpo ŭi kujojŏk t'ŭksŏng" [Structural characteristics of Korean genealogy seen through the 1476 genealogy of the Andong Kwŏn], Daedong munhwa yŏn'gu 62 (2008).
3 This genealogy is preserved at the National Library of Korea.

Duncan's top ten S/A descent groups include the Hwangnyŏ Min (the Yŏhŭng Min), the Andong Kwŏn, the P'ap'yŏng Yun, the Munhwa Yu, the Andong Kim, the Chŏnju Ch'oe, the Chuksan Pak, the Kyŏngju Yi, the P'yŏngyang Cho, and the Ch'ŏngju Han. He treats each of these descent groups in the beginning of the dynasty as the established lineage it became in the later years of the dynasty, thus leading to a misunderstanding of their historical development.
According to Duncan, the Chŏnju Ch'oe lineage comprises segment A and segment B, descending from Kyun (a civilian official prominent after the military coup of 1170) and Sunjak (a contemporaneous military officer) respectively (Duncan 2000, 131-32). But the two segments have separate first ancestors (sijo). Hence, they cannot be subsumed under a single lineage. Similarly, he sees two separate groups of the Andong Kim as one lineage (Duncan 2000, 128-27). Thus, the so-called 'old' Andong Kim, which produced a number of the highest munkwa examination passers in the early Chosŏn, and the 'new' Andong Kim, which became politically prominent in the nineteenth century, are recognized as one single lineage despite the fact that they worshiped separate first ancestors and thus do not fit the current concept of lineage, tracing descent from a common first ancestor.

The Andong Kwŏn and the Kyŏngju Yi are also recognized to have had two distinct segments respectively in the early Chosŏn (Duncan 2000, 122-24, 128-29). But it was not until the later years of the dynasty that each segment came to have a sense of common ancestry through a common lineage founder. Segment B of the Andong Kwŏn as designated by Duncan is in fact the Chwayun'gong branch line, which worshiped Chijŏng as a later prominent ancestor (chungsijo), and which appeared as descendants of the Andong Kwŏn's daughters in the first edition of the Andong Kwŏn genealogy (Sŏnghwa po) published in 1476. Therefore, at the time of the genealogy compilation, this branch was not recognized as descending from Haeng, the founder of the Andong Kwŏn. It was only in the 1794 genealogy (Hu kabin po) that this branch was incorporated into the lineage.2 The two segments A and B in the early Chosŏn, as subsumed under a single lineage in later years, did not share the same lineage consciousness as demonstrated in the later years of the dynasty. Similarly, the earliest extant genealogy of the Kyŏngju Yi, compiled in 1684,3 lists segment B of Duncan's account, without a mention of segment A. It was not until after the eighteenth century that both segments A and B were connected within this lineage.
The above are some examples of late Chos on attempts to connect hitherto distinct lines of descent.Still, there were also earlier attempts in the early Chos on.For example, Duncan points out that two major segments of the Hwangny o Min ( ) with many powerful officials in the early Chos on central bureaucracy had descended from the brothers Sik ( ) and Konggyu ( ) respectively.But, it seems that the two branch lines of the Min were connected for the first time only in the 1478 genealogy.The Hwangny o Min (the Y oh ung Min) in the early Chos on are believed to have compiled their genealogies in 1417, 1477, and 1478,4 among which the first two may well have been compiled upon the instigation of two prominent female figures from the Min.Indeed, it is highly probable that the 1417 genealogy was complied upon the urging of Queen W on' gy ong ( , King T' aejong s queen, and a descendant from Konggyu), while the 1477 genealogy was compiled thanks to another prominent Min wife (the wife of a political giant, Han My onghoe, mother of King S ongjong's Queen Konghye , and a descendant from Sik).It is possible that on the occasion of the 1477 genealogy the connection between the two lines of descent, from Sik and Konggyu, was first established and was henceforth continued in the subsequent genealogies including the 1478 one.It is therefore misleading to assume that two major segments of the Hwangny o Min lineage existed prior to 1477, when the relation between them had not even been established. The above instances offered by Duncan as established segments and lines in the late Kory o and early Chos on were in fact a projection of the lineage organizations as they were constructed in later years.These are just a few instances.For almost all descent groups, however, except perhaps for the Andong Kw on whose genealogy appeared the earliest, all the segments and lines that were so clearly known in later years of the dynasty, had remained quite obscure in the late Kory o and early Chos on periods.This is because the first attempt of any S/A descent group to compile a genealogy was fraught with difficulties in tracking down the relations between differing lines of descent.Some came to be known only during the process of compilation, and others were plainly reconstructed on the basis of meager evidence indeed. 5here is another problem in using such terms as 'lineag' and 'clan' when referring to Korean S/A descent groups.In the process of construction, Korean S/A descent groups are so diverse as to defy the uniform application of such terms as 'lineag' and 'clan.' Currently, it seems that the term 'lineage' is used in general to denote those descent groups which have definitive relations with first ancestors, whereas the term clan is used to denote any other descent groups.The term 'segment' is used to indicate a sub-category of a lineage.Edward W. Wagner uses 'lineage' and 'clan' in a separate way, but he does not mention the criteria for such separate usage. 
Though the term lineage has been gaining general currency in denoting Korean S/A descent groups, it still remains hard to fit the current concepts of 'lineage' or 'clan' to such groups in any uniform way. This is because each Korean S/A descent group has its own specific historical process of construction. As a result, it is doubtful that such a single term as lineage, based on the later developments, can have a universal application to all Korean S/A descent groups. As suggested below, we may define distinctive categories of Korean S/A descent groups, which could well reflect the fact that Korean lineages as we now know them have undergone a process of historical construction.

The misconceptions about Korean S/A descent groups have led to another instance of problematic research results in Korean academic circles. A notable example can be seen in research that examines the passers of government examinations (kwagŏ), using the S/A descent group as a unit of analysis. Such research aims to discover the concentration level of examination passers for given S/A descent groups, thereby measuring their prominent position in Chosŏn society. To indicate the level of concentration of examination passers, the top 10 or top 30 S/A descent groups are examined for their number of passers and for their proportion to the total number of passers. Likewise, Wŏn Ch'ang'ae's research counts the overall number of the highest munkwa passers from each of the top 30 S/A descent groups throughout the Chosŏn dynasty. The top 10 in munkwa passers for the entire Chosŏn dynasty reads as follows: 847 for the Chŏnju Yi, 358 for the Andong Kwŏn, 339 for the P'ap'yŏng Yun, 322 for the Namyang Hong, 310 for the Andong Kim, 284 for the Ch'ŏngju Han, 258 for the Milyang Pak, 257 for the Kwangsan Kim, 242 for the Yŏnan Yi, and 233 for the Yŏhŭng Min.7

It is highly debatable that each of these S/A descent groups constituted a single integral entity throughout the dynasty (1392-1910). The assumption that each descent group existed as a single unified organization throughout the dynasty, without giving consideration to its own respective process of construction, is particularly problematic. As mentioned before, among Wŏn Ch'ang'ae's descent groups, the Andong Kim were not a unified descent group in a genealogical sense, as the old Andong Kim and the new Andong Kim were worshiping separate first ancestors. Certainly, each group produced more than 100 munkwa passers, but the combined total has no statistical meaning. The same is true of the Namyang Hong munkwa passers that Wŏn examined. The Namyang Hong had two separate descent groups, identified as Tang Hong and T'o Hong, indicating respectively Chinese and Korean origins. It is true that the former produced more than 200 munkwa passers, while the latter produced more than 100 passers. Nonetheless, the combined total of the two groups, again, carries little meaning.
In the cases of the Milyang Pak and the Yŏnan Yi, each had a common first ancestor but was divided into many branch lines derived from later prominent ancestors (chungsijo). The connection of these later prominent ancestors to the first ancestors remained unclear. Thus, although the Yŏnan Yi, unlike the Andong Kim and the Namyang Hong, did have a common first ancestor (named Mu), they included three lines descended from three later prominent ancestors whose connections to the first ancestor were unclear. All three lines produced many munkwa passers. But it was only in the 1729 genealogy that the three groups were for the first time incorporated into the extended Yŏnan Yi.8 Hence, the extended Yŏnan Yi was a construction of the eighteenth century, certainly not from the beginning of the dynasty.

The Milyang Pak belong to the same category as the Yŏnan Yi in terms of genealogical structure. The Milyang Pak shared a first ancestor named Onch'im, but were divided into 12 branch lines, which respectively worshiped 12 later prominent ancestors, and were only connected for the first time in the 1742 genealogy.9 Therefore, the statistical figures that account for the Yŏnan Yi or the Milyang Pak as a whole prior to the early eighteenth century carry little meaning.

The Andong Kwŏn, second only to the Chŏnju Yi regarding the total number of munkwa passers, did not have the same lineage as we now know it. At present, the Andong Kwŏn is comprised of 15 branch lines, among which only three appeared in the first 1476 genealogy, the rest being newly incorporated into later editions. All of the Andong Kwŏn's 12 later prominent ancestors had obvious links to the first ancestor (named Haeng), unlike the Yŏnan Yi and the Milyang Pak. Nevertheless, it is doubtful that all links are genuine. Seven branch lines out of 12 accounted for one or more munkwa passers. Thus, the progressively extended branch lines are responsible for such a large number (358) of munkwa passers in Wŏn's calculation.

The Ch'ŏngju Han and the Kwangsan Kim fit the same genealogical structure category as the Andong Kwŏn. These two descent groups also accrued new branch lines in new genealogy compilations, resulting in the extended lineages we see today. The Ch'ŏngju Han in particular has expanded by incorporating not only new branch lines identified as the Ch'ŏngju Han, but also the Pyŏngsan Han and the Hanyang Han, which had different ancestral seats (pon'gwan). In short, it is hard to believe that all of the top ten S/A descent groups with large numbers of munkwa passers existed as unified lineage organizations, as we know them today, from the very beginning of the Chosŏn dynasty. Future inquiries into whether and when a given S/A descent group can be defined as a lineage organization should begin by accumulating data gained from reviewing individual S/A descent groups. In analyzing the genealogical patterns of individual S/A descent groups, the following hypothetical categories of S/A descent groups may serve as a preliminary rationale.
Categories of Korean S/A Descent Groups

In terms of the construction process, the following categories can be deduced from Korean S/A descent groups. The first category includes the S/A descent groups that have the same surname (sŏngssi) and the same ancestral seat (pon'gwan), but multiple first ancestors (sijo), such as the Andong Kim and the Namyang Hong. The Kyŏngju Kim, the Kimhae Kim, the Yŏnil Chŏng, the Chinju Kang, the Chinju Yu, and the Kyŏngju Ch'oe also belong to this category. It is out of the question to define each group as a single clan or lineage tracing descent from one common ancestor. As for individual branch lines that had their own respective first ancestors, their genealogical patterns should be further examined to place them into one of the other categories mentioned below.

Among the S/A descent groups that have the same first ancestor, one category includes such S/A descent groups as the Yŏnan Yi and the Milyang Pak, which show a loose solidarity as a single descent group, yet exhibit a strong independence among their respective branch lines. This category of S/A descent groups could well be understood as a clan, while their respective branch lines can be seen as lineages.

The other category includes the Andong Kwŏn, the Ch'ŏngju Han, and the Kwangsan Kim, which as time passed formed progressively larger groups by incorporating new branch lines. Pending the results of further research, such outstanding S/A descent groups as the P'ap'yŏng Yun and the Yŏhŭng Min may also belong to this category. It is highly probable that so many prominent S/A descent groups in Korea belong to this category that it could well represent the typical pattern of descent group construction. Still, many relations between newly incorporated branch lines remain dubious. In its characteristics, this category is not far removed from the category that includes the Yŏnan Yi.

In contrast to the above two categories, there is another category of S/A descent groups which has authentic relations between branch lines, such as the Ch'ŏngsong Sim, the Pallam Pak, the P'ungsan Hong, and the Haep'yŏng Yun. These descent groups all have authentic first ancestors originating from the late Koryŏ or the early Chosŏn, and all produced many munkwa passers among their direct descendants. These groups, therefore, became very powerful in the late Chosŏn. Remarkably, within the above two categories with the same ancestry, some branch lines demonstrated the same characteristics as this category of S/A descent groups. For example, the Ch'umilgong branch line of the Andong Kwŏn and the Yanggan'gong branch line of the Kwangsan Kim produced more than 100 munkwa passers. Thus, this particular category is of great significance when one attempts to examine the family background of the ruling elite of the Chosŏn era. Also, it is meaningful to trace similar prominent branch lines like the Ch'umilgong and the Yanggan'gong among other categories of S/A descent groups.10
The categories offered so far are the major ones, but there remain many minor but remarkable categories of S/A descent groups that have a complicated construction history. For example, the Munhwa Yu share an awareness of the same ancestry with the Sŏsan Yu, the Chŏnju Yu, the Chinju Yu, and the Sŏnsan Yu, though they all have their respective ancestral seats. Moreover, the Yŏnan Ch'a and the Munhwa Yu, despite different surnames and ancestral seats, worship the same first ancestor. As for these descent groups, further study on how to delineate their lineage organization and when lineage consciousness emerged is needed in order to better understand them.

Future Research

Taken as a whole, this brief survey of Korean S/A descent groups suggests that they have undergone varied and complicated construction processes. Hence, the uniform application of the terms clan or lineage based on specific experiences may well be misleading. Therefore, further research needs to be devoted to individual S/A descent groups, examining their respective construction processes, and illustrating their origins and characteristics as lineage organizations. Needless to say, such a task demands a huge amount of work to be done on extensive individual descent groups, but it is eminently necessary in bringing precision to our understanding of such critical topics as the sustainability of the Chosŏn ruling elite and the role of present-day S/A descent groups in Korean society.11

(Translated from Korean by Cheolbae Son)

This work was supported by the National Research Foundation of Korea (MEST) (NRF-2007-361-AL0014).

1 John B. Duncan, The Origins of the Chosŏn Dynasty (Seattle and London: University of Washington Press, 2000).
2018-12-21T03:58:34.052Z
2010-04-01T00:00:00.000
{ "year": 2010, "sha1": "95da02f710078f57bec46280ab5548e32433f34e", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.21866/esjeas.2010.10.1.001", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "95da02f710078f57bec46280ab5548e32433f34e", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Geography" ] }
18288697
pes2o/s2orc
v3-fos-license
The challenges and promises of allogeneic mesenchymal stem cells for use as a cell-based therapy

Mesenchymal stem cells (MSCs) are ideal for cell-based therapy in various inflammatory diseases because of their immunosuppressive and tissue repair properties. Moreover, their immunosuppressive properties and low immunogenicity contribute to a reduced or weakened immune response elicited by the implantation of allogeneic MSCs compared with other cell types. Therefore, implantation of allogeneic MSCs may be a promising cell-based therapy. In this review, we first summarize the unique advantages of allogeneic MSCs for therapeutic applications. Second, we critically analyze the factors influencing their therapeutic effects, including administration routes, detection time-points, disease models, differentiation of MSCs in vivo, and timing and dosage of MSC administration. Finally, current approaches to allogeneic MSC application are discussed. In conclusion, allogeneic MSCs are a promising option because of their low immunogenicity and immunosuppressive and tissue repair capabilities. Further investigations are needed to enhance the consistency and efficacy of MSCs when used as a cell-based therapy in inflammatory diseases as well as for tissue repair.

Introduction

Mesenchymal stem cells (MSCs) are classified into various groups according to the cell source, such as bone marrow-derived MSCs (BM-MSCs), adipose-derived MSCs (ASCs), and umbilical cord MSCs. These MSC types share common features, which have been described by the International Society for Cellular Therapy. The minimum criteria for defining MSCs are that they: (a) remain plastic-adherent under standard culture conditions; (b) express CD105, CD73, and CD90 and fail to express CD45, CD34, CD14 or CD11b, CD79a or CD19, and major histocompatibility complex (MHC) class II molecules; and (c) differentiate into osteoblasts, adipocytes, and chondrocytes in vitro [1]. The following unique properties appear to make MSCs ideal for cell-based therapy in various diseases. First, they have multilineage potential, differentiating into various cell types, including adipocytes, hepatocytes, and neurocytes [2-4]. This makes them useful as seed cells to replace damaged tissue in tissue engineering applications. Second, they alleviate tissue injury and promote tissue repair through their anti-apoptotic and cytoprotective effects and angiogenic capacity [5,6]. Third, they have become a promising approach to treating graft-versus-host disease (GVHD) and autoimmune disease because of their immunomodulatory properties and low immunogenicity [7-9].

Advantages of allogeneic MSCs for therapeutic applications

Autologous MSC (auto-MSC) applications have some potential limitations. First, it is difficult to obtain sufficient auto-MSCs from some patients, for example, ASCs from thinner patients or BM-MSCs from myelofibrosis patients. Second, MSCs isolated from elderly donors have decreased biological activity, including differentiation and regenerative potential [10,11], resulting in disappointing treatment outcomes. Third, some systemic diseases, such as diabetes [12], rheumatoid arthritis [13], and systemic lupus erythematosus (SLE) [14], alter the intrinsic properties of MSCs, thus impairing their protective function. It is difficult to obtain sufficient quantities of healthy auto-MSCs with high activity from patients with these diseases. MSC implantation in these patients is therefore challenging.
Obtaining allogeneic MSCs (allo-MSCs) from young healthy donors is a reasonable approach to resolving this issue. Furthermore, auto-MSC extraction is time-consuming, making it difficult to use them promptly to treat acute diseases such as stroke and myocardial infarction. In contrast, allo-MSCs are readily available and can be administered immediately. In addition, commercial allo-MSC production should guarantee quality control and reduce the cost of cell therapies. Therefore, allo-MSCs are promising alternatives to auto-MSCs, with advantages with regard to time, cost, and quality assurance. Above all, the immunosuppressive properties and low immunogenicity of allo-MSCs contribute to a reduced immune response after implantation. The following mechanisms are responsible for their immunosuppression and low immunogenicity. First, their expression of a low or modest level of MHC class I molecules and lack of expression of MHC class II and co-stimulatory molecules, such as CD40, CD80 (B7-1), and CD86 (B7-2), leads to low immunogenicity, thus avoiding immune responses in recipients [15]. Second, MSCs inhibit the activity of various immune cells, including T cells, B cells, natural killer cells, and dendritic cells via cell-cell contacts and soluble factors [16,17]. Factors influencing the protective effect of allo-MSCs The concept that allo-MSCs may have equivalent efficacy to auto-MSCs has become well established. Increasingly, however, in vivo studies report that allo-MSCs are not fully immune privileged and probably cause an immune response, despite the immunosuppressive properties and low immunogenicity of MSCs being documented both in vivo and in vitro. Currently, different research groups have obtained inconsistent or even contradictory results on the therapeutic effects of allo-MSCs in various studies [18][19][20][21]. Therefore, the in vivo immunogenicity of allo-MSCs and the relationship between immunogenicity and their protective effects remain to be determined. In addition, the cause of the inconsistent results has yet to be established. We describe in detail the factors that influence the therapeutic effects of allo-MSCs below. Administration routes versus therapeutic effects The routes of MSC administration are classified into two categories: systemic and topical. Some studies have reported that the administration route of allo-MSCs determines the extent of their protective effects. There are two types of topical administration: intralesional injection (e.g., intracranial, intracerebral, subcutaneous) and local vascular injection (e.g., superior vena cava, mesenteric blood vessels, coronary artery). Compared with systemic administration, topical administration routes may have a common advantage in that MSCs arrive directly at the target tissue with little loss during migration [18,22]. It was demonstrated that allo-MSCs loaded onto cancellous bone granules have a similar efficacy to auto-MSCs for bone regeneration in bone defect models [23]. Acar et al. [24] reported that direct injection of allo-MSCs into marrow cavities (i.e., intrabone marrow delivery) had similar effects to intravenous (IV) injection in irradiation-damaged bone marrow repair. However, Gu et al. [25] reported that allo-MSCs implanted via the intrapancreatic route had a greater effect on hyperglycemia correction and increasing insulin secretion in the serum of diabetic rats than those administered via the IV route. Types of systemic administration include IV, intraarterial, and intraperitoneal injection.
IV is the most common method in preclinical and clinical settings because of its convenience. However, MSCs administered via this route are more easily trapped in small lung capillaries because of their larger size and expression of cell adhesion molecules [26,27]. Lung entrapment of MSCs decreases the number of MSCs delivered to target tissues and can result in ineffectual treatment [28]. However, some reports have shown that auto-MSCs delivered via IV injection have protective effects in various animal models even when lung entrapment occurs [3,29]. Similar to auto-MSCs, IV administration of allo-MSCs improved islet function and corrected hyperglycemia without immune rejection in a diabetic rat model [25]. In a rat ischemic stroke model, allogeneic ASCs and BM-MSCs delivered via IV injection decreased cell death, increased cellular proliferation, and improved the functional recovery of the brain [3]. Administration routes determine the microenvironments that MSCs first encounter after entering the patient's body, thus influencing their differentiation, immunogenicity, and survival [30]. However, the mechanisms responsible for these effects are far from clear because of the limited number of studies performed, and it is necessary to investigate which administration routes of MSCs are best for the diverse range of disease models. Evaluation time-points versus therapeutic effects Short-term (i.e., within a month) but not long-term protection has usually been evaluated in most studies that demonstrate the protective effects of MSCs [29,31]. In contrast, most studies evaluating their long-term effect have shown no or limited protection [32,33]. Therefore, the different time-points used in these investigations probably contribute to their different conclusions on the protective effects of MSCs. As MSCs have low immunogenicity but are not fully immune privileged in vivo, immune rejection of allo-MSCs is induced. However, this is too weak to eliminate them immediately, so allo-MSCs can survive for a short period after transplantation. Therefore, they can exert a protective and/or immunosuppressive function in the short-term but are less effective in the long-term. More studies into implanted MSCs are urgently needed to simultaneously evaluate their short- and long-term protective effects. Disease models versus therapeutic effects It is well established that allo-MSCs can alleviate GVHD in the setting of allogeneic hematopoietic stem cell transplantation in preclinical [34] and clinical studies [35]. Moreover, the Prochymal brand of remestemcel-L, the first stem cell drug, has been approved for the market. Prochymal is an MSC product prepared from bone marrow aspirates of healthy human donors, and shows potential for treating acute GVHD [19,36]. In addition to GVHD models, the efficacy and safety of allo-MSCs have been widely documented in autoimmune disease models. Allo-MSCs can reduce the clinical relapse rate and improve the function of damaged organs in models of autoimmune diseases, including SLE and Crohn's disease [9,37]. The technology available for allo-MSC applications for GVHD and Crohn's disease is currently comparatively mature; of the 13 available clinical trials on Prochymal registered in clinical trials databases, five have been for use in GVHD and Crohn's disease. Although MSCs display a protective function in GVHD and autoimmune disease models, controversy exists about allo-MSC immunosuppression in the setting of solid organ transplantation [20,38,39].
For example, allo-MSCs show no graft protection in many studies [20]. Unexpectedly, some studies have reported that allo-MSCs are ineffective at prolonging allograft survival and tend to cause more rapid, and a greater degree of, immune rejection [20,21]. Therefore, the use of various disease models may be one reason for the controversy about the protective effects of allo-MSCs. Differentiation of MSCs in vivo versus therapeutic effects The low immunogenicity of MSCs does not ensure they are fully immune privileged in an in vivo setting. Allo-MSC immunogenicity after differentiation can weaken or even inhibit their therapeutic effects. Huang et al. [33] reported that expression of immunogenic MHC-Ia and MHC-II is strongly increased in differentiated MSCs compared with undifferentiated MSCs in a rat myocardial infarction model. The implanted allo-MSCs induced expression of a specific anti-donor alloantibody in serum after differentiation (5 weeks), which limited the long-term (more than 5 months) protective effects of MSCs on the heart. However, allo-MSCs were as effective as auto-MSCs in improving cardiac function for at least 3 months. In a diabetic rat model, Gu et al. [25] reported that implanted allo-MSCs did not express MHC-II and did not trigger cellular cytotoxicity and immune rejection until they differentiated into insulin-producing cells. Even so, the therapeutic effects of allo-MSCs for damaged pancreas were maintained after their differentiation. From these results, we find that the presence of immunogenicity after differentiation decreases the therapeutic effects of allo-MSCs, although it does not indicate the definite loss of protective effects immediately, which is consistent with previous reports [40,41]. We speculate on the probable reasons for this. First, even in specific induction conditions in vitro, only some MSCs differentiate; therefore, sufficient allo-MSCs remain in an undifferentiated state to ensure their survival and execute their protective effects on the immune systems of recipients. Second, the immunoreaction is too weak to quickly eliminate differentiated MSCs; a recipient's immune system needs some time to eliminate all of the allo-MSCs. Current data on this issue are lacking and the specific protective mechanism that functions after differentiation needs to be further investigated. Timing of MSC administration versus therapeutic effect The immune status of a recipient before and after allograft organ transplantation determines the survival of implanted allo-MSCs. Crop et al. [42] reported that, before kidney transplantation, recipient peripheral blood mononuclear cells (PBMCs) did not lyse allo-MSCs, but that PBMCs isolated 3, 6, and 12 months after transplantation showed increasing ability to lyse allo-MSCs. In vivo experiments have shown that the different timing of auto-MSC transplantation determines their therapeutic effect in a myocardial infarction model [43]. As reported for auto-MSCs, a recent study by Rigol et al. [41] showed that allo-ASCs administered 15 min after reperfusion induce better neovascularization and a better long-term prognosis than those given a week later. In addition, Cho et al. [44] reported that a single injection of MSCs, either systemically or subcutaneously, did not induce a detectable adaptive immune response. However, repeated injection of MSCs into the same site resulted in alloantibody production.
Therefore, differences in administration timing have probably led to inconsistent conclusions regarding the immunogenicity and therapeutic effects of allo-MSCs. Dosage of MSC administration versus therapeutic effects Different doses of MSCs elicit different immune responses and protective effects. Allo-MSCs injected intracranially induced transient dose-dependent immune rejection, which reduced MSC engraftment levels and their protective effects [45,46]. In contrast, an animal study on myocardial infarction by Wolf et al. [47] indicated that allo-MSCs limited myocardial infarct size and improved the functional outcome in a dose-dependent manner. Currently, the relationship between MSC dose and therapeutic effects is far from clear. Therefore, the optimal dose of implanted allo-MSCs needs to be further investigated to maximize their therapeutic function in various disease models. Application strategies for allogeneic MSCs Many unique features make MSCs a promising therapeutic option in tissue repair and immunosuppression. Although the direct application of allo-MSCs has a certain protective effect, various measures taken during or before transplantation can have a great effect on improving treatment outcomes (Table 1). Combined application with immunosuppressants The co-application of MSCs with immunosuppressants increases their protective effects compared with their separate application. On one hand, immunosuppressants improve the effects of MSCs by prolonging their survival time in allograft organ transplantation and, on the other, MSCs can decrease the side effects of immunosuppressants. For example, Ge et al. [48] observed that the immunosuppressant Rapa enabled successful MSC engraftment by suppressing the immune response to allo-MSCs after heterotopic cardiac transplantation. Moreover, MSCs markedly enhanced the immunosuppressive effect of Rapa, thus enabling the dosage (and side effects) to be reduced [48]. MSCs attenuated acute immune rejection in renal transplantation, and had the potential benefit of reducing the dosage of the conventional immunosuppressant, tacrolimus [49]. Genetic modification of MSCs The effectiveness of genetically modified auto-MSCs has been reported in different disease models [50][51][52]. Similarly, the protective effect of allo-MSCs was improved by gene modification. de la Garza-Rodea et al. [53] observed that BM-MSCs with a modified US11 gene showed decreased expression of MHC-I. The US11 gene modification contributed to evasion of recognition by cytotoxic lymphocytes and extended the persistence of MSCs in the allogeneic host. In contrast to wild-type allo-MSCs, allo-MSCs expressing cytotoxic T lymphocyte associated antigen-4 (CTLA4Ig) demonstrated enhanced inhibition of T-cell responses [54]. The genetically modified MSCs delayed the onset of inflammatory arthritis and decreased the amount of damage in collagen-induced arthritis. Chen et al. [55] reported that allogeneic MSCs expressing C-X-C chemokine receptor type 4 (CXCR-4) promoted a greater level of hematopoietic recovery and sustained hematopoiesis compared with unmodified MSCs. The protection afforded by these MSCs resulted from their enhanced ability to home to bone marrow and spleen. Allo-MSCs with a modified Epo gene significantly increased protective effects in the kidney and improved the survival of mice in an acute kidney injury model [56]. Method of cell engineering The fate of implanted allo-MSCs is tightly influenced by the microenvironment encountered.
Intracellular depots have been generated through cell engineering to provide controlled microenvironments for MSCs. These depots continuously release drugs and cellular factors that affect the homing, viability, and differentiation of MSCs. For example, MSCs engineered with poly(lactide-co-glycolic acid) particles containing dexamethasone promoted the osteogenic differentiation of MSCs [57]. Hydrogels were previously reported to be promising allo-MSC carriers for tissue engineering. Dhingra et al. [58] reported that the use of a biodegradable, temperature-sensitive hydrogel for the slow release of prostaglandin E2 at the cell implantation site could prevent rejection of implanted allo-MSCs and restore cardiac function in a myocardial infarction model. Interestingly, hydrogels themselves have been documented to modulate the immunological properties of allo-MSC tissue-engineered cartilage. Neonatal rabbit allo-MSCs induced lower allogeneic lymphocyte proliferation and reduced the expression of MHC class I and II molecules when seeded in a collagen hydrogel compared with sponge and membrane [59]. Recently, there have been fewer studies on applications of allo-MSCs compared with auto-MSCs. Auto-MSC studies have provided insight into allo-MSC applications. For example, the pre-stimulation of auto-MSCs with interferon-gamma increased their immunosuppressive capacity, reduced mucosal damage, and enhanced their therapeutic efficacy in animal models of colitis [60]. In addition, hypoxia preconditioning is reported to increase the protective effect of auto-MSCs in disease models such as hemorrhagic stroke [61], ischemia [62], and pulmonary fibrosis [63]. Conclusion and future perspectives MSCs have shown promise in cell replacement or transplantation for their immunosuppressive and tissue repair effects. However, it is difficult to isolate sufficient quantities of healthy auto-MSCs with high activity from older or thinner people and patients with diabetes, rheumatoid arthritis or SLE. Moreover, auto-MSCs are not suited to the prompt treatment of acute diseases because extraction of them is time-consuming. Because of their immunosuppressive properties and low immunogenicity compared with other cell types, the implantation of allo-MSCs may, therefore, be more reasonable and appropriate. Although various studies have provided inconsistent conclusions on the therapeutic effects of allo-MSCs, allo-MSCs are still a promising option in immunosuppressive and tissue repair therapy. To date, we have been unable to obtain consistent results from the insufficient pre-clinical and clinical data on the immunogenicity and protective effects of allo-MSCs. The following issues need to be addressed in further research. First, which immune molecules and cells are involved in the potential immune response? Second, what is the dynamic fate of implanted allogeneic ASCs, including being eliminated by recipients, being maintained in the stem cell state, or differentiating into various cell types? It will be helpful to assess the in vivo efficiency of allo-MSCs compared with that of auto-MSCs. Third, the factors that influence their therapeutic effects and how they result in the present inconsistent results are far from clear. Last, strategies to enhance the consistency and efficacy of allo-MSCs as a cell-based therapy should be investigated in inflammatory diseases as well as for tissue repair. Competing interests The authors declare that they have no competing interests.
Authors' contributions JZ contributed to the research design and wrote the manuscript. XH and HW participated in the research design, drafting the manuscript and carrying out the literature research. XL and TZ were involved in carrying out the literature research and revising the manuscript critically for important intellectual content. YW and DH contributed to the research design, drafting the manuscript and revising it critically for important intellectual content. All authors read and approved the final manuscript.
Spindly, a novel protein essential for silencing the spindle assembly checkpoint, recruits dynein to the kinetochore The eukaryotic spindle assembly checkpoint (SAC) monitors microtubule attachment to kinetochores and prevents anaphase onset until all kinetochores are aligned on the metaphase plate. In higher eukaryotes, cytoplasmic dynein is involved in silencing the SAC by removing the checkpoint proteins Mad2 and the Rod–Zw10–Zwilch complex (RZZ) from aligned kinetochores (Howell, B.J., B.F. McEwen, J.C. Canman, D.B. Hoffman, E.M. Farrar, C.L. Rieder, and E.D. Salmon. 2001. J. Cell Biol. 155:1159–1172; Wojcik, E., R. Basto, M. Serr, F. Scaerou, R. Karess, and T. Hays. 2001. Nat. Cell Biol. 3:1001–1007). Using a high throughput RNA interference screen in Drosophila melanogaster S2 cells, we have identified a new protein (Spindly) that accumulates on unattached kinetochores and is required for silencing the SAC. After the depletion of Spindly, dynein cannot target to kinetochores, and, as a result, cells arrest in metaphase with high levels of kinetochore-bound Mad2 and RZZ. We also identified a human homologue of Spindly that serves a similar function. However, dynein's nonkinetochore functions are unaffected by Spindly depletion. Our findings indicate that Spindly is a novel regulator of mitotic dynein, functioning specifically to target dynein to kinetochores. Introduction The spindle assembly checkpoint (SAC) is critical for preventing the onset of anaphase until all chromosomes are aligned on the metaphase plate. A single misaligned kinetochore is sufficient to generate a wait anaphase signal, thereby ensuring that all sister chromatids segregate to opposite ends of the spindle and are equally distributed to the daughter cells. Failure of the SAC can lead to premature anaphase onset and aneuploidy (Liu et al., 2003;Kops et al., 2005b; for review see Kadura and Sazer, 2005). Such defects can have consequences for a whole organism, as mice that lack a full complement of SAC genes have more frequent DNA segregation errors and are more susceptible to tumor development (Baker et al., 2005). The presence of the SAC was initially inferred from observations that cells delay in metaphase when meiotic sex chromosomes fail to pair and align or after the spindle is perturbed by either microtubule poisons or microsurgery. Molecules responsible for the SAC were later identified in yeast genetic screens and named Mad1, -2, and -3 (Mad for mitotic arrest deficient) and Bub1, -2, and -3 (Bub for budding uninhibited by benzimidazole). Subsequent work showed that these proteins together with the MPS1 kinase form distinct complexes that target to the kinetochore (for reviews see Lew and Burke, 2003;Kadura and Sazer, 2005;Malmanche et al., 2006;Musacchio and Salmon, 2007). Two additional metazoan checkpoint proteins, Zw10 and Rough Deal (Rod), were later isolated as cell cycle mutants in Drosophila melanogaster. These two proteins, together with a third protein called Zwilch, form a complex (Rod-Zw10-Zwilch complex [RZZ]) that regulates the levels of Mad1 and Mad2 on the kinetochore (for review see Karess, 2005). Ultimately, the SAC pathway must lead to inhibition of the anaphase-promoting complex (APC), a multisubunit ubiquitin E3 ligase that targets multiple mitotic regulators (e.g., mitotic cyclins as well as the securin protein that inhibits the cleavage of cohesin molecules) for proteasome degradation to allow mitotic exit (Acquaviva and Pines, 2006).
Several studies have shown that localization of the checkpoint proteins to misaligned kinetochores is essential for establishing the SAC and keeping the APC inhibited, most likely by generating a diffusible signal that inhibits the APC (Taylor et al., 2004;Pinsky and Biggins, 2005; for review see Musacchio and Salmon, 2007). The nature of the diffusible signal is still subject to debate. However, a current model suggests that the kinetochore-bound Mad1-Mad2 complex acts as a template that converts the free, inactive Mad2 to an active form that can diffuse away from the kinetochore and bind to and sequester Cdc20, a regulatory component of the APC (for review see Musacchio and Salmon, 2007). The capture of microtubules by the kinetochore and the downstream activity of two different microtubule motors are required for silencing the SAC in metazoans. One of these motors is the kinesin centromere protein (CENP) E, which may act as a tension sensor that, when stretched, inactivates the BubR1-dependent inhibition of Cdc20 (Chan et al., 1999;Mao et al., 2005). The second motor is dynein, which transports Mad1, Mad2, and RZZ from the kinetochore to the spindle pole (Howell et al., 2001;Wojcik et al., 2001). Dynein-based removal of Mad1 and Mad2 from the kinetochore may disrupt the template mechanism that generates the active Mad2 that inhibits the APC (De Antoni et al., 2005; for review see Musacchio and Salmon, 2007). After inhibition or depletion of dynein or its cofactors, metazoan cells arrest in metaphase with correctly aligned chromosomes and high levels of kinetochore-bound Mad1, Mad2, and RZZ. Resolving the mechanism of dynein recruitment to kinetochores is important for understanding how kinetochore-microtubule binding ultimately leads to inactivation of the SAC.
Currently, it is thought that dynein is brought to the kinetochore by binding directly to dynactin (a multisubunit complex required for multiple dynein functions; Schroer, 2004), which, in turn, binds to the Zw10 subunit of the RZZ complex (Starr et al., 1998). Lis1, another dynein cofactor, also has been proposed to play a role in targeting dynein to kinetochores (Dzhindzhev et al., 2005). Dynactin, Lis1, and Zw10 are not kinetochore-specific factors, as they are involved in targeting dynein to multiple other locations in the cell (Cockell et al., 2004;Hirose et al., 2004). It has not been clearly established whether dynactin and Lis1 are sufficient for targeting dynein to kinetochores or whether other proteins might be involved. To find new proteins that might participate in the SAC, we undertook an automated 7,200 gene mitotic index RNAi screen in S2 cells. This screen uncovered a novel gene, which we also identified in an independent screen of genes involved in S2 cell spreading and morphology. We show that this protein (termed Spindly) localizes to microtubule plus ends in interphase and to kinetochores during mitosis. Cells depleted of Spindly arrest in metaphase with high levels of Mad2 and Rod on aligned kinetochores, a defect caused by a failure to recruit dynein to the kinetochore. However, Spindly is not required for other dynein functions during interphase and mitosis. We also identify a human homologue of Spindly, which is similarly involved in recruiting dynein to kinetochores. Thus, our results have uncovered a novel conserved dynein regulator that is involved specifically in dynein's function in silencing the SAC. RNAi screens Using a double-stranded RNA (dsRNA) library corresponding to ∼7,200 Drosophila genes (Echard et al., 2004), we performed two screens using Drosophila S2 cells (Fig. 1, b-d). The first screen measured mitotic index (the percentage of phosphohistone H3-positive cells in a population; see Materials and methods). In the second screen, the shape of S2 cells (spread on concanavalin A [Con A]-coated surfaces; Rogers et al., 2003) was evaluated by visual inspection. RNAi of one novel gene, CG15415, produced strong phenotypes in both screens. CG15415 is a novel uncharacterized Drosophila gene encoding a 780-amino acid protein with predicted N-terminal coiled-coil sequences and four repeats with the consensus sequence T P X K P Q X K G T P V K (Fig. 1 a). In the interphase screen, many of the CG15415-depleted cells showed spiky and elongated microtubule-rich projections in contrast to the rounded shape of normal spread S2 cells (Fig. 1, b and c). In the mitotic index screen, the depletion of CG15415 caused an increase in mitotic index that was comparable with that observed for RNAi of the dynein heavy chain (DHC) and the APC subunit Cdc16 (Fig. 1 d). The majority of the mitotic CG15415-depleted cells were arrested in metaphase, which is also similar to DHC depletion (Fig. 1 e). This result was confirmed in live cells expressing GFP-tubulin, in which CG15415-depleted cells failed to enter anaphase within 4 h after nuclear envelope breakdown. In contrast, untreated cells initiated anaphase within 20-85 min of nuclear envelope breakdown (unpublished data). Because the depletion of CG15415 produced spindle-shaped interphase cell morphology and arrested cells with metaphase spindles, we refer to this protein as Spindly. The specificity of the Spindly phenotypes was confirmed using three nonoverlapping dsRNAs: two in the coding region and one dsRNA that targets the 3′ untranslated region (UTR; Fig. 1 a). Using an antibody generated against Spindly's C-terminal 357 amino acids, we confirmed that the three dsRNAs effectively depleted the protein after 5 d (Fig. S1 a, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1). As further confirmation of the specificity of the Spindly RNAi phenotype, we found that expression of a GFP-Spindly fusion protein could rescue the metaphase block after the endogenous protein was depleted with the 3′ UTR dsRNA. This result also indicates that Spindly retains its function after fusion to GFP, enabling the localization studies described in the next section.
[Figure 1. (a) Domain organization of Spindly: predicted coiled-coil sequences, four repeat motifs (with a sequence alignment of the repeat residues shown below), and the locations of two nonoverlapping dsRNAs used to deplete Spindly; a third dsRNA to the 3′ UTR was also used (not depicted). (b and c) Wild-type S2 cells show a uniformly spread morphology (b), whereas Spindly RNAi-treated cells (c) show marked defects in the actin lamellae as well as increased numbers of cells with long microtubule-rich projections. Actin, red; microtubules, green; DNA, blue. (d) The mitotic index of S2 cells is increased after the depletion of Spindly, dynein heavy chain (DHC), or a subunit of the APC (Cdc16; mean ± SEM; n = 3 experiments, with 1,000-3,000 cells counted per experiment); values are expressed as a ratio of RNAi-treated to untreated cells (untreated cells have a mitotic index of 1-3%). (e) The ratio of metaphase to anaphase cells (scored manually after staining with anti-tubulin and antiphosphohistone antibodies; see Materials and methods) reveals a selective increase in metaphase cells after Spindly and DHC RNAi (mean ± SEM; n = 2 experiments, with >200 mitotic spindles scored per experiment). (d and e) The expression of GFP-tagged Spindly can rescue the mitotic phenotype after endogenous Spindly is depleted using a dsRNA that targets the Spindly 3′ UTR. Bars, 10 μm.]
GFP-Spindly targets to microtubule plus ends in interphase and to kinetochores in mitosis To learn more about Spindly's function, we examined the localization and dynamics of GFP-tagged Spindly. In live cells expressing low levels of GFP-Spindly, the protein was concentrated in punctae that continually moved to the periphery of the cell, which is behavior typical of microtubule plus end-binding proteins (Video 1, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1). Fixation and staining of cells expressing low levels of GFP-Spindly with an antibody to EB1 (a well-established plus end-binding protein) confirmed this localization, although the plus end enrichment was less pronounced than that displayed by EB1 (Fig. 2 a). At higher levels of GFP-Spindly expression, the protein began to decorate along the length of the microtubule and to localize to the lamella (unpublished data). After cells entered mitosis, GFP-Spindly was no longer localized to microtubule tips but instead was found on kinetochores. In prometaphase cells, GFP-Spindly was found on most kinetochores, a localization confirmed by colocalization with anti-Cid antibodies, which recognize the Drosophila homologue of CENP-A. However, in metaphase cells, the levels of GFP-Spindly were reduced considerably on the kinetochores of aligned chromosomes, and the protein was more evident on the mitotic spindle, especially at spindle poles (Fig. 2 b).
During anaphase, GFP-Spindly was seen once again at high levels on kinetochores, but, after the nuclear envelope reformed in telophase, the protein was excluded from the nucleus. Time-lapse microscopy revealed that high initial levels of GFP-Spindly on misaligned chromosomes decreased as these chromosomes were pulled toward the metaphase plate (Fig. 2 c and Videos 2 and 3, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1). A similar distribution of endogenous Spindly in mitosis was confirmed using an affinity-purified antibody in cells expressing the Drosophila homologue of the kinetochore protein Mis12 (CG18156) fused to GFP (Fig. S1). The transient targeting of Spindly to kinetochores is very similar to what has been reported for the mitotic checkpoint proteins Rod and Mad2 (Chen et al., 1996;Scaerou et al., 1999). This dynamic kinetochore localization together with the data from our mitotic index screen led us to focus our efforts on understanding Spindly's role during mitosis. Spindly is shed from the kinetochore in a dynein-dependent manner and requires Rod to target to the kinetochore Components of the RZZ complex as well as Mad2 accumulate on kinetochores in prometaphase and are shed from metaphase kinetochores by dynein-dependent transport along kinetochore microtubules (Howell et al., 2001;Wojcik et al., 2001). Using faster acquisition live cell imaging, we similarly observed punctae of GFP-Spindly moving processively from metaphase-aligned kinetochores toward the spindle poles (Fig. 3 a and Video 4, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1). Kymograph analysis revealed that GFP-Spindly moved poleward at a mean velocity of ∼12 μm/min (Figs. 3 b and S2), which is similar to rates reported for the dynein-mediated transport of RZZ and Mad2 in Drosophila (Wojcik et al., 2001;Basto et al., 2004). However, not all GFP-Spindly particles moved uniformly; some paused or made short reversals toward the kinetochore before continuing toward the spindle pole (Video 4), which is behavior similar to that described for dynein-dynactin complexes in vitro (Ross et al., 2006). To establish whether dynein is indeed the motor responsible for the poleward transport of Spindly, we examined GFP-Spindly after RNAi-mediated depletion of the cytoplasmic DHC. Under these conditions, high levels of GFP-Spindly accumulated on metaphase-aligned kinetochores (Fig. 3 d and Video 4), which is similar to what has been described for Rod and Mad2 after the disruption of dynein (Wojcik et al., 2001;unpublished data). Immunofluorescence localization of endogenous Spindly confirmed this result (unpublished data). We also no longer observed the poleward transport of GFP-Spindly by time-lapse microscopy. RNAi-mediated depletion of the dynein regulatory proteins Lis1 and p150Glued produced similar results (Video 4 and not depicted, respectively). These results indicate that kinetochore to pole movement of Spindly depends on cytoplasmic dynein and its activators, as is true of other known components of the SAC. We next sought to determine how Spindly is targeted to the kinetochore. It has been previously shown that recruitment of dynein-dynactin to the corona region of the kinetochore depends on the RZZ complex, which, in turn, links through Zwint-1 to the Ndc80 and Mis12 complexes of the kinetochore (Starr et al., 1998;Obuse et al., 2004;Kops et al., 2005a).
The depletion of any of the three RZZ polypeptides destabilizes the whole complex and prevents the recruitment of Mad2 and dynein-dynactin (Scaerou et al., 1999; Buffin et al., 2005). When Rod was depleted by RNAi, GFP-Spindly no longer localized to kinetochores or the spindle poles (Fig. 3 e and Video 4). These results indicate that Spindly is a part of the corona region of the kinetochore and requires the RZZ complex (but not dynein or dynactin, as discussed above) for its kinetochore localization.
[Figure 3. (a) GFP-Spindly punctae can be seen moving from the kinetochore to the centrosome; the seconds elapsed are shown at the bottom (see Video 4, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1). (b) Kymograph analysis of GFP-Spindly particles; a histogram of the rates of 110 GFP-Spindly particles during episodes of continuous motion was produced from four separate spindles. The mean speed was 11.9 ± 6.9 μm/min (±SD). (c-e) GFP-Spindly in live cells imaged by spinning disc confocal microscopy in untreated (c), dynein (DHC) RNAi-treated (d), or Rod RNAi-treated (e) cells: dynein depletion caused Spindly to accumulate at high levels on aligned kinetochores, whereas Rod depletion blocked the recruitment of Spindly to the kinetochore. Bars, 5 μm.]
Spindly-depleted cells arrest in mitosis with high levels of Rod and Mad2 on aligned kinetochores Because Spindly is required for cells to complete mitosis and localizes to kinetochores in a manner similar to known SAC proteins, we decided to investigate the role of Spindly in the kinetochore localization of Rod and Mad2. In prometaphase cells, Rod and Mad2 are more abundant on misaligned than aligned chromosomes and are also observed on the spindle and spindle poles (Fig. 4, a and d) as previously described (Chen et al., 1996;Williams et al., 1996). However, after Spindly RNAi, the levels of Rod and Mad2 were comparable on misaligned and metaphase-aligned kinetochores, which is similar to the outcome of DHC RNAi (Fig. 4, b, c, e, and f). These results indicate that both dynein and Spindly are required for the shedding of Rod and Mad2 from the kinetochore. Consistent with this interpretation, the staining of Rod and Mad2 on the spindle (likely reflecting the population of molecules undergoing transport) was severely reduced after Spindly and DHC RNAi (Fig. 4, b, c, e, and f). The retention of Rod and Mad2 on metaphase-aligned chromosomes explains the high mitotic index and increased number of metaphases seen after Spindly or DHC depletion (Fig. 2 a). The metaphase arrest and retention of Mad2 and Rod on aligned chromosomes seen after Spindly depletion could be the result of defects in dynein-based transport or of alterations in kinetochore-microtubule interactions, which would keep the SAC activated even on seemingly aligned kinetochores. To test the latter possibility, we examined two parameters of the spindle that probe the microtubule-kinetochore interface. First, we measured the distance between paired centromeres (as marked by anti-Cid staining); larger distances reflect higher microtubule-generated tension pulling the two sister chromatids apart. In colchicine-treated cells (no microtubule-generated tension), the distance between paired centromeres was reduced from 0.99 to 0.66 μm. Interestingly, the depletion of Rod and Cdc27 (an APC subunit; Cdc27 was codepleted with Rod to prevent premature anaphase onset) caused a statistically significant (P < 0.0001) decrease in the stretch between centromeres of 35.3 ± 6.4% (from 0.99 to 0.87 μm [±SEM]). However, the depletion of Spindly and DHC only reduced stretch between paired centromeres by 10.3 ± 5.4% and 17.9 ± 6.3% (from 0.99 to 0.95 or 0.93 μm), respectively, and neither distance was statistically different from untreated cells. As another measure of kinetochore function, we determined the time required to align all chromosomes at the metaphase plate using a cell line expressing GFP-tagged histone H2B and mCherry-tagged α-tubulin and automated time-lapse imaging (see Materials and methods). Intriguingly, the Spindly- and DHC-depleted cells both required 50% more time to form a metaphase plate compared with untreated cells (a mean of 18.5 ± 2.3 min vs. 28.1 ± 4.9 and 28.2 ± 3.7 min [±SEM] for Spindly and dynein, respectively), which might be the result of a defect in initial kinetochore microtubule capture (Fig. 4 h and Videos 5-8, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1). Alexander and Rieder (1991) also proposed that kinetochore-associated dynein could play an important role in making lateral attachments between chromosomes and microtubules before the final end-on attachments observed at metaphase, which could explain the delay in chromosome alignment after DHC depletion. Consistent with our results for centromere tension, cells depleted of Rod and Cdc27 took considerably longer (48 ± 16.9 min) to assemble a metaphase plate, which might reflect a requirement for the RZZ complex to incorporate multiple proteins into the outer corona of the kinetochore. In summary, these results suggest that Spindly-depleted cells do not have gross defects in kinetochores or kinetochore-microtubule interactions but rather have kinetochores that resemble those found in cells lacking dynein. Spindly is a kinetochore-specific dynein recruitment factor The similar Spindly and dynein RNAi phenotypes of mitotic arrest, defects in Mad2 and Rod transport, and delays in forming a metaphase plate suggested that Spindly might somehow play a role in dynein function at the kinetochore. Therefore, we next examined whether Spindly affects the kinetochore localization of dynein. To more easily assay dynein localization, microtubules were depolymerized with colchicine, which causes a substantial accumulation of dynein and dynactin on kinetochores (Fig. 5 a). Spindly RNAi resulted in a profound reduction in DHC staining at kinetochores compared with untreated cells (Fig. 5 c). Interfering with dynactin function has also been reported to abolish kinetochore staining of dynein (Vallee et al., 1995;Starr et al., 1998;Dzhindzhev et al., 2005), a finding that we repeated as well (Fig. 5 b). However, dynactin, as assayed by GFP-p150Glued (Fig. 5 e) or with anti-p150Glued antibodies (Fig. S3, a and c; available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1), was still recruited to kinetochores in Spindly-depleted cells (however, Rod RNAi displaces p150Glued from kinetochores; Fig. 5 f). To confirm that Spindly is required for dynein kinetochore localization and not the stability of the protein, immunoblot analysis was performed, which revealed that DHC and p150Glued protein levels were unaltered by Spindly RNAi (Fig. 5 g). Thus, Spindly is required for dynein but not dynactin recruitment to kinetochores. The aforementioned results clearly revealed an important role for Spindly in dynein function at the kinetochore.
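The percentage decreases quoted above do not match the raw distances directly (0.99 to 0.87 μm is only about a 12% drop in distance), but the numbers become consistent if the decrease is expressed relative to the tension-dependent component of the stretch, i.e., the inter-centromere distance in excess of the relaxed, colchicine-treated baseline of 0.66 μm. This is one consistent reading of the figures, an inference rather than a formula stated explicitly in the text:

\[
\text{loss of stretch} = \frac{d_{\text{untreated}} - d_{\text{RNAi}}}{d_{\text{untreated}} - d_{\text{colchicine}}}
\]

On this reading, Rod + Cdc27 depletion gives (0.99 − 0.87)/(0.99 − 0.66) ≈ 36%, close to the reported 35.3 ± 6.4%; Spindly gives (0.99 − 0.95)/0.33 ≈ 12% versus the reported 10.3 ± 5.4%; and DHC gives (0.99 − 0.93)/0.33 ≈ 18% versus the reported 17.9 ± 6.3%.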
We next investigated whether Spindly participates in other dynein-mediated activities. In S2 cells, dynein is known to be important for spindle focusing, specifically in transporting kinetochore fibers along microtubules emanating from the centrosomes. After DHC RNAi, the centrosomes detach and move away from the minus ends of the K fibers (Fig. 5 h; Maiato et al., 2004;Goshima et al., 2005). However, Spindly depletion did not produce the centrosome detachment or spindle focusing defects seen in cells lacking dynein (Fig. 5 h). Additionally, after plating on Con A for 3 h, Spindly-depleted and untreated interphase cells generally cluster their endosomes (marked by GFP-Rab5) toward the cell interior, whereas endosomes in dynein- or dynactin-depleted cells tend to remain spread throughout the cell (Fig. S4, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1; dynactin depletion data not depicted). Collectively, these experiments suggest that Spindly influences dynein function at the kinetochore but not everywhere throughout the cell.
[Figure 4 (caption fragment). (f) Spindly depletion causes the accumulation of Mad2 on aligned chromosomes (blue) and a decrease in Mad2 staining on the spindle. (g) Intercentromere tension, measured as the distance between Cid-stained centromeres, in untreated cells and cells treated with the indicated dsRNAs or 6 μg/ml colchicine (4-h treatment; n ≥ 25 for each condition; error bars represent SEM; *, P < 5 × 10^−5; **, P < 5 × 10^−10). (h) As a second measure of kinetochore function, the time required for untreated and dsRNA-treated cells to form a metaphase spindle after nuclear envelope breakdown (NEB), measured from time-lapse videos (n ≥ 6 cells for each condition). Bars, 5 μm.]
Identification of human Spindly We next sought to identify Spindly homologues in other species. Standard BLAST (Basic Local Alignment and Search Tool) searches identified Spindly homologues in the insects Aedes aegypti and Anopheles gambiae but not in more distant species. MEME (Multiple EM for Motif Elicitation) was then used to identify conserved motifs present in all three insect homologues, and these motifs were used for MAST (Motif Alignment and Search Tool) searches to identify more distant homologues (Bailey and Elkan, 1994;Bailey and Gribskov, 1998). A conserved 32-amino acid motif found in a break between predicted coiled-coil domains in the N terminus of all three insect proteins also was found in the human protein RefSeq NP_060255 (Fig. S5 a, available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1). The overall primary sequence conservation between Drosophila Spindly and human NP_060255 is low (14.3% identity), and the putative human homologue is somewhat shorter (605 vs. 780 amino acids). However, the sequences in the 32-amino acid conserved motif are 56% identical (75% similar), and the first nine amino acids of this motif are 100% identical. The predicted coiled-coil organization and charge distribution of the putative human homologue also is similar to Drosophila Spindly, although the sequences of the coiled coils are not conserved. The function of the putative human homologue of Spindly had not been previously characterized. To test whether NP_060255 is a bona fide functional homologue of Drosophila Spindly, we examined whether depletion of the protein by siRNA caused mitotic defects.
Transfection of a siRNA pool targeted to NP_060255 reduced NP_060255 protein levels by 86% (immunoblot analysis; not depicted) and produced a twofold increase in the mitotic index of HeLa cells after 48 h (Fig. 6 a). When these mitotic cells were examined, a dramatic increase in the ratio of metaphase versus anaphase cells was apparent (Fig. 6 b), and a substantial number of these cells had misaligned chromosomes (Fig. S5 b). A similar phenotype has been reported in HeLa cells after the depletion of either CLIP-170 or dynein, which targets CLIP-170 to the kinetochore (Tanenbaum et al., 2006). We next localized NP_060255 with a polyclonal antibody in HeLa cells treated with colchicine to depolymerize spindle microtubules. Similar to the Drosophila protein, we observed punctae of NP_060255 that were coincident with CENP-A-stained centromeres (Fig. 6 c). This staining was eliminated by treating cells with the siRNA oligonucleotides that target NP_060255 (Fig. 6 d), confirming the localization of this protein at kinetochores. To determine whether NP_060255, like Drosophila Spindly, is required to recruit dynein to the human kinetochore, we localized dynein using an antibody to its intermediate chain (dynein intermediate chain [DIC]) in colchicine-treated siRNA-transfected cells. In control siRNA-treated cells, a subset of the DIC-stained punctae colocalized with CENP-A, a marker of the centromere (Fig. 6 e). However, after siRNA against NP_060255, the colocalization of dynein with CENP-A was substantially reduced (Fig. 6 f). Similar to what was found for Drosophila Spindly, the depletion of NP_060255 also decreased the stretch between paired centromeres from 1.15 to 0.98 μm (29.6 ± 4.5% decrease; P < 0.00005), a result that is in agreement with the previously reported effect of p50 dynamitin microinjection (a dominant-negative inhibitor of dynactin function) on kinetochore stretch (Howell et al., 2001). Collectively, our data show that the protein encoded by NP_060255 localizes to kinetochores and is required for localizing dynein to the kinetochore and for mitotic progression. Thus, we suggest that NP_060255 is a true homologue of Drosophila Spindly and propose to rename NP_060255 as Hs Spindly. These results also indicate that the mechanism for localizing dynein to the kinetochore to silence the SAC is conserved between humans and flies.
[Figure 5 (caption fragment). (d-f) S2 cells stably expressing GFP-p150Glued (a dynactin subunit) were treated with 6 μg/ml colchicine for 4 h, and the localization of the protein was assayed after RNAi treatment. In untreated (d) and Spindly-depleted (e) cells, GFP-p150Glued still bound to the kinetochore, whereas the depletion of Rod (f) prevented the protein from associating with the kinetochore. Images are maximum intensity z projections of 2-μm-thick stacks of images taken of live cells. (g) Immunoblots of lysates from RNAi-treated S2 cells show that Spindly RNAi did not affect dynein (DHC) or dynactin (p150Glued) protein levels. (h) The distance between the minus ends of kinetochore (K) fibers and the centrosome was measured for untreated, Spindly RNAi, and DHC RNAi cells (n ≥ 69 for each condition; error bars represent SEM; **, P < 5 × 10^−10), revealing a defect with dynein but not Spindly depletion. Bars, 5 μm.]
[Figure 6. Identification of a human Spindly homologue that is also required for targeting dynein to the kinetochore. (a) The mitotic index was determined (n = 3 wells per condition and >1,000 cells per well counted; error bars represent SEM) 24 or 48 h after siRNAs targeting the indicated proteins were transfected into HeLa cells. (b-d) The ratio of metaphase to anaphase cells for these treatments is shown (n = 2 experiments; at least 75 cells per condition). NP_060255 was localized using crude antisera in HeLa cells treated with colchicine to enrich for the protein on kinetochores, and NP_060255 colocalizes with the centromere marker CENP-A (c); without colchicine treatment, background spindle staining with the NP_060255 antibody made it difficult to unambiguously visualize kinetochore localization, even on prometaphase chromosomes. To confirm the specificity of kinetochore localization in colchicine-treated cells, NP_060255 was depleted with siRNA oligonucleotides, which eliminated the colocalization with CENP-A (d). (e and f) The dynein intermediate chain (DIC) was localized in control and NP_060255 siRNA-transfected cells that had been treated with 6 μg/ml colchicine for 4 h to depolymerize all microtubules. The insets (magnified images of boxed areas) show that NP_060255 depletion eliminated the colocalization between CENP-A and DIC, demonstrating that NP_060255 is required for bringing dynein to the kinetochore. Bars, 5 μm.]
Discussion Using RNAi screens in Drosophila S2 cells, we have identified Spindly, a previously uncharacterized protein, as an essential factor for docking dynein to the kinetochore. Spindly is recruited to the kinetochore in an RZZ-dependent manner, and there, together with dynactin, Spindly recruits dynein to the outermost region of the kinetochore. The dynein motor complex then transports Spindly along with Mad2 and the RZZ complex to the spindle poles to inactivate the SAC. We also identify a Spindly homologue that plays a similar role in human cells, revealing a conserved dynein kinetochore targeting mechanism in invertebrates and vertebrates. These data provide new insight into the mechanism and importance of recruiting dynein to the kinetochore to inactivate the SAC. We also find that Spindly plays a role in maintaining S2 cell morphology during interphase and localizes to the growing ends of microtubules. Involvement of Spindly in mitotic dynein function The depletion of Spindly creates several mitotic defects that appear to reflect a loss of dynein activity exclusively at the kinetochore. Metaphase arrest is the most evident defect observed after the RNAi-mediated depletion of Spindly in Drosophila or human cells. This metaphase arrest phenotype is most likely explained by the absence of kinetochore-bound dynein in Spindly-depleted cells, and, indeed, our data support the model of Howell et al. (2001), which proposes that kinetochore-bound dynein is required for transporting Mad2 from the kinetochore to inactivate the SAC. Nevertheless, we cannot rule out the possibility that the mitotic delay seen after dynein or Spindly depletion is caused by another kinetochore aberration that keeps the checkpoint activated. However, Spindly-depleted cells ultimately overcome metaphase arrest, as seen in our live cell imaging experiments and by the modest increases in the mitotic indices of Spindly-depleted S2 and HeLa cells (three- to sevenfold and twofold, respectively). The mechanism of slippage from this metaphase arrest is not clear, but it might involve proteins (e.g., p31 comet) that silence the SAC by disrupting the interaction between Mad2 and Cdc20 (Habu et al., 2002;Xia et al., 2004). In addition to mitotic arrest, we observed that chromosomes in Spindly- and dynein-depleted S2 cells required a longer time to align on the metaphase plate. This result may be attributable either to the displacement of CLIP-190 (a microtubule tip-binding protein) from kinetochores after Spindly or dynein depletion (Dzhindzhev et al., 2005; unpublished data) or the loss of dynein-mediated lateral attachments to microtubules in early prometaphase (Alexander and Rieder, 1991). In HeLa cells, we also have noticed a defect in chromosome alignment after Hs Spindly depletion, which also has been observed after the depletion of dynein (perhaps mediated through a loss of kinetochore-bound CLIP-170; Dujardin et al., 1998;Tanenbaum et al., 2006). Thus, the spectrum of mitotic defects observed in Spindly-depleted cells is consistent with a loss of dynein function specifically at the kinetochore. Spindly depletion did not produce any other defects seen after dynein depletion, such as centrosome detachment and spindle defocusing. Dynactin is another protein that is required for recruiting dynein to kinetochores, but it is important for other mitotic and interphase dynein functions. Depletion of the RZZ complex inhibits the kinetochore recruitment of dynein, but this also prevents Mad1 and Mad2 recruitment and reduces kinetochore tension to a greater degree than Spindly or dynein depletion alone. Thus, Spindly depletion appears to be the most specific means identified to date for interfering with dynein function only at the kinetochore. Our findings provide new insight into how dynein localizes to kinetochores. Previous studies have led to a model in which dynactin binds to the RZZ complex and then, either alone or in collaboration with Lis1, recruits dynein to the kinetochore (Vallee et al., 1995;Starr et al., 1998;Tai et al., 2002;Cockell et al., 2004;Dzhindzhev et al., 2005;Siller et al., 2005). Because we find that both dynactin and Spindly are required for dynein localization to kinetochores, we propose an updated model in which Spindly and dynactin target to the kinetochore independently and work together to recruit dynein (Fig. 7). Thus, dynein recruitment to the kinetochore may involve multiple weak interactions. Consistent with the possibility of weak interactions, endogenous dynein, dynactin, and Rod did not coprecipitate with GFP in pull-down experiments, and Spindly did not coenrich with these proteins in sucrose gradient fractions (unpublished data). Lis1 is not included in our dynein localization model, as we found that Lis1 RNAi did not block dynein recruitment to the kinetochore (using our colchicine treatment localization assay; unpublished data), although Lis1 depletion did cause a mitotic delay and substantial increase in GFP-Spindly on aligned kinetochores (Video 4). Thus, we favor a role for Lis1 in dynein activity but not in recruiting dynein to the kinetochore. Spindly's role in regulating interphase cell morphology Spindly's role in the spreading morphology of S2 cells makes it unusual among proteins involved in silencing the SAC (including dynein and dynactin), which did not produce phenotypes in our interphase morphology screen. The Spindly RNAi interphase phenotype of defective actin morphology and the formation of extensive microtubule projections is still not understood. However, a clue may be Spindly's dynamic localization to the growing microtubule plus end.
Other plus end-binding proteins (+TIPs) interact with signaling molecules that regulate cell shape, one example being the binding and recruitment of RhoGEF2 to the microtubule plus end by EB1 (Rogers et al., 2004). Spindly may similarly interact with and carry an actin regulatory molecule to the cortex, but this hypothesis will require identifying proteins that interact with Spindly during interphase. The mechanism of Spindly recruitment to the microtubule plus end also warrants further investigation. This interaction must be regulated by the cell cycle because GFP-Spindly no longer tracks along microtubule tips in prometaphase. Seven consensus CDK1 phosphorylation sites are present in the positively charged C-terminal repeats of Spindly, and phosphorylation of these sites could reverse the charge of these repeats and regulate the transition from microtubule tip binding to kinetochore binding at the onset of mitosis. Spindly, an example of a cargo-specific dynein localization factor Motor proteins must be guided to the correct subcellular site to execute their biological function. To carry out the multitude of transport activities required in eukaryotic cells, metazoans have evolved numerous kinesin motors (25 genes in Drosophila) with distinct domains that dictate their localization and regulation (Vale, 2003). In contrast, a single cytoplasmic DHC performs numerous roles in interphase and mitosis, suggesting that additional regulatory factors guide dynein to specific cargoes (e.g., organelles, mRNAs, and vesicles). The main dynein-associated proteins (the dynactin complex, Lis1, and NudEL) are involved in dynein function at many sites and, thus, do not appear to be cargo specific. Zw10 was initially thought to specifically regulate the recruitment of dynein-dynactin to the kinetochore, but it now also appears to play an essential role in targeting dynein to membrane-bound organelles (Hirose et al., 2004;Varma et al., 2006). Bicaudal D is another multifunctional adaptor molecule that has a role in the dynein-based transport of multiple cargoes such as RNA, vesicles, and nuclei (Swan et al., 1999;Bullock and Ish-Horowicz, 2001;Matanis et al., 2002). Perhaps the most site-specific dynein recruitment factor is the Saccharomyces cerevisiae Num1 protein that binds to the DIC Pac11p to target the motor to the cortex of daughter cells, where it pulls the nucleus into the bud neck (Heil-Chapdelaine et al., 2000;Farkasovsky and Kuntzel, 2001). However, dynein only serves this one function in yeast compared with its plethora of activities in metazoans, and Num1p homologues have yet to be identified in higher eukaryotes.
[Figure 7. A model of Spindly activity. During mitosis, the RZZ complex binds to the outer kinetochore region and recruits Mad2, Spindly, and the dynactin complex. Spindly and dynactin then cooperatively work to recruit dynein, which then transports the whole complex toward the spindle pole and silences SAC signaling on the kinetochore. See Discussion for details.]
By our assays performed to date, Spindly appears to be a highly selective dynein-recruiting factor, and, unlike other dynein cofactors, it does not appear to be involved in the motor's nonkinetochore functions in mitosis (e.g., pole focusing) or in interphase (e.g., endosome transport). However, the mechanism by which Spindly recruits dynein to the kinetochore remains to be elucidated.
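The repeat motif introduced in Fig. 1 a (consensus TPXKPQXKGTPVK, X = any residue) and the CDK1 consensus sites discussed above lend themselves to a simple sequence scan. The sketch below is illustrative only: the repeat pattern comes from the stated consensus, but the minimal CDK1 consensus used here ([S/T]P) is a common simplification and an assumption on our part, not a definition given in the text.

```python
import re

# Illustrative scan for the Spindly repeat motif and minimal CDK1-consensus
# sites. REPEAT encodes the stated consensus TPXKPQXKGTPVK with X as any
# amino acid; CDK1_SITE uses the minimal proline-directed consensus [ST]P,
# which is an assumed simplification (full consensus motifs are longer).

REPEAT = re.compile(r"TP.KPQ.KGTPVK")
CDK1_SITE = re.compile(r"[ST]P")

def scan(seq: str):
    """Return (repeat-motif spans, CDK1-consensus site positions) in seq."""
    repeats = [m.span() for m in REPEAT.finditer(seq)]
    cdk1_sites = [m.start() for m in CDK1_SITE.finditer(seq)]
    return repeats, cdk1_sites

# Hypothetical fragment containing one conforming repeat:
print(scan("AAATPAKPQSKGTPVKAAA"))  # ([(3, 16)], [3, 12])
```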
Our observations that Spindly moves from kinetochores to the spindle poles as discrete punctae strongly suggest that it may incorporate into a large and somewhat stable particle that contains the RZZ complex, Mad1-Mad2, dynein, and likely additional proteins. Therefore, Spindly not only serves to recruit dynein to the kinetochore but also is part of a cargo that dynein transports. Future studies will be needed to better understand the protein composition of these transport particles and the contacts that Spindly makes within them.

Materials and methods

Cell culture, RNAi, and immunofluorescence

Drosophila Schneider cell line (S2) cells (Invitrogen) were cultured, and dsRNA incubation was performed as previously described (Goshima and Vale, 2003; Rogers et al., 2003). The 7,200 gene screens were performed with a previously described library (Echard et al., 2004). After 5 d of dsRNA treatment, cells were plated in glass-bottom 96-well plates (Whatman) coated with Con A (Sigma-Aldrich). Cell shape phenotypes were manually scored and documented on a microscope (Axioplan 200M; Carl Zeiss MicroImaging, Inc.) equipped with a 40× 1.3 NA objective and a cooled CCD camera (Sensicam HQ; The Cooke Corporation) after staining with an anti-tubulin antibody (DM1A, anti-α-tubulin; 1:500; Sigma-Aldrich) and rhodamine phalloidin. For the mitotic index screen, mitotic index was determined by dividing the number of phosphohistone H3-positive nuclei (1:1,000; Upstate Biotechnology) by the total number of nuclei (determined by DAPI staining). These cells were imaged using a 20 or 10× air objective in either an ArraySCAN HCS System (Cellomics Inc.) or an automated microscope (ImageXpressMicro; Molecular Devices). In the follow-up experiments described in this paper, most assays were performed after 7 d of RNAi treatment as previously reported (Goshima and Vale, 2003). At the end of the RNAi treatments, cells were resuspended and seeded on Con A-coated coverglasses or dishes for 2 h before imaging or fixation. For colchicine treatment, cells were allowed to settle for 20 min, the media was removed and replaced with media containing 6 μg/ml colchicine, and imaging or fixation and staining was performed 4 h after treatment began. HeLa cells were maintained as previously described (Griffis et al., 2002). siRNA oligonucleotides were On-TARGETplus SMARTpools (Dharmacon), and transfections were performed using Dharmafect1 (Dharmacon) according to the manufacturer's instructions. Immunofluorescence was performed with affinity-purified rabbit anti-Dm Spindly (1:100), rabbit anti-Hs Spindly serum (1:100), chicken anti-Cid (1:

Live cell imaging of GFP-Spindly and analysis

We cloned Spindly from an S2 cell cDNA pool and found that the sequenced cDNA clone lacks 27 amino acids from the predicted ORF. This ORF was cloned into the pENTR/D-TOPO vector (Invitrogen) and moved into N- or C-terminal Gateway GFP vectors under the control of the metallothionein promoter vector (N- and C-terminal fusions produced the same results). To observe the tip tracking, it was optimal to use cells without inducing GFP-Spindly protein expression with CuSO4. For observation of protein on kinetochores, GFP-Spindly expression was induced by incubating the cells with 20 μM CuSO4 for 18 h. S2 cells stably expressing GFP-tagged proteins were plated in dishes with coverslip bottoms (MatTek) that had been coated with Con A.
Images were collected at 1-20-s intervals at room temperature using a cooled CCD (Orca II ERG; Hamamatsu Photonics) or iCCD (MEGA10; Stanford Photonics) camera attached to a spinning disk confocal scan head (Yokogawa Electric and Solamere Inc.) that was mounted on a microscope (Axiovert 200M; Carl Zeiss MicroImaging, Inc.) outfitted with a 100× 1.45 NA objective. Images were collected using either MetaMorph software (Molecular Devices), QED (Media Cybernetics), or μManager (www.micro-manager.org). For analysis of GFP-Spindly movement from the kinetochore to poles, cells were imaged on the spinning disk confocal microscope with 300-ms exposures taken every second. Image stacks were opened in ImageJ (National Institutes of Health), and spindles were oriented horizontally. A box was drawn that was wide enough to contain all of the kinetochores on one half of the metaphase plate and long enough to contain the proximal spindle pole. A stack of kymographs (each one representing a given one-pixel-thick line within the box) was then generated. These kymograph stacks were then combined into maximum intensity z projections, and particle velocities were determined by measuring the lengths of the lines created by particles moving toward or from the spindle poles (distance traveled) and then dividing that value by the displacement in the y direction (time). To determine statistical significance, datasets were analyzed using the t test.

Antibody production and immunoblotting

A region of the Drosophila Spindly gene corresponding to amino acids 451-780 was cloned into pET28a (Novagen), and protein expression was induced in BL21 DE3 cells (Invitrogen). Full-length Hs Spindly was also cloned into pET28a, and the protein was expressed in BL21 DE3 cells. The expressed proteins were purified and used for injecting rabbits (Covance). Anti-Dm Spindly antibodies were purified on an Affi-Gel 10 column (Bio-Rad Laboratories) containing the immobilized antigen. To isolate protein from S2 and HeLa cells after RNAi treatment, 100 μl Laemmli sample buffer was added per well of cells in a 96-well plate. The sample was then processed for Western blotting as previously described (Rogers et al., 2003). The blot shown in Fig. S1 was pieced together from multiple lanes of a larger gel; the blot was cut between the 100- and 150-kD markers and blotted with the indicated antibodies (rabbit anti-p150glued; 1:500; provided by E. Holzbaur, University of Pennsylvania, Philadelphia, PA). The blot shown in Fig. 5 was cut at the 250-kD marker and blotted with the indicated antibodies (mouse anti-DHC; 1:1,000; provided by T. Hays).

Online supplemental material

Fig. S1 shows that the endogenous Spindly protein also enriches on unattached, unaligned, and anaphase kinetochores. Fig. S2 shows kymograph analysis of GFP-Spindly particles. Fig. S3 shows that Spindly depletion does not alter the targeting of endogenous dynactin to the kinetochore. Fig. S4 shows that Spindly is not required for the dynein-dependent reorganization of endosomes in S2 cells. Fig. S5 shows that the depletion of NP_060255 causes defects in chromosome alignment. Video 1 shows that GFP-Spindly tracks on the plus ends of microtubules in interphase cells. Video 2 shows that GFP-Spindly concentrates on lagging chromosomes and then diminishes after alignment at the metaphase plate. Video 3 shows that GFP-Spindly returns to kinetochores during anaphase, and Video 4 shows that GFP-Spindly traffics from kinetochores to centrosomes in a dynein- and Rod-dependent manner.
Videos 5-8 show that the depletion of Spindly, dynein, or Rod slows the alignment of chromosomes on the metaphase plate. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.200702062/DC1.

We thank T. Hays, R. Karess, E. Holzbaur, G. Karpen, C. Sunkel, K. Vaughan, E. Salmon, and R. Giet for their gift of antibodies. We thank T. Murphy for providing us with the Drosophila Gateway Vector Collection. We are grateful to R. Wollman for use of his MatLab algorithms for automatically identifying and analyzing mitotic cells and to J. Kardon for developing the Rab5 localization assay to monitor interphase dynein activity. We thank E. Quan for assistance with quantitative immunoblotting. We also thank G. Goshima and A. Roll-Mecak for reagents, U. Wiedemann and N. Zhang for excellent technical assistance, and S. Reck-Peterson, K. Slep, J. Kardon, S. Rogers, and G. Goshima for helpful discussions. We thank K. Vaughan for sharing results in advance of publication. E. Griffis is supported by a postdoctoral fellowship from the American Cancer Society.
2017-04-04T22:16:07.678Z
2007-06-18T00:00:00.000
{ "year": 2007, "sha1": "b65e9bd839e6bb6811f6187038cc67ebdf8481e3", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/177/6/1005/1330992/jcb_200702062.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "45631264244979c22f41008fb95613dc51a460af", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
221569059
pes2o/s2orc
v3-fos-license
A Self-Tuning Algorithm for Optimal QoE-Driven Traffic Steering in LTE

Due to the wide diversity of services in mobile networks, cellular operators have changed their focus from Quality of Service (QoS) to Quality of Experience (QoE). To manage this change, Self-Organizing Networks (SON) techniques have been developed to automate network management, with traffic steering as a key use case. Traditionally, traffic steering aims to balance traffic volume or load among adjacent cells. Although more advanced schemes have been devised to balance QoE among cells, these do not guarantee that the overall system QoE is improved. In this work, a novel self-tuning algorithm for parameters in a classical mobility load balancing scheme is proposed to steer traffic among adjacent cells in a Long-Term Evolution (LTE) network driven by QoE criteria. Unlike previous approaches, based on heuristic rules, the proposed algorithm takes a gradient ascent approach to ensure that parameter changes always improve the overall system QoE. For this purpose, the impact of parameter changes on system QoE is estimated with an analytical network performance model that can be adjusted with statistics taken from the real network. The proposed algorithm is tested in a system-level simulator implementing a realistic LTE scenario. Results show that the method outperforms classical load and QoE mobility load balancing schemes.

I. INTRODUCTION

Over the last few years, an exponential growth in the demand of mobile services has been experienced. Due to the introduction of new services and the success of smartphones and tablets, traditional traffic patterns have substantially changed [1]. These changes will be even faster with the deployment of 5G systems, as new terminals and use cases are introduced [2]. To deal with these changes in a cost-effective manner, Self-Organizing Networks (SON) have been developed, consisting of a group of automation techniques for mobile network management [3], [4]. SON techniques are usually classified into three use cases: self-planning, self-healing and self-optimization [5]. Particularly, self-optimization aims to cope with user trends and traffic changes by modifying network settings. Thus, self-optimization ensures that optimal network performance is achieved along the operational stage. Traffic steering is one of the foremost self-optimization use cases [3]. The aim of traffic steering is to alleviate the negative effects of uneven traffic demand distribution by sharing traffic between adjacent cells. To this end, different objectives can be defined, amongst which is to balance some network indicator across the network (e.g., average PRB utilization [6], [7] or call blocking ratio [8]) or maximize some overall network performance figure (e.g., total blocked traffic [9] or a utility function based on individual cell loads [10]). Likewise, cell re-sizing for traffic steering can be achieved by changing physical parameters (e.g., base station transmit power [11] or antenna tilt angle [12], [13]) or logical parameters (e.g., cell reselection offset [14] or HandOver (HO) margin [15], [16]). The latter is often the preferred option, since it does not affect network coverage, it is effective for connected users and can be dynamically adjusted to cope with rapid fluctuations of cellular traffic demand.
While legacy network-centric management procedures were based on system performance indicators, such as average cell throughput or accessibility ratios (a.k.a. Quality of Service, QoS), nowadays operators adopt user-centric approaches focused on user opinion (a.k.a. Quality of Experience, QoE). Strictly, QoE is defined as the overall user satisfaction with a service. It is a subjective measure that depends on how the experience of the service is perceived by the user. QoE is often measured using the Mean Opinion Score (MOS) scale, ranging from 1 (very bad experience) to 5 (excellent experience) [17]. Due to the difficulty of measuring a subjective figure, QoE (or MOS) is computed with utility functions mapping high-level service performance indicators (e.g., packet delay for Voice-over-Internet Protocol or initial playback time for streaming services). In the literature, several QoE-driven self-tuning algorithms for cellular networks have been proposed. For instance, a self-tuning algorithm for adjusting parameters in a dynamic packet scheduler of an LTE base station is proposed in [18] to balance QoE across services by re-prioritizing them based on service performance statistics. Closer to this work, in [19], a traffic sharing algorithm based on mobility load balancing is proposed to equalize the QoE of cells in an LTE network offering services of very different nature. For this purpose, HO margins are tuned on a per-adjacency or per-service basis based on QoE differences collected in the network management system. In [20], a data-driven traffic steering algorithm based on mobility load balancing is proposed for optimizing user experience in multi-tier LTE networks. Traffic steering is achieved by changing Reference Signal Received Quality (RSRQ) inter-frequency HO margins. The algorithm proposed there relies on an indicator showing the impact of individual HOs on user QoE, derived from connection traces. The above-mentioned approaches formulate network tuning as a control problem. Thus, balancing algorithms are designed as controllers that tune network parameters based on heuristic rules, which makes them suitable for steering traffic in real time. However, equalizing QoE in the network does not necessarily lead to the best overall system QoE. On the contrary, it is shown in [19] that, in some cases, the worst cells (in terms of user experience) improve at the expense of degrading the global cell average QoE. This situation is avoided by formulating the tuning problem as an optimization problem, where a search algorithm evaluates the quality of different network settings and selects the one maximizing the overall QoE. Following this approach, sophisticated search algorithms can be used during network planning to find the optimal configuration, provided that a network performance model is available for the QoE metric (e.g., analytical expressions [21] or a simulation tool [19]). However, in the operational stage, this search has to be performed by evaluating candidate configurations in the live network in the absence of a precise QoE model, which might degrade network performance temporarily. For safety reasons, operators prefer to modify parameters in small steps with a heuristic trajectory search method.
To the authors' knowledge, no traffic steering method based on mobility load balancing explicitly considering optimality criteria for QoE has been published in the literature. In this work, a novel self-tuning algorithm is proposed to steer traffic between cells in LTE by changing HO margins in a classical mobility load balancing scheme driven by QoE criteria. Unlike previous approaches, based on heuristic rules, the proposed approach uses a gradient ascent algorithm to ensure that changes in HO margins always improve the overall system QoE. For this purpose, the impact of small parameter changes on system QoE is estimated with an analytical network performance model that can be adjusted with statistics taken from the real network. The proposed analytical approach is tested in a system-level simulator implementing a realistic macrocellular LTE scenario where users demand a file download service (File Transfer Protocol, FTP). The main contributions of this work are: a) a simple QoE-driven analytical optimization algorithm for tuning HO margins in LTE, and b) an analytical performance model to estimate the impact of cell re-sizing on the QoE of services that can be approximated by a full buffer traffic source. The rest of the work is organized as follows. Section II discusses the limitations of a QoE-driven traffic steering scheme. Section III describes the analytical QoE model and optimization algorithm. Section IV presents algorithm assessment. Finally, Section V summarizes the main conclusions.

II. PROBLEM FORMULATION

In mobile networks, the HO process ensures a seamless connection between neighbor cells when the user moves. Specifically, a HO is typically triggered when the following condition is fulfilled for a time period TTT (Time-To-Trigger):

P_rx(j) − P_rx(i) > HOM(i, j),    (1)

where P_rx(j) is the pilot signal level received from neighbor cell j, P_rx(i) is the pilot signal level received from the serving cell i, and HOM(i, j) is the HO margin between cells i and j, defined on a per-adjacency basis (i.e., one value for each pair of cells and direction of the adjacency). In most cases, HO margins are set complementarily in both directions of the adjacency to prevent the ping-pong effect, so that

HOM(i, j) + HOM(j, i) = Hyst,    (2)

where Hyst represents the hysteresis value. HO margins can be adjusted to modify cell service areas for traffic steering. Specifically, a decrease in dB in HOM(i, j) reduces the serving area of cell i while increasing that of cell j, so users located at the border of cell i are handed over to cell j. This cell re-sizing effect affects the QoE of all users in the surrounding area. Handed-over users are re-allocated in a different cell, experiencing a different radio link signal level/quality and a different spectral efficiency. At the same time, user re-allocation changes the traffic per cell, which in turn changes the amount of available radio resources for both handed-over and static users. Both effects might have a strong impact on individual user QoE. The above tuning problem is a large-scale non-separable optimization problem [9]. In this work, the figure of merit is the overall system QoE, defined as

QoE = (1/N_u) Σ_u QoE(u),    (3)

where N_u is the number of users in the system and QoE(u) is the QoE experienced by each user u. The latter is given by the radio link conditions of individual users, the system bandwidth, cell loads and the specific dynamic packet scheduling algorithm in the base station. All these factors make QoE(u) non-linearly related to HO margin settings, which makes the search for the optimal solution more complicated.
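To make the triggering rule (1) and the figure of merit (3) concrete, the following minimal Python sketch evaluates both. Function names and the numeric values are illustrative assumptions, and the TTT persistence requirement is only noted in a comment.

```python
import numpy as np

def ho_triggered(p_rx_serving_dbm, p_rx_neighbor_dbm, hom_db):
    # Condition (1): neighbor pilot exceeds serving pilot by more than the
    # margin. In a real eNB the condition must also hold during the whole
    # Time-To-Trigger (TTT) window before the HO is fired.
    return p_rx_neighbor_dbm - p_rx_serving_dbm > hom_db

def overall_qoe(user_qoe):
    # Figure of merit (3): mean QoE over the N_u users in the system.
    return float(np.mean(user_qoe))

print(ho_triggered(-95.0, -91.0, 3.0))    # True: a 4 dB gap exceeds a 3 dB margin
print(overall_qoe([1.0, 3.5, 5.0, 2.2]))  # 2.925
```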
A first solution is to balance the average cell QoE, QoE_cell(i), defined as

QoE_cell(i) = (1/N_u(i)) Σ_(u ∈ cell i) QoE(u),    (4)

where N_u(i) is the number of users in cell i. Such an approach is hereafter referred to as Experience Balancing on a Cell basis (EB-C) [19]. For instance, if cell i is heavily loaded, users served by cell i are most likely unsatisfied due to a lack of radio resources. At the same time, if cell j is underutilized, users served by cell j are most likely satisfied due to overprovisioning. In this situation, a QoE balancing algorithm decreases HOM(i, j), so that users from cell i are handed over to cell j, leading to a more fairly distributed user satisfaction between both cells. As a result, users in cell i experiencing the worst QoE see their experience improved at the expense of degrading the experience of those in cell j with the highest QoE. Figure 1 presents an example of how QoE is affected by EB-C, showing the cumulative distribution function of the global (i.e., network-wide) QoE distribution per cell before/after equalizing the QoE. It is observed that traffic steering improves the cells with the worst average QoE at the expense of degrading the best cells. Thus, a more balanced QoE distribution is obtained, but the overall system QoE is degraded, which can be inferred from the shift of the median value to the left. By enforcing that all cells have the same QoE, highly loaded cells are prioritized over lowly loaded cells. However, in the example, the QoE increase in the former cells is lower than the QoE decrease in the latter. Alternatively, changes in HO margins can be driven by optimality criteria provided that a network performance model is available. Unfortunately, the large number of factors influencing QoE means that only approximate analytical models can be derived. Nonetheless, should the approximate model be able to find reasonable estimates of the gradient of the objective function, a gradient ascent algorithm can be used to progressively improve the overall figure of merit. Thus, a local maximum of the problem can be achieved. Equally important, the gradient rule ensures that no parameter change degrades system performance if the magnitude of changes is small.

III. OPTIMIZATION ALGORITHM

The proposed QoE-driven optimization algorithm, hereafter referred to as OE (for Optimizing Experience), modifies the HO margin between neighbor cells i and j, HOM(i, j), with the aim of maximizing the overall system QoE. For this purpose, OE follows a gradient ascent approach, where an iterative algorithm changes HOM(i, j) based on estimates of the gradient of the objective function, computed on an adjacency basis with an analytical model. In each iteration (optimization loop), HO margins in all adjacencies are updated as

HOM^(n+1)(i, j) = HOM^(n)(i, j) + ΔHOM^(n)(i, j),    (5)

where superscripts (n) and (n+1) denote the optimization loop index, and ΔHOM^(n)(i, j) is derived from δQoE/δHOM(i, j), the gradient of the objective function in the direction of the decision variable HOM(i, j), quantifying the impact of increasing the HOM in an adjacency on the overall system QoE. The resulting HOM values are bounded in the interval [−7, 13] dB to ensure a minimum signal quality after HO [22]. The lower limit is the minimum signal-to-interference-plus-noise ratio (SINR) needed for the scheduler to assign any radio resource to a connection in most vendors. The upper limit is calculated with (2) to ensure a hysteresis level of Hyst = 6 dB. ΔHOM^(n)(i, j) is the value to be estimated with the analytical model described next.
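A minimal sketch of the loop update (5), assuming margins are stored in a dictionary keyed by adjacency; the clipping bounds are those stated above, while keeping the two directions of an adjacency consistent with the hysteresis constraint (2) is left to the caller.

```python
import numpy as np

HOM_MIN, HOM_MAX = -7.0, 13.0   # dB bounds given in the text

def update_margins(hom, delta_hom):
    # Gradient-ascent update (5): HOM(n+1) = HOM(n) + dHOM(n), where dHOM
    # is the per-adjacency change derived from the estimated QoE gradient.
    return {adj: float(np.clip(v + delta_hom.get(adj, 0.0), HOM_MIN, HOM_MAX))
            for adj, v in hom.items()}

hom = {(0, 1): 3.0, (1, 0): 3.0}
print(update_margins(hom, {(0, 1): -3.0, (1, 0): 3.0}))
# {(0, 1): 0.0, (1, 0): 6.0}
```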
A. ANALYTICAL SYSTEM MODEL FOR OPTIMIZATION

The aim of the optimization algorithm is to maximize the overall system QoE. To this end, an analytical model is developed to compute gradient estimates of the objective function. For simplicity, it is assumed here that all users demand the FTP service, which can be modeled as a full buffer traffic source until the session ends. Yet, it is considered that user experience depends on user context (indoors or outdoors), which can be inferred by analyzing connection traces in practice [23]. Thus, two utility functions are used, depending on user location [24]. For outdoor users, QoE is estimated as

QoE_outdoor(u) = max(1, min(5, 6.5 TH(u) − 0.54)),    (6)

where TH is the average user throughput in Mbps. For indoor users, QoE is estimated as

QoE_indoor(u) = max(1, min(5, 6.5 TH(u)/1.5 − 0.54)).    (7)

In (6)-(7), QoE is limited to the MOS scale (1 to 5). By comparing (6) and (7), it is observed that, in the latter, TH is divided by 1.5, reflecting that indoor users experience worse QoE for the same TH value, since the expectations of indoor users are higher. In the above utility functions, user QoE only depends on TH. Thus, the analytical model must only establish the relationship between HOM and TH changes, which can then be translated into QoE changes. Specifically, the gradient of the objective function is computed on an adjacency basis by aggregating the impact of changes across users in the adjacency as

δQoE/δHOM(i, j) = Σ_u (δQoE(u)/δTH(u)) [ (δTH(u)/δSE(u)) (δSE(u)/δSINR(u)) (δSINR(u)/δHOM(i, j)) + (δTH(u)/δBW(u)) (δBW(u)/δN_su(k)) (δN_su(k)/δA(k)) (δA(k)/δHOM(i, j)) ],    (8)

where k is the cell serving user u (i.e., cell i or j), BW(u) is the average system bandwidth assigned to the user, N_su(k) is the average number of simultaneous active users with user u in the serving cell (excluding inactive periods), A(k) is the service area of cell k, and SE(u) and SINR(u) are the average spectral efficiency and signal quality of user u. The chain rule in (8) reflects that any user throughput change achieved by traffic sharing is due to: a) a change in radio link conditions (experienced, e.g., by a user re-allocated in a new cell), or b) a change in the number of available resources for the user caused by the new number of simultaneous users in the cell (originated, e.g., by the new cell size or the change of serving cell). To increase the robustness of the method, the gradient is approximated by estimating the impact of a large change in the HO margin of the adjacency under study (i.e., 3 dB) on the overall system QoE. Such a large perturbation allows the method to anticipate effects that could not be observed with smaller changes (e.g., 1 dB). Figure 2 shows a flow diagram of the proposed algorithm, whose aim is to estimate the potential impact of traffic sharing on QoE at a connection level, which is then aggregated at a cell level to derive HO margin changes per adjacency. The inputs to the method are: a) user traces, including performance measurements at a connection level, b) cell traces, including instantaneous performance measurements at a cell level, c) signal level statistics, including reference signal measurements from serving and neighbor cells, and d) the Inter-Site Distance (ISD), computed from site coordinates. The output is the change in the HO margin of the adjacency. For clarity, variables directly taken from measurements are depicted with dashed lines, to isolate them from estimated variables, shown with solid lines. Likewise, stages (i.e., boxes in the figure) dealing with cell-level stats are filled in white and stages dealing with connection-level stats are filled in gray. Hereafter, for brevity, k denotes both the source and target cell in the adjacency, i and j.
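The utility mapping (6)-(7) reduces to a few lines of Python. Recall that (7) was reconstructed above from the statement that TH is divided by 1.5 for indoor users, so the indoor branch should be read as an assumption.

```python
def qoe_ftp(th_mbps, indoor=False):
    # MOS utility (6)-(7): linear in throughput, clipped to the MOS scale.
    th_eff = th_mbps / 1.5 if indoor else th_mbps
    return max(1.0, min(5.0, 6.5 * th_eff - 0.54))

print(qoe_ftp(0.5))               # 2.71  (outdoor)
print(qoe_ftp(0.5, indoor=True))  # ~1.63 (same throughput, lower MOS)
```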
All stages in the figure are described next.

1) DEFINITION OF OVERLAPPING AREA

A first step is to estimate the amount of traffic re-allocated by changing the HO margin. To this end, users (connections) in cells i and j are classified into three sets, depending on whether they change serving cell due to traffic sharing. On the one hand, U_i and U_j denote the part of connections in cells i and j that remain served by i and j after traffic sharing. On the other hand, U_ij denotes the part of connections that would be re-allocated by the traffic sharing algorithm. As in [25], U_ij is identified precisely from pilot signal level statistics collected by base stations, as those users u fulfilling that

HOM(i, j) − 3 dB < P_rx(u, j, t) − P_rx(u, i, t) ≤ HOM(i, j),    (9)

where P_rx(u, i, t) is the Reference Signal Received Power (RSRP) received by user u from cell i at time t. By aggregating the time these users are in the overlapping area between adjacent cells, the method computes the average number of simultaneous connections removed by traffic steering in active periods of cell i, ΔN_su,ov^(n)(i). The number of connections removed by traffic steering in active periods of cell j is computed in the same way but interchanging the indices of cells i and j.

2) CELL LOAD ESTIMATION

In this stage, changes in the average loads of cells i and j are estimated. The inputs to this stage are the sets of users, U_x (x ∈ {i, j, ij}), the average cell load and spectral efficiency before changes, L^(n)(k) and SE^(n)(k), and the ISD in the adjacency, ISD(i, j). The main output of this stage is the new cell loads after traffic sharing. If users in the overlapping area are handed over from cell i to cell j, the load of cell i decreases and that of cell j increases. However, at the same time, the interference from cell j received in cell i increases due to the load increase in the former. Thus, the spectral efficiency of cell i might decrease, causing an increase of cell load that might counteract the congestion relief effect of traffic steering. The contrary effect is observed in the spectral efficiency of cell j. To model both effects, load changes are broken down into two components as

ΔL^(n)(k) = ΔL_TS^(n)(k) + ΔL_I^(n)(k),    (10)

where ΔL_TS^(n)(k) is the load change due to traffic steering and ΔL_I^(n)(k) is the load change due to the new interference conditions. Specifically, the load change in source cell i due to traffic steering is calculated as

ΔL_TS^(n)(i) = − Σ_(u_ij ∈ U_ij) L^(n)(u_ij, i),    (11)

where L^(n)(u_ij, i) is the PRB utilization ratio removed from cell i by steering user u_ij to cell j, derived from the total connection time in the overlapping area observed in signal level measurements. Then, the load change in the target cell j due to traffic steering is estimated by rescaling the load removed from the source cell by considering the spectral efficiency in both cells as

ΔL_TS^(n)(j) = −ΔL_TS^(n)(i) SE^(n)(i)/SE^(n)(j).    (12)

Load changes due to interference are estimated depending on the Inter-Site Distance to differentiate between interference-limited and noise-limited scenarios. In the source cell, the interference-related term is estimated as

ΔL_I^(n)(i) = L^(n)(i) (SE^(n)(i)/SE^(n+1)(i) − 1),    (13)

where SE^(n)(i) is the average spectral efficiency of cell i measured at loop n, and SE^(n+1)(i) is the average spectral efficiency of cell i at loop n+1. In the latter, it is assumed that interference is much larger than noise (interference-limited scenario) and all interference received in cell i comes from cell j (single interferer), so that

sinr^(n+1)(u) = sinr^(n)(u) L^(n)(j)/L^(n+1)(j).    (14)

Similarly, the interference-related term in the target cell is estimated as

ΔL_I^(n)(j) = L^(n)(j) (SE^(n)(j)/SE^(n+1)(j) − 1),    (15)

where the SINR scaling is driven by the load of cell i. It should be pointed out that the single-interferer assumption in (13) and (15) is the worst case of interference-limited scenarios, where traffic steering achieves the lowest congestion relief effect.
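The stage-2 bookkeeping can be sketched as below, following the reconstructed expressions (10)-(15); since those equations were partly recovered from the prose, treat the interference terms in particular as illustrative rather than as the authors' exact formulation.

```python
def load_changes(prb_removed_per_user, se_i, se_j, l_i, l_j,
                 se_i_next, se_j_next):
    # Traffic-steering terms (11)-(12): load leaves cell i and re-appears
    # in cell j rescaled by the ratio of average spectral efficiencies.
    dl_ts_i = -sum(prb_removed_per_user)
    dl_ts_j = -dl_ts_i * se_i / se_j
    # Interference terms (13) and (15): if spectral efficiency drops, the
    # same traffic needs proportionally more PRBs.
    dl_i_int = l_i * (se_i / se_i_next - 1.0)
    dl_j_int = l_j * (se_j / se_j_next - 1.0)
    return dl_ts_i + dl_i_int, dl_ts_j + dl_j_int

# two users steered away from cell i, mild SE degradation in both cells
print(load_changes([0.05, 0.08], 1.8, 1.5, 0.7, 0.5, 1.75, 1.45))
```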
Results presented later show that the single-interferer approximation has a negligible impact on method performance.

3) SPECTRAL EFFICIENCY ESTIMATION

The next stage is to estimate changes in spectral efficiency per user. The inputs to this stage are the SINR per user, sinr^(n)(u), the ISD between cells i and j, ISD(i, j), the average cell load before changes, L^(n)(k), and the estimation of cell load changes, ΔL_TS^(n)(k) and ΔL_I^(n)(k). The output is the estimated spectral efficiency for the next loop per user, SE^(n+1)(u). This estimation is carried out on a user basis only if the distance from cell i to cell j is less than 1.25 km. Under these circumstances, it can be assumed that the interference received from adjacent cell j is much greater than noise, so that

sinr^(n)(u) ≈ S(u)/I(u),    (16)

where S(u) and I(u) are the signal power received from the serving cell and the interference received from the adjacent cell, respectively. Otherwise, spectral efficiency remains invariant. Specifically, for users served by cell i, spectral efficiency is estimated as

SE^(n+1)(u_i) = log2(1 + sinr^(n)(u_i) L^(n)(j)/L^(n+1)(j)),    (17)

where the factor L^(n+1)(j)/L^(n)(j) reflects the increase in interference (decrease in signal quality) due to the neighbor cell load increase. Note that the cell load estimation for the next iteration, L^(n+1)(k), can easily be calculated from the estimation of cell load changes as

L^(n+1)(k) = L^(n)(k) + ΔL_TS^(n)(k) + ΔL_I^(n)(k).    (18)

Similarly, for users served by cell j, spectral efficiency is estimated as

SE^(n+1)(u_j) = log2(1 + sinr^(n)(u_j) L^(n)(i)/L^(n+1)(i)).    (19)

Finally, for users in the overlapping area, spectral efficiency is estimated as

SE^(n+1)(u_ij) = log2(1 + sinr_j^(n)(u_ij) L^(n)(i)/L^(n+1)(i)),    (20)

where users in the overlapping area are divided into two different groups: those who were already handed over from cell i to cell j in optimization loop n, for whom sinr_j^(n)(u_ij) is measured directly, and those who were not, for whom it is estimated from signal level statistics.

4) ACTIVE USER ESTIMATION

The fourth stage addresses the number of simultaneous users in the next loop. A preliminary analysis (not presented here) shows that this variable has to be estimated on a per-user basis to obtain reliable estimates of user throughput and QoE. The expected number of simultaneous users in source cell i in the next loop for a user u is calculated from the value in the past loop as

N_su^(n+1)(u_i) = N_su^(n)(u_i) − ΔN_su,ov^(n)(i) + ΔN_su,I^(n)(i),    (21)

where ΔN_su,ov^(n)(i) captures the decrease in the number of active users due to the congestion relief achieved by handing over users, and ΔN_su,I^(n)(i) reflects the increase in the number of active users due to the loss of spectral efficiency from a higher interference. Note that, by definition, the former quantities are measured considering only periods of cell activity (i.e., N_su^(n)(u_i) ≥ 1). For the same reason, ΔN_su,ov^(n)(i) and ΔN_su,I^(n)(i) are measured considering only periods of cell activity in cell i. To estimate ΔN_su,I^(n)(i), an empirical regression analysis is performed to find the expression relating cell load L with the number of active users N_su in a cell, N_su(L). In practice, such an analysis can easily be done with performance counters aggregated at a cell level stored in the network management system. In this work, this analysis is carried out by simulations. The resulting regression curve is the exponential function

N_su(L) = 1.851 e^(1.505 L).    (22)

The sensitivity of the number of active users to increments of cell load is obtained by differentiating the above exponential function, as

δN_su/δL (L) = 1.851 · 1.505 e^(1.505 L).    (23)

The latter is used to quantify the increase in the number of active users due to interference from the increase of cell load due to traffic steering,

ΔN_su,I^(n)(i) = (δN_su/δL)(L^(n)(i) + ΔL_TS^(n)(i)) ΔL_I^(n)(i).    (24)

Note that changes in the number of users in source cell i due to traffic steering (i.e., users in the overlapping area) can be directly taken from network measurements. In contrast, changes in target cell j have to be estimated.
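The regression (22) and its derivative (23) are straightforward to code; the example quantifies how many extra active users a small load increment implies at a given operating point.

```python
import numpy as np

def n_su(load):
    # Empirical regression (22): N_su(L) = 1.851 * exp(1.505 * L)
    return 1.851 * np.exp(1.505 * load)

def dn_su_dl(load):
    # Its derivative (23), mapping load increments to extra active users
    return 1.851 * 1.505 * np.exp(1.505 * load)

# e.g. a 0.05 load increase around L = 0.6 adds about 0.34 active users:
print(dn_su_dl(0.6) * 0.05)
```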
To estimate the changes in target cell j, it must be taken into account that handed-over users might require a different amount of resources in the new cell, because of the new radio link conditions. This can easily be taken into account by multiplying by the ratio of the average cell spectral efficiency in the old and target cells. To account for this effect, the new average number of active users in target cell j is estimated as

N_su^(n+1)(u_j) = N_su^(n)(u_j) + ρ(i) ΔN_su,ov^(n)(i) SE^(n)(i)/SE^(n)(j) + ΔN_su,I^(n)(j),    (25)

following the same structure as (21). ρ(i) is the activity ratio of cell i (measured as the ratio of active Transmission Time Intervals, TTIs). Similarly to (24), ΔN_su,I^(n)(j) is estimated as

ΔN_su,I^(n)(j) = (δN_su/δL)(L^(n)(j) + ΔL_TS^(n)(j)) ΔL_I^(n)(j),    (26)

using the same derivative function as that in (23). Finally, for users in the overlapping area u_ij that already performed a HO from i to j in the previous loop (without traffic steering), the number of simultaneous users in loop n+1 is estimated as a weighted average of the measured number of simultaneous users during the segments of the connections in cells i and j in the previous loop, u_i and u_j, weighted by the time in each cell, as

N_su^(n+1)(u_ij) = [t_i^(n)(u_ij) N_su^(n)(u_i) + t_j^(n)(u_ij) N_su^(n)(u_j)] / t_tot^(n)(u_ij),    (27)

where t_i^(n)(u_ij) and t_j^(n)(u_ij) are the times that user u_ij spent in cells i and j, respectively, and t_tot^(n)(u_ij) is the total time user u_ij is served by both cells.

5) THROUGHPUT AND QOE ESTIMATION

Once the number of simultaneous users with every user in the network, N_su^(n+1)(u), as well as the spectral efficiency, SE^(n+1)(u), have been estimated for the next optimization loop n+1, the output of this stage is the estimated QoE variation per user, ΔQoE^(n+1)(u). For it, user throughput variations are estimated first, and then QoE changes on a user basis. The estimation of user throughput after a HOM change is calculated as usual as

TH^(n+1)(u) = N_PRB SE^(n+1)(u) / N_su^(n+1)(u),    (28)

where N_PRB is the system bandwidth and SE^(n+1)(u) is considered as throughput per PRB. Throughput values are easily translated into QoE with (6)-(7). Then, the change in user QoE due to HOM changes is estimated as

ΔQoE^(n+1)(u) = QoE^(n+1)(u) − QoE^(n)(u).    (29)

6) CELL LEVEL QOE AGGREGATION

The network average QoE variation due to the HOM(i, j) modification is calculated as

ΔQoE^(n+1)(i, j) = (1/(N_u(i) + N_u(j))) Σ_u ΔQoE^(n+1)(u),    (30)

i.e., the aggregation of every individual QoE modification divided by the number of users in cells i and j (which is maintained between iterations n and n+1). The above analysis considers the case when HOM(i, j) is decreased. The opposite case, when HOM(i, j) is increased, can be evaluated by analyzing the opposite side of the adjacency, where HOM(j, i) is decreased to satisfy (2). The whole estimation process explained above must be repeated for a similar decrease of HOM(j, i) by 3 dB, obtaining the estimate ΔQoE^(n+1)(j, i). Therefore, two estimations of average QoE variations are obtained per adjacency, ΔQoE^(n+1)(i, j) and ΔQoE^(n+1)(j, i), corresponding to both HOM movements, i.e., reducing cell i or j service areas, respectively. The OE algorithm discerns which option obtains the highest QoE value (i.e., move traffic from cell i to j, or vice versa). If both movements degrade the overall QoE in the adjacency, no HOM change is made (gradient ascent rule).

7) NON-LINEAR CONTROLLER

Finally, to define the magnitude of HOM changes, the incremental controller shown in Figure 3 is used. It is observed that the controller includes a gain scheduling algorithm modifying the feedback loop gain to control the trade-off between convergence speed and system stability. A coring operation ensures that no changes are implemented when expected QoE benefits are below 0.01. Thus, the control system reaches equilibrium earlier.
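Stages 5 and 6 combine into a handful of lines: throughput per (28), the utility mapping of (6)-(7), and the adjacency-level aggregation of (30). The helpers below assume the per-user inputs have already been produced by the previous stages.

```python
def user_throughput(n_prb, se_user, n_su_user):
    # Estimate (28): N_PRB resources shared among the simultaneous users,
    # each PRB carrying SE(u) of traffic.
    return n_prb * se_user / n_su_user

def adjacency_qoe_change(delta_qoe_per_user, n_users_i, n_users_j):
    # Aggregation (30): average the individual QoE changes over the users
    # of the two cells in the adjacency.
    return sum(delta_qoe_per_user) / (n_users_i + n_users_j)

print(user_throughput(100, 0.02, 4.0))                 # 0.5 (Mbps, if SE is in Mbps/PRB)
print(adjacency_qoe_change([0.4, -0.1, 0.2], 10, 12))  # ~0.023
```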
Beyond that 0.01 threshold, a larger slope is used to favor adjacencies with larger expected QoE benefits. To avoid instabilities, the maximum HOM change per iteration is limited to 3 dB.

IV. PERFORMANCE ANALYSIS

In this section, the proposed optimization algorithm is validated with simulations. For clarity, the simulation tool and analysis methodology are first presented, and results are shown later. Finally, implementation issues are discussed.

A. SIMULATION TOOL

Figure 4 shows the simulated scenario, consisting of 108 macrocells (36 sites with 3 tri-sectorized antennas per site) covering a seamless area of 60 km2 [19]. Table 1 presents its main parameters. The FTP traffic model reflects the download of a file whose size follows a log-normal distribution of average 2 MB. FTP has been chosen as it is representative of a full-buffer service. The simulation tool includes two types of users: indoor and outdoor. Indoor users are static users with higher demands in terms of QoE, even if propagation losses are 15 dB higher than those of the outdoor users. Outdoor users move at 3 km/h following a random straight path. The percentage of indoor users per cell depends on cell location. For this purpose, 36 % of cells are categorized as urban, 46 % as suburban and 18 % as rural, based on the predominant land use in their service areas. Then, it is assumed that urban cells have 70 % of indoor users, suburban cells have 50 % of indoor users and rural cells have 10 % of indoor users. The overall traffic demand is controlled by adjusting the total mean call arrival rate in the scenario so as to generate load (and QoE) congestion problems. Spatial traffic distribution at a cell level follows the same profile as in the live network. With default HOM settings, the average cell load in the network is U = 61 %. Nonetheless, the minimum and maximum cell loads in the scenario are 3.5 % and 100 %, respectively, showing that traffic demand is unevenly distributed.

B. ANALYSIS METHODOLOGY

Four iterative self-tuning methods are compared. The first three are balancing algorithms that aim to equalize some indicator between neighbor cells by adjusting HOM on an adjacency basis, and they are used for comparison purposes. A first method is a classical mobility Load Balancing (LB) algorithm that seeks to solve local congestion problems by equalizing average PRB utilization. A second method is a Throughput Balancing (TB) algorithm, equalizing average user throughput between adjacent cells [27]. A third method is a QoE balancing algorithm (EB-C), conceived to solve user experience problems by equalizing average cell QoE [19]. Parameter tuning in LB, TB and EB-C is carried out by fuzzy logic controllers implementing simple 'IF-THEN' control rules. The fourth method is the proposed QoE-driven analytical optimization algorithm (OE). For all algorithms, 14 optimization loops of 30 minutes of network time are simulated. To assess the methods, the main figure of merit is the overall system QoE, defined in (3). For a more detailed analysis, the average deviation from HOM default settings is also computed per iteration as

HOM_dev^(n) = (1/N_adjs) Σ_(i,j) |HOM^(n)(i, j) − HOM^(0)(i, j)|,    (31)

where HOM^(n)(i, j) and HOM^(0)(i, j) are HOM values for the adjacency (i, j) in optimization loops n and 0 (i.e., the initial state with default settings), respectively, and N_adjs is the number of adjacencies in the scenario. In this work, HOM^(0)(i, j) = 3 dB for all adjacencies.
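The deviation indicator (31) is a mean absolute difference over adjacencies; the imbalance indicators (32)-(34) defined next follow the same pattern at a cell level. A minimal sketch:

```python
import numpy as np

def hom_deviation(hom_now, hom_default):
    # Indicator (31): mean absolute deviation of current HO margins from
    # their defaults, over all adjacencies in the scenario.
    return float(np.mean([abs(hom_now[a] - hom_default[a])
                          for a in hom_default]))

defaults = {(0, 1): 3.0, (1, 0): 3.0}
print(hom_deviation({(0, 1): 6.0, (1, 0): 0.0}, defaults))   # 3.0
```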
Moreover, three indicators are used to check the ability of the methods to balance a particular indicator by checking performance differences between neighbor cells across the network. An average load imbalance indicator U_imb is defined as

U_imb = (1/N_c) Σ_i (1/N_adjs(i)) Σ_(j ∈ A(i)) |U(i) − U(j)|,    (32)

where N_c is the number of cells in the scenario, N_adjs(i) is the number of adjacent cells for cell i and A(i) is the set of adjacent cells for cell i. Similarly, an average cell QoE imbalance indicator is defined as [19]

QoE_cell,imb = (1/N_c) Σ_i (1/N_adjs(i)) Σ_(j ∈ A(i)) |QoE_cell(i) − QoE_cell(j)|.    (33)

Finally, an average cell throughput imbalance indicator is defined as

TH_imb = (1/N_c) Σ_i (1/N_adjs(i)) Σ_(j ∈ A(i)) |TH(i) − TH(j)|.    (34)

C. RESULTS

Figure 5 shows the evolution of the overall system QoE with the four approaches along the 14 optimization loops. Equally important, the overall improvement in user QoE is not achieved at the expense of deteriorating the QoE of the worst users. To confirm this statement, Figure 6 compares the cumulative distribution function of user QoE in the scenario obtained by the methods against that with default settings (initial curve). In the initial situation, a significant share of users (35 %) have the lowest possible QoE (QoE(u) = 1), even if also many of them (40 %) have the highest QoE (QoE(u) = 5). This is due to the large mean call arrival rate and uneven spatial traffic distribution in the scenario. Unexpectedly, it is observed that balancing algorithms designed to equalize performance at a cell level increase the share of bad users. A closer analysis (not presented here) shows that LB and TB try to equalize traffic indicators without considering the service mix. Likewise, EB-C does not improve the average user QoE because, when balancing QoE_cell(i), it improves the worst users at the worst cells at the expense of degrading users experiencing better QoE in their neighbor cells. In contrast, OE reduces the number of completely unsatisfied users from 35 to 30 %, while also increasing the number of fully satisfied users from 41 to 42 %. Most percentiles of the distribution maintain these differences. For instance, the 35th percentile of QoE(u) is 1.22 in the initial configuration with default HOM settings, 1 for LB, EB-C and TB, and 1.65 for OE. Table 2 shows all performance indicators for the initial state and at the end of the optimization process for the tested algorithms (columns LB, TB, EB-C and OE). For clarity, the best algorithm per indicator is highlighted in gray. As expected, U_imb, TH_imb and the overall QoE reach their best performance with the approaches for which they were designed (i.e., LB, TB and OE, respectively). In particular, LB achieves the lowest average cell load imbalance, i.e., U_imb = 7.43 %, and TB achieves the lowest throughput imbalance, TH_imb = 0.22 Mbps. However, surprisingly, the lowest QoE_cell,imb is achieved by TB (QoE_cell,imb = 0.3), and not by EB-C (QoE_cell,imb = 0.53), at the expense of a higher degradation of QoE_cell (2.64 against 2.88 for TB and EB-C, respectively). This unexpected behavior is due to the limits of the QoE utility functions (6)-(7), which reach saturation values (QoE = 1 or 5) at very different TH(u) values (0.237 and 0.853 Mbps, respectively, for outdoor users). Very different TH(i) and TH(j) values can coexist with similar average cell QoE, QoE_cell(i) and QoE_cell(j) (e.g., two cells with TH(i) = 0.05 Mbps and TH(j) = 0.26 Mbps on average would experience QoE_cell(i) = 1 and QoE_cell(j) = 1.1, respectively). Under these circumstances, while there is a minimum QoE difference between them, there is a high cell average throughput difference.
Therefore, TB continues operating by degrading QoE_cell(j), while QoE_cell,imb keeps improving (i.e., 0.3). Thereby, the QoE_cell achieved by EB-C is 0.24 MOS points higher than that of TB. When mixing different services using different utility functions to compute QoE, the relationship between throughput and QoE is not so limited as in this scenario, and both aims (balancing TH(i) and QoE_cell(i)) would follow different optimization trajectories [19]. Figure 7 shows the evolution of the HOM deviation from default values in the four tuning approaches. As seen in the figure, changes introduced by OE are smaller than those caused by the balancing approaches (HOM_dev = 4.8 dB for OE at the end of the optimization process, against 6, 7.3 and 6.4 dB for LB, TB and EB-C, respectively). Thus, OE reaches a better network performance with fewer parameter changes. Less intervention over the network is an important advantage of OE, since operators are usually reluctant to modify network parameters by large amounts. OE efficiency comes directly from its analytical formulation, so that HOM is changed by the exact amount needed to achieve an increase in QoE, stopping when interference to/from neighbor cells becomes too high. In contrast, LB, TB and EB-C keep enlarging underutilized cells, even if this degrades the overall system QoE.

D. IMPLEMENTATION ISSUES

The OE algorithm is executed on a per-adjacency basis. Therefore, its worst-case time complexity is O(N_adjs). For the considered scenario, consisting of 108 cells and 11664 adjacencies, the average execution time of 1 iteration of OE is 4.34 minutes (22 ms per adjacency) on a personal computer with a 3.6-GHz octa-core processor and 24 GB of RAM. Runtime can be decreased by restricting changes to the closest neighbors.

V. CONCLUSION AND FUTURE WORK

In this article, a novel traffic steering algorithm for optimizing the average user QoE in an LTE network by adjusting handover margins has been proposed. The method takes a gradient ascent approach to ensure that parameter changes always improve the overall system QoE. For this purpose, the impact of parameter changes on system QoE is estimated with an analytical network performance model adjusted with statistics from the real network. Method assessment has been carried out in a dynamic system-level LTE simulator implementing a realistic macrocellular scenario considering a file download service. Results have shown that OE manages to increase the average user QoE in the network by 11.45 %, outperforming legacy balancing algorithms. Equally important, OE achieves optimal performance with smaller handover margin modifications (up to 33 % less than other approaches). Future work will extend the proposed analytical approach to consider a multi-service scenario with delay-sensitive applications (e.g., mission critical services).

CAROLINA GIJÓN received the B.S. degree in telecommunication systems engineering from the University of Málaga, Spain, in 2016. She is currently pursuing the Ph.D. degree. Her research interests include self-organizing networks and radio resource management.
2020-08-27T09:14:50.769Z
2020-08-25T00:00:00.000
{ "year": 2020, "sha1": "25453d0adecb4af8e34893de8afd335f610125fc", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09177008.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "47fd9bc9c5ced0b23768f88dab5462a7ceca4b54", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
1531265
pes2o/s2orc
v3-fos-license
Spin echo NMR spectra without J modulation

Spin echoes are used to suppress the effects of chemical shifts and field inhomogeneity, to measure spin–spin relaxation times T2, and to discriminate between signals with different T2s. Echo modulation was responsible for the original discovery of J couplings, but significantly restricts their use. It is well-known that homonuclear J modulation can be quenched by rapid refocusing. In the Carr–Purcell–Meiboom–Gill (CPMG) experiment of Fig. 1a, modulation is suppressed (at the cost of high RF power deposition) if it arises from couplings between spins with chemical shift differences Δν << 1/τ. It has recently been shown 10 that the cumulative effect of pulse imperfections can reduce or even suppress modulations in CPMG experiments at favourable resonance offsets, even for interpulse spacings 2τ much longer than 1/Δν (the "SITCOM" effect, 'stabilization by interconversion within a triad of coherences under multiple refocusing'). It is also known 11,12 that J modulation can be refocused in the special case of a weakly-coupled two-spin system AX, in what Takegoshi et al. call a "perfect echo" 11 (recently rediscovered 13), by inserting a 90° pulse at the midpoint of a double spin echo; and that this can reduce J modulation for other spin systems. 12 A zero/double quantum filtration method for two-spin systems has also been proposed recently, 14 but is less general than the perfect echo. J modulation can in favourable cases also be avoided by using multiplet-selective 180° pulses, but only for one multiplet at a time.

The extra 90° pulse in Fig. 1b exchanges coherence between spins and reverses the apparent sense of J modulation, so that the second half of the double spin echo refocuses the modulation caused by the first. The effect is not in fact restricted to AX spin systems; it extends to arbitrary spin systems provided that τ is short compared with 1/J, suppressing J modulation. Intriguingly, the bracketed cyclic perfect echo component of Fig. 1b has been used previously as a planar mixing sequence for propagating spin waves in linear spin chains, 18 though not for T2 weighting. At present, T2 in coupled spin systems is normally measured using very short interpulse spacings 2τ, which can cause severe sample heating and suppresses the effects of slow chemical exchange processes. It has recently been shown 8 that measurements can be made at significantly longer spacings (approaching 1 ms) provided that these intervals are carefully chosen with respect to resonance offsets. Here we show, for the first time for arbitrary spin systems, that with the sequence of Fig. 1b, interpulse spacings 2τ can be used that are an order of magnitude greater than this, irrespective of offset. T2-weighting is frequently used to suppress interfering signals from high molecular weight, relatively low mobility species, for example in NMR metabolomics 19 and in drug discovery methods such as saturation transfer difference. 20
Long CPMG sequences with high duty cycles are commonly used to attenuate such signals, but cause undesirable sample heating. Fig. 3 shows, for an aqueous sample of beef and yeast extract, that good suppression of broad spectral components can be achieved with very little RF power deposition (here only 25 mW during the echo train). It should be straightforward to adapt such methods for use if strong T2-weighting is required in vivo.

The applications of "perfect echo" pulse sequence elements are by no means confined to the above; in particular, the first section of the sequence, -τ-180°-τ-90°y, may be regarded as a "prefocusing" unit, resulting in J modulation equivalent to a time −2τ, and hence allowing J modulation to be refocused at a time 2τ later. Such a sequence element can for example be used to suppress the troublesome J modulation seen in experiments such as WATERGATE, 21 or as an alternative to a 45° purge pulse in stimulated echo DOSY sequences such as Oneshot. 22

The theoretical background is as follows. Spin echo J modulation in a system of spins-1/2 arises because the 180° pulse has the double effect of rotating the coherence of one ("active") spin, and of exchanging α and β spin states for its coupling partners ("passive" spins). For a weakly coupled two-spin system IS, the effect of a standard Carr-Purcell spin echo experiment in the product operator formalism 23 is

I_y + S_y → (I_y + S_y) cos θ_J − (2I_xS_z + 2I_zS_x) sin θ_J,

where the chemical shifts are refocused and hence have been ignored, and θ_J = 2πJ_IS τ. If a 90° pulse is now applied about the y axis, the effect is to leave the in-phase y terms unchanged but to exchange the I and S antiphase terms, changing their signs so that the net effect of J modulation has been reversed:

(I_y + S_y) cos θ_J + (2I_zS_x + 2I_xS_z) sin θ_J.

If a second echo is now generated, the J modulation is refocused:

I_y + S_y.

Thus for an AX spin system, as is already known, adding a 90° pulse at the midpoint of a double spin echo completely refocuses the J modulation. The effect does, however, rely on equal initial magnetisations for the two coupled spins, as for example when the spin system is initially at equilibrium.

Consider now the effect of J modulation in such an experiment for a general spin system of N weakly coupled spins-1/2 I_1 .. I_N. If the quantity θ_Jij = 2πJ_ij τ << 1 (a much easier condition to fulfill than that for the sequence of Fig. 1a, in which J is replaced by the chemical shift difference), then sin θ_Jij ≃ θ_Jij << 1, so that multiply antiphase terms, which are proportional to higher powers of sin θ_Jij, can be neglected, and the effect of J evolution during the first spin echo reduces to

Σ_i I_iy → Σ_i I_iy − Σ_(i<j) θ_Jij (2I_ixI_jz + 2I_izI_jx).

Applying a 90°y pulse at this point once again exchanges and inverts the antiphase terms, and a second echo refocuses the J modulation:

Σ_i I_iy.

Thus for short τ the additional 90° pulse fully refocuses J modulation for arbitrary networks of weakly coupled spins-1/2. In the case of strong coupling, a different mechanism of modulation suppression comes into play. Because in the strong coupling case τ is short compared to the inverse of the chemical shift difference Δδ_ij as well as to 1/J_ij, the effect of differential precession of the chemical shifts of coupled pairs is small, and as a result the effects of the three terms in the coupling Hamiltonian …

Experimental Section

Fig. 1. [Pulse sequences; caption truncated in source] … Fig. 1b with n = 1; the cyclic analogue n > 1 is here distinguished by the name PROJECT (Periodic Refocusing of J Evolution by Coherence Transfer), as it does not form perfect echoes for systems of more than two spins but does still suppress J modulation.
Fig. 2. 500 MHz 1H T2 measurements on 75 mM clarithromycin in dimethylsulfoxide-d6. a) and b), spectra obtained using the sequences of Fig. 1a and 1b respectively with a delay τ = 8 ms and a total echo time of 4nτ = 128 ms (n = 4). c) and d), results for the two
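As a numerical cross-check of the product-operator algebra above, the short density-matrix simulation below (a sketch; J, τ and all variable names are arbitrary choices) propagates a weakly coupled two-spin system through a plain double echo and through a perfect echo. The plain echo signal is modulated by cos 2θ_J, whereas inserting the 90°y pulse at the midpoint returns the full in-phase magnetization.

```python
import numpy as np
from scipy.linalg import expm

# Single-spin operators (hbar = 1) and their two-spin embeddings
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
sz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
E = np.eye(2, dtype=complex)
Ix, Iy, Iz = np.kron(sx, E), np.kron(sy, E), np.kron(sz, E)
Sx, Sy, Sz = np.kron(E, sx), np.kron(E, sy), np.kron(E, sz)

J, tau = 10.0, 0.02            # hypothetical coupling (Hz) and half-echo delay (s)
H = 2 * np.pi * J * (Iz @ Sz)  # weak-coupling Hamiltonian, shifts already refocused

def rot(rho, op, angle):       # ideal hard pulse
    u = expm(-1j * angle * op)
    return u @ rho @ u.conj().T

def free(rho, t):              # free evolution under the J coupling
    u = expm(-1j * H * t)
    return u @ rho @ u.conj().T

def echo(rho):                 # tau - 180x (both spins) - tau
    return free(rot(free(rho, tau), Ix + Sx, np.pi), tau)

rho0 = Iy + Sy                 # transverse magnetization after excitation

plain = echo(echo(rho0))                             # double spin echo
perfect = echo(rot(echo(rho0), Iy + Sy, np.pi / 2))  # 90y at the midpoint

theta = 2 * np.pi * J * tau
obs = lambda r: np.trace(r @ (Iy + Sy)).real
print(obs(plain), 2 * np.cos(2 * theta))  # modulated: both ~ -1.618
print(obs(perfect))                        # refocused: 2.0
```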
2018-04-03T05:47:34.565Z
2012-01-21T00:00:00.000
{ "year": 2012, "sha1": "12b96ae2aed0bcd16f776e1a9fcc379b470e6627", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/45668/files/article.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "1b90baf50db0eb7b76d16ba1f3f19123150a9788", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
8804423
pes2o/s2orc
v3-fos-license
Human Apolipoprotein A-I-Derived Amyloid: Its Association with Atherosclerosis

Amyloidoses constitute a group of diseases in which soluble proteins aggregate and deposit extracellularly in tissues. Nonhereditary apolipoprotein A-I (apoA-I) amyloid is characterized by deposits of nonvariant protein in atherosclerotic arteries. Despite being common, little is known about the pathogenesis and significance of apoA-I deposition. In this work we investigated by fluorescence and biochemical approaches the impact of a cellular microenvironment associated with chronic inflammation on the folding and pro-amyloidogenic processing of apoA-I. Results showed that mildly acidic pH promotes misfolding, aggregation, and increased binding of apoA-I to extracellular matrix elements, thus favoring protein deposition as amyloid-like complexes. In addition, activated neutrophils and oxidative/proteolytic cleavage of the protein give rise to pro-amyloidogenic products. We conclude that, even though apoA-I is not inherently amyloidogenic, it may produce non-hereditary amyloidosis as a consequence of the pro-inflammatory microenvironment associated with atherogenesis.

Introduction

Apolipoprotein A-I is the major protein constituent of human high density lipoproteins (HDLs), which play a key role in reverse cholesterol transport (RCT), shuttling excess cholesterol (Chol) from the circulation to the liver for catabolism [1]. Even though only ~5% of the total circulating apoA-I is found in lipid-free or lipid-poor forms [2], it is thought that the highly dynamic catabolism of HDL yields this protein conformation, which subsequently acquires lipids, enhancing Chol removal in both physiological [3] and proatherogenic conditions [4]. In addition to its role in lipid homeostasis, apoA-I has been recently shown to exhibit antioxidant and anti-inflammatory properties [5] and to inhibit the aggregation and neurotoxicity of the amyloid-β peptide, the main neurotoxin in Alzheimer's disease [6]. Although the possible association between apolipoproteins and neurodegeneration is unclear, increasing apoA-I concentrations have been reported to correlate with decreasing risk of dementia [7], raising the possibility of a novel role of apoA-I in physiological mechanisms of protection against neurological disorders. About 60% of the secondary structure of apoA-I is organized in amphipathic α-helices, while the N-terminus is composed of β-sheets and unstructured residues [8]. Based on thermodynamic and circular dichroism measurements, it has been proposed that lipid-free apoA-I exhibits a molten globule-like state under physiological conditions [9]. This state guarantees the structural plasticity of the protein, which partially unfolds when lipids are released and refolds when lipids are taken up. The structural disorder required to fulfill protein biological functions represents, however, a potential risk of self-aggregating unfolded states. Thus, amyloidoses constitute a group of diseases characterized by the conversion of a natively folded protein into a misfolded conformation presenting a higher content of β-sheet secondary structure, which aggregates and deposits causing organ damage and serious morbidity [10] [11]. Protein aggregation is characterized by a remarkable polymorphism, in which oligomers, fibers and amorphous aggregates are found as final products [12].
Changes in apoA-I structure induced by oxidation [13] and proteolysis [14][15] have been described to impair, to different extents, its interaction with key proteins involved in RCT and its ability to remove Chol from artery walls [13]. Conceivably, changes in apoA-I structure/stability induced by pathological cellular or extracellular conditions could shift the equilibrium from a folded structure towards a misfolded conformation prone to aggregate in extracellular deposits. Indeed, local deposits of wild-type apoA-I have been detected in the pulmonary vasculature of elderly dogs [16], in knee joint menisci, inducing amyloidosis associated with aging [17], and in the aortic intima of elderly individuals [18]. Although the reason why wild-type apoA-I-derived amyloid is associated with atherosclerotic plaques [19] is not known, this fact strongly suggests the importance of the local environment on the mechanism of protein folding. In the present study, we have investigated the effects of specific environmental conditions mimicking a pro-inflammatory milieu on the tendency of apoA-I to misfold and to self-aggregate into insoluble, amyloid-like complexes. Results demonstrate the impact of such environmental agents on the equilibrium between native and aggregation-prone conformational states of apoA-I, suggesting the importance of chronic inflammation in inducing non-hereditary apoA-I amyloidosis. Methods Cloning, expression and purification of wild-type apoA-I. The cDNA for human apoA-I, kindly donated by Dr A. Jonas (University of Illinois at Urbana-Champaign, IL), was inserted into a pET-30 plasmid (Novagen, Madison, WI). A QuikChange site-directed mutagenesis kit (Stratagene, La Jolla, CA) was used to introduce a modification that created an acid-labile Asp-Pro peptide bond between amino acid residues 2 and 3 of apoA-I, allowing specific chemical cleavage of an N-terminal His-Tag fusion peptide [20]. Protein expression and purification were performed as described [20]. Purity of the final protein preparation (checked by denaturing polyacrylamide gel electrophoresis, SDS-PAGE) was higher than 95%. To confirm that the wild-type form behaved as the native protein, structural and functional tests were performed comparing the recombinant protein with apoA-I purified from plasma of healthy donors (not shown). Both structure and lipid-binding behavior were indistinguishable between the two proteins, and thus we will refer to the wild-type form as "apoA-I" in this manuscript. Protein denaturation and stability. Chemical denaturation was performed by 2-h incubation of 0.1 mg/mL apoA-I (at pH 7.4, 5.0 or 4.0, obtained using McIlvaine's citrate-phosphate buffer) at increasing concentrations of GndHCl at 25 °C. Preliminary experiments showed that 2 h were sufficient to reach equilibrium under these conditions. Measurements were performed on an Olis upgraded SLM4800 spectrofluorometer (ISS Inc, Champaign, IL). The free energy of unfolding in the absence of denaturant (ΔG⁰) was obtained from the shift in spectral center of mass of the fluorescence emission of Trp residues, assuming a two-state process as previously described [21,22]. ApoA-I fluorescence quenching by acrylamide. The effect of pH on secondary structure and exposure of aromatic amino acids to the solvent was determined by the analysis of intrinsic fluorescence quenching by acrylamide. ApoA-I emission spectra were acquired in the presence of increasing concentrations of acrylamide (0–0.4 M).
After correction for buffer effects, the quenching parameters were calculated using a modified Stern-Volmer equation [23]: F₀/ΔF = 1/f_a + 1/(f_a K [Q]), where f_a is the fraction of the initial fluorescence which is accessible to the quencher, K is the Stern-Volmer quenching constant of the accessible fraction and [Q] is the concentration of the quencher. F₀ is the initial fluorescence (contributed by the 4 Trp residues present in apoA-I) and ΔF is the decrease in fluorescence (F₀ − F) after the addition of acrylamide at each concentration. From a linear plot of F₀/ΔF versus 1/[Q] both K and f_a can be obtained. Binding of bis-ANS. ApoA-I (0.1 mg/mL) was incubated at different pH values for 24 h at 37 °C. Bis-ANS was then added at a 2:1 molar ratio (probe:protein), and fluorescence emission was measured on a Beckman DTX 880 Microplate Reader, using excitation and emission filters centered at 395 nm and 490 nm, respectively. Fluorescence correlation spectroscopy (FCS). FCS measures fluctuations in fluorescence produced when a small number of fluorescent molecules move through an illuminated volume. The fluctuation can be characterized by the autocorrelation function, from which the diffusion coefficient (D_coef) and the amplitude of the fluctuation G(0) can be readily obtained [24]. For a single species, G(0) is related to the number of particles that move through the illuminated volume by G(0) ≈ γ/N̄, where N̄ is the average number of molecules inside the excitation volume and γ is a geometric factor determined by the shape of the point spread function, the mathematical model used and the width parameters [25]. FCS was measured on a two-photon fluorescence microscope at the Laboratory for Fluorescence Dynamics (University of California at Irvine, Irvine, CA), as previously described [26]. Experimental autocorrelation functions were fit using a 3D-Gaussian intensity profile model [26]. Fluorescein was used to calibrate the beam waist of the excitation profile function, considering a D_coef of 300 μm²/s [27]. Detection of amyloid-like aggregates. ApoA-I (0.2 mg/mL) was incubated at different pH values. After 24 h at 37 °C, thioflavin T (ThT) was added at a 1:1 molar ratio and fluorescence intensities were measured on a Microplate Reader, using excitation and emission filters centered at 430 nm and 480 nm, respectively. Plates were then centrifuged at 800 × g at 25 °C for 30 min, and fluorescence in the supernatant was measured under the same conditions. A set of samples treated under exactly the same conditions was used to quantify total protein remaining in solution after centrifugation using a Qubit Quantitation Platform (Invitrogen, Carlsbad, CA). Sedimented protein was expressed as a percentage of the initial amount loaded in each well (20 μg). Binding to heparin. Binding of apoA-I to heparin was followed by light scattering. ApoA-I (0.05 mg/mL) was incubated at different pH values in the absence or presence of heparin (molar ratio 2:1 heparin:protein) for 2 h. Light scattering was monitored at 90° in an Olis upgraded SLM4800 spectrophotofluorometer, with incident light set at 400 nm [28,29]. In parallel experiments, formation of amyloid-like complexes was followed by incubation of apoA-I (0.2 mg/mL) with heparin for 24 h at 37 °C followed by measurement of ThT fluorescence as described above. Integration time was set in each experiment separately. The influence of high salt concentrations on complex formation with heparin was investigated by carrying out incubations in the presence of 500 mM NaCl.
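As a concrete illustration of the quenching analysis described above, the minimal sketch below fits the modified (Lehrer) Stern-Volmer relation to a titration; the fluorescence readings here are invented for illustration (generated with f_a = 0.7, K = 6 M⁻¹), not data from the paper.

```python
import numpy as np

# Illustrative acrylamide concentrations (M) and fluorescence readings.
Q  = np.array([0.05, 0.10, 0.20, 0.30, 0.40])   # [Q], within the 0-0.4 M range used
F0 = 1.0                                        # initial fluorescence (a.u.)
F  = np.array([0.84, 0.74, 0.62, 0.55, 0.51])   # remaining fluorescence at each [Q]
dF = F0 - F                                     # decrease in fluorescence

# Modified Stern-Volmer: F0/dF = 1/fa + 1/(fa*K*[Q]) is linear in 1/[Q].
x = 1.0 / Q
y = F0 / dF
slope, intercept = np.polyfit(x, y, 1)

fa = 1.0 / intercept          # fraction of fluorescence accessible to the quencher
K  = intercept / slope        # quenching constant of the accessible fraction, M^-1
print(f"fa = {fa:.2f}, K = {K:.1f} M^-1")   # recovers ~0.7 and ~6 M^-1
```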
Pro-inflammatory processing of apoA-I Processing induced by activated neutrophils. Human polymorphonuclear neutrophils (PMNs) were isolated from venous blood of healthy volunteers using a standard method of dextran sedimentation prior to centrifugation in a Ficoll Hypaque gradient and hypotonic lysis of erythrocytes. Purified neutrophils contained >98% viable cells, as determined by trypan blue exclusion. After isolation, PMNs (1×10⁵ cells in 500 μL) were resuspended in Hanks' balanced salt solution (HBSS), pH 7.4, containing 1 mM calcium chloride, 0.5 mM magnesium chloride and 1 mg/mL glucose. ApoA-I (0.2 mg/mL) was added and, after 5 min at 37 °C, cells were stimulated with 12-O-tetradecanoylphorbol-13-acetate (TPA) (200 nM), followed by 45 min incubation. Activation was verified by detecting the conversion of nitroblue tetrazolium to formazan due to the neutrophils' oxidative burst [30]. The reaction was stopped by spinning the cells at 1,000 × g for 5 min. ApoA-I in the supernatant was then loaded onto a 16% SDS-PAGE gel and developed by western blotting using a polyclonal antibody against apoA-I [31]. An aliquot of apoA-I incubated with PMNs under identical conditions was used to analyze ThT binding. Chlorination of apoA-I in a cell-free system. ApoA-I was dissolved in HBSS, pH 7.4, to obtain a final concentration of 0.2 mg/mL, and increasing molar ratios of hypochlorous acid (HClO) were added while vortexing. To ensure that all the HClO had reacted, solutions were left for 1 h at room temperature. The concentration of HClO was determined by measuring the absorbance at 292 nm (ε = 350 M⁻¹ cm⁻¹) at pH 9.0. Reaction products were analyzed as described above for PMN assays. Proteolysis with metalloproteinase-12 (MMP-12). ApoA-I was incubated with MMP-12 (at a 1:3,000 enzyme:apoA-I molar ratio) at 37 °C for 3 h. An aliquot of the reaction mixture was analyzed by SDS-PAGE as described above for PMN assays. In another aliquot, MMP-12 was inhibited by addition of EDTA (final concentration 5 mM), followed by 24 h incubation at 37 °C to determine ThT binding as described above. Chol removal from Chinese Hamster Ovary (CHO) cells. To check the influence of the oxidative processing on protein function, we incubated apoA-I with HClO for 1 h as described above, and next analyzed the efficiency of the modified protein to solubilize Chol from CHO cells. Cells were grown until confluence, and Chol removal was determined as described by Jaureguiberry et al [31]. Efflux was quantified as the percent of the total Chol removed after 12 h incubation with 12 μg/mL of HClO-treated apoA-I in comparison with untreated protein. Other analytical methods. Protein content was quantified by optical density on a Helios β spectrophotometer (Thermo Scientific, Waltham, MA), using an extinction coefficient of 1.13 mL/mg at 280 nm. Transmission electron microscopy was carried out on a JEOL-1200 EX microscope operating at 100 kV. After different incubation periods, samples were centrifuged at 800 × g and the pellet applied onto Formvar-coated grids for 5 min and negatively stained with uranyl acetate (2% solution). For long incubations of apoA-I, an antibiotic-antimycotic mixture (Invitrogen, Carlsbad, CA) was added to the solution. For Atomic Force Microscopy (AFM) analysis, protein under the different treatments was incubated at 0.6 mg/mL for 24 h at 37 °C, and spotted stepwise on freshly cleaved muscovite mica. Afterwards, the residual sample was blotted off with pure water to remove salts, and dried under N₂.
In the case of the HClO treatment and the control at pH 7.4, a final concentration of 1 mM CaCl₂ was added right before spotting the sample in order to favor protein adhesion to the substrate. All images were obtained in ambient conditions using a Multimode Nanoscope V (Veeco, Santa Barbara, CA) operating in tapping mode with an etched silicon probe (Arrow NCR-50, NanoWorld; cantilever resonance frequency: 258 kHz; force constant: 42 N/m; tip radius: 5–10 nm). Typical scan rates were 1–1.5 Hz. Unless otherwise stated, experiments are representative of three independent measurements. Results are means ± S.E. of at least 3 samples. Statistically significant differences between experimental conditions were evaluated by ANOVA followed by Tukey's test (p < 0.05). pH effect on the folding and stability of apoA-I The effect of pH on the structure and stability of apoA-I was investigated in equilibrium unfolding experiments by following the intrinsic fluorescence emission of the protein in the presence of increasing concentrations of GndHCl. The emission spectrum of native apoA-I corresponds to the average signal from four naturally occurring Trp residues (residues 8, 50, 72 and 108). Shifts in the spectral center of mass of the fluorescence emission indicate the average polarity of the environments surrounding the Trp residues in a protein. As previously reported [32], equilibrium at each GndHCl concentration was achieved within a few minutes and the protein was fully unfolded in the presence of 2.0 M GndHCl at pH 7.4 (Figure 1). The equilibrium unfolding transition of apoA-I at pH 7.4 was clearly cooperative and well-defined by a two-state model. The calculated free energy of unfolding was 2.3 kcal/mol (Table 1), which suggests that native apoA-I exhibits a flexible structure likely resembling a molten-globule state [9]. No changes in the denaturation profile were observed at pH 5.0 (Fig. 1, gray symbols) compared to 7.4 (empty circles). Instead, at pH 4.0 (Fig. 1, closed symbols) the unfolding transition appeared to be considerably less cooperative. This pattern is no longer defined by a two-state model and thus ΔG⁰ cannot be estimated. The [GndHCl]₁/₂ was displaced to higher concentrations of guanidine and the shifts in Trp exposure to the aqueous medium extended up to 3 M GndHCl. This behavior suggests that at pH 4.0 denaturation of apoA-I proceeds via partially folded intermediate states rather than as a two-state transition. The relative exposure of Trp residues to the aqueous solvent is indicative of protein conformation. To investigate in more detail the solvent accessibility of Trp residues at different pH values, we performed quenching of the intrinsic fluorescence emission by acrylamide. Interaction with acrylamide of Trp residues that are exposed to the aqueous medium results in nonradiative relaxation of the excited state, detected as a decrease in fluorescence intensity [23]. Quenching parameters at the three studied pH values are summarized in Table 1. The quenching constant (K) measured at pH 7.4 (5.62 M⁻¹) is in good agreement with a previous report for apoA-I isolated from plasma [33], and with Davidson et al. [34], with the difference that their construct included an additional Trp residue in a pro-peptide sequence (position −3). Consistent with the results of the equilibrium unfolding experiments described above, a similar quenching constant was determined at pH 5.0 (5.87 M⁻¹), indicating similar exposure of the Trp residues of apoA-I to the medium.
In contrast, at pH 4.0 a higher quenching constant was determined (7.39 M⁻¹), suggesting increased exposure of Trp residues to the aqueous medium. By calculating the fraction of fluorescence accessible to the solvent (f_a) (see "Methods"), and considering that apoA-I contains 4 Trp residues, it was possible to estimate that ~3 Trp residues (f_a ≈ 0.7) were accessible to the quencher in the native state of the protein (pH 7.4 or 5.0), and that all 4 Trp residues became exposed at pH 4.0 (f_a ≈ 1). To further examine the influence of pH on apoA-I conformation, we analyzed the binding of the fluorescent probe bis-ANS to the protein. This probe has been widely used to detect surface hydrophobicity of proteins [21,22], as its fluorescence quantum yield increases markedly upon binding to organized hydrophobic patches at protein surfaces. Figure 2 shows the fluorescence intensity of bis-ANS added to apoA-I previously incubated (for 24 h at 37 °C) at different pH values. Binding (detected as an increase of fluorescence) was similar at pH 7.4, 6.0 and 5.0. At pH 4.0, however, the fluorescence intensity was ~60% lower than that at the other pH values, indicating loss of binding sites for bis-ANS on the apoA-I surface. Fluorescence correlation spectroscopy (FCS) was employed to determine the hydrodynamic properties of apoA-I in a diluted solution at different pH. Measurements were performed at 37 °C at the two extremes of pH studied (7.4 and 4.0) using apoA-I labeled with Alexa 488. This probe was chosen because its fluorescence emission is not dependent on pH. The theoretical diffusion coefficient (D_coef) of a molecule can be calculated using the Stokes-Einstein equation [25]; for apoA-I, with a molecular weight of 28 kDa, this value can be estimated at ~100 μm²/s. From FCS data, experimentally measured D_coef values were 119 ± 7 and 98 ± 9 μm²/s at pH 7.4 and 4.0, respectively (different with p < 0.05). Identical protein concentrations (0.01 mg/mL) were used and the same average number of molecules in the illumination volume was recovered from the G(0) value at both pH values (data not shown). The decrease (by ~18%) in D_coef at pH 4.0 compared to the value obtained at pH 7.4 could reflect either the existence of dimers in solution (a theoretical 20% decrease in D_coef would be expected in this case) or the slower diffusion rate of a monomeric protein in an unfolded configuration. Although it is difficult to discriminate between these two possibilities using only D_coef as a parameter, the fact that the same G(0) value (number of particles in the sampled volume) was obtained at both pH values is consistent with the interpretation that apoA-I remained monomeric but in a more disorganized (less compact) conformation at pH 4.0. This conclusion is also consistent with the above-described results on equilibrium unfolding, fluorescence quenching and bis-ANS binding at different pH values. Interestingly, incubating the samples for longer periods (~12 h) resulted in the appearance of large aggregates in the sample at pH 4.0, which were not present at pH 7.4. In addition, some fluorescence was detected when focusing the laser at the bottom of the microscope plate, indicating that acidic pH induced partial sedimentation of the protein during prolonged incubation times. Taken together, these results suggest that, at physiological pH, apoA-I exhibits a flexible conformation that is largely preserved under mildly acidic conditions (pH 5.0).
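To make the hydrodynamic reasoning above explicit, the minimal sketch below inverts the Stokes-Einstein relation to obtain an apparent hydrodynamic radius from the measured diffusion coefficients, and reproduces the expected ~20% slow-down for a rigid dimer. The viscosity value is a textbook assumption (buffer treated as water at 37 °C), not a number from the paper.

```python
import numpy as np

kB  = 1.380649e-23   # Boltzmann constant, J/K
T   = 310.15         # 37 C in kelvin (temperature of the FCS measurements)
eta = 6.9e-4         # viscosity of water at 37 C, Pa*s (assumption)

def hydrodynamic_radius(D):
    """Apparent Rh (m) from Stokes-Einstein; D in m^2/s."""
    return kB * T / (6.0 * np.pi * eta * D)

for label, D in [("pH 7.4", 119e-12), ("pH 4.0", 98e-12)]:
    print(f"{label}: Rh ~ {hydrodynamic_radius(D)*1e9:.1f} nm")

# A rigid dimer of twice the monomer volume has Rh' = 2^(1/3) * Rh, so its
# diffusion coefficient falls to 2^(-1/3) ~ 0.79 of the monomer value (~20%
# slower) -- which is why an ~18% drop in D alone cannot distinguish
# dimerisation from expansion of a monomer.
print(f"expected dimer/monomer D ratio: {2**(-1/3):.2f}")
```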
At pH 4.0 and at low protein concentrations, apoA-I remains monomeric, but it shows typical characteristics of partially folded states, including exposure of Trp residues to the solvent and loss of hydrophobic surface patches and folding cooperativity. The fact that FCS results indicated protein aggregation at longer incubation times suggested that the partially unfolded state of apoA-I induced by acidic pH exhibited an increased propensity to undergo self-association, driving us to further investigate the nature of the aggregates formed. Thioflavin-T (ThT) binding and ultrastructural analysis of apoA-I aggregates Misfolded proteins often give rise to the formation of different types of protein aggregates, including amyloid fibrils and nonfibrillar species such as soluble oligomers, which have been increasingly implicated in a number of important human diseases [12,35]. To determine whether acidic pH induced amyloid aggregation of misfolded apoA-I, we measured ThT binding to the protein following incubation at different pH values. The fluorescence quantum yield of ThT is very low in aqueous solution, and it increases significantly upon binding of the probe to amyloid aggregates [36]. Although ThT binding usually increases proportionally to the yield of more organized aggregates, significant fluorescence is detected even when proteins are present as oligomeric states [37,38]. ApoA-I was incubated at different pH values for 24 h and ThT fluorescence was measured before and after low-speed centrifugation of the samples (Fig. 3). As expected, ThT fluorescence was very low (similar to the fluorescence of the same amount of ThT in the absence of protein, not shown) at both pH 7.4 and 6.0, conditions in which the protein remained native and soluble after centrifugation. ThT fluorescence increased significantly at pH 5.0, and ~15% of the protein in the samples sedimented upon centrifugation, revealing the presence of insoluble aggregates. At pH 4.0, ThT binding was also high and even more protein (30–40%) sedimented after centrifugation, indicating that acidification of the medium promotes aggregation of apoA-I to form high-molecular-weight ThT-binding aggregates. ThT fluorescence in the supernatant was negligible after centrifugation at all pH values tested. We next characterized the morphology of apoA-I aggregates by transmission electron microscopy, after incubating the protein at 0.4 mg/mL and 37 °C for 24 h at pH 5.0 and 7.4. As expected, a homogeneous pattern was observed at pH 7.4 after different incubation times, indicating the absence of aggregates (Fig. 4A). Instead, the most conspicuous structures observed at pH 5.0 were small oligomers ranging from 10 to 50 nm in diameter (Fig. 4B), similar in size and structure to oligomers reported for other amyloid peptides [12,39–42]. Interestingly, these aggregated species were present at different incubation times, and organized fibers of typical amyloid morphology were not detected even after 48 days of incubation either at pH 5.0 (Fig. 4C) or at pH 4.0 (not shown). In order to extend the morphological characterization, we performed AFM analysis of the sample at higher concentrations, overloading the mica with successive applications of the sample. In this condition, the predominant pattern is a background composed of closely packed protein oligomers, with average heights ranging between 5 and 10 nm (Fig. 4D). In addition, some long, unstructured protofibers appeared (Fig. 4E).
A control with protein loaded under the same conditions at pH 7.4 showed a small amount of oligomers scattered on bare mica (not shown). Altogether, the results indicate that acidic pH alters the delicate equilibrium between the native and self-associating protein structures, inducing the formation of insoluble aggregates. Interestingly, an aggregation-prone conformation of apoA-I is detected under conditions where only mild structural changes are observed (pH 5.0). Aggregation induced by heparin binding There is abundant evidence that glycosaminoglycans (GAGs) stimulate the formation of amyloid aggregates from different proteins [43,44]. Binding of heparin and other GAGs to proteins has been shown to depend on protein conformation and on pH [28]. Although it was previously described that apoA-I does not interact with heparin at neutral pH [45], we tested the possibility that binding could be modulated by changes in extracellular pH. Interaction of apoA-I with heparin was analyzed by right-angle light scattering at low protein concentrations. Light scattering is proportional to the size of the molecules in solution, and thus it has been a widely used tool to estimate the formation of high-molecular-weight complexes in dilute solution [28]. ApoA-I (0.05 mg/mL) was incubated at different pH values with or without heparin (molar ratio 2:1 heparin:protein) for 2 h, and the scattered intensities at 400 nm are shown in Figure 5A. In the absence of heparin, light scattering was low at all pH values investigated, indicating that incubation under those conditions had no significant effect on the aggregation of apoA-I. In contrast, a different behavior was observed in the presence of heparin. At both pH 7.4 and 6.0, light scattering was also low, in agreement with the reported absence of binding sites for heparin on apoA-I at neutral pH. Acidification of the medium to pH 5.0 and, especially, to pH 4.0 caused marked increases in light scattering, indicating that heparin binds to and promotes aggregation of apoA-I at acidic pH. Interestingly, incubation of apoA-I in the presence of heparin at pH 5.0 for 24 h resulted in a marked increase of ThT fluorescence (Fig. 5B), indicating an amyloid-like structure of the formed aggregates. Addition of 0.5 M NaCl completely blocked this effect, suggesting that the interaction between apoA-I and heparin at acidic pH is mediated by salt bridges. Influence of oxidative and proteolytic modification of apoA-I on protein folding and function The effect of a pro-inflammatory environment on the structure and function of apoA-I was mimicked by incubating the protein with TPA-stimulated neutrophils, as the activation of these cells is known to trigger complex pathways including oxidative and proteolytic reactions. First, we set out to establish the consequences of such events on protein structure and misfolding. Incubation of apoA-I with activated PMNs resulted in partial protein degradation, determined by a decrease of intensity of the 28 kDa band corresponding to the intact protein and the appearance of a fragment of ~22 kDa in SDS-PAGE (Fig. 6A). In some experiments, degradation was considerably more drastic, with complete disappearance of the intact apoA-I band and appearance of high-molecular-weight cross-linked products (not shown). Interestingly, partially degraded apoA-I bound significantly more ThT than the intact protein (Fig. 6B), suggesting that PMN-mediated processing of apoA-I gives rise to a pro-amyloidogenic conformation.
Proteolytic degradation has been previously observed upon incubation of apoA-I with macrophages, yielding protein fragments with sizes of 26, 22, 14 and 9 kDa, corresponding to both the N- and C-termini of the protein [14]. As metalloproteinases are known to be highly activated in leukocytes, we checked the effect that metalloproteinase-12 (MMP-12, present in atherosclerotic lesions [46]) could exert on the pro-amyloidogenic processing of apoA-I. Figure 6C shows, as expected, that apoA-I is to some extent a substrate of this enzyme, detected as a slight decrease in the intensity of the band associated with the original molecular size, together with the appearance of fragments of lower molecular weight; nevertheless, the product does not seem to be amyloidogenic, as the binding to ThT does not change significantly (Fig. 6D). In addition to proteolysis, different oxidative species are involved in the respiratory burst of activated neutrophils [47]. One of the characteristic responses is the formation of the powerful oxidant and microbicidal agent HClO, in a reaction catalyzed by myeloperoxidase [48]. To test this effect in vitro, apoA-I was incubated with increasing concentrations of HClO and protein integrity was analyzed by SDS-PAGE. Incubation of apoA-I with HClO has been previously tested, and fragmentation of the protein appeared to occur in a random process, as distinct low-molecular-mass complexes were not detected in that case [49]. Also, in agreement with that report, apoA-I degradation occurred as a function of increasing concentrations of HClO (Fig. 7A, lower panel). To analyze the pro-amyloidogenic processing, we tested the ThT fluorescence associated with these products. Interestingly, ThT binding did not correlate linearly with HClO-induced degradation of apoA-I. Instead, ThT fluorescence was maximal at an intermediate HClO:protein molar ratio, but it decreased at higher oxidant concentrations (Fig. 7A, upper panel). Again we characterized these products by microscopy techniques. Samples incubated with 100 μM HClO at low protein concentrations were observed by electron microscopy as amorphous aggregates (Fig. 8A). When incubated at higher concentrations (0.6 mg/mL) and overloaded on the mica, AFM images showed that, in addition to the predominant aggregates, long and short thin protofibers could be detected in small yield (Fig. 8B and C), suggesting that under more drastic conditions the oligomers could give rise to more organized structures. This evidence suggests that partial oxidative modification of apoA-I gives rise to amyloidogenic products, whereas further modification by HClO likely results in a more severely unfolded protein conformation that is no longer amyloidogenic. Formation of cross-linked products was also sometimes observed at higher HClO concentrations (Fig. 7B). Next, we attempted to characterize the influence of partial oxidation events on the ability of apoA-I to remove Chol from CHO cells. The results showed a significant decrease in the protein's ability to solubilize Chol (Fig. 9), indicating that partial structural modification impairs its ability to participate in cellular Chol homeostasis. Discussion Mutant forms of apoA-I have been involved in late-onset familial amyloidosis [50]. Such mutations are rare and usually associated with systemic deposition of amyloid in tissues, the major clinical features being related to renal, hepatic, and cardiac dysfunction.
The possibility that wild-type apoA-I could also be amyloidogenic has been previously raised, as it was localized in senile plaques or associated with atherosclerosis lesions [18,19]. However, very little is known about the events that might trigger amyloid aggregation of wild-type or mutant forms of apoA-I. Following its synthesis, apoA-I circulates in plasma and lymph associated with human HDLs and is normally eliminated by filtration in the kidney after 4–6 days. Nevertheless, the fact that diffuse deposits of wild-type apoA-I are often found associated with age-related and atherosclerotic lesions indicates that, under specific conditions, the protein could lose its structure and aggregate. Thus, both reversible agents (local pH, molecular crowding, interaction with ligands or other biomolecules, etc.) and permanent chemical modifications should be taken into account to investigate the factors responsible for aggregation of apoA-I and their possible relationship with amyloidosis. Atherosclerosis represents a pathological process that underlies the formation of plaques in the intima and media of the arterial wall, resulting from the progressive accumulation of Chol, other oxidized lipids and inflammatory cells. This landscape has been shown to impair HDL and apoA-I function [51]. As activation of inflammatory cells results in a decrease of pH [52], proteolysis and oxidation of specific and unspecific substrates, we have analyzed here the influence of some of these events as possible mediators of apoA-I misfolding. Effect of local pH on protein structure and solubility A critical condition to be considered is the extracellular pH in the interstitial compartment. Normal oxidative catabolism yields protons which are mostly neutralized by different physiological buffers. However, a local decrease in extracellular pH can occur not only under inflammation [52] but also in chronic hypoxic conditions, which, in addition, induce lactic acidosis. To investigate the influence of acidification of the medium on apoA-I folding, we exposed the protein to buffers at different pH values. Incubation at pH 6.0 had no effect on the stability and solubility of apoA-I compared to pH 7.4, and no significant binding to ThT was detected, indicating preservation of the native fold. This is not surprising, as the isoelectric point of apoA-I is 5.27 and thus the net charge of the protein was largely preserved at pH 6.0. In contrast, at pH 4.0 the structure and stability of apoA-I were significantly modified. Equilibrium unfolding by GndHCl revealed loss of cooperativity in the unfolding transition. Changes in protein conformation were further evidenced by fluorescence quenching, FCS, bis-ANS binding and ThT fluorescence measurements. Although relevant, results at pH 4.0 are far from in vivo landscapes, and thus special attention should be paid to the incubations performed at higher pH. Interestingly, the detection of some increase in ThT fluorescence, along with a decrease in protein solubility and the presence of small aggregates in these samples, indicates that the amyloidogenicity of the protein becomes evident at pH 5.0, in spite of the fact that apoA-I retained a mostly native structure at this pH. Similar behavior has been detected for other amyloidogenic proteins, such as transthyretin [53]. Of the over 70 pro-amyloidogenic natural mutations described for transthyretin, several did not show drastic structural changes, except for weaker bonding at protein contacts, which increased protein insolubility.
It is also conceivable that at low pH new interactions could be created between apoA-I and other biomolecules in the cellular matrix. As glycosaminoglycans have been shown to stimulate the formation of fibrils from different amyloidogenic proteins [43], we analyzed the interaction of apoA-I with heparin at different pH values. The results showed that a mild decrease in pH not only generated binding sites for heparin in apoA-I, but gave rise to the formation of complexes with amyloid characteristics (i.e., ThT binding), stabilized by electrostatic interactions. Heparin binding sites in proteins are usually characterized by a cluster of basic residues capable of interacting with the negatively charged heparin polymer. Many different motifs have been postulated as heparin binding domains in proteins [54]. Hileman et al. [55] have proposed a consensus sequence in which turns in the protein fold bring basic interacting amino acid residues into proximity. The pKa of free histidine is about 6, but it is well known that ionizable side chains can exhibit changes in their apparent pKa, depending on the physicochemical properties of the surrounding protein environment. Thus, it is possible that some histidine residues in apoA-I could become protonated at pH close to 5. ApoA-I has 5 histidine residues (positions 135, 155, 162, 193, 199). Two of them (residues 155 and 162) are located in a putative amphipathic class A α-helix, separated by a periodicity that allows a turn of the helix to bring them into close proximity, and near an arginine residue that is seven amino acids apart in the same helix. Considering the helical-wheel model (Fig. 10), this spatial sequence of 3 amino acids is located in the polar face adjacent to the nonpolar face of helix 6. Thus, protonation of the His residues would increase the concentration of positive charges on the polar side of the helix, favoring interaction with negatively charged membranes (as in apoptotic cells) or components of the extracellular matrix.
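The protonation argument above can be made quantitative with the Henderson-Hasselbalch equation. The sketch below uses the free-histidine pKa of ~6 quoted in the text (local environments can shift this value, so the numbers are illustrative only):

```python
def fraction_protonated(pH, pKa):
    """Henderson-Hasselbalch: fraction of a basic side chain carrying H+."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

pKa_his = 6.0   # free-histidine value; the effective pKa in apoA-I may differ
for pH in (7.4, 6.0, 5.0, 4.0):
    f = fraction_protonated(pH, pKa_his)
    print(f"pH {pH}: {f*100:5.1f}% of His protonated")
# ~4% at pH 7.4 versus ~91% at pH 5.0: a mild acidification converts the
# His pair into nearly full positive charges on the polar helix face.
```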
Effect of chemical modifications on apoA-I Our findings strongly suggest that chemical modification of apoA-I induced by conditions mimicking a pro-inflammatory environment results in the processing of apoA-I into partially degraded, amyloidogenic products. As metalloproteinases are highly activated in atherosclerotic plaques [2], we checked whether the incubation of apoA-I with MMP-12 could result in the formation of a peptide with amyloidogenic properties. As expected, apoA-I degradation was observed, but this event did not seem to be responsible, as the product is not amyloidogenic. Nevertheless, it is possible that other peptides could be freed during proteolytic processing, as apoA-I hereditary amyloidosis is usually characterized by the presence of N-terminal fragments of the protein in the lesions. By purifying one of these fragments, Andreola et al. [56] demonstrated that acidification of the medium induced a shift from an unordered to a β-sheet conformation and the formation of fibrils from the isolated peptide. Instead, oxidation is likely to be critical in the pathological processing of the protein, as myeloperoxidase, an enzyme present in polymorphonuclear cells, and its products, such as HClO, have been directly involved at different stages of atherosclerotic lesions [57]. When incubating apoA-I with HClO, we showed that the protein was extensively degraded (Fig. 5). Degradation included proteolysis and probably oxidation mediated by HClO, as methionine [13], tyrosine [58] and tryptophan [59] residues have been shown to react with myeloperoxidase-generated HClO. Interestingly, partially degraded apoA-I showed significant ThT-associated fluorescence, indicating a product more prone to aggregate into an amyloid form than the native protein (Fig. 6). This observation is particularly important, as it provides a clear demonstration of the relationship between atherosclerosis and apoA-I-induced amyloidosis. In addition, the oxidized protein loses its ability to remove Chol. This could result in the accumulation of Chol in peripheral cells, which is in fact a signal to induce apoptosis, leading to exposure of phosphatidylserine in the outer leaflet of the plasma membrane. The presence of negatively charged lipids in the plasma membrane can induce further local acidification [60] of the interstitial fluid, which in turn could elicit the conformational shift of apoA-I into a pathological misfolding. A lower pH induced by pro-inflammatory conditions could also increase binding of apoA-I to the extracellular matrix, extending the time during which apoA-I and HDLs are exposed to macrophage-mediated oxidative damage. In conclusion, our results strongly suggest that the different events taking place in the inflammatory hallmark of atherosclerosis lead to a pro-amyloidogenic processing of apoA-I, which in turn could aggravate the vascular disease. Although most apoA-I circulates associated with HDL, the protein in the lipid-bound state is more stable and protected from chemical processing; thus the lipid-poor or lipid-free conformation is the likely candidate to yield amyloidogenic products. The fact that macrophages increase the yield of this conformation locally at the artery wall [4] supports this concept. Furthermore, the concentration of lipid-free apoA-I in the aortic intima has been shown to increase during the progression of atherosclerosis [61]. Finally, although not tested here, the amyloidogenic conformations of apoA-I are likely to perpetuate the vascular disease, as the pathological conformation of the protein could be cytotoxic and sustain the inflammatory environment. Further research will be done on this topic.
2014-10-01T00:00:00.000Z
2011-07-19T00:00:00.000
{ "year": 2011, "sha1": "f1e0f32d86472c1e9aa3dfdc978cd62a3ab65e26", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0022532&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72cb96da2efe705b9fc61e4286aaae649b9e444a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
119282267
pes2o/s2orc
v3-fos-license
Wigner functions of thermo number state, photon subtracted and added thermo vacuum state at finite temperature Based on Takahashi-Umezawa thermo field dynamics and the order-invariance of Weyl ordered operators under similar transformations, we present a new approach to deriving the exact Wigner functions of the thermo number state and the photon-subtracted and photon-added thermo vacuum states. We find that these Wigner functions are related to Gaussian-Laguerre type functions of temperature, whose statistical properties are then analysed. I. INTRODUCTION In recent years photon-subtracted and photon-added quantum states have received much attention because these fields exhibit an abundance of nonclassical properties and may give access to a complete engineering of quantum states and to fundamental quantum phenomena [1–8]. However, all these discussions are restricted to the case of zero temperature. In fact, most systems are not isolated, but are immersed in a "thermal reservoir"; excitation and de-excitation processes of a system are influenced by its energy exchange with reservoirs. In this work we study field properties under photon subtraction and addition at finite temperature. A key technical point is that a similarity transformation S(·)S⁻¹ passes through the Weyl ordering symbol : : as if this "fence" did not exist. We also appeal to the Takahashi-Umezawa thermo field dynamics (TFD) [16–18], which we consider convenient for obtaining the explicit expressions of the WFs. II. BRIEF REVIEW OF THERMO STATE The main point of TFD lies in converting the evaluation of an ensemble average at nonzero temperature into an equivalent expectation value in a pure state. This worthwhile convenience comes at the expense of introducing a fictitious field (a so-called tilde-conjugate field, with creation operator ã†) in the enlarged Hilbert space H̃; thus the original optical field state |n⟩ in the Hilbert space H is accompanied by a tilde state |ñ⟩ in H̃. A similar rule holds for operators: every annihilation operator a acting on H has an image ã acting on H̃. At finite temperature T the thermal vacuum |0(β)⟩ is defined by the requirement that the vacuum expectation value agrees with the statistical average [16–18], i.e. ⟨0(β)|A|0(β)⟩ = Tr(e^{−βH}A)/Tr(e^{−βH}), where β = 1/(kT), k is the Boltzmann constant and H is the system's Hamiltonian. For the ensemble of free bosons with Hamiltonian H₀ = ωa†a, the thermal vacuum state is |0(β)⟩ = S(θ)|0,0̃⟩, where |0,0̃⟩ is annihilated by a and ã, [ã, ã†] = 1, and S(θ) = exp[θ(a†ã† − aã)] is the thermo squeezing operator which transforms the zero-temperature vacuum |0,0̃⟩ into the thermo vacuum state |0(β)⟩; θ is related to the Bose distribution by tanh θ = exp(−ω/(2kT)), which is determined by comparing the Bose–Einstein distribution with ⟨0(β)|a†a|0(β)⟩ = sinh²θ. In particular, when the operator A is the Wigner operator Δ(α) itself, it is easy to see that ⟨0(β)|Δ(α)|0(β)⟩ = (π cosh 2θ)⁻¹ exp(−2|α|² sech 2θ), which is just the WF of the thermo vacuum state. From Eq. (10) one can see that the calculation of the WF for thermo states is converted into the expectation value of the Wigner operator in the thermo vacuum state |0(β)⟩ (ρ_c → |0(β)⟩⟨0(β)|), which is defined in the enlarged Fock space. This implies that it is convenient to derive the WFs of density operators at finite temperature by doubly enlarging the original space. Recalling the definition of the Laguerre polynomials [22], Eq. (27) can be further put into a neat form, which is just the WF of the photon-subtracted thermo vacuum state — a Gaussian-Laguerre type function of temperature, since tanh θ = exp(−ω/(2kT)).
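The TFD relations above admit a quick numerical sanity check. The sketch below assumes units with ℏ = k = 1 and uses the thermo-vacuum WF in the form reconstructed above; the non-negativity argument for the photon-subtracted state relies only on the sign of the Laguerre argument, not on any assumed prefactor.

```python
import numpy as np
from scipy.special import eval_laguerre

# Units with hbar = k = 1, so tanh(theta) = exp(-w/(2T)).
w, T = 1.0, 0.8
theta = np.arctanh(np.exp(-w / (2.0 * T)))

# Check: <a†a> in the thermo vacuum equals the Bose occupation 1/(e^{w/T} - 1).
assert np.isclose(np.sinh(theta) ** 2, 1.0 / np.expm1(w / T))

# Thermo-vacuum WF: a broadened Gaussian whose peak 1/(pi cosh 2theta)
# drops as the temperature (i.e. theta) grows.
def W0(alpha):
    return np.exp(-2.0 * abs(alpha) ** 2 / np.cosh(2 * theta)) / (np.pi * np.cosh(2 * theta))

print(f"W0(0) = {W0(0.0):.4f}  (equals 1/pi = {1/np.pi:.4f} only at T = 0)")

# Photon subtraction multiplies this Gaussian by a positive prefactor times
# L_n(-4 sinh^2(theta) |alpha|^2 / cosh 2theta); the argument is negative and
# L_n(-x) > 0 for x > 0, so W1 never develops a negative phase-space region.
n = 2
alpha = np.linspace(0.0, 3.0, 61)
lag = eval_laguerre(n, -4.0 * np.sinh(theta) ** 2 * alpha ** 2 / np.cosh(2 * theta))
print((lag > 0).all())  # True
```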
Because cosh 2θ > 0 and L_n(−4|α|² sinh²θ / cosh 2θ) ≥ 0, in the photon-subtracted case W₁(α) has no chance of presenting negative values in phase space, as can be seen from Fig. 1. On the other hand, the amplitude of the WF in (|α|, θ) space decreases with increasing temperature (corresponding to θ). In Appendix A, in order to check the result in Eq. (30), we have derived the WF of the photon-subtracted thermo vacuum state by using the coherent state representation of the Wigner operator. Compared with the result in Ref. [20], Eq. (30) seems more concise and convenient for further discussion. By the same procedures as those used to derive Eqs. (22) and (26), we obtain Eqs. (33) and (34): the WF of the photon-added thermo vacuum state, proportional to [π cosh^{n+1} 2θ]⁻¹ L_n(4|α|² cosh²θ / cosh 2θ) — a Gaussian-Laguerre type function which may present negative regions in phase space (see Fig. 2). In particular, when n = 1, Eq. (34) reduces to a simple closed form. In Fig. 2, the behaviour of the WF distributions of the photon-added thermo state is plotted in (q, p) phase space and in (|α|, θ) space. From Fig. 2 one can see clearly the modulating action of the photon-added number and of the temperature. The "oscillation frequency" of the WF increases with increasing photon-added number, while the amplitude of the WF in (|α|, θ) space decreases with increasing temperature (corresponding to θ), which indicates that the nonclassicality is weakened at finite temperature. VI. WIGNER FUNCTION OF THERMO NUMBER STATE At finite temperature, according to TFD, the number state |n⟩ is replaced by |n,ñ⟩; thus the thermo number state (i.e., the number state at finite temperature) is S(θ)|n,ñ⟩ in the enlarged Fock space. Using the un-normalized coherent state representation of the number state, where |z,z̃⟩ = exp[za† + z̃ã†]|0,0̃⟩ is the non-normalized two-mode coherent state, and employing Eq. (19), we calculate the WF W₃(α) of the thermo number state through Eqs. (37)–(41): expanding the exponential term exp[(rt − fz) sech 2θ] as a series and then making the variable replacements for f, r, t, z, we can rewrite Eq. (39), which finally gives W₃(α) = [n!² e^{−2|α|² sech 2θ}/(π cosh 2θ)] Σ_{l,k=0}^{n} (−1)^k sech^{l+k} 2θ tanh^{2(n−l)} 2θ × (…) (42), where the remaining factors in Eq. (42) involve the two-variable Hermite polynomials H_{n−k,0}(E, F). From Eq. (42) one can see clearly that the WF of the thermo number state is a real number. In particular, when n = 0, noticing that tanh θ = e^{−ωβ/2}, cosh²θ = 1/(1 − e^{−βω}) and sinh²θ = e^{−βω}/(1 − e^{−βω}), Eq. (42) reduces to the WF of the thermo vacuum state |0(β)⟩ in Eq. (20). On the other hand, when T → 0 (i.e., the finite-temperature case reduces to the zero-temperature case), e^{−βω} → 0, sinh θ → 0, cosh θ → 1, E → 2α, F* tanh 2θ → α*; noticing Eq. (29) and the definition of the two-variable Hermite polynomials [24,25], which leads to H_{n−k,0}(2α, α*) = (2α)^{n−k}, Eq. (42) then becomes W₃(α) → ((−1)^n/π) e^{−2|α|²} L_n(4|α|²), which is just the WF of the number state |n⟩ at zero temperature. In sum, by using TFD and the order-invariance of Weyl ordered operators under similar transformations, we have presented a new approach to deriving the exact expressions of the Wigner functions of the thermo number state and the photon-subtracted and photon-added thermo vacuum states. These WFs are related to Gaussian-Laguerre type functions, which are easy to analyse further. The effect of temperature on the nonclassical behaviour of the fields is manifestly shown. For discussions of decoherence at finite temperature, we refer to [30,31]. Note that e^{a†a ln[n_c/(n_c+1)]} a^{†n} e^{−a†a ln[n_c/(n_c+1)]} = [n_c/(n_c+1)]^n a^{†n}, and e^{a†a ln[n_c/(n_c+1)]} |z⟩ = e^{−[(2n_c+1)/(2(n_c+1)²)]|z|²} |n_c z/(n_c+1)⟩.
2009-01-12T02:19:18.000Z
2009-01-12T00:00:00.000
{ "year": 2009, "sha1": "60207d434767b3dc0d4bca55095be1e49279f7c0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0901.1424", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "60207d434767b3dc0d4bca55095be1e49279f7c0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
264437150
pes2o/s2orc
v3-fos-license
Self-Powered Sensing in Wearable Electronics—A Paradigm Shift Technology With the advancements in materials science and micro/nanoengineering, the field of wearable electronics has experienced rapid growth and significantly impacted and transformed various aspects of daily human life. These devices enable individuals to conveniently access health assessments without visiting hospitals and provide continuous, detailed monitoring to create comprehensive health data sets for physicians to analyze and diagnose. Nonetheless, several challenges continue to hinder the practical application of wearable electronics, such as skin compliance, biocompatibility, stability, and power supply. In this review, we address the power supply issue and examine recent innovative self-powered technologies for wearable electronics. Specifically, we explore self-powered sensors and self-powered systems, the two primary strategies employed in this field. The former emphasizes the integration of nanogenerator devices as sensing units, thereby reducing overall system power consumption, while the latter focuses on utilizing nanogenerator devices as power sources to drive the entire sensing system. Finally, we present the future challenges and perspectives for self-powered wearable electronics. INTRODUCTION Humans increasingly depend on wearable sensors to monitor their physical and physiological conditions,1−5 thereby enhancing the quality of life in the digitalized and intelligent world. For example, wearable or attachable devices can monitor heart rate and electrocardiograph signals,6−8 providing crucial information for diagnosing heart diseases. Skin temperature can be obtained through flexible devices,9−11 serving as a direct indicator of certain diseases related to immune response.12−17 Sensing devices have also been developed for monitoring respiration,18,19 blood,20,21 and skin22 to provide early warning signs for related diseases. Despite significant advancements in benchside research over the past decade, the market adoption of flexible sensors remains limited due to several bottlenecks that hinder their maturation, one of which is power supply. Specifically, incorporating a battery in a wearable sensor system imposes constraints on its volume and weight, thereby limiting its potential applications. Moreover, when the battery is depleted, continuous monitoring is disrupted, potentially affecting the accuracy of estimations and consequently reducing user engagement. Thus, the development of wearable systems with innovative power supply technologies is of paramount importance. In 2008, Wang introduced the concept of self-powered nanosystems23 based on the previously developed piezoelectric nanogenerator, which is capable of converting the mechanical trigger from an AFM tip into an electrical output, as reported in 2006 (Figure 1a and 1b).24 The self-powered nanosystem comprises a sensor, a data processing and transmission circuit, and an energy harvesting and storage unit, as depicted in Figure 1c and 1d. The energy harvester can convert ambient mechanical, thermal, chemical, and even solar energy into electricity, which then powers the subsequent sensing processes, including data acquisition, processing, and transmission.
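As a back-of-the-envelope illustration of what "self-powered" demands of such a system, the sketch below compares an assumed average harvested power with the duty-cycled consumption of a sensing node; every number here is an illustrative assumption, not a figure from this review.

```python
# Illustrative duty-cycle power budget for a self-powered sensing node.
harvest_avg = 200e-6   # W, assumed average harvested power (e.g. body motion)
p_sense     = 5e-3     # W, sensor + microcontroller while active (assumption)
p_radio     = 30e-3    # W, radio during transmission (assumption)
p_sleep     = 5e-6     # W, sleep power (assumption)

t_sense, t_tx, period = 0.05, 0.01, 10.0   # seconds per wake-up cycle

# Energy per cycle, averaged over the period, gives the mean load.
e_cycle = p_sense * t_sense + p_radio * t_tx + p_sleep * (period - t_sense - t_tx)
p_avg = e_cycle / period
print(f"average load: {p_avg*1e6:.0f} uW vs harvested: {harvest_avg*1e6:.0f} uW")
print("sustainable" if p_avg < harvest_avg else "deficit: lengthen period or add storage")
```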
Subsequently, numerous self-powered prototypes have been developed. As illustrated in Figure 2, a nanogenerator based on piezoelectric materials was reported in 2006.25,26 In 2011, a fiber-based piezoelectric nanogenerator featuring thousands of ZnO nanowires was fabricated, and the first self-powered electronic watch powered by a ZnO-nanowire array was reported.27 In 2012, a novel mechanical energy harvesting approach based on triboelectrification and electrostatic induction emerged, named the triboelectric nanogenerator (TENG).28 Due to its high output performance, self-powered nanosystems have advanced further. For example, a self-powered energy cell consisting of a TENG and a Li-ion battery was reported in 2013.29 In 2016, a combination of TENGs and solar cells was woven into fabric, serving as a wearable power supply.30 In 2018, employing a TENG as a self-powered sonic sensor was proposed, with the device functioning as a hearing aid for humans or robots.31 In 2020, a self-powered sweat sensing system powered by human motion energy was demonstrated. Furthermore, a symbiotic cardiac pacemaker was unveiled in 2021,32 enabling the pacemaker to be driven by the host organism's own heartbeat. Over the past decade, various innovative technologies and self-powered prototypes have been developed (Figure 3), making self-powered technology promising for wearable applications. As researchers worldwide devote significant effort to this field, two main research strategies have emerged: one involves using nanogenerators as power sources to drive sensing units and processing circuits, thereby constructing complete self-powered systems; the other entails employing energy harvesters as sensing units, serving as self-powered sensors, and then integrating them with processing circuits and external power sources to create low-power-consumption systems. The former approach benefits from mature sensing devices, but its drawback lies in the depletable nature of the power generated by current powering techniques, which limits the system's functionalities. Conversely, the latter approach enables the use of self-powered sensors to extend the overall system's battery life, to integrate directly with mature processing and power circuits, and in some scenarios to enhance the signal-to-noise ratio thanks to the self-generated signal.34 However, this method requires external power sources and the development of additional self-powered sensors. Therefore, in the subsequent sections, we will discuss both strategies and present other innovative powering techniques suitable for wearable applications. SELF-POWERED WEARABLE SENSORS In this section, we present typical self-powered wearable sensors based on piezoelectric, triboelectric, piezotronic, and tribotronic effects. 2.1. Piezoelectric Sensors. 2.1.1. Mechanism. Piezoelectricity has been known for a long time.35,36
It stands for the generation of electricity due to the breaking of the central symmetry of a material's structure under pressure. It has been widely exploited in applications such as sonic production37 and detection,38 inkjet printing,39 scanning microscopes,40,41 high-voltage electricity generation,42 etc. The first nanogenerator was based on the piezoelectricity of zinc oxide (ZnO). Under an external force, the central symmetry of the ZnO crystal structure is broken, forming a piezopotential. For example, in the wurtzite structure of a ZnO crystal, Zn²⁺ and O²⁻ are stacked layer-by-layer along the c-axis43 (see Figure 4a), and the charge centers of the cations and anions overlap at this stage. When the structure is deformed under an external force, the charge centers are separated and thus form an electric dipole, resulting in a piezopotential (Figure 4b). Therefore, free electrons are driven to flow through the external circuit in order to screen the piezopotential and achieve electrostatic equilibrium. That is the mechanics-to-electricity conversion process (see Figure 4c).44 If the external force is applied periodically, the nanogenerator will output continuously (Figure 4d).45−48 2.1.2. Piezoelectric Materials for Wearables. The integration of sensing elements with the soft and curvilinear surfaces of the human body requires attention to materials design to obtain seamless and breathable interfaces, which ensures the devices' robustness and the wearers' comfort during daily motions.48,49 Therefore, flexible/stretchable materials are required. Traditional piezoelectric materials are piezoelectric ceramics, e.g., lead zirconate titanate (PZT).50 Although it possesses a high piezoelectric coefficient (PC) of 218.7 pC/N (d₃₃), the toxicity of its ingredients and its low flexibility restrict its wearable applications. Hence, soft piezoelectric materials have been rapidly developed. Figure 5a shows an array of ZnO nanorods encapsulated by a soft jelly, such as polydimethylsiloxane (PDMS) or Ecoflex. Under bending or stretching, the jelly compresses the ZnO nanorods, delivering an electrical output. Poly(vinylidene fluoride) (PVDF), an organic material with inherent flexibility, is also widely used as a piezoelectric material. It can be fabricated as fibers and textiles, which are suitable for wearable applications. Lin reported PVDF fibers fabricated via electrospinning (Figure 5b) with in situ mechanical stretching and electrical poling to produce piezoelectric properties.51,52 Besides, diversiform piezoelectric fibers, combining wearability and piezoelectricity, have been discussed in the literature (including inorganic ceramics and organic polymers).53 Another approach to obtaining soft piezoelectric materials is softening traditional piezoelectric ceramics. Figure 5c shows a buckled PZT ribbon array, which makes the whole film retractable. Compared with PVDF, it exhibits a 10-times-higher output; however, the largest reported strain is 8%.54 Figure 5d shows barium titanate-doped soft dielectric materials.55 The challenge lies in how to align the barium titanate c-axis to achieve a high PC value. Furthermore, as more new materials emerge, two-dimensional MoS₂ shows a piezoelectric property and forms a soft and transparent device, as shown in Figure 5e.56 Besides, poly(L-lactic acid) (PLLA) and poly(vinyl alcohol) (PVA)/glycine/PVA were recently proposed due to their excellence in softness, piezoelectric properties, and biocompatibility (Figure 5f and 5g).56,57,59,60
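To give a feel for the magnitudes involved, the sketch below applies the direct piezoelectric relation Q = d₃₃·F using the PZT coefficient quoted above; the electrode geometry and permittivity are illustrative assumptions, not values from the review.

```python
# Direct piezoelectric effect in 33-mode: Q = d33 * F for a force applied
# along the poling axis; the open-circuit voltage then follows from the
# element's capacitance. All geometry/permittivity values are assumptions.
d33 = 218.7e-12        # C/N, the PZT coefficient quoted in the text
F   = 10.0             # N, e.g. a light tap concentrated on a small pad

Q = d33 * F            # generated charge, coulombs
print(f"charge: {Q*1e9:.2f} nC")

# Parallel-plate estimate of the open-circuit voltage V = Q / C:
eps0, eps_r = 8.854e-12, 1800.0   # PZT relative permittivity ~1e3 (assumption)
A, t = 1e-4, 200e-6               # electrode area 1 cm^2, thickness 200 um
C = eps0 * eps_r * A / t
print(f"V_oc ~ {Q / C:.2f} V")    # ~0.3 V for these assumed dimensions
```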
Notably, PVA is employed to promote the crystallization of glycine due to the hydrogen bonding at the PVA−glycine interface, which enables large-scale production of a piezoelectric film with water-soluble and biodegradable properties. 2.1.3. Devices and Applications. In this section, we show examples of wearable sensors based on the piezoelectric mechanism. Figure 6a shows a finger-bending monitoring device made of a ZnO nanowire on a flexible substrate, with an output of around 0.1 V.62 The sensor output can be enhanced by connecting devices in series. Figure 6b presents a sonic sensing application based on PVDF, where the sensing film detects the sonic signals.63 Figure 6c shows the application of a piezoelectric device as a phonation sensor to assist people with speech impairments. A serpentine mesh layout was employed to achieve the desired stretchability instead of modifying the intrinsic mechanical properties of PVDF.64 Figure 6d shows a smart insole, where a piezoelectric PVDF film serves as a pressure sensor to detect the wearer's foot pressure distribution.65 It is useful to assist doctors in the diagnosis of some orthopedic disorders, e.g., lumbar stenosis. In this work, the authors collected time-varying pressure data while patients walked and then employed machine learning analysis to evaluate and predict the patients' disorders. Eventually, the system could distinguish the patients from the healthy group and evaluate the patients' postoperative recovery status. Figure 6e illustrates the piezoelectric sensor's application in robotics. Notably, Han et al. utilized a three-dimensional processing technique to fabricate a stereo-structure; the sensor can therefore detect multidirectional force triggers, making it a high-sensitivity touch sensor.66 2.2. Triboelectric Sensor. 2.2.1. Mechanism. The triboelectric nanogenerator was first proposed in 2012.28 It combines contact electrification and electrostatic induction, converting mechanical motion into an electrical output.67−69 Generally, when two materials come into contact, due to the difference in their electron affinities, electrons tend to transfer from the one with lower affinity to the other with higher affinity; therefore, polarization is generated across the interface. Afterward, when the two materials separate or approach periodically, the electric field varies and causes electrons to move back and forth between the two electrodes located behind the contacting materials via external wires/circuits. Thus, an alternating current is obtained (Figure 7a; more details about the working principle of the TENG can be found in the literature70−72). The output can be employed as a power supply73−76 or as a sensing signal.77−80 In this section, we focus on the sensing performance of the TENG. It is also worth noting that both TENGs and PENGs are based on the polarization of dielectric materials. The physical theory is derived from the classical Maxwell equations, specifically the displacement current term ∂P_s/∂t, as shown in Figure 7b.81
The PENG output is determined by the variation of the displacement current inside the piezoelectric material, whereas that of the TENG is determined by the variation of the displacement current across the interface of the two contacting materials. Specifically, the output of the TENG is characterized by an additional polarization term Ps, which arises mainly from the existence of surface charges and the relative movement of the objects as driven by mechanical motion. In general, the conventional Maxwell equations are for media whose boundaries and volumes are fixed and stationary. But for cases that involve moving objects, such as the TENG, the equations have to be expanded. Starting from the integral forms of the four physical laws, Wang derived the expanded Maxwell equations in differential form for slowly moving objects (v ≪ c); the Maxwell equations for a mechano-driven, slow-moving media system are given in refs 82−84. The moving velocity of a unit charge inside the medium is split into two components: the moving velocity v of the moving reference frame and the relative moving velocity (vr) of the point charge inside the medium with respect to the moving reference frame. These equations are most useful for describing the electromagnetic behavior of moving media with acceleration, and they are fundamental for dealing with the coupling and interaction among mechano−electric−magnetic multifields. The expanded equations are the most comprehensive governing equations, including both electromagnetic interaction and power generation as well as their coupling, for a TENG. Of course, the applications of Maxwell's equations for a mechano-driven system are more general, and their application fields reach well beyond the TENG.

Another interesting research focus of TENGs is the origin of the transferred charge. In the past, an electron transfer mechanism dominated, and theories based on the metals' work functions were adopted to explain the variety of materials' electrification abilities.85 Afterward, Whitesides reported an ion transfer mechanism in cases of solid−liquid contact.86 Meanwhile, other opinions point out that the contact process causes mass transfer and then leads to electrification.87 Since 2018, Wang has published a series of works on this topic. It was found that in the case of solid−solid contact, electron transfer is dominant, as the as-generated charge recession follows the hot-electron emission rule, which has been proved at both macro- and microscales88−90 (Figure 7c). When the contact happens between water and a solid, both electron and ion transfer exist91,92 (Figure 7d), where the ratio is determined by the contact angle of the solid. Other works on the charge transfer mechanism have been discussed in detail in the references, including liquid−liquid contact,93,94 gas−solid contact,95 etc. (Figure 7e).

As for the TENG device, it has been widely investigated, and four modes have been developed, i.e., the contact-separation mode, sliding mode, single-electrode mode, and free-standing mode, as shown in Figure 7f.96
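For the contact-separation mode in particular, an idealized parallel-plate model is often used in which the open-circuit voltage scales with the gap as V_oc = σ·x(t)/ε0, where σ is the triboelectric surface charge density. The sketch below simulates the resulting alternating output under a periodic gap; σ, the maximum gap, and the motion profile are illustrative assumptions, and real devices deviate from this idealized picture.

import math

# Minimal sketch of the contact-separation TENG open-circuit voltage,
# V_oc = sigma * x(t) / eps0 (idealized parallel-plate model).
# sigma, X_MAX, and the motion profile are illustrative assumptions.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
SIGMA = 8e-6       # triboelectric surface charge density, C/m^2 (assumed)
X_MAX = 1e-3       # maximum gap, m
FREQ = 2.0         # contact-separation frequency, Hz

def gap(t):
    """Periodic gap distance: 0 at contact, X_MAX at full separation."""
    return 0.5 * X_MAX * (1 - math.cos(2 * math.pi * FREQ * t))

for i in range(6):
    t = i * 0.05
    v_oc = SIGMA * gap(t) / EPS0
    print(f"t = {t:.2f} s, gap = {gap(t)*1e3:.2f} mm, V_oc = {v_oc:.0f} V")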
They are suitable for different application scenarios. For example, the contact-separation mode can sense pressing or triggering, while the sliding mode is suitable for displacement sensing. The single-electrode mode can be used in many cases for its simple structural configuration, but its drawbacks are lower interference resistance and lower signal output. The free-standing mode works similarly to the sliding mode and features a facile fabrication process. Many self-powered triboelectric sensors have been designed based on the above four modes.97−99

2.2.2. Triboelectric Materials for Wearables. For wearable applications, materials are also required to be flexible, stretchable, and lightweight. In this section, we review common materials used for wearable triboelectric sensors. Among them, polydimethylsiloxane (PDMS) was commonly utilized at the beginning, shaped by mold fabrication, as shown in Figure 8a. Researchers employed the molding process to fabricate various PDMS films covered by gratings, pillars, pyramids, and even micro−nano dual structures,77,100,101 which increase the contact area, weaken the sticky surface to some degree, and eventually enhance the sensors' signal output. Bao introduced a porous structure in PDMS (Figure 8b), serving as a Young's-modulus-switchable approach for contact materials.102 Besides, paper103,104 and wood,105 which are eco-friendly, are also utilized as triboelectrification materials. Mao et al. proposed a paper-based triboelectric device (Figure 8c) with a maximum power density of 53 W/m2. It can effectively convert the mechanical energy of turning book pages into electricity, serving as a document monitor.106 Luo et al. reported a wood-based triboelectric sensor for collecting athletic big data and applied it in table tennis.105 With the development of human−machine interfaces, devices based on materials with high flexibility, gas permeability, and wearability have been reported. Yi et al. showed a stretchable device using a conductive liquid covered with rubber as the induction electrode. It can withstand 300% strain and is able to sense arm motions.107 Similarly, Pu et al. presented a hydrogel triboelectric sensor,108 which can withstand up to 1160% strain and delivers an output of 35 mW/m2 (Figure 8d). Another factor that should be considered for wearable applications is breathability. It is important to adjust the thermal−moisture balance and achieve gas exchange between human skin and the environment; low breathability can cause skin discomfort and even induce inflammation and itching.109 Dong et al. fabricated a triboelectric sensor by sandwiching silver nanowires (Ag NWs) between poly(lactic-co-glycolic acid) (PLGA) and poly(vinyl alcohol) (PVA).110 With its micro-to-nano hierarchical porous structure, the device has a high specific surface area and numerous capillary channels for thermal−moisture transfer (Figure 8e and 8f). Besides, researchers have also developed functional fibers to compose inherently wearable triboelectric sensors. For instance, Yang et al. fabricated yarn-based stretchable triboelectric sensor arrays (Figure 8g) for detecting hand motions, which allow real-time translation of signs into spoken words.111 Other fiber-based TENGs can be found in the references.106,107
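Power densities such as the 53 W/m2 and 35 mW/m2 figures above are normally reported at the matched external load, where the delivered power P = V²/R peaks. A typical load-sweep analysis is sketched below; the (resistance, voltage) pairs and the device area are synthetic placeholders standing in for measurements.

# Minimal sketch of TENG load-matching characterization: sweep the external
# load resistance, compute the delivered power density, and locate the peak.
# The (R, V_peak) pairs below are synthetic placeholders for measured data.

DEVICE_AREA_M2 = 4e-4  # 2 cm x 2 cm active area (assumed)

# (load resistance in ohms, measured peak voltage in volts) -- synthetic
measurements = [(1e5, 2.0), (1e6, 12.0), (1e7, 48.0), (1e8, 90.0), (1e9, 110.0)]

for r, v in measurements:
    p_density = v ** 2 / r / DEVICE_AREA_M2  # W/m^2 at this load
    print(f"R = {r:.0e} ohm -> {p_density:.3f} W/m^2")

r_opt, v_opt = max(measurements, key=lambda rv: rv[1] ** 2 / rv[0])
print(f"matched load ~ {r_opt:.0e} ohm, "
      f"peak power density ~ {v_opt**2 / r_opt / DEVICE_AREA_M2:.3f} W/m^2")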
2.2.3. Force-Sensitive Devices. Triboelectric sensors inherently deliver an electricity output under external mechanical triggers. Thus, many self-powered wearable sensors have been proposed. Among them, devices based on force-sensitive mechanisms have been widely investigated. Figure 9a shows a schematic of the general sensing mechanism. When an external mechanical trigger is applied, the gap between the two contact materials varies and then induces an output potential. Figure 9b illustrates a soft pressure sensor made up of PTFE/nylon with ITO as the induction electrode. The device works in the single-electrode mode. When an external mechanical force is applied, the PTFE approaches the nylon, causing electron flow in the circuit and delivering a sensitivity of 51 mV/Pa with a response time of less than 6 ms. It was demonstrated to measure the dynamic pressure of the human pulse and to work as an anti-interference throat microphone, which can recover the human throat voice even in an extremely noisy environment.114 Figure 9c illustrates a contact-separation mode TENG working as an auditory sensor. It is based on a porous structure with Au/FEP as the electrification pair and delivers a sensitivity of 110 mV/decibel. It was used to construct a hearing aid, which simplified the signal processing and reduced the power consumption.31 In order to enhance comfort and air permeability, researchers employed electrospinning to fabricate a skin-compliant strain sensor (Figure 9d). Its high specific surface area and numerous capillary channels assure thermal−moisture transfer, which was found to be 120 mm/s, compared to 10 mm/s for commercial jeans.110 Figure 9e illustrates a TENG strain sensor made up of fibers. It offers excellent mechanical durability, high sensitivity, and a quick response time and was used to construct a wearable sign-to-speech translation system. A total of 660 sign-language hand gestures based on American Sign Language (ASL) were acquired and successfully analyzed, with a high recognition rate of 98.63% and a short recognition time of less than 1 s.112

2.2.4. Displacement-Sensitive Devices. The force-sensitive mechanism has been widely utilized for its shape compliance and simple structure. However, environmental influences, the materials' viscoelasticity, and fatigue inevitably affect the sensors' output amplitude and thus lower the precision as well as the stability. To solve this problem, a displacement-sensitive mechanism has been found to be a promising solution. It combines relative sliding with the TENG's grating electrodes, making the sensor output alternating waveforms according to the displacement (Figure 10a). Thus, the signal's phase variation carries the information on the external mechanical motion. Even if the signal's amplitude varies owing to influencing factors, its phase variation remains stable, assuring the sensing precision.
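A simple way to see why the phase, rather than the amplitude, carries the displacement information is to decode the output by counting waveform periods: each full cycle corresponds to one grating unit passing, so amplitude drift does not corrupt the reading. The sketch below illustrates this with a synthetic drifting signal; the grating pitch and the signal itself are illustrative assumptions.

import math

# Minimal sketch: decode displacement from a grating TENG signal by counting
# rising zero crossings. One full output cycle corresponds to one electrode
# unit (pitch). Signal, pitch, and the added drift are illustrative assumptions.

PITCH_MM = 2.0  # displacement per output cycle (assumed grating pitch)

def synth_signal(n=2000, cycles=7.3, drift=0.4):
    """Synthetic grating output with a slow amplitude drift."""
    return [(1 + drift * i / n) * math.sin(2 * math.pi * cycles * i / n)
            for i in range(n)]

def count_rising_crossings(sig):
    return sum(1 for a, b in zip(sig, sig[1:]) if a < 0 <= b)

sig = synth_signal()
n_cycles = count_rising_crossings(sig)  # unaffected by the amplitude drift
print(f"counted {n_cycles} cycles -> displacement ~ {n_cycles * PITCH_MM:.1f} mm")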
Figure 10b shows a TENG angular sensor.115 It consists of two rotation disks: one serves as the stator and the other as the rotator. Under mechanical triggers, the rotator rotates relative to the stator and continuously delivers periodic waveforms, with one waveform corresponding to one electrode unit. According to the signal's phase variation, the rotation angle can be obtained. When temperature and humidity influence the signal amplitude, the phase does not change, so the sensor possesses high precision. Subsequently, researchers demonstrated its applications in a medical rehabilitation exoskeleton, where the angular sensor was embedded and able to monitor total knee arthroplasty (TKA) patients' postoperative knee bending motions over the long term (Figure 10c). By combining data sharing and doctors' guidance, patients achieved over 20−30% enhancement in recovery.116−120 For instance, Lee reported a bidirectional angular sensor, as shown in Figure 10d.117 It is embedded in an arm exoskeleton, serving as an economical and advanced human−machine interface for supporting manipulation in both the real and virtual worlds. Figure 10e presents a stretchable sensor based on a cyclic annular TENG encapsulated in a retractable reel.118 It was demonstrated to be able to monitor the spinal bending of a participant, delivering a displacement resolution of 0.6 mm, which corresponds well to a traditional inclinometer and a depth camera. Furthermore, the researchers also demonstrated its high durability even after 1 million stretching cycles, owing to the displacement-sensitive mechanism.119 Here, we provide a table to show the general differences between the two sensing mechanisms discussed above (Table 1). As can be seen, displacement-sensitive sensors can be inert to environmental factors and material fatigue and thus deliver accurate sensing performance. However, the present sensing objects are not diverse; devices with more functionalities, e.g., sensing force, pressure, etc., need to be developed based on this mechanism.

2.3. Piezotronics and Tribotronics. Piezotronic and tribotronic sensors are typical extensions of piezoelectric and triboelectric sensors, which utilize the piezo- or tribopotential as the gate voltage to modulate various semiconductor devices (e.g., field-effect transistors, memristors, Schottky diodes) and realize sophisticated sensing functions.

2.3.1. Piezotronic Mechanism and Typical Wearable Sensors.−125 As is known, the piezopotential is a locally induced electric field in a noncentrosymmetric crystal under external strain, which originates from nonannihilative and nonmobile ionic charges. As illustrated in Figure 11a and 11b, when a semiconductor device (in an asymmetric structure) is subjected to an applied tensile strain, a negative piezopotential will be induced, repelling electrons away from the p−n junction or metal−semiconductor (M−S) contact interface and causing the local Schottky barrier height (SBH) to increase under the influence of the negative polarization charges. On the contrary, an externally applied compressive strain will induce a positive piezopotential, attract electrons toward the M−S interface/p−n junction, and cause the local SBH to decrease under the influence of the positive piezoelectric polarization charges.
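The strain-induced SBH changes described above can be put on a rough quantitative footing with a thermionic-emission toy model, in which the current across a Schottky contact scales as exp(−qΦB/kT), so a barrier change ΔΦB rescales the current exponentially. The sketch below evaluates this scaling; the barrier shifts are illustrative assumptions, not values from the cited devices.

import math

# Toy model of the piezotronic effect at a Schottky contact: thermionic
# emission gives I ~ exp(-q*phi_B / (k*T)), so a piezopotential-induced
# barrier change d_phi rescales the current by exp(-d_phi / kT).
# Sign convention (n-type M-S case described in the text): tensile strain
# raises the barrier (d_phi > 0, current drops); compressive strain lowers
# it (d_phi < 0, current rises). The shifts below are illustrative.

KT_EV = 0.0259  # thermal energy at room temperature, eV

def current_ratio(d_phi_eV):
    """I(strained) / I(unstrained) for a barrier change d_phi (in eV)."""
    return math.exp(-d_phi_eV / KT_EV)

for d_phi in (-0.05, -0.02, 0.0, 0.02, 0.05):
    print(f"dPhi_B = {d_phi:+.2f} eV -> I(strained)/I(0) = {current_ratio(d_phi):.2f}")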
In addition to strain-gated two-terminal devices, piezotronics can be extended to a more general definition, e.g., piezopotential/strain-gated transistors, which can be functionalized under the piezopotential induced by external strain. The rational design of PENG-gated three-terminal devices is exhibited in Figure 11b. The integrated PENG component is composed of the active piezoelectric material sandwiched between two electrodes. The induced piezopotential in a piezoelectric polymer is an intrinsic inner-crystal field arising from the enhanced/weakened electric dipole moments in the piezoelectric material under external stress.126 The applied stress can lead to the rearrangement of electric dipoles along different directions. Applying tensile or compressive stress can induce a negative or positive gate voltage (−qV_PENG and +qV_PENG) at the FET and lead to energy band bending and carrier depletion/accumulation in the semiconductor channel.

Based on this interfacial modulation of charge carriers, the piezotronic effect mainly describes the coupling between piezoelectricity and charge carrier transport in piezoelectric semiconductors.123,124,127,128 A typical flexible strain sensor relying on the piezotronic effect is exhibited in Figure 12a,129 which is composed of a horizontal single-crystal ZnO nanowire bonded to a plastic substrate. Its volt−ampere characteristics (I−V curves) indicate that the strain sensor is highly sensitive, owing to the fact that the induced remnant piezoelectric charges have a significant influence on the SBH and dramatically change the output current. Notably, the piezotronic effect should be distinguished from the piezoresistive effect (commonly existing in conventional semiconductor materials) by its typical asymmetric effect on the two contacts, nonlinear and asymmetric rectifying I−V curves, strong polarity/interface effects/switching function, etc. The measured gauge factor of the ZnO piezotronic strain sensor, defined as the slope of the normalized current versus strain curve, was evaluated to be 1250, with a fast response time of 10 ms. The ZnO piezotronic strain sensor can also be readily modulated by a solid-state electrolyte to achieve tunable piezotronic strain sensing in a low-power-consuming way.125 In addition to 1D materials, the piezotronic effect has also been observed in 2D materials, e.g., single-layer MoS2, MoSe2, and multilayer α-In2Se3 flakes.130 For instance, a piezotronic sensor based on an α-In2Se3 flake has been prepared for breath monitoring in a self-powered fashion (Figure 12b), which can effectively record three different breath states. The prepared In2Se3 piezotronic sensor can operate in an active, directly coupled manner, exhibiting significant advantages in breath-state detection owing to the synchronization between breath frequency and output signals.

In addition to single piezotronic devices, the integration of multiple piezotronic nanodevices into an active matrix/array plays a critical role in realizing functional systems for high-resolution sensing. Wu et al. demonstrated an integrated piezotronic transistor array with vertically aligned ZnO nanowires by combining bottom-up and top-down microfabrication techniques. The piezotronic transistor array can be utilized as a taxel-addressable active matrix for imaging external tactile information even with the gate electrodes removed (Figure 12c).131 The demonstrated piezotronic active matrix enables tactile pressure imaging in a self-powered manner simply by applying external mechanical stimuli. The simplified fabrication and integration process for the active matrix also offers a significant route toward more diversified smart wearable sensors with multifunctionalities.
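The gauge factor quoted above is the slope of the normalized current change versus strain, GF = (ΔI/I0)/ε. It can be extracted from calibration data by a least-squares fit, as sketched below; the (strain, current) pairs are synthetic placeholders chosen to land near the reported GF of 1250, not the measurements of ref 129.

# Minimal sketch: extract the gauge factor GF = (dI/I0)/strain as the
# least-squares slope of normalized current change vs strain.
# The (strain, current) pairs are synthetic placeholders for measured data.

I0 = 1.0e-6  # unstrained current, A (assumed)

# (strain, measured current in A) -- synthetic, roughly GF ~ 1250
data = [(0.0000, 1.000e-6), (0.0002, 1.253e-6),
        (0.0004, 1.498e-6), (0.0006, 1.751e-6)]

xs = [s for s, _ in data]
ys = [(i - I0) / I0 for _, i in data]           # normalized current change
n = len(data)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
gf = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # least-squares slope
print(f"gauge factor ~ {gf:.0f}")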
2.3.2. Tribotronic Mechanism and Typical Wearable Sensors. Similar to piezotronics, tribotronics133−135 is an emerging field utilizing the triboelectric potential, instead of an applied gate voltage, to modulate charge carrier transport in semiconductor devices (bottom panel in Figure 11b). Owing to their material diversity and excellent functionality,77,96,136 tribotronic devices have been widely investigated for a variety of wearable sensing applications, including tribotronic logic gates,137,138 tactile-controlled organic light-emitting diodes (OLEDs),139 memory devices,140 smart tactile sensors,141−143 and wearable/flexible displacement sensors.144,145 Wang's group reported the first tribotronic device by combining a TENG with a metal−oxide−semiconductor FET (MOSFET), also known as a contact electrification FET (CE-FET) (Figure 13a).133 The contact-separation-induced triboelectric potential can fully replace the external gate voltage and reduce the corresponding energy consumption. An equivalent circuit diagram of the tribotronic transistor and its basic output characteristics under different vertical displacements (D) are presented in Figure 13b and 13c, respectively. Different from the traditional electrical behavior of a transistor under a swept gate voltage, the drain current (I_D) of the CE-FET increases with increasing displacement D from 0 to 80 μm. The corresponding working mechanism and energy band diagram are shown in Figure 13d.146 At the initial state of D = D0, no charge transfer occurs between the integrated TENG and the transistor. When D is increased or decreased, the induced positive/negative charges in the TENG are transferred to the gate and couple a positive/negative potential (σ+/σ−) to the transistor, resulting in carrier accumulation/depletion and energy band bending in the semiconductor channel. In analogy with the important parameters of traditional transistors, e.g., the threshold voltage (V_th) and subthreshold swing (SS), the tribotronic transistor can also be evaluated by analogous parameters, e.g., the tribotronic threshold value (D_t) and the tribotronic subthreshold swing (SS_t). D_t indicates the minimum TENG displacement required to establish a conductive path between the source and drain electrodes. SS_t, defined as SS_t = ∂(D)/∂(log10(I_D)), describes the minimum change of the TENG displacement (ΔD) required to produce one order of magnitude variation in I_D.

Tribotronic devices are potentially applicable to flexible and wearable touch sensors, human−machine interfaces, and artificial robotics. For instance, a tribotronic MoS2 transistor has been developed by integrating a MoS2 FET with a single-electrode mode TENG and applied as a smart tactile switch (Figure 14a).141 The triboelectric potential induced by the TENG displacement is available as the gate voltage to modulate charge carrier transport in the MoS2 channel, achieving an on/off ratio of ∼16 to realize a direct tactile switch that uses fingers to light up two LEDs and indicate the tactile information.
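Both tribotronic figures of merit defined above can be read off a measured I_D-versus-D transfer curve: SS_t is the displacement per decade of current change in the subthreshold region, and D_t marks channel turn-on. The sketch below computes them from synthetic data; the transfer curve and the on-current criterion are illustrative assumptions.

import math

# Minimal sketch: estimate the tribotronic subthreshold swing
# SS_t = dD / d(log10 I_D) and the tribotronic threshold D_t from a
# displacement-vs-drain-current transfer curve. Data are synthetic.

I_ON = 1e-7  # current level defining "on" for D_t (assumed criterion), A

# (TENG displacement in um, drain current in A) -- synthetic subthreshold data
curve = [(0, 1e-11), (10, 1e-10), (20, 1e-9), (30, 1e-8), (40, 1e-7), (50, 1e-6)]

# SS_t from consecutive points in the subthreshold region:
swings = [(d2 - d1) / (math.log10(i2) - math.log10(i1))
          for (d1, i1), (d2, i2) in zip(curve, curve[1:])]
print(f"SS_t ~ {sum(swings) / len(swings):.1f} um/decade")

# D_t: first displacement where I_D reaches the on criterion.
d_t = next(d for d, i in curve if i >= I_ON)
print(f"D_t ~ {d_t} um (at I_D >= {I_ON:.0e} A)")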
Notably, Wei et al. demonstrated a high-performance tribotronic transistor array with record-high current on/off ratios (>10^8) by combining an integrated TENG with a large-area organic transistor array,149 which further enables effective wearable interactive intelligent systems, artificial robotic skins, and wearable mechano-driven electronic terminals.

Relying on ion-gel gating, a tribotronic graphene tactile sensor has been prepared on a flexible substrate based on a single-electrode mode TENG and a coplanar-gate graphene transistor (Figure 14b).142 The tribotronic transistor shows excellent performance in wearable tactile sensing and spatial mapping, including high sensitivity (2% kPa−1), a superior detection limit (1 kPa), a fast response time (30 ms), and excellent stability. In order to extend the functionality to material recognition and approach sensing, a mechanosensation-active matrix has been prepared based on a direct-contact tribotronic transistor array (Figure 14c).144 A typical ion gel is utilized as both the dielectric layer of the graphene transistor and the friction layer of the TENG component to realize a direct-contact sensing process with high sensitivity (0.16 mm−1), a fast response time (15 ms), and excellent durability (>1000 cycles). When the tribotronic device contacts different materials, different output sensing performance can be characterized according to the triboelectric series. Accordingly, the direct-contact tribotronic graphene transistor enables wearable sensing of the contact distance and identification of different friction materials, exhibiting the following advantages: (i) realizing noninvasive sensing based on triboelectrification and charge transfer, (ii) greatly simplifying the fabrication process of the tribotronic transistor, and (iii) effectively reducing the power consumption through electric double-layer gating (i.e., triboiontronics). The demonstrated simplification and low power consumption of triboiontronic transistors promise more opportunities for flexible multifunctional electronics, intelligent interactive sensing systems, and diversified neuromorphic applications.

2.3.3. Advanced Artificial Synapse Applications. The integration of different types of sensors with synaptic devices in a synergistic fashion has pushed forward the development of interactive neuromorphic devices, which can not only sense/store/process external stimulus information directly but also implement biomimetic functions (e.g., perception, learning, memory, and even computation).152 The cooperation of receptors/neurons/synapses in the somatosensory system allows effective recognition/processing of complex external tactile information.153 As shown in Figure 15a, tactile stimulus signals are physiologically detected by mechanoreceptors in the skin and transmitted along axons to postsynaptic neurons for further recognition/processing of the tactile information.154,155 Generally, human skin is covered with different types of mechanoreceptors to record specific types of tactile stimuli and implement pressure/touch/tactile recognition. For instance, pressure receptors are commonly fast-adapting receptors made of Pacinian corpuscles to perceive pressure, while touch receptors are usually slowly adapting receptors made of Meissner corpuscles/Ruffini endings/Merkel discs to perceive touch information.156
Among the reported types of mechanosensors, resistive/capacitive mechanoreceptors can capture continuous static forces, while piezoelectric/triboelectric mechanoreceptors can capture instantaneous dynamic pressures.157 Accordingly, to construct an interactive neuromorphic system, an artificial afferent nerve has been proposed to simulate the function of the human sensory system by integrating a TENG mechanoreceptor and an electrolyte-gated neuromorphic transistor (Figure 15b). Based on the working mechanism of the triboelectric−neuromorphic tactile system, the artificial afferent nerve can be activated by the induced triboelectric potential, utilized to monitor different types of stimulus information (e.g., mechanical displacement, tactile signals, lateral-sliding motion, and pressure), and employed to identify the frequency/amplitude of external motions for simulating the behavior of a virtual stimulus in the cerebral cortex.132 Notably, the electrolyte-gated transistors reported in this work are generally classified into electrostatically controlled electric double-layer transistors and electrochemical transistors according to whether the ions react with the semiconductor materials.158−162 For instance, a versatile triboiontronic MoS2 transistor based on a proton conductor has been demonstrated, which utilizes triboelectric potential gating to modulate the transistor performance via proton migration/accumulation. It has been demonstrated as a mechanical-behavior-controlled logic device and a neuromorphic sensory system, representing reliable and effective triboelectric potential modulation through protonic dielectrics.163

Besides, another mechanoplastic triboelectric neuromorphic tactile system has been constructed based on a floating-gate FET (the terminology "mechanoplastic" indicates the utilization of mechanical behavior to tune synaptic plasticity or update synaptic weights).130 The applied mechanical displacement can readily induce a triboelectric potential to gate the transistor, trigger the postsynaptic current (PSC) signal, and adjust the synaptic weight so as to realize mechanical-behavior-modulated synaptic plasticity (i.e., mechanoplasticity). In this device, the system can implement both short-term and long-term plasticity according to the charge trapping in the floating gate, realized by mechanical displacement modulation in an active and interactive way. The triboelectric potential derived from TENGs can also be readily integrated with dual-gate transistors to implement multiple sensing applications. By integrating a triboelectric-potential-powered dual-gate IGZO transistor with a common bottom gate and an air−dielectric top gate, a device-level versatile sensory platform has been constructed to implement multifunctional sensing (including pressure/distance/optical sensors and an artificial photonic synapse).164
Furthermore, Sun's group165 has introduced a bioinspired mechano−photonic artificial synapse with synergistic mechanical and optical plasticity (i.e., multimode/mixed-mode synaptic plasticity) (Figure 15c). Based on the integration of a graphene/MoS2 heterostructure-based phototransistor and an integrated TENG component, a mechanical-displacement-tuned photoresponse is fulfilled through the charge transfer/exchange in the heterostructure driven by the triboelectric potential. The reported mechano−photonic artificial synapse provides an efficient route to implementing mixed-mode interactions, simulating more complex biological neural systems, and facilitating the development of interactive artificial intelligence.

At the end of this section, we have selected two typical sensor devices, i.e., a strain sensor and a pressure sensor, and provide a comparative table showing their performance and characteristics based on the various working principles (Table 2).

SELF-POWERED WEARABLE SYSTEMS

Unlike self-powered sensors, self-powered systems consist of functional circuits (sensor units), energy harvesting units, and energy management and storage units, as shown in Figure 16. The system's power consumption is entirely supplied by the energy harvesting units. In this section, we review several of the main harvesting principles, including piezoelectric, triboelectric, thermoelectric, and photovoltaic harvesters, biofuel cells, and hybrid generators. In addition, energy storage units are also discussed. At the end, we give a table summarizing the power levels of these devices.

3.1. Wearable Power Sources. 3.1.1. Piezoelectric Nanogenerator. The first nanogenerator was proposed on the basis of the piezoelectric principle and has since been widely used as a power source for self-powered applications, including environmental monitoring systems166−168 and security systems.169,170 Meanwhile, self-powered wearable systems have been developed.171,172 Figure 17a shows an electronic watch powered, for the first time, by a one-layer ZnO nanowire generator. Specifically, the nanogenerator delivers a voltage of 20 V and a current of 6 μA. After regulation by an LTC3588 power management chip, the device successfully drives a commercial electronic watch for more than 1 min after 1000 mechanical triggers.27 Figure 17b illustrates a high-performance and hyper-stretchable elastic-composite piezoelectric nanogenerator using stretchable silver nanowire (Ag NW) electrodes.173 It delivers an output of 4 V and 500 nA and can convert biomechanical stretching energy into electricity. Figure 17c shows a wearable UV sensor powered by a PENG, where the generator is made up of dense lead zirconate titanate (PZT) parallel nanowires fabricated via electrospinning. Its output voltage and current are about 6 V and 45 nA.174 Besides, You et al. presented a self-powered wearable system by embedding PVDF films into shoes.175

3.1.2. Triboelectric Nanogenerator. Compared to PENGs, TENGs show higher output performance. Therefore, more wearable electronics driven by TENGs have been proposed. For instance, Zhong et al. reported a self-powered wearable temperature monitoring system.176 The TENG was fabricated from commodity cotton threads, a polytetrafluoroethylene aqueous suspension, and carbon nanotubes. It can convert human motion/vibration energy into electricity with an average output power density of 0.1 μW/cm2. It was then demonstrated as an effective power shirt driving a wireless body temperature sensor system (Figure 18a).
Song et al. proposed a TENG using a flexible printed circuit board (FPCB), which achieved a high power output of ∼416 mW/m2.177 It can be used in a battery-free sweat monitoring system with multiplexed biosensors, wirelessly transmitting data through Bluetooth during on-body human trials (Figure 18b). Li reported a symbiotic cardiac pacemaker that is fully driven by the cardiac motion of a large animal via a TENG (Figure 18c). The system consists of a pacemaker, a power management circuit, and a TENG device, which delivers a voltage of up to 65 V and an energy output of 0.495 μJ per cycle, exceeding the roughly 0.377 μJ consumed by a traditional pacemaker. Figure 18d shows a self-powered electric stimulation system for neural differentiation. It was found that neural differentiation is dramatically improved by the electric pulses generated by a TENG triggered by normal human walking (Figure 18d).−184 It is worth noting that PENGs/TENGs produce pulsed outputs, and therefore, power management circuits for voltage regulation and energy storage are required.−189 In brief, a high-impedance circuit element is utilized to accept the as-generated high-voltage electricity, and the energy is then converted into low-voltage electricity by switchable capacitor arrays185 or buck circuits.186

3.1.3. Thermoelectric Generator. Generally, thermoelectric devices (TEDs) realize the conversion of heat into electricity through the Seebeck/Soret effect. When there is a temperature gradient between two electrically connected conductors/semiconductors, the diffusion of charge carriers/ions away from the hot side is induced, which leads to a thermopotential and a consequent direct current (dc) flowing through the external circuit. Commonly, the maximum efficiency of the energy conversion process for power generation at a given temperature is determined by the thermoelectric material's figure of merit ZT, given by ZT = σS²T/κ, where σ is the material's electrical conductivity, κ is its thermal conductivity, and S is the Seebeck coefficient, which changes with temperature T. State-of-the-art thermoelectric materials have ZT values of 2−3, with power densities reaching several tens of microwatts per square centimeter. For instance, Jin et al. reported a flexible thermoelectric material composed of Bi2Te3 nanocrystals in highly ordered crystalline alignment anchored on a carbon nanotube network (Figure 19a). The achieved maximum thermoelectric figure of merit (ZT) was evaluated to be ∼0.89 at room temperature due to the strong phonon scattering effect.191 Exploiting synergistic thermodiffusion and thermogalvanic effects, Han et al. demonstrated a giant positive thermopower (∼17.0 mV/K) in a flexible ionic thermoelectric material (Figure 19b).192 Hong et al. demonstrated a wearable TED with a high coefficient of performance (COP > 1.5), which can deliver more than a 10 °C cooling effect. The reported wearable TED, with its high flexibility, can achieve long-term active cooling thanks to its novel design, which may inspire sophisticated personalized cooling with lower power consumption and improved comfort (Figure 19c).193 Lee et al. reported a compliant TED with stretchable interconnects and flexible conductors (silver-nanowire-based soft interconnects and magnetically self-assembled metal-particle conductors) to realize high thermoelectric performance combined with excellent conformability (Figure 19d).194 Byun et al. prepared a flexible TED constructed on a gallium platform (Figure 19e), which can realize active temperature control to advance the solid−liquid phase transition of gallium, based on its compact design and fast mechanical mode-switching function. The flexible TED system provides new opportunities for personalized electronics, artificial robotics, and intelligent biomedical devices.195
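The figure of merit introduced above, ZT = σS²T/κ, is straightforward to evaluate once the three transport properties are measured at a given temperature, as the sketch below shows; the parameter set is an illustrative, Bi2Te3-like assumption rather than the values of refs 191−195.

# Minimal sketch: thermoelectric figure of merit ZT = sigma * S^2 * T / kappa.
# The material parameters are illustrative, Bi2Te3-like assumptions.

def zt(sigma_S_per_m, seebeck_V_per_K, kappa_W_per_mK, temp_K):
    return sigma_S_per_m * seebeck_V_per_K ** 2 * temp_K / kappa_W_per_mK

value = zt(sigma_S_per_m=1.0e5,        # electrical conductivity
           seebeck_V_per_K=200e-6,     # Seebeck coefficient (200 uV/K)
           kappa_W_per_mK=1.5,         # thermal conductivity
           temp_K=300.0)
print(f"ZT = {value:.2f}")  # ~0.8 at room temperature for these inputs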
3.1.4. Photovoltaic Cell. Solar cells, generally containing active layers, carrier-selective layers, and electrodes, can readily convert photonic energy into electrical energy based on the photovoltaic effect. Generally, the incident light is absorbed by the active layers and induces the generation of electron−hole pairs/excitons, which are separated by the built-in potential and collected by the electrodes to produce an output current. As the thickness of the active layers in a solar cell may range from a few hundred nanometers to a few micrometers, reducing the thickness of the flexible substrate is of significant importance for increasing the device's flexibility/power density and reducing its weight (Figure 20a and 20b).196−199 To date, the power conversion efficiencies (PCEs) of flexible organic solar cells and single-junction flexible perovskite solar cells under AM 1.5G standard conditions have reached 16.61% and 21.73%, respectively.200,201 The PCE of flexible perovskite solar cells can be further increased to 23.33% at 400 lx and 28.63% at 5000 lx under weak-light illumination from a white light-emitting diode (Figure 20c).202 The PCE of a flexible organic solar cell has also been improved to 20.5% under indoor light illumination of 1500 lx (Figure 20d).203

3.1.5. Biofuel Cell. Biofuel cells can readily convert biochemical energy into electricity, generally relying on the redox reactions of various biofluids. Biofuel cells are commonly based on oxidizing biocatalysts, which promote an oxidation reaction of fuels at the bioanode and a reduction reaction to water at the biocathode. The induced electrons flowing through an external electric circuit generate the output current and output power (Figure 21). The output current is determined by the concentration of the biofluids and the efficacy of the electron transfer process between the biocatalyst and the electrode. The power density has been improved to 3.5 mW/cm2 in the first milliwatt-level flexible sweat biofuel cell.33,204−206

3.1.6. Hybrid Cell. As various forms of ambient energy are available as energy sources in the relevant working environments, an effective strategy has been proposed to enhance the power density and sustainability by utilizing hybrid energy harvesters to power mobile/portable electronic devices. The first hybrid energy harvester was reported in 2009, combining a solar cell and a PENG to harvest both photonic and ultrasonic energy207 (Figure 22a). Other hybrid energy harvesters or energy harvesting strategies include integrated TENGs and biofuel cells, integrated TENGs and solar cells, hybridized TENGs/PENGs (Figure 22b and 22c), and hybridized TENGs/TEDs (Figure 22d).30,208,209 The proposed integration of different/multiple energy harvesting devices on the same platform can harvest two or more kinds of energy simultaneously, which is highly desirable to compensate for the intermittency of a single energy source and to significantly enhance the output power.
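The photovoltaic efficiencies quoted above follow from the standard relation PCE = (Voc·Jsc·FF)/Pin, with Pin = 100 mW/cm2 under AM 1.5G illumination. The sketch below evaluates it; the cell parameters are illustrative assumptions rather than those of refs 200−203.

# Minimal sketch: solar cell power conversion efficiency,
# PCE = (V_oc * J_sc * FF) / P_in. Cell parameters are illustrative.

def pce(v_oc_V, j_sc_mA_cm2, fill_factor, p_in_mW_cm2=100.0):
    """PCE in percent; default P_in is AM 1.5G (100 mW/cm^2)."""
    return 100.0 * v_oc_V * j_sc_mA_cm2 * fill_factor / p_in_mW_cm2

# Perovskite-like illustrative values:
print(f"PCE = {pce(v_oc_V=1.15, j_sc_mA_cm2=24.0, fill_factor=0.79):.1f} %")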
3.2. Wearable Energy Storage Units. Since the power generation is discontinuous, or sometimes higher than the power consumption, power storage is required. In this section, we briefly introduce two typical wearable power storage technologies, i.e., supercapacitors and batteries.

3.2.1. Supercapacitors. Basically, a supercapacitor consists of two electrodes with current collectors, an electrolyte, and a separator.210,211 Researchers have devoted great efforts to developing soft materials, high-density structures, as well as facile fabrication processes. Figure 23a shows a coaxial fiber supercapacitor consisting of carbon microfibers coated with carbon nanotubes; its capacitance reached 6.3 mF/cm.212 Figure 23b presents a paper-based supercapacitor fabricated by polypyrrole (PPy) soaking and polymerization processing. It exhibits a capacitance of 0.42 F/cm2 with a high energy density of 1 mWh/cm3 at a power density of 0.27 W/cm3.213 Cui presented a cotton-fabric-based wearable supercapacitor coated with carbon nanotubes, forming an electrically conductive interconnecting network. Aqueous lithium sulfate is employed as the electrolyte. The device shows a high specific capacitance (∼70−80 F/g at 0.1 A/g) and cycling stability (negligible decay after 35 000 cycles).214 Figure 23c shows an example of facile fabrication: Luo et al. utilized direct laser writing to modify the surface of a polyimide film, where the as-generated graphene serves as the electrode of the capacitor; the electrolyte and PDMS were then successively deposited.123 More research on wearable supercapacitors can be found in the literature.

3.2.2. Batteries. Wearable batteries, in particular lithium-ion batteries (LIBs), have been continuously developing.218,219 For instance, a cable-type battery with hollow-spiral, multiple-helix electrodes has been reported,220 showing a softening method for portable and wearable LIBs (Figure 24a). Textile-shaped batteries have also been presented,221 as shown in Figure 24b, where a conductive Ni-coated polyester textile is utilized as the current collector, coated with active slurries of LiFePO4 and Li4Ti5O12, with a Celgard separator in between. After 30 folding cycles, the LIB exhibits 84.5% capacity retention. In recent research, He et al.222 presented a landmark in wearable LIBs (Figure 24c). They produced meters-long fiber LIBs via a scalable process. Additionally, the fiber LIB shows an energy density of 85.69 Wh/kg and maintains over 80% of its capacity after 100 000 bending cycles.−226 We give a summarized table showing the output levels of various energy harvesting devices as well as the energy density of the storage devices in Table 3.
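The areal and volumetric figures reported above are connected by the standard supercapacitor relation E = ½CV². The sketch below converts a capacitance and voltage window into a volumetric energy density; the 1.0 V window and the active volume are illustrative assumptions, chosen here so the result lands near the ∼1 mWh/cm3 figure quoted above.

# Minimal sketch: supercapacitor energy density from E = 0.5 * C * V^2.
# The voltage window and device volume are illustrative assumptions.

def energy_density_mWh_per_cm3(cap_F, v_window_V, volume_cm3):
    energy_J = 0.5 * cap_F * v_window_V ** 2
    return energy_J / 3.6 / volume_cm3  # 1 mWh = 3.6 J

# e.g., a 0.42 F device (areal 0.42 F/cm^2 over 1 cm^2) with a 1.0 V window
# and ~0.06 cm^3 of active volume (assumed):
print(f"{energy_density_mWh_per_cm3(0.42, 1.0, 0.06):.2f} mWh/cm^3")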
OTHER POWER TECHNOLOGIES FOR WEARABLE SYSTEMS

Although researchers have attempted to find techniques to power wearable sensors, further work is required to improve the power output for the increasing monitoring demands. Therefore, some recent novel technologies that deliver power from outside the device are discussed here.−230 In the example of Figure 25a, there are two components: a soft pressure sensor attached to the human skin, while the power components are located on the prosthetic. The two components use NFC coils for power and sensing-data transmission. This approach ensures skin compliance and a sustainable power supply. A similar methodology has been extended to radio frequency (rf) powering,231 as shown in Figure 25b. It aims to monitor wound healing: the suture is engineered with an electrode serving as an rf antenna and is equipped with a variable resistor. When the wound status changes, the resistance changes. Therefore, rf transmission can be used to deliver electricity to the suture, and the resistance change can be read out from the reflected signal. Figure 25c shows an example that employs infrared radiation to transmit energy to a chip implanted in a living creature, which can generate an electric signal for nerve stimulation.232 Figure 25d illustrates a system that uses surface acoustic waves for energy and sensing-data transmission.233 Specifically, the authors present a chipless wireless patch made of free-standing single-crystalline piezoelectric gallium nitride membranes, forming a surface acoustic wave (SAW) device. Wireless strain sensing can be performed by calibrating the resonant peak shifts in the SAW device in response to the strain induced by bending the patch. These examples use wireless energy transmission to solve the power supply problem of wearable electronics, which could be an alternative approach where self-powered technology is not applicable.

CHALLENGES AND PERSPECTIVES

This review highlights the significant advancements made in wearable power generation technology in recent years. The self-powered approach is particularly appealing, yet it presents its own set of challenges. Currently, two primary strategies have been employed: self-powered sensors and self-powered systems. A variety of self-powered devices have been developed; however, the current power output remains somewhat insufficient. As the demand for wearable monitoring continues to grow, the need for increased power generation becomes more pressing. Consequently, innovative technologies must be explored, addressing key research aspects such as mechanisms, materials, devices, and systems in order to overcome specific issues and limitations (Figure 26).

Mechanisms: One of the primary challenges in self-powered wearable sensing systems is efficiently harvesting energy from the environment or the user's movements. Various energy sources are discussed above, including piezoelectric, triboelectric, thermoelectric, and photovoltaic, among others. Developing innovative materials and mechanisms to optimize energy harvesting or sensing approaches will be crucial.
Materials: A key consideration involves the development of soft materials for high-power harvesting devices. For example, PVDF is now widely utilized in PENGs due to its high piezoelectric coefficient and flexibility. Researchers should determine methods to further enhance these properties while maintaining softness and stability. In terms of triboelectric materials, although numerous options exist, materials with high electrification density are in demand, and meanwhile, the fabrication complexity must be taken into account. Additionally, there is a demand for developing high-performance soft thermoelectric and photovoltaic materials. Besides, compliance between the materials and the human body should be considered, including stretchability, softness, and breathability. In our opinion, it is anticipated that more bionic materials will be investigated and exploited.

Devices: Since these devices are based on new materials, standardized and scalable fabrication processes should be developed in order to ensure device consistency and stability in practical applications. Besides, more investigation of device structure design, for high-efficiency coupling with human motions, heat, biochemical energy, etc., is required. As for self-powered sensors, the precision is always affected by factors such as the material's viscoelasticity, fatigue, and environmental variations.222 Techniques are required to diminish these influences, for instance, via encapsulation or circuit compensation. Although displacement-sensitive sensors provide high precision and stability, prototypes with extended functionality are needed to accommodate diverse applications, such as force or pressure sensing.

Systems: Circuitry plays a crucial role in compensating for sensor drift and managing power. Given that the energy pulses generated are typically discontinuous and exhibit varying voltage ranges, power management circuits are vital for adjusting the output, storing energy in conventional storage media, and providing power to the entire system. Moreover, signal processing and transmission circuits are also in demand. Finally, proper integration and packaging methods should be investigated. Furthermore, exploring novel monitoring principles, such as surface acoustic wave (SAW)-based sensing systems that eliminate the need for chips and batteries, may offer solutions for particular applications.

In summary, enhancing the output of various power generation devices, from mechanisms, materials, and design optimization to system integration, remains a constant goal, so as to ensure the sustainable operation of emerging wearable devices. With the advancement of micro/nanoelectronics and power technologies, we anticipate the development of multifunctional wearable devices and systems that will provide more comfortable and reliable health monitoring for individuals. Concurrently, the gap between system consumption and power supply is expected to narrow, ultimately leading to a paradigm shift toward self-powered wearable sensing and systems.

Figure 1. Emergence of the self-powered concept. (a) ZnO nanowires and (b) electricity generation by using an AFM tip to bend a ZnO nanowire. Reproduced with permission from ref 24. Copyright 2006 AAAS. (c) Sketch of the self-powered nanosystem. Reproduced with permission from ref 25. Copyright 2010 Elsevier Ltd. (d) Self-powered nanosystem with wireless data transmission. Reproduced with permission from ref 26. Copyright 2012 Wiley-VCH.
Figure 2. Rapid development of self-powered nanosystems and their applications in wearable electronics. Piezoelectric nanogenerator in 2006. Reproduced with permission from ref 24. Copyright 2006 AAAS. Self-powered nanosystems in 2008 and 2011. Reproduced with permission from ref 27. Copyright 2011 WILEY-VCH. Triboelectric nanogenerator in 2012. Reproduced with permission from ref 28. Copyright 2012 Elsevier Ltd. Self-charging power unit in 2013. Reproduced with permission from ref 29. Copyright 2013 American Chemical Society. Hybrid power suit in 2016. Reproduced with permission from ref 30. Copyright 2016 Springer Nature. Self-powered sensors in 2018. Reproduced with permission from ref 31. Copyright 2018 AAAS. Self-powered skin electronics powered by sweat in 2020. Reproduced with permission from ref 33. Copyright 2020 AAAS. Symbiotic cardiac pacemaker. Reproduced with permission from ref 32. Copyright 2019 The Authors.

Figure 3. Self-powered sensors and self-powered systems in wearable electronics: self-powered sensors employ energy harvesters as sensing units. (i) Piezoelectric sensor. Reproduced with permission from ref 66. Copyright 2019 Springer Nature. (ii) Triboelectric sensor. Reproduced with permission from ref 31. Copyright 2018 AAAS. (iii) Triboelectric sensor. Reproduced with permission from ref 118. Copyright 2021 The Authors. (iv) Piezotronic sensor. Reproduced with permission from ref 130. Copyright 2019 American Chemical Society. Self-powered systems employ energy harvesters as power sources. (i) Triboelectric powered. Reproduced with permission from ref 76. Copyright 2015 The Authors. (ii) Thermoelectric powered. Reproduced with permission from ref 192. Copyright 2020 AAAS. (iii) Photovoltaic powered. Reproduced with permission from ref 234. Copyright 2018 Springer Nature. (iv) Biofuel powered. Reproduced with permission from ref 33. Copyright 2020 AAAS.

Figure 4. Sketch of the piezoelectric mechanism. (a) Wurtzite-structured ZnO. (b) Piezopotential in tension and compression. (c) Numerical calculation of the piezoelectric potential distribution in a ZnO nanowire under axial strain. Reproduced with permission from ref 43. Copyright 2009 AIP Publishing LLC. (d) Potential of a piezoelectric nanowire under bending. Reproduced with permission from ref 48. Copyright 2008 Springer Nature.

Figure 6. Piezoelectric sensors for wearable applications. (a) Finger motion sensor. Reproduced with permission from ref 62. Copyright 2009 American Chemical Society. (b) Sonic wave sensor. Reproduced with permission from ref 63. Copyright 2015 MDPI. (c) Vibration sensor. Reproduced with permission from ref 64. Copyright 2019 American Chemical Society. (d) Plantar pressure sensors. Reproduced with permission from ref 65. Copyright 2022 WILEY-VCH. (e) Three-dimensional touch sensor. Reproduced with permission from ref 66. Copyright 2019 Springer Nature.
Figure 7. Mechanism of triboelectric nanogenerators. (a) First triboelectric nanogenerator. Reproduced with permission from ref 28. Copyright 2012 Elsevier Ltd. (b) Theory of triboelectric devices. Reproduced with permission from ref 82. Copyright 2016 Elsevier Ltd. (c) Mechanism of tribocharge transfer between the interface pair. Reproduced with permission from ref 88. Copyright 2018 WILEY-VCH. (d) Electron and ion transfer both exist in solid−water contact. Reproduced with permission from ref 91. Copyright 2020 The Authors. (e) Charge transfer during liquid−liquid contact. Reproduced with permission from ref 94. Copyright 2022 WILEY-VCH. (f) Four working modes of TENG devices. Reproduced with permission from ref 97. Copyright 2014 Elsevier Ltd.

Figure 9. Force-sensitive triboelectric sensors. (a) Schematic of the general sensing mechanism. (b) Sphygmic monitoring. Reproduced with permission from ref 114. Copyright 2015 WILEY-VCH. (c) Sonic sensing. Reproduced with permission from ref 31. Copyright 2018 AAAS. (d) Human motion detection. Reproduced with permission from ref 110. Copyright 2020 The Authors. (e) Gesture monitoring. Reproduced with permission from ref 111. Copyright 2020 The Authors.

Figure 10. Displacement-sensitive sensors. (a) Schematic of the general sensing mechanism. (b) Angle sensor. Reproduced with permission from ref 115. Copyright 2020 WILEY-VCH. (c) Medical exoskeleton. Reproduced with permission from ref 116. Copyright 2020 The Authors. (d) Bidirectional rotation sensor and exoskeleton. Reproduced with permission from ref 117. Copyright 2021 The Authors. (e) Stretch sensor. Reproduced with permission from ref 118. Copyright 2021 The Authors.

Figure 11. Piezotronic effect and piezo/triboelectric potential modulation. Schematic energy diagrams of the piezotronic effect at the metal−semiconductor contact (a) and p−n junction (b) under tensile and compressive strains. Reproduced with permission from ref 126. Copyright 2010 Springer Nature. (c) Piezoelectric and triboelectric potential modulation in a FET.

Figure 12. Piezotronic wearable sensors. (a) Piezotronic strain sensors. Reproduced with permission from ref 124. Copyright 2010 WILEY-VCH. (b) Self-powered piezoelectric sensors based on a multilayer α-In2Se3 flake for real-time monitoring of breath signals. Reproduced with permission from ref 130. Copyright 2019 American Chemical Society. (c) Array integration of vertical ZnO nanowire piezotronic transistors for tactile imaging. Reproduced with permission from ref 131. Copyright 2013 AAAS.

Figure 13. Tribotronic transistor and working mechanism. (a) Schematic illustration of a typical tribotronic transistor. Reproduced with permission from ref 133. Copyright 2014 American Chemical Society. (b) Equivalent circuit diagram of the tribotronic transistor. Reproduced with permission from ref 147. Copyright 2017 American Chemical Society. (c) Output characteristics of a tribotronic transistor at different TENG displacements. Reproduced with permission from ref 133. Copyright 2014 American Chemical Society. (d) Working principle (top) and energy band (bottom) diagrams of the tribotronic transistor in three modes: accumulation mode, flat-band mode, and depletion mode. Reproduced with permission from ref 148. Copyright 2019 WILEY-VCH.
Figure 14. Tribotronic transistor for a smart touch sensor. (a) Tribotronic MoS2 transistor for a tactile switch. Reproduced with permission from ref 150. Copyright 2016 WILEY-VCH. (b) Tribotronic graphene transistor for touch screen applications. Reproduced with permission from ref 151. Copyright 2016 WILEY-VCH. (c) Mechanosensation-active matrix based on a tribotronic coplanar graphene transistor array. Reproduced with permission from ref 144. Copyright 2018 American Chemical Society.

Figure 16. Schematic of self-powered sensing systems consisting of energy harvesting units, energy management units, energy storage units, and functional circuits (sensing units). Reproduced with permission from ref 167. Copyright 2022 The Royal Society of Chemistry.

Figure 17. Piezoelectric nanogenerators as power sources. (a) Driving an electronic watch. Reproduced with permission from ref 27. Copyright 2011 WILEY-VCH. (b) Stretchable energy harvesting device. Reproduced with permission from ref 173. Copyright 2015 WILEY-VCH. (c) Driving a UV sensor. Reproduced with permission from ref 174. Copyright 2012 American Chemical Society.

Figure 18. Self-powered systems employing TENGs as power sources. (a) Wireless temperature sensing system. Reproduced with permission from ref 176. Copyright 2014 American Chemical Society. (b) Wireless sweat sensing system. Reproduced with permission from ref 177. Copyright 2020 The Authors. (c) Symbiotic cardiac pacemaker. Reproduced with permission from ref 129. Copyright 2019 The Authors. (d) Electric stimulation system. Reproduced with permission from ref 190. Copyright 2016 American Chemical Society.

Figure 19. Thermoelectric energy sources. (a) Ionic thermoelectric material using synergistic thermodiffusion and thermogalvanic effects. Reproduced with permission from ref 192. Copyright AAAS. (b) Illustration of the fabrication and structure of a free-standing, highly ordered Bi2Te3−SWCNT hybrid thermoelectric material. Reproduced with permission from ref 191. Copyright 2018 Springer Nature. (c) Schematic illustration of cooling garments with wearable TEDs. Internal structure of the wearable TED with TE pillars connected by flexible copper electrodes and sandwiched between two stretchable sheets (right). Reproduced with permission from ref 193. Copyright AAAS. (d) Photographs of the compliant TEGs showing excellent conformability under various deformations. Scale bars 1 cm. Reproduced with permission from ref 194. Copyright Springer Nature. (e) Schematic illustration of the key design concept of the TES for rapid bidirectional conversion between a rigid handheld electronic device and a soft wearable sensor. Reproduced with permission from ref 195. Copyright 2021 WILEY-VCH.
Figure 20. Photovoltaic cell. (a) Schematic of the ultralight and flexible organic solar cell. Layer thicknesses are drawn to scale. Extreme bending flexibility demonstrated by wrapping a solar cell around a human hair of 35 μm radius. Scale bar 2 mm. Reproduced with permission from ref 199. Copyright 2012 Springer Nature. (b) Snapshot of the model plane during solar-powered outdoor flight. Scale bar 10 cm. Close-up photograph of the horizontal stabilizer with an integrated solar panel. Scale bar 2 cm. Photograph of the washing process for the devices conforming to a dress shirt. Scale bar 1 cm. Photograph of the dipping process; OPVs are submerged in deionized water. Scale bar 1 cm. Reproduced with permission from ref 196. Copyright Springer Nature. (c) J−V curves of the flat and bent devices (r = 10 mm) measured under illumination with a white LED (1000 lx). Reproduced with permission from ref 202. Copyright 2020 American Chemical Society. (d) Schematic diagram of surface-textured PDMS substrate fabrication and the corresponding device structure. PCE/PCE enhancement of devices based on glass/ITO and 6k-PEDOT:PSS under AM 1.5G and 1000 lx of LED 2700 K. Reproduced with permission from ref 203. Copyright 2021 WILEY-VCH.

Figure 21. Battery-free, skin-interfaced microfluidic/electronic systems for simultaneous electrochemical, colorimetric, and volumetric analysis of sweat. Reproduced with permission from ref 205. Copyright 2019 AAAS.

Figure 22. Hybrid energy harvesting for flexible wearable sensing. (a) Design and structure of a hybrid cell (HC) composed of a serially integrated solar cell (SC) and nanogenerator (NG) for raising the output voltage. Reproduced with permission from ref 207. Copyright 2009 American Chemical Society. (b) Schematic illustration of the hybrid power textile, which is a mixture of two textile-based all-solid energy harvesters: a fabric TENG and a photovoltaic textile. Reproduced with permission from ref 30. Copyright 2016 Springer Nature. (c) Schematic illustration of the H−P/TENGs mounted in the custom frame. (Inset) Enlarged structure of a single H−P/TENG. Reproduced with permission from ref 208. Copyright 2018 Elsevier Ltd. (d) Schematic representation of the working principle of a hybrid thermotriboelectric generator (HThTG). Reproduced with permission from ref 209. Copyright 2019 American Chemical Society.

Figure 25. Other power technologies for wearable systems: (a) NFC-powered prosthetic pressure sensing, (b) rf-powered wound-monitoring suture, (c) infrared-powered implanted stimulation chip, and (d) SAW-based chipless wireless patch.

Figure 26. Perspectives on the future development of self-powered wearables. Reproduced with permission from ref 235. Copyright 2018 The Authors.

Table 1. Comparisons of Two Different TENG Sensing Mechanisms.

Table 2. Typical Self-Powered Sensors' Performance and Characteristics.

Table 3. Output Performance of Energy Harvesting Devices and Storage Devices Commonly Used in Wearables.227
device | energy source | mechanism | output characteristics
PENG | mechanical energy (e.g., ZnO nanowire devices) | piezoelectric effect | μW to a few mW cm−2 (peak), μW to a few mW g−1 (peak), up to 100 V
TENG | mechanical energy: vibration, ocean wave, body motion | triboelectrification and electrostatic induction | ac, μW to mW cm−2 (peak), μW to mW g−1 (peak), up to kV (peak)
TEG | heat: body, instrument, facility, the sun | Seebeck effect | dc, ∼μW cm−2, ∼mV K−1
photovoltaic cell | light | photovoltaic effect | dc, a few tens of mW cm−2, a few tens of W g−1 (at a light intensity of 100 mW cm−2), ∼1 V
biofuel cell | electrochemical energy: body fluid, sweat, blood | electrochemical reaction | dc, a few mW cm−2, ∼1 V
2023-10-25T06:17:32.720Z
2023-10-23T00:00:00.000
{ "year": 2023, "sha1": "ab633ca4673aeef22866b4657dd694c9f4ed0a2c", "oa_license": "CCBY", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "18d922adf832e6e2e902f28bfc1dc3a9fe67ffa0", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258689149
pes2o/s2orc
v3-fos-license
Progressive resolution optimizer (PRO) predominates over photon optimizer (PO) in sparing of spinal cord for spine SABR VMAT plans Background We assessed the performance of the optimization algorithms by comparing volumetric modulated arc therapy plans generated by a progressive resolution optimizer (VMATPRO) and by a photon optimizer (VMATPO) in terms of plan quality, MU reduction, sparing of the spinal cord (or cauda equina), and plan complexity. Methods Fifty-seven patients who received spine stereotactic ablative radiotherapy (SABR) with tumors located in the cervical, thoracic, and lumbar spine were retrospectively selected. For each patient, VMATPRO and VMATPO with two full arcs were generated using the PRO and PO algorithms, respectively. For dosimetric evaluation, the dose-volumetric (DV) parameters of the planning target volume (PTV), organs at risk (OARs), the corresponding planning organs at risk (PRV), and a 1.5-cm ring structure surrounding the PTV (Ring1.5cm) were calculated for all VMAT plans. The total number of monitor units (MUs) and the modulation complexity score for VMAT (MCSv) were compared. To investigate the correlations of OAR sparing with plan complexity, Pearson's and Spearman's correlation tests were conducted on the differences between the two algorithms (PO − PRO, denoted as Δ) in the DV parameters for normal tissues, total MUs, and MCSv. Results For the PTVs, target conformity and dose homogeneity of VMATPRO were better than those of VMATPO, with statistical significance. For the spinal cords (or cauda equina) and the corresponding PRVs, all of the DV parameters for VMATPRO were markedly lower than those for VMATPO, with statistical significance (all p < 0.0001). Among them, the difference in the maximum dose to the spinal cord between VMATPRO and VMATPO was remarkable (9.04 Gy vs. 11.08 Gy with p < 0.0001). For Ring1.5cm, no significant difference in V115% between VMATPRO and VMATPO was observed. Conclusions The use of VMATPRO resulted in improved coverage and uniformity of dose to the PTV, as well as better OAR sparing, compared with VMATPO for cervical, thoracic, and lumbar spine SABR. The better dosimetric plan quality generated by the PRO algorithm was observed to come at the cost of higher total MUs and plan complexity. Therefore, careful evaluation of its deliverability should be performed during routine use of the PRO algorithm. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-023-10925-z. Sangjun Son and So-Yeon Park Background Bone metastases occur in approximately one-third of all patients with advanced malignant cancers, of which 70% originate within the spine [1-4]. Radiotherapy has been the standard treatment for decades for patients with spinal metastasis not requiring, or not amenable to, surgery [5]. With the rapid development of technology and equipment, stereotactic ablative radiotherapy (SABR) can deliver a high dose in a few fractions (one to five) with a steep dose fall-off, providing a high biologically equivalent dose to the target volume while sparing normal organs adjacent to the target volume. Several studies have shown that SABR for spinal metastasis is more effective for local tumor control and pain relief than traditional radiotherapy [6-10].
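As an aside, the phrase "high biologically equivalent dose" above can be made concrete with the standard linear-quadratic BED formula; the numbers below are illustrative assumptions (an 18 Gy single fraction, as used later in this paper, and a generic tumor alpha/beta of 10 Gy), not values reported by the authors:

\[
  \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
\]

For a single fraction of d = 18 Gy with alpha/beta = 10 Gy, BED = 18 x (1 + 18/10) = 50.4 Gy, whereas a conventional 30 Gy in 10 fractions gives BED = 30 x (1 + 3/10) = 39.0 Gy, which illustrates why a hypofractionated SABR course delivers a substantially higher biologically effective dose to the target.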
Conversely, other studies have reported radiation myelopathy, the most morbid complication associated with spine SABR [11,12]. The risk of radiation myelopathy is low, but it can have a severe negative impact on quality of life and prognosis. Symptoms can include difficulty walking, numbness, limb weakness, loss of bladder and bowel control, and death [11,12]. Therefore, to prevent radiation myelopathy, it is important to reduce the dose to the spinal cord and cauda equina as much as possible. Thus, an extremely rapid dose fall-off between the spine and spinal cord should be achieved, because the spinal cord is surrounded by irregular vertebral bodies and the target volumes for spinal metastasis are irregularly shaped. In this regard, volumetric modulated arc therapy (VMAT), with varying gantry speeds, dose rates, and multi-leaf collimator (MLC) speeds, is a suitable treatment option for spine SABR. These modulations of VMAT can generate steep dose gradients between target volumes and organs at risk (OARs) and provide highly conformal target coverage within a shorter treatment time, compared with the intensity-modulated radiotherapy (IMRT) technique [13-15]. For the generation of VMAT plans, an inverse optimization process that determines the combination of field shapes and segment weights has been used, based on dose-volume histogram (DVH) information. However, this method leaves little room for user intervention during the optimization process. Therefore, the dosimetric quality of VMAT plans is highly dependent on the performance of the optimization algorithms in the treatment planning system (TPS). Recently, Varian Eclipse TPS (Varian Medical Systems, Palo Alto, CA, version 13.5) introduced a new optimization algorithm called the photon optimizer (PO). The PO algorithm can be used for both IMRT and VMAT plans, whereas the dose-volume optimizer (DVO) and progressive resolution optimizer (PRO) from the previous version of Eclipse were used for IMRT and VMAT plan generation, respectively. The main difference of the PO algorithm from the PRO algorithm is that the PO algorithm uses a new structure model, in which the structure, DVH calculations, and dose sampling are defined spatially by a single matrix over the image, instead of the point-cloud model used in the PRO algorithm [16-18]. User-specified fixed values (1.25 mm, 2.5 mm, or 5 mm) are used for the voxel resolution of the matrix [16-18]. For fast dose estimation during optimization, both the PRO and PO algorithms utilize a multi-resolution dose calculation algorithm that goes through four multi-resolution levels, and both include an intermediate dose calculation option to acquire better dosimetric plan quality [16,18]. Several studies have analyzed the dosimetric impact, treatment efficiency, and plan complexity of plans generated by the PRO and PO algorithms for various sites. Liu et al. demonstrated that the PO algorithm showed comparable plan quality and less plan complexity with fewer total monitor units (MUs) for VMAT planning of both lung SABR and brain stereotactic radiosurgery (SRS) [17]. Other institutions have also shown the superiority of the PO algorithm over the PRO algorithm in terms of treatment efficiency (MU reduction) without compromising VMAT plan quality for lung SABR [18,19]. However, some studies have reported contradictory results for the PO algorithm. Binny et al. investigated the plan quality of intensity-modulated arc therapy for the prostate, head and neck, and brain treatment sites [16]. They observed that plans optimized using the PO algorithm had higher MLC complexity and higher total MUs, while improving OAR sparing with a similar degree of dose conformity to the target volume, compared with those optimized using the PRO algorithm [16]. Kim et al. reported conflicting results for the IMRT and VMAT planning techniques [20]. Although prostate IMRT and VMAT plans generated using the PO algorithm showed an improvement in plan quality for the target volume over the DVO and PRO algorithms, total MU reductions for the PO algorithm were observed only in the IMRT plans, whereas more total MUs were used by the PO algorithm in the VMAT plans [20]. Therefore, the superiority of the PO algorithm is not obvious and varies with the radiotherapy regimen used and the treatment site. To the best of our knowledge, no planning study of VMAT for spine SABR generated using the PRO and PO algorithms has been performed. In this study, we assessed the performance of the optimization algorithms by comparing PRO-generated VMAT plans (VMATPRO) with PO-generated VMAT plans (VMATPO) in terms of plan quality, MU reduction, sparing of the spinal cord (or cauda equina), and plan complexity. We included 57 patients who received spine SABR with tumors located in the cervical, thoracic, and lumbar spine. Patient selection, simulation, and contouring From January 2016 to September 2020, 57 patients with spinal metastasis who had a single target volume were retrospectively selected at our institution. Twenty-eight patients had cervical or thoracic spinal metastases and 29 patients had lumbar spinal metastasis. All patients were previously treated with SABR using the VMAT technique. Approval for this study was obtained from the Institutional Review Board (IRB No. 2020-11-008). All patients underwent computed tomography (CT) scans with various immobilization techniques at the treatment sites using the Brilliance CT Big Bore™ (Philips, Amsterdam, Netherlands). CT images were acquired with 512 × 512 pixels at a 1-mm slice thickness. The target volume of this study was the planning target volume (PTV). The clinical target volume (CTV) and OARs were defined by a single oncologist based on T1- and T2-weighted MR images. The OAR was selectively determined as the spinal cord or cauda equina, according to the tumor location. Normal organs other than the spinal cord and cauda equina were not analyzed as OARs in this study. The PTV and planning organ-at-risk volume (PRV) were generated by adding an isotropic margin of 1 mm to the CTV and OAR, respectively. For dosimetric evaluation and plan optimization, a 1.5-cm ring structure surrounding the PTV (Ring1.5cm) was created. The PRV overlap inside the PTV was excluded from the PTV to spare more normal tissue, including the spinal cord and cauda equina. Treatment planning Every VMAT plan was generated using 10 MV flattening filter-free photon beams from TrueBeam STx with a high-definition 120™ MLC (Varian Medical Systems, Palo Alto, CA, USA). Each VMAT plan consisted of two full arcs with collimator angles of 350° and 273°. All VMATPO were optimized with the PO algorithm of the Eclipse TPS (version 13.7, Varian Medical Systems, Palo Alto, CA, USA) using a fixed 2.5-mm voxel resolution. Additionally, the jaw-tracking option was employed to minimize the leakage dose to normal tissues.
The prescription dose of the PTV was 18 Gy in a single fraction for spine SABR. During optimization, the planning constraints of the Radiation Therapy Oncology Group (RTOG) 0631 study were followed to spare normal organs and avoid complications. Table 1 lists the planning constraints of the target volume and OARs for spine SABR. Conservatively, these constraints were also applied to the corresponding PRVs. Automatic normal tissue optimization (NTO) with a priority of 300 was used. To improve the dosimetric plan quality, all VMATPO were reoptimized using the current dose distribution as a reference. Dose distributions were calculated using the Acuros XB advanced dose calculation algorithm (version 13.7, Varian Medical Systems, Palo Alto, CA, USA) with a calculation grid of 2 mm. Each plan was normalized such that at least 80% of the PTV received the prescribed dose. For comparison, all VMATPRO were optimized with the PRO algorithm of Eclipse TPS (version 13.7) using identical beam geometry and planning protocols. To isolate the variation due to the optimization algorithms, the same planning constraints, objectives, automatic NTO, and priorities for the target volume and normal tissues were used for both VMATPRO and VMATPO. Evaluation of treatment plan The dose-volumetric (DV) parameters calculated from each plan were analyzed to evaluate the dosimetric quality with respect to target coverage and the dose received by normal organs. For the PTV, the evaluated DV parameters were the maximum dose, minimum dose, mean dose, the dose received by at least 98% of the target volume (D98%), D90%, D5%, and D2%. The conformity index suggested by Paddick et al. (CI_Paddick) and the homogeneity index (HI) were calculated [21-23]; CI_Paddick is defined as

\[
  \mathrm{CI}_{\mathrm{Paddick}} = \frac{\left(TV_{\mathrm{prescription\ dose}}\right)^{2}}{TV \times V_{\mathrm{prescription\ dose}}},
  \qquad
  \mathrm{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}},
\]

where TV_prescription dose is the target volume covered by the prescription dose, TV is the target volume, V_prescription dose is the volume of the prescription dose, and HI is given in its standard ICRU 83 form with D50% as the median target dose (a toy numeric sketch of these indices is given at the end of this section). For the spinal cord and the corresponding PRVs, the evaluated DV parameters were the maximum dose, mean dose, D1.2cc, D0.35cc, and D0.035cc. For the cauda equina and the corresponding PRVs, the maximum dose, mean dose, D1.5cc, D0.5cc, D0.1cc, and D0.035cc were evaluated as DV parameters. For the Ring1.5cm, the absolute volumes of the ring structure receiving at least 105% of the prescription dose (V105%), V110%, and V115% were calculated from each type of plan. To assess treatment efficiency and deliverability, the total number of MUs and the modulation complexity score for VMAT (MCSv) were compared. The MCSv proposed by Masi et al. evaluates the complexity of the MLC movement and beam-aperture shape of VMAT plans [24]. The value of MCSv decreases as the modulation complexity increases. This metric was calculated for each plan using in-house software (MATLAB R2021a, MathWorks, Natick, MA, USA). Based on the Shapiro-Wilk test for the normality of the two corresponding datasets, a paired t-test or Wilcoxon signed-rank test was used for pairwise comparisons of the DV parameters, total MUs, and MCSv between the PO and PRO algorithms. To investigate the correlations of OAR sparing with the level of modulation, we utilized the differences between the two algorithms (PO − PRO, denoted as Δ) in the DV parameters for normal tissues, total MUs, and MCSv.
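As a minimal sketch of how CI_Paddick and an HI of this kind can be computed from a voxelized dose grid, consider the following; the synthetic 3-D arrays, voxel geometry, and the ICRU 83 form of HI used here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def paddick_ci(dose, target_mask, rx_dose):
    """Paddick conformity index: CI = TV_PD^2 / (TV * V_PD).

    dose        -- 3-D dose grid in Gy
    target_mask -- boolean 3-D mask of the PTV (same shape as dose)
    rx_dose     -- prescription dose in Gy
    Volumes are counted in voxels; the common voxel size cancels out.
    """
    pd_mask = dose >= rx_dose              # prescription isodose volume V_PD
    tv = target_mask.sum()                 # target volume TV
    v_pd = pd_mask.sum()
    tv_pd = (target_mask & pd_mask).sum()  # target covered by the Rx dose
    return tv_pd**2 / (tv * v_pd)

def homogeneity_index(dose, target_mask):
    """HI in the ICRU 83 form (D2% - D98%) / D50% -- an assumption here;
    lower values mean a more homogeneous target dose."""
    d = np.sort(dose[target_mask])[::-1]   # target doses, hottest first
    def d_at(pct):                         # dose to the hottest pct% volume
        return d[min(int(len(d) * pct / 100.0), len(d) - 1)]
    return (d_at(2) - d_at(98)) / d_at(50)

# Toy example: a spherical "PTV" inside a synthetic dose grid.
z, y, x = np.mgrid[-30:30, -30:30, -30:30]
r = np.sqrt(x**2 + y**2 + z**2)
dose = (19.0 - 0.05 * r) * np.exp(-np.clip(r - 10, 0, None) / 6.0)
ptv = r <= 10
print(f"CI_Paddick = {paddick_ci(dose, ptv, 18.0):.3f}")
print(f"HI         = {homogeneity_index(dose, ptv):.3f}")
```

In this toy geometry the dose falls off exponentially outside the sphere, so the prescription isodose volume only slightly exceeds the PTV and CI_Paddick comes out close to, but below, 1.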
With these Δ values, correlation coefficients and corresponding p-values were obtained by conducting Pearson's and Spearman's correlation tests for parametric and non-parametric data, respectively; differences were considered statistically significant at p < 0.05. All analyses were performed using the PRISM statistical program (version 8.4.3, GraphPad Software Inc., San Diego, CA, USA). Dose-volumetric (DV) parameters Table 2 summarizes the average DV parameters of both VMATPRO and VMATPO for the cervical and thoracic spine cases. For the PTVs, the differences in all the DV parameters analyzed in this study between VMATPRO and VMATPO were statistically significant (p < 0.05), except for D98% and the minimum dose (p = 0.167 and 0.141, respectively). The values of D5%, D2%, maximum dose, and mean dose were lower for VMATPRO than for VMATPO, while the values of D90% were slightly higher for VMATPRO than for VMATPO. Target conformity and dose homogeneity in the PTVs of VMATPRO were better than those of VMATPO, with statistical significance (0.90 vs. 0.82 with p < 0.0001 for CI_Paddick and 0.32 vs. 0.35 with p < 0.001 for HI). The overall quality of the DV parameters of the PTVs was superior in VMATPRO to that in VMATPO. For the spinal cords and corresponding PRVs, all of the DV parameters for VMATPRO were markedly lower than those for VMATPO, with statistical significance (all p < 0.0001). Among them, the difference in the maximum dose to the spinal cord between VMATPRO and VMATPO was remarkable (9.04 Gy vs. 11.08 Gy with p < 0.0001). For Ring1.5cm, no significant difference in V115% between VMATPRO and VMATPO was observed. The values of V105% and V110% for VMATPRO were much smaller than those for VMATPO, with the differences being statistically significant (0.44 cm3 vs. 2.66 cm3 with p < 0.001 for V105%, and 0.02 cm3 vs. 0.63 cm3 with p = 0.039 for V110%). Table 3 summarizes the average DV parameters of both VMATPRO and VMATPO for the lumbar spine cases. For the PTVs, all of the DV parameters showed significant differences between VMATPRO and VMATPO, except for the minimum dose (p = 0.207). Similar to the DV parameters of the PTVs for the cervical and thoracic spine cases, the values of D5%, D2%, maximum dose, and mean dose were lower in VMATPRO than in VMATPO. In contrast, the values of D98% and D90% were slightly higher for VMATPRO than for VMATPO, demonstrating that VMATPRO exhibited better coverage and uniformity of the dose to the PTV. In the same vein, the target conformity and dose homogeneity in the PTVs of VMATPRO were better than those of VMATPO, with statistical significance (0.92 vs. 0.86 with p < 0.0001 for CI_Paddick and 0.26 vs. 0.29 with p < 0.0001 for HI). For the cauda equina and the corresponding PRVs, all of the DV parameters for VMATPRO were considerably smaller than those for VMATPO, showing statistically significant differences (all p < 0.0001). In particular, the difference in D0.035cc of the cauda equina between VMATPRO and VMATPO was remarkable (11.19 Gy vs. 12.40 Gy with p < 0.0001). For Ring1.5cm, V115% showed no significant differences between VMATPRO and VMATPO. Similar to the cervical and thoracic spine cases, the values of V105% and V110% for VMATPRO were much smaller than those for VMATPO, with the differences being statistically significant (1.03 cm3 vs. 3.20 cm3 with p < 0.0001 for V105%, and 0.09 cm3 vs. 0.46 cm3 with p = 0.008 for V110%).
Overall, the use of the PRO algorithm for generating VMAT plans could provide better target coverage and sparing of the normal tissues surrounding the target volumes for spine SABR. For the dosimetric evaluation, the dose distributions of VMATPRO and VMATPO from a representative patient are shown in Fig. 1. The dose-volume histograms for this patient are shown in Fig. 2. Total MU and MCSv The average total MUs and MCSv values are listed in Table 4. In the cervical and thoracic spine SABR cases, the PRO algorithm generated more complex VMAT plans with significantly higher total MUs than the PO algorithm (6020.4 vs. 4850.1 with p < 0.0001 for total MUs and 0.389 vs. 0.495 with p < 0.0001 for MCSv). Similarly, the lumbar spine SABR cases showed higher total MUs and modulation for VMATPRO than for VMATPO (6267.8 vs. 5038.2 with p < 0.0001 for total MUs and 0.425 vs. 0.528 with p < 0.0001 for MCSv). Correlation of DV parameters with total MU and MCSv The values of the correlation coefficient (r) and the corresponding p-values of Δ in the DV parameters with total MUs and MCSv for cervical and thoracic spine SABR are shown in Table 5. In general, Δ in the DV parameters of the spinal cord and spinal cord PRV showed a strong correlation with Δ in the total MUs and MCSv, except for the Ring1.5cm (with p < 0.05). The values of r of ΔD1.2cc and Δmean dose of the spinal cord, and of ΔD0.035cc and Δmean dose of the spinal cord PRV, for ΔMU and ΔMCSv were larger than 0.69 and 0.79, respectively, with statistical significance (p < 0.001). The values of r and the corresponding p-values of Δ in the DV parameters with total MUs and MCSv for lumbar spine SABR are shown in Table 6. Overall, Δ in the DV parameters of the cauda equina and cauda equina PRV showed a moderate correlation with Δ in the total MUs and MCSv, except for the Δmaximum dose of the cauda equina, ΔD0.1cc, ΔD0.035cc, and Δmaximum dose of the cauda equina PRV (p < 0.05). For the Δmean dose of the cauda equina, the maximum values of r, with p-values less than 0.0001, were observed (-0.802 for ΔMU and 0.834 for ΔMCSv). Discussion In this study, we evaluated the performance of the PRO and PO algorithms for generating spine SABR VMAT plans by comparing the DV parameters of the target volume and surrounding normal tissues, total MU, and MCSv. To date, this study is the first attempt to assess the plan quality of VMATPRO and VMATPO in patients with cervical, thoracic, and lumbar spinal tumors. When comparing the DV parameters of the target volume and surrounding normal tissues, VMATPRO achieved better PTV coverage and dose uniformity while reducing the dose to the spinal cord or cauda equina and Ring1.5cm, compared with VMATPO. However, for VMATPRO, improvements in dosimetric plan quality can lead to increases in overall plan complexity and total MUs, which can compromise treatment deliverability and efficiency, respectively.
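The normality-gated test selection from the Methods and the Δ-based correlation analysis summarized above can be sketched as follows; the input arrays are placeholders rather than study data, and the scipy calls are an assumed stand-in for the authors' PRISM workflow:

```python
import numpy as np
from scipy import stats

def compare_paired(pro_vals, po_vals, alpha=0.05):
    """Paired comparison of one DV parameter between PRO and PO plans.

    Uses Shapiro-Wilk on the paired differences to choose between a
    paired t-test (parametric) and a Wilcoxon signed-rank test.
    """
    diffs = np.asarray(po_vals) - np.asarray(pro_vals)  # Δ = PO - PRO
    if stats.shapiro(diffs).pvalue > alpha:
        return "paired t-test", stats.ttest_rel(po_vals, pro_vals).pvalue
    return "Wilcoxon signed-rank", stats.wilcoxon(po_vals, pro_vals).pvalue

def correlate_delta(delta_dv, delta_mu, alpha=0.05):
    """Correlate Δ(DV parameter) with Δ(total MU): Pearson if both
    samples look normal, Spearman otherwise."""
    both_normal = (stats.shapiro(delta_dv).pvalue > alpha and
                   stats.shapiro(delta_mu).pvalue > alpha)
    if both_normal:
        r, p = stats.pearsonr(delta_dv, delta_mu)
        return "Pearson", r, p
    r, p = stats.spearmanr(delta_dv, delta_mu)
    return "Spearman", r, p

# Placeholder data standing in for, e.g., spinal cord mean dose (Gy)
# in 28 cervical/thoracic patients, and the matching ΔMU values.
rng = np.random.default_rng(0)
pro = rng.normal(6.5, 0.8, 28)
po = pro + rng.normal(1.0, 0.5, 28)
print(compare_paired(pro, po))
print(correlate_delta(po - pro, rng.normal(1200, 300, 28)))
```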
Similar studies have investigated the optimization algorithms in terms of dosimetric quality for various SABR treatment sites, such as the lungs and brain [17-19]. Visak et al. investigated the dosimetric quality of VMATPRO and VMATPO in 12 lung SABR patients with a single dose of 30 Gy [18]. They demonstrated that the PRO algorithm produced higher MUs and higher modulation for lung SABR VMAT plans, while the dose to normal tissues was reduced compared with the PO algorithm. They also reported that the PO algorithm increased the intermediate-dose spillage, which can result in more exposure of normal tissues [18]. In contrast, some institutions demonstrated that the plan quality of the two algorithms was comparable, with no statistically significant difference, although VMATPRO increased the total MUs and plan complexity [17,19]. For the prostate, head and neck, and brain treatment sites, which did not involve SABR VMAT plans, there was an increase in the total MUs and the level of modulation when the PO algorithm was used, showing better OAR sparing, an opposite result to that for the SABR VMAT plans [16,20]. Thus, the efficacy of the optimization algorithms may vary depending on the radiotherapy regimen or treatment site. Therefore, it is necessary to analyze the plan quality for each condition. The PRO algorithm, which utilizes a point-cloud model, can have a high number of calculation points (1) inside small or narrow structures, such as lenses, optic nerves, and spinal cords, or (2) around the edge of irregular structures, such as vertebral bodies and head and neck nodes [25]. In addition, the grid size for the structure can be adjusted as much as the user wants, resulting in an increase in the calculation points inside the structure. Because this provides more degrees of freedom for the calculation grid size, a sophisticated modulation scheme of VMAT plans is possible with the PRO algorithm [16]. With these characteristics, the PRO algorithm can generate many small and irregular MLC openings compared with the PO algorithm. In contrast, the PO algorithm uses only one fixed grid size with a single matrix over the CT images during optimization, and the degrees of freedom for the calculation grid size are relatively small compared with the PRO algorithm. The matrix resolution of the PO algorithm can be selected from three options (Fine, Normal, and Fast) of 1.25, 2.5, and 5 mm, respectively. This study used the Normal resolution rather than the Fine resolution, because the latter requires a huge amount of computer memory and a long computation time. There have been no studies evaluating the effect of the matrix resolution of the PO algorithm on optimization performance. Empirically, changes in the matrix resolution below 2.5 mm did not produce dosimetric differences when a few spine cases were tested for our study. The PO algorithm tends to significantly remove small openings when compared with the PRO algorithm [17].
The MLC openings for randomly selected control points of VMATPRO and VMATPO for a representative spine SABR patient are shown in Fig. 3 (Fig. 3: Multi-leaf collimator openings for randomly selected control points of volumetric modulated arc therapy plans generated by a progressive resolution optimizer (VMATPRO) (a) and by a photon optimizer (VMATPO) (b) for a representative spine stereotactic ablative radiotherapy patient). It was observed that the MLC shapes defined by the PRO algorithm were smaller and more irregular than those defined by the PO algorithm and were associated with sparing critical normal organs during optimization. Spine SABR has the characteristic of a long and narrow OAR, the spinal cord or cauda equina, that must be protected within an irregular PTV of the vertebral body. Limiting the number of calculation points per volume leads to a potential loss of information that must be considered during optimization. To effectively reduce the dose to the spinal cord or cauda equina, which are widely recognized as critical organs, the use of the PRO algorithm would be more advantageous for spine SABR VMAT plans because of its ability to generate small or irregular MLC shapes and its greater degrees of freedom for the calculation grid size. For the dosimetric evaluation, the dose distributions of VMATPRO and VMATPO for spine SABR are shown in Fig. 1. A noticeable dose reduction in the spinal cord or cauda equina for VMATPRO was achieved compared with VMATPO, while good PTV coverage was maintained. As of this writing, the latest released version of Varian Eclipse is 16.2. Varian has announced that the PRO algorithm is no longer supported from version 16.0 onwards, and it received no functional improvements up to version 16.0. On the other hand, the PO algorithm has been continuously improved up to version 16.2. Among these improvements, the most representative feature affecting optimization performance is the aperture shape controller (ASC), released in version 15.5. With this function, users are allowed to control the field aperture shape, which results in larger field sizes and thus decreased complexity of the MLC apertures. Users can adjust the complexity of plans generated by the PO algorithm through the ASC function, which may lead to results different from those obtained with the algorithm version used in our study. However, even before ASC was developed, the PO algorithm (ver. 13.7) was demonstrated to have a tendency to generate simpler field apertures than the PRO algorithm, as shown in Fig. 3. For this reason, the dosimetric comparison between the PRO and PO algorithms is likely to show similar results regardless of the algorithm version. Nevertheless, this will need to be investigated in future studies, because no planning study of VMAT for spine SABR generated using different versions of the PO algorithm has been performed.
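To make the notion of "simpler versus more irregular apertures" concrete, the sketch below scores MLC control points with a simplified aperture-complexity measure in the spirit of MCSv (Masi et al.); the published metric additionally includes MU weighting and averaging over adjacent control points, so this is a hedged illustration, not the exact MCSv, and the leaf positions are hypothetical:

```python
import numpy as np

def aperture_scores(left, right):
    """Simplified per-control-point aperture scores in the spirit of
    MCSv: values near 1 mean large, regular apertures; values near 0
    mean small, irregular ones.

    left, right -- (n_cp, n_leaf_pairs) leaf positions in mm, with the
    opening of pair i at control point cp given by right[cp, i] - left[cp, i].
    """
    gaps = right - left
    # AAV-like term: aperture area relative to the largest opening
    # each leaf pair reaches anywhere in the arc.
    aav = gaps.sum(axis=1) / np.maximum(gaps.max(axis=0).sum(), 1e-9)

    def lsv_bank(pos):
        # LSV-like term: penalizes large jumps between adjacent leaves.
        span = pos.max(axis=1, keepdims=True) - pos.min(axis=1, keepdims=True)
        span = np.maximum(span, 1e-9)
        steps = np.abs(np.diff(pos, axis=1))
        return ((span - steps).clip(min=0) / span).mean(axis=1)

    lsv = lsv_bank(left) * lsv_bank(right)
    return aav * lsv   # MCSv would MU-weight and sum these over the arc

# Hypothetical 3-control-point, 5-leaf-pair arc segment.
left = np.array([[-20, -22, -21, -19, -20],
                 [-10, -30, -5, -25, -8],
                 [-15, -15, -15, -15, -15]], float)
right = -left   # symmetric openings for the toy example
print(np.round(aperture_scores(left, right), 3))
# The jagged middle aperture scores lowest, i.e., is the most complex.
```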
To improve dosimetric plan quality, high modulation, implying complex MLC movements and the use of small or irregular MLC apertures, is required; however, this leads to an increase in the total MUs and a decrease in plan delivery accuracy [16,26]. Liu et al. compared the PRO and PO algorithms in terms of plan quality and the correlations between gamma passing rates and plan complexity for both lung SABR and brain SRS [17]. The criteria for the gamma analysis were 3%/3 mm and 2%/2 mm for lung SABR and 5%/1 mm and 3%/1 mm for brain SRS, with 10% as the threshold value. They reported less agreement between planned and delivered dose distributions when VMATPRO had higher MLC variability and total MUs. Although the overall gamma passing rates for all gamma criteria decreased for VMATPRO compared with VMATPO, the average passing rates remained above 90% under the stricter criteria (2%/2 mm for lung SABR and 3%/1 mm for brain SRS) and above 95% under the looser criteria (3%/3 mm and 5%/1 mm), and VMATPRO was considered clinically acceptable [17]. In this regard, our institution acquired the gamma passing rates of portal dosimetry with gamma criteria of 2%/1 mm, and all VMATPRO and VMATPO plans (> 90%) were found to be clinically acceptable. Additionally, our previous study investigated the correlation between gamma passing rates and the modulation degree of VMAT plans [27]. We utilized the identical TrueBeam STx with a high-definition 120™ MLC and generated 100 VMAT plans for various tumor sites, including the lung, spine, liver, brain, and head and neck, using the PRO algorithm. Measurements of the dose distributions for each VMAT plan were acquired using MapCHECK2™ and ArcCHECK™ (Sun Nuclear Corporation, Melbourne, FL, USA). As a result, the average gamma passing rates for all criteria were above 90%, which is regarded as clinically acceptable. Little correlation was found between the gamma passing rates for all criteria and MCSv, with no statistical significance, except for the correlation of the 3%/3 mm criterion with ArcCHECK™ (r = 0.210, p-value = 0.036) [27]. Thus, we can conclude that TrueBeam STx has guaranteed performance with a high degree of agreement between the planned and actually delivered doses, regardless of plan complexity. For the TrueBeam STx used in this study, the modulation degree of the VMAT plans according to the optimization algorithm could therefore be considered less important. Nevertheless, careful evaluation of deliverability, on this as well as other treatment machines, is needed for the clinical implementation of the PRO algorithm. Radiation myelopathy is a rare but catastrophic complication of radiation exposure to the spinal cord or cauda equina [1-3]. The RTOG 0631 guidelines recommend dose constraints for the spinal cord (D1.2cc < 7 Gy, D0.35cc < 10 Gy, and D0.035cc < 14 Gy) and cauda equina (D5cc < 14 Gy and D0.035cc < 16 Gy) for spine SABR [28]. However, according to the retrospective study by Sahgal et al., the maximum dose to the spinal cord in a single fraction associated with a 1-5% risk of radiation myelopathy was estimated to range from 9.20 to 12.40 Gy [12,29]. For the cervical and thoracic spine SABR in our study, the maximum dose to the spinal cord for VMATPRO (9.04 Gy) was approximately 2 Gy less than that for VMATPO (11.08 Gy), resulting in a roughly 2% reduction in the risk of radiation myelopathy. The maximum dose of the spinal cord PRV for VMATPO (13.83 Gy) exceeded 12.40 Gy, while that for VMATPRO (12.25 Gy) did not.
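The RTOG 0631-style constraints cited above are all of the form "the dose to the hottest x cm3 must stay below a limit"; a minimal sketch of such a check on a voxelized OAR dose follows (the array shapes, voxel size, and random toy values are illustrative assumptions):

```python
import numpy as np

def dose_at_volume(oar_dose_gy, voxel_cc, volume_cc):
    """D_x cc: minimum dose received by the hottest `volume_cc` of the OAR."""
    d = np.sort(oar_dose_gy.ravel())[::-1]          # hottest voxels first
    n = max(int(np.ceil(volume_cc / voxel_cc)), 1)  # voxels making up x cc
    return d[min(n, d.size) - 1]

def check_rtog0631_cord(oar_dose_gy, voxel_cc):
    """Check single-fraction spinal cord limits per RTOG 0631:
    D1.2cc < 7 Gy, D0.35cc < 10 Gy, D0.035cc < 14 Gy."""
    limits = {1.2: 7.0, 0.35: 10.0, 0.035: 14.0}
    results = {}
    for v, lim in limits.items():
        dose = dose_at_volume(oar_dose_gy, voxel_cc, v)
        results[f"D{v}cc"] = (dose, lim, dose < lim)
    return results

# Toy cord dose on a 2-mm grid: each voxel is 0.2^3 = 0.008 cc.
rng = np.random.default_rng(1)
cord = np.clip(rng.normal(4.0, 2.0, size=5000), 0, None)  # Gy, hypothetical
for name, (dose, lim, ok) in check_rtog0631_cord(cord, 0.008).items():
    print(f"{name}: {dose:.2f} Gy (limit {lim} Gy) -> {'pass' if ok else 'fail'}")
```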
Since radiation myelopathy can lead to death, it is important to reduce the risk of this complication by sparing the spinal cord or cauda equina as much as possible. Therefore, patients treated with spine SABR may benefit from VMATPRO.
2023-05-16T13:59:34.838Z
2023-05-16T00:00:00.000
{ "year": 2023, "sha1": "c9873ca67b5838a391840666d0d3a1aaff9648bd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "c9873ca67b5838a391840666d0d3a1aaff9648bd", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
15272190
pes2o/s2orc
v3-fos-license
Genetic Polymorphisms and Phenotypic Profiles of Sulfadiazine-Resistant and Sensitive Toxoplasma gondii Isolates Obtained from Newborns with Congenital Toxoplasmosis in Minas Gerais, Brazil Background Previous Toxoplasma gondii studies revealed that mutations in the dhps (dihydropteroate synthase) gene are associated with resistance to sulfonamides. Although Brazilian strains are genotypically different, very limited data are available regarding the susceptibility to sulfonamides of strains obtained from humans. The aim of this study was to evaluate the efficacy of sulfadiazine (SDZ) against Brazilian isolates of T. gondii and to verify whether the isolates present polymorphisms in the dhps gene. We also investigated whether the virulence-phenotype and/or genotype were associated with the profile of susceptibility to SDZ. Methods Five T. gondii isolates obtained from newborns with congenital toxoplasmosis were used to verify susceptibility. Mice were infected with 10^4 tachyzoites and orally treated with different doses of SDZ. The mortality curve was evaluated by the Log-rank test. The presence of polymorphisms in the dhps gene was verified by sequencing. A descriptive analysis of 11 Brazilian isolates was used to assess the association between susceptibility, genotype, and virulence-phenotype. Results Statistical analysis showed that the TgCTBr03, 07, 08, and 16 isolates were susceptible to SDZ, whereas the TgCTBr11 isolate presented a profile of resistance to SDZ. Nineteen polymorphisms were identified in dhps exons. Seven polymorphisms corresponded to non-synonymous mutations, four of which were new mutations, described for the first time in this study. No association was found between the profile of susceptibility and the virulence-phenotype or genotype of the parasite. Conclusions There is high variability in the susceptibilities of Brazilian T. gondii strains to SDZ, with evidence of drug resistance. Despite the large number of polymorphisms identified, the profile of susceptibility to SDZ was not associated with any of the dhps variants identified in this study. Other genetic factors, not yet determined, may be associated with the resistance to SDZ; thus, further studies are needed as a basis for a more adequate toxoplasmosis treatment. Introduction Toxoplasma gondii is an obligate intracellular protozoan parasite distributed worldwide that infects a wide range of warm-blooded animals [1]. Infection in humans is usually asymptomatic, but severe manifestations can occur in cases of congenital toxoplasmosis and in immunocompromised individuals, with treatment being indicated in such cases [1]. Treatment of toxoplasmosis usually uses a combination of sulfadiazine (SDZ) and pyrimethamine, which demonstrate a remarkable synergistic activity against the replication of tachyzoites through the sequential inhibition of parasite dihydropteroate synthase (DHPS) and dihydrofolate reductase (DHFR). These two enzymes are responsible for the synthesis of the folate compounds essential for T. gondii survival and replication [2]. However, failures in toxoplasmosis treatment have been reported in the literature, especially in immunocompromised individuals and in cases of congenital transmission [2,3]. These failures may be related to host factors or to parasite factors [4]. Host factors include, for example, malabsorption or drug intolerance. Parasite factors may be differences in drug susceptibility between genetically different T.
gondii strains or the development of drug resistance caused by mutations in the target gene [3,4]. Previous studies have shown that mutations in the genes encoding antifolate targets normally lead to resistance to these drugs. A study using clinical samples obtained in the UK showed the presence of six mutations within the T. gondii dhps gene. Mutation N407D, which is equivalent to position 437 in Plasmodium, was reported as being associated with sulfonamide resistance in one clinical isolate [5]. This mutation was also retrieved in a laboratory-induced sulfamethoxazole-resistant strain [6]. Recently, the susceptibility of 17 T. gondii isolates obtained in France was evaluated with the following three anti-toxoplasmic drugs: sulfadiazine, pyrimethamine, and atovaquone. Some variability was verified in the susceptibility of T. gondii strains to pyrimethamine and atovaquone, but with no clear evidence of drug resistance. On the other hand, high variability was found in the susceptibility to sulfadiazine, and three strains resistant to this drug were identified. In addition, a new mutation was identified in the dhps gene of T. gondii (A587V), which was associated with the resistance of one of the strains to sulfadiazine [7]. Studies on susceptibility to chemotherapy in experimental toxoplasmosis have been predominantly conducted using strains belonging to the three genetic clonal lineages, Types I, II, and III, common in Europe and North America. However, studies using multi-locus markers showed a higher genetic diversity of T. gondii in South America than in the Northern Hemisphere. In Brazil, there is a predominance of atypical genotypes, with the following four genotypes being considered common lineages: BrI, BrII, BrIII, and BrIV [8,9]. Little is known about the effects of drugs on atypical strains, such as those found in South America. The fact that Brazilian T. gondii strains have a different population structure and greater genetic diversity, and are more virulent than the clonal lineages, can lead to susceptibility profiles to chemotherapy that differ from those of isolates from the Northern Hemisphere. Thus, further studies using Brazilian isolates need to be conducted. The objectives of this study were to evaluate the efficacy of sulfadiazine against Brazilian clinical isolates of T. gondii; to verify whether these isolates present polymorphisms in the antifolate resistance-associated gene dhps; and to assess whether other factors, such as parasite genotype and virulence-phenotype, could be associated with the profile of susceptibility to sulfadiazine. Despite the large number of polymorphisms identified, there was no clear evidence of an association with the profile of resistance to sulfadiazine. Toxoplasma gondii isolates Five T. gondii isolates were used to assess susceptibility to sulfadiazine: TgCTBr03, TgCTBr07, TgCTBr08, TgCTBr11, and TgCTBr16. They were previously obtained by mouse bioassay of blood from newborns with congenital toxoplasmosis in Minas Gerais state, Brazil [10], under parental informed consent. The protocols used in this previous study [10] were approved by the local Human Research Ethics Committee (COEP-Federal University of Minas Gerais, protocol 298/06). These isolates are maintained cryopreserved in dimethyl sulfoxide (DMSO) in our laboratory, as previously described [11].
Clinical data of these newborns are summarized in Table 1 (age at the time of blood collection, gender, major clinical signs, confirmative serologic results, and T. gondii genotype based on polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP)). The TgCTBr16 isolate was previously genotyped by Pinheiro et al. [12], and the other isolates were genotyped by Carneiro et al. [10]. Assay of susceptibility to sulfadiazine (SDZ) Fresh tachyzoites were obtained from the peritoneal cavities of Swiss mice after thawing, DMSO removal with phosphate-buffered saline (PBS) pH 7.2, and intraperitoneal inoculation, as previously described [11]. The number of tachyzoites in the peritoneal aspirate was counted in a Neubauer chamber under an optical microscope and diluted in PBS pH 7.2 to obtain the desired number of parasites. Mice were obtained from the Center of Bioterism (CEBIO) of the Institute of Biological Sciences, Universidade Federal de Minas Gerais (UFMG). Female Swiss Webster mice, six to eight weeks of age, were used in the experimental groups. Mice were housed 10 per cage, according to the temperature, humidity, and lighting standards of the Conselho Nacional de Controle de Experimentação Animal (CONCEA), Brazil, with ad libitum food (Nuvilab® CR1, Nuvital, Brazil) and ad libitum water. Cages were made of polypropylene, approximately 41 x 34 x 16 cm in size, and were changed weekly. Ten Swiss mice were used per experimental group, according to Khan et al. (1997) [13]. Mice were intraperitoneally (i.p.) infected with 10^4 tachyzoites of each T. gondii isolate. SDZ (Catarinense, Brazil) was dissolved in carboxymethylcellulose (0.25%) to achieve the desired concentration. Treatment with 80, 160, or 320 mg/Kg/day of SDZ administered by gavage was initiated 48 hours post-infection and continued for 10 days. A group of 10 infected mice was maintained as a non-treated control (NTC). The mice were followed for 30 days post-infection (DPI), twice a day, to assess the efficacy of the therapeutic treatments. Some infected animals, after going through an initial period of weakness (which may include rapid weight loss, ruffled fur, and prostration), suddenly recover, gain weight, and do not succumb to acute infection; therefore, we used the death of mice as an endpoint. Death is a required endpoint for our survival experiments to determine the actual number of animals that died spontaneously due to therapeutic failure of SDZ. No analgesics or anesthetics were used during follow-up, since drug interactions with other medications can alter the results of the experiments. Only SDZ (the drug of choice for the treatment of human toxoplasmosis) was administered, as previously described. All deaths were due to toxoplasmosis. The mice that survived until the end of the experiment were euthanized by cervical dislocation according to CONCEA guidelines. The survival rates, the presence of brain cysts (cyst count under the optical microscope), and specific IgG antibodies detected by enzyme-linked immunosorbent assay (ELISA) in the surviving mice were analyzed as previously described [14]. If no cysts were observed, the brain homogenate was inoculated i.p. into an uninfected mouse (bioassay). The sub-inoculated mice were followed for 30 DPI. The mortality of mice was analyzed using the Log-rank (Mantel-Cox) test to compare the survival curves and the efficacy of the treatments [15]. Statistical differences among the groups were verified using the non-parametric Mann-Whitney or Kruskal-Wallis tests.
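For illustration, the log-rank (Mantel-Cox) comparison used here can be computed from first principles as in the following sketch; the example survival times and the day-30 censoring convention are hypothetical, not the study's data:

```python
import numpy as np
from scipy import stats

def logrank(time_a, event_a, time_b, event_b):
    """Two-group log-rank (Mantel-Cox) test.

    time_*  -- day of death or last follow-up for each mouse
    event_* -- 1 if the mouse died, 0 if censored (alive at follow-up end)
    Returns the chi-square statistic and p-value (1 degree of freedom).
    """
    t = np.concatenate([time_a, time_b])
    e = np.concatenate([event_a, event_b])
    g = np.concatenate([np.zeros(len(time_a)), np.ones(len(time_b))])
    obs_a = exp_a = var = 0.0
    for tj in np.unique(t[e == 1]):            # each distinct death time
        at_risk = t >= tj
        n, na = at_risk.sum(), (at_risk & (g == 0)).sum()
        d = ((t == tj) & (e == 1)).sum()       # total deaths at time tj
        da = ((t == tj) & (e == 1) & (g == 0)).sum()
        obs_a += da                            # observed deaths, group A
        exp_a += d * na / n                    # expected deaths, group A
        if n > 1:                              # hypergeometric variance
            var += d * (na / n) * (1 - na / n) * (n - d) / (n - 1)
    chi2 = (obs_a - exp_a) ** 2 / var
    return chi2, stats.chi2.sf(chi2, df=1)

# Hypothetical example: untreated controls vs an SDZ-treated group of
# 10 mice each, followed for 30 days post-infection (30 = censored).
ntc_t = np.array([8, 9, 9, 10, 11, 11, 12, 13, 14, 15])
ntc_e = np.ones(10, int)                       # all controls died
trt_t = np.array([12, 16, 30, 30, 30, 30, 30, 30, 30, 30])
trt_e = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
print(logrank(ntc_t, ntc_e, trt_t, trt_e))
```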
The mean number of brain cysts in the surviving mice was analyzed using the non-parametric Kruskal-Wallis test (p < 0.05). Association between susceptibility to SDZ and genotype or virulence of T. gondii A descriptive analysis was also performed to verify the association between the profile of susceptibility to SDZ defined in this study and the molecular (genotype) or biological (virulence in mice) characteristics of the isolates. The results from the five T. gondii isolates were compared with previous SDZ susceptibility results for six Brazilian T. gondii isolates: two obtained from humans (SAF and EGS), two from dogs (D4 and D7), and two from free-range chickens (CH1 and CH3) [14]. A composite dataset of the 11 isolates was constructed. The survival rates were used as the criteria for comparison, because the survival curves of these six isolates had not been analyzed by the Log-rank (Mantel-Cox) test. The T. gondii isolates were classified as highly susceptible (mouse survival rates higher than or equal to 80%, regardless of the SDZ dosage used); resistant (mouse survival rates between 0% and 40% after treatment); and intermediary (mouse survival rates between 40% and 80% after treatment). Genotyping, virulence in mice, and the allele type at the CS3 marker were retrieved from previously published reports from our laboratory [9,10]. Sequencing of the dhps gene T. gondii tachyzoites of each isolate were obtained from the peritoneal cavities of Swiss mice as previously described [16]. Fresh tachyzoites were submitted for DNA extraction using the Wizard® Genomic DNA Purification kit (Promega), according to the manufacturer's instructions. The target DNA sequence of the dhps gene was amplified by PCR. The previously described internal primers were used to amplify the six dihydropteroate synthase (dhps) exons [5]. No previous amplification with the external primers was necessary because the DNA was extracted from purified tachyzoites. The previously described primers for dhps exon 1 [5] failed to completely amplify this exon, leaving the initial part of the protein uncovered (from amino acid 1 to 388). New primers were designed to amplify this region using the software Oligo Explorer v. 1.5 by Gene Link. The dhps exon 1 was amplified as two overlapping fragments because of its large size: exon 1a (primers: dhps exon 1a New F, 5´-ACGGATATGAGGAGCGCTAC-3´; dhps exon 1a New R, 5´-GAAGCAGCTCCTTCACAGAC-3´) and exon 1b (primers: dhps exon 1b New F, 5´-GTTATACACCCTGATGTGCG-3´; dhps exon 1b New R, 5´-GCATGGCAAAATAGACCGTC-3´). The amplification reactions were performed at a final volume of 100 μL, containing 10 μL of 10X High Fidelity PCR Buffer (Invitrogen), 250 mM MgSO4, 25 mM of each deoxynucleotide (dATP/dTTP/dGTP/dCTP; Invitrogen), 3.5 U of Platinum® Taq DNA Polymerase High Fidelity (Invitrogen), 50 pmol of each primer, and 5 μL of DNA. A negative control (without DNA) was included in each reaction mixture. Genomic DNA from the RH (type I), ME49 (type II), and VEG (type III) strains was used as a control. The amplification consisted of an initial denaturation of 3 min at 94°C; 50 cycles of denaturation at 94°C for 20 s, annealing at primer-dependent temperatures for 20 s, and extension at 68°C for 90 s; and a final extension at 68°C for 5 min. PCR products were resolved by 1% agarose gel electrophoresis.
The PCR products were recovered from the gel and purified using a PCR purification kit (NucleoSpin® Extract II, Nucleic Acid and Protein Purification, Macherey-Nagel). DNA amplified from each sample (1000 ng) was lyophilized and sequenced at both ends on the ABI Prism 3730xl DNA Analyser (Applied Biosystems) by Macrogen Inc. (Korea). Analysis of the sequences of the dhps gene The forward and reverse sequences obtained from each exon were processed using Phred and CAP3 [17]. Bases with a Phred score lower than 20 were removed (PMID: 9521922). Strain polymorphisms were analyzed by alignment of the contig sequences using the multiple sequence alignment program ClustalW (http://www.ebi.ac.uk/Tools/msa/clustalw2/). The sequences of the strains GT1 (type I), ME49 (type II), and VEG (type III), obtained from the ToxoDB database (http://www.toxodb.org), were used as references. The protein sequences of each isolate, translated with the ExPASy Translate tool (http://www.expasy.org/tools/), were compared using tblastn (Translated BLAST, http://www.blast.ncbi.nlm.gov/) and submitted to multiple alignment using ClustalW. Ethical statement This study was carried out in strict accordance with the recommendations of the Conselho Nacional de Controle de Experimentação Animal (CONCEA), Brazil. The protocol conducted in this study was approved by the Ethics Committee in Animal Experimentation (CETEA) of the Universidade Federal de Minas Gerais, Brazil (Protocol CEUA 257/2012). All efforts were made to minimize suffering. The euthanasia method used in this study was properly approved by CETEA/UFMG. Results A significant difference (Log-rank test) was observed when the survival curves of the groups treated with the different SDZ dosages were compared to the survival curve of the NTC group (TgCTBr03 (p = 0.0139), TgCTBr07 (p = 0.0003), TgCTBr08 (p < 0.0001), and TgCTBr16 (p < 0.0001)). In these cases, the treatment significantly increased the survival rate of the infected mice. For the TgCTBr11 isolate, the Log-rank test showed no statistical difference between the survival curves of the SDZ-treated groups and the survival curve of the NTC group (p = 0.1161). All surviving mice at 30 DPI presented anti-T. gondii IgG antibodies. No significant differences were observed in the antibody levels or the number of brain cysts of these animals when the groups treated with different dosages of SDZ were compared (p > 0.05). It was necessary to conduct a bioassay of one surviving mouse inoculated with the TgCTBr08 isolate, six inoculated with the TgCTBr11 isolate, and 30 inoculated with the TgCTBr16 isolate, because these animals did not present cerebral cysts by optical microscopy at 30 DPI (Table 2), despite testing positive by ELISA. The animal sub-inoculated with the TgCTBr08 isolate died of an acute infection after an 11-day follow-up, confirming parasitism. After the bioassay with the TgCTBr11 isolate, one animal from the 160 mg/Kg/day group died due to acute infection 10 days after being sub-inoculated, and the other animal survived the 30-day follow-up without presenting cerebral cysts or anti-T. gondii IgG antibodies (Table 2). The four mice in the 320 mg/Kg/day group survived the bioassay and did not present cerebral cysts or antibodies. The mortality rates (%) for the bioassay using the TgCTBr16 isolate were 60, 40, and 50 for the 80, 160, and 320 mg/Kg/day groups, respectively. These animals died of an acute infection before day 17 of follow-up.
The surviving animals did not present cerebral cysts or antibodies, except for one in the 80 mg/Kg/day group, which was ELISA-positive for T. gondii (Table 2). Association between susceptibility to SDZ and T. gondii genotype or virulence A descriptive analysis of the results for the 11 Brazilian T. gondii isolates showed that there was no association between parasite genotype and susceptibility to SDZ, because the three isolates belonging to genotype ToxoDB #11 (TgCTBr08, TgCTBr11, and D4) presented different susceptibility profiles (Table 3). This analysis could not be performed with the other isolates because each one belonged to a different genotype. For the isolates with intermediary susceptibility and for the highly susceptible isolates, no association was observed between the profile of susceptibility to SDZ and the virulence-phenotype in mice, nor with the allele type at the CS3 locus, because isolates displaying the same profile have different degrees of virulence. The two isolates with a profile of resistance to SDZ (TgCTBr11 and EGS) presented a virulent phenotype in mice as well as allele type I at the CS3 locus (Table 3). Sequencing of the dhps gene Nucleotide sequence data reported in this paper are available in the GenBank database under the accession numbers KT582106, KT582107, KT625490, KT692934, and KT714074. A total of 19 SNPs (single nucleotide polymorphisms) were identified in exons of the dhps gene (Table 4). Of these, 12 SNPs led to silent mutations in all the isolates studied. Of the silent mutations, two have already been described in the literature (codons 664 and 711) and ten are new variants (codons 59, 82, 99, 134, 188, 306, 319, 454, 460, and 547) (Table 4). Seven of the 19 SNPs led to a change in the protein amino acids (non-synonymous mutations). Of these seven SNPs, three have been previously described in the literature (codons 558, 644, and 681), and four were identified for the first time in this study (codons 39, 65, 356, and 691) (Table 4). The G/A/C polymorphism in codon 65 is located in exon 1a and allows the presence of three different amino acids at this protein position, depending on the isolate analyzed. The type II and III clonal strains presented a proline. The type I clonal strains and the TgCTBr03 and TgCTBr07 isolates presented a histidine, whereas the TgCTBr08, TgCTBr11, and TgCTBr16 isolates presented an arginine. The G/C polymorphism in codon 691 is located in exon 5 and leads to a change from alanine to proline. The TgCTBr08 isolate presented proline, while the other isolates and the clonal strains presented alanine (Table 4). Interestingly, at the polymorphisms observed in codons 39, 356, 558, 664, and 681, the atypical Brazilian isolates obtained from newborns presented the same amino acid as the Type I clonal strains, while the Type II and III clonal strains presented another amino acid at these positions (Table 4). The previously described mutation N407D (codon 491, with an alteration from asparagine to aspartic acid) [5] was not identified, with all the isolates studied presenting asparagine (N) at this position. The previously described mutation A587V (codon 671, with a change from alanine to valine) [7,3] was not found either, with all the Brazilian isolates presenting alanine at this position. It was not possible to obtain the sequence of exon 4 for the TgCTBr03 isolate. Overall, 40 SNPs were identified in the intronic regions of the isolates studied.
The TgCTBr11 and TgCTBr16 isolates presented one identical SNP, exclusive to these isolates, at position 5132 of the dhps nucleotide sequence. The TgCTBr03 and TgCTBr07 isolates presented one identical and exclusive SNP at position 3669. The TgCTBr08 and TgCTBr11 isolates presented three identical and exclusive SNPs (positions 2395, 3109, and 5753). The TgCTBr16 isolate presented two exclusive SNPs (positions 2999 and 3126), and the TgCTBr11 isolate presented four exclusive SNPs (positions 3729, 5149, 5162, and 5877). In 16 of the 40 SNPs identified in the introns, the isolates from newborns presented a nucleotide identical to that of the type I clonal strain and different from that of the type II and III clonal strains (positions 727, 728, 745, 2410, 2462, 2920, 2933, 3140, 3333, 4033, 4231, 4925, 5116, 5142, 5865, and 5906). In four SNPs, the newborn isolates presented a nucleotide identical to that of the type II and III clonal strains and different from that of the type I clonal strain (positions 2908, 3584, 3702, and 4279). Discussion Despite the fact that Brazilian T. gondii strains are genetically and phenotypically different from the type I, II, and III clonal strains found in Europe and North America, very few studies have evaluated the response of the Brazilian strains to treatment with extensively prescribed drugs, such as sulfadiazine. Only two studies conducted with Brazilian strains of T. gondii have verified the effect of SDZ [14,18]. In both, the authors found that the susceptibility of T. gondii to SDZ in in vivo models varied according to the parasite strain. However, the genetic factors that could be associated with these differences in T. gondii isolates obtained from humans had not yet been investigated. In this study, we analyzed the profile of susceptibility to SDZ of five T. gondii isolates obtained from the peripheral blood of newborns in Minas Gerais, as well as the polymorphisms in the dhps gene of these isolates. We also compared the susceptibility profiles obtained in this study with those of six other Brazilian isolates that had been previously characterized [14]. According to the survival curves, the isolates presented different profiles of susceptibility to SDZ. The TgCTBr08 and TgCTBr16 isolates were highly susceptible to treatment, because the infected mice treated with the different dosages presented survival rates greater than 80%. The TgCTBr08 isolate belongs to genotype ToxoDB #11 (BrII) and the TgCTBr16 isolate to genotype ToxoDB #08 (BrIII) [10,12]. The TgCTBr03 and TgCTBr07 isolates presented intermediate susceptibility, because the survival rates of the treated mice varied between 60% and 100% and did not demonstrate a direct correlation with the SDZ dosage. The TgCTBr03 isolate belongs to genotype ToxoDB #206, whereas the TgCTBr07 isolate belongs to genotype ToxoDB #67. Treatment with SDZ considerably increased the survival rate of mice infected with the isolates TgCTBr03, TgCTBr07, TgCTBr08, and TgCTBr16. These results show that these four isolates are sensitive to treatment at the dosages used. The TgCTBr11 isolate presented a differentiated phenotypic profile. After infection, the survival rate of treated mice varied between 10% and 40%, suggesting a positive correlation between the SDZ dose and the survival rate. No statistical difference was observed when the survival curves of the groups treated with the different dosages were compared to that of the NTC group.
Thus, the TgCTBr11 isolate likely displayed a profile of resistance to SDZ at the dosages used in this study when compared to the other four isolates. T. gondii isolates resistant to SDZ have previously been described in the literature [5,7]. This is the first report of a T. gondii isolate obtained from a Brazilian newborn with a profile of resistance to SDZ. Earlier studies used different methodologies to evaluate the susceptibility phenotype of T. gondii and to classify it as susceptible or resistant to treatment. A study from the UK [5] identified a clinical isolate of human toxoplasmosis resistant to SDZ. In a French study [7], three resistant isolates were identified. In this study, we used the methodology proposed by a Brazilian study [14], with modifications, in which tachyzoites were inoculated i.p. into Swiss mice, which were then orally treated with different dosages of SDZ (from 40 to 320 mg/Kg/day), starting after 48 hours and continuing for 10 days. In vivo assays with a murine model had been previously proposed to evaluate the activity of different drugs against tachyzoites of T. gondii [19]. The isolates used in this study were obtained from the peripheral blood of newborns by mouse bioassay, before starting treatment. The blood was collected after confirmation of the infection through the detection of anti-T. gondii IgM antibodies using serological methods. The newborns were 41 to 78 days old on the date of blood collection [10]. The mothers of these infants did not receive any treatment during gestation because the diagnosis was made at the infants' birth. It is therefore possible that the resistance observed for the TgCTBr11 isolate was not induced by previous SDZ use. This information is relevant because previous studies have shown that it is possible to induce resistance to SDZ under laboratory conditions by selective pressure [5,20]. These data support the hypothesis that the TgCTBr11 isolate has a profile of natural resistance to SDZ. The therapeutic scheme used to treat these infants was a combination of SDZ, pyrimethamine, and folinic acid, dosed per Kg of body weight, over 12 months [21]. All of the infants showed good clinical evolution after the treatment, except for the one infant infected with the TgCTBr11 isolate. The clinical data indicate that, despite being treated, the newborn from whom the TgCTBr11 isolate had been obtained presented severe congenital toxoplasmosis, ultimately leading to death [10,21]. At birth, the newborn presented marked vitreitis and convulsions. The treatment was initiated immediately after the screening diagnosis, six days after birth. Three months after the beginning of the treatment, the infant presented active retinochoroidal lesions, vitreous opacification, bilateral retinal detachment, hydrocephalia, microphthalmia, hepatosplenomegaly, and convulsions. The infant died at 4.5 months of age due to congenital toxoplasmosis. The severity of toxoplasmosis in this infant may be associated, among other factors, with the low response of T. gondii to the therapeutic procedure used. The clinical isolate resistant to SDZ from a British patient was also obtained from a severe case of toxoplasmosis, which led to the death of the patient [5]. The number of brain cysts in the surviving mice was also used to verify the efficacy of the treatment. However, the number of cysts did not vary with the SDZ dosage (data not shown).
The bioassay with the TgCTBr16 isolate showed that some of the treated animals were infected, despite the fact that brain cysts were not visible by optical microscopy. These results indicate a low parasitism in the brain of the treated animals, also showing that ELISA, compared with the bioassay, has greater sensitivity in the identification of mice infected with the TgCTBr16 isolate. Although the TgCTBr08, TgCTBr11 and D4 isolates belong to the same genotype (ToxoDB #11), their susceptibility profiles were different. These results corroborate the study conducted in France, which did not find an association between susceptibility to the drug and the strain's genotype when investigating T. gondii isolates [7]. Studies of Brazilian T. gondii isolates showed that type I and II alleles at the CS3 locus (located on chromosome VIIa) are strongly linked to parasite virulence in mice, whereas the type III allele at the CS3 locus is absent in the virulent isolates [8,9]. However, isolates with the same susceptibility profile presented different degrees of virulence, suggesting that there is no association between susceptibility to SDZ and the allele type at the CS3 locus, or the virulence phenotype in mice. However, a larger number of atypical isolates needs to be evaluated to confirm these hypotheses. The resistance of Plasmodium and T. gondii to SDZ was previously associated with mutation points of the dhps gene [5,7,22,23]. Our study identified 19 SNPs in the exon regions of the dhps gene. Five of them had been previously described in T. gondii isolates from Europe [5,7], and 14 are new SNPs described for the first time in this study. This is the first study showing such a large number of SNPs in the dhps gene of T. gondii. The larger number of polymorphisms may be a characteristic of the Brazilian atypical isolates because they are genetically different and more diverse compared to those found in North America and Europe [8,16]. Seven mutation points, leading to a change in the DHPS protein amino acids, were observed in the Brazilian strains isolated from infants with congenital toxoplasmosis. In five of these mutations, the amino acid presented by the Brazilian isolates was identical to the amino acid presented by the type I clonal strains (GT1 and RH). It is possible that the dhps gene of the Brazilian isolates is more similar to the dhps gene of the type I clonal strains than to that of the type II and III clonal strains. Two (P691A and H65P) of the seven non-synonymous mutations were observed only in the Brazilian T. gondii isolates and not in the clonal strains. However, although the DHPS enzyme of these isolates may have undergone some alteration in its structure due to these mutations, such an alteration is likely not related to an increase in resistance to SDZ because the mutations under consideration were identified in isolates with a profile of susceptibility to this drug. Mutations N407D and A587V, which were previously described in the literature as being responsible for the SDZ-resistant phenotype in different isolates [5,7], were not identified in the isolates analyzed in the present study. No non-synonymous mutation exclusive to the TgCTBr11 isolate was identified that could be associated with the low susceptibility of this isolate to SDZ. These results corroborate a recent study conducted in France, which found no association between resistance to SDZ and polymorphisms of the dhps gene [3].
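The exon SNP analysis summarized above amounts to walking two aligned coding sequences, reporting each mismatch, and checking whether the affected codon changes the encoded amino acid. A toy sketch of this classification, assuming Biopython; the mini-sequences are invented and stand in for the real dhps alignments:

```python
# Toy SNP caller for two aligned coding sequences (isolate vs. reference).
# Flags each substitution as synonymous or non-synonymous; assumes Biopython.
from Bio.Seq import Seq

def call_snps(isolate_cds: str, reference_cds: str):
    """Return (position, ref_base, alt_base, ref_aa, alt_aa, synonymous) tuples."""
    assert len(isolate_cds) == len(reference_cds), "sequences must be aligned"
    snps = []
    for i, (ref, alt) in enumerate(zip(reference_cds, isolate_cds)):
        if ref == alt:
            continue
        start = (i // 3) * 3  # first base of the codon containing this position
        ref_aa = str(Seq(reference_cds[start:start + 3]).translate())
        alt_aa = str(Seq(isolate_cds[start:start + 3]).translate())
        snps.append((i + 1, ref, alt, ref_aa, alt_aa, ref_aa == alt_aa))
    return snps

reference = "ATGGCTGAACAT"  # invented mini-reference CDS
isolate = "ATGGCTCAACCT"    # invented isolate CDS with two substitutions
for pos, ref, alt, raa, aaa, syn in call_snps(isolate, reference):
    kind = "synonymous" if syn else f"non-synonymous ({raa}->{aaa})"
    print(f"SNP at position {pos}: {ref}->{alt}, {kind}")
```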
Based on the methodology proposed in the literature [5] and reproduced in other studies [3,7], there is no evidence of an association between polymorphisms in the dhps gene of T. gondii and susceptibility to SDZ. A larger number of atypical isolates of T. gondii must be evaluated to confirm these results. Other factors may be related to resistance to SDZ, such as the overexpression of proteins critical to the development of the parasite. The ABC transporter superfamily (ATP-binding transporters) is an important family of membrane proteins involved in resistance to drugs and other biological activities [24]. However, a recent study performed with two induced-resistant T. gondii strains and three naturally sulfadiazine-resistant T. gondii strains showed that resistance is not related to the overexpression of ABC transporter genes (TgABC.B1 and TgABC.B2) [3]. Another study attempted to identify proteins that are differentially expressed in three sulfadiazine-resistant strains of T. gondii [25]. The authors identified 31 proteins that are differentially modulated between sulfadiazine-resistant and sensitive strains of T. gondii, according to their genotype. Although none of them allowed a direct identification of the resistance mechanisms to sulfadiazine, the authors suggested that several of these proteins may be associated with the resistance phenotype. Further studies are necessary to verify whether these proteins or other unknown factors could be involved in the different phenotypes of susceptibility to SDZ, specifically in Brazilian clinical isolates of T. gondii. Conclusions This is the first study that evaluates the susceptibility to SDZ of T. gondii obtained from newborns with congenital toxoplasmosis in Brazil and verifies whether the identified profile of susceptibility is related to mutations in the dhps gene. It is also the first study that evaluates the association between the genotype or virulence phenotype and the susceptibility profile of the parasite. This study confirms the existence of a Brazilian T. gondii isolate obtained from a human infection that is resistant to SDZ. We found that the profile of susceptibility to SDZ is probably not associated with the presence of polymorphisms in the dhps gene. Further studies, using a large number of Brazilian T. gondii isolates, must be conducted to confirm these findings. These results, combined with clinical data regarding a newborn's response to treatment and a greater knowledge of the phenotype, genetics, and population structure of T. gondii in Brazil, may help establish a more effective therapeutic scheme for the treatment of toxoplasmosis.
Comparison of the effects of epidural and spinal anesthesia on analgesia and blood gases in neonates born by natural vaginal delivery: A clinical trial study Introduction: One of the concerns of painless deliveries is the safety of neonates. This clinical trial study aimed to compare the effects of epidural and spinal anesthesia on neonatal outcomes. Methods: This clinical trial was conducted in Hamadan Hospital in Iran. Ninety women, ages 18 to 45, were randomly assigned to receive epidural or spinal anesthesia. Using a checklist, the following were collected: demographic information, midwifery data, hemodynamic status, mothers' pain intensity, and analyses of the babies' umbilical cord blood. The data were analyzed using SPSS version 16. Eleven patients from the spinal and epidural anesthesia groups were excluded from the study. Results: There was no significant difference between the two groups in terms of age, gestational age, parity, and severity of pain before or after anesthesia. The hemodynamic status of the mothers before and during the first postoperative period was in the normal range, although in the spinal group a decrease in systolic blood pressure (still within the normal range) was observed compared with the epidural anesthesia group. In the blood gas analysis, the mean pH, partial pressure of carbon dioxide (PCO2), and bicarbonate (HCO3) did not show significant differences between the two groups (p > 0.05). The only complication observed was acidosis, which occurred in the epidural group. Conclusion: Based on the findings of the present study, both spinal and epidural anesthesia with opioids have no adverse effects on the health of neonates. However, the epidural method is preferred due to fewer changes in maternal hemodynamics and umbilical cord blood gases. INTRODUCTION Delivery is carried out in a variety of ways, including standard vaginal delivery and cesarean section. Natural childbirth is carried out in two ways: with and without pain control. The pain of vaginal delivery is one of the hardest pains that women can experience during their life 1 . One of the essential methods used to reduce labor pain in the developed world in recent decades is the use of local techniques such as epidural and spinal anesthesia 2 . The pain of vaginal delivery varies widely, but many women consider the pain to be unbearable. The pain during pregnancy and vaginal delivery is caused by uterine contractions, cervical dilatation, and stretching of the perineum. Visceral pain fibers from the uterus are transmitted to the spinal cord along with sympathetic nervous fibers (via the T10-T12 and L1 nerve roots), while somatic nerve impulses are transmitted via the sacral nerves S2, S3 and S4 3 . Various factors affect the perception of delivery pain; these include duration, anatomy of the mother, size of the fetus, use of oxytocin, prenatal mortality, fear and anxiety about childbirth, behavior, experience of pain, and adaptive systems. The lack of proper control of acute pain is associated with destructive pathophysiologic effects. Moreover, mothers show a higher tendency to choose natural delivery with pain control over a cesarean section, which is a major operation 4 . Inadequate control of labor pain is associated with adverse effects on both the mother and fetus.
For example, in the respiratory system, an increase in the respiratory rate results in a decrease in uterine and brain blood flow. Lumbar spinal anesthesia is a safe method for the relief of labor pain. Using low amounts of local anesthetics and narcotic analgesics, lumbar epidural anesthesia provides effective sensory anesthesia in the first stage of delivery (T10-L1), and additional blockade may be needed as labor continues 4 . One of the challenges of local anesthesia (e.g., spinal or epidural) is its effects on the fetus and its hemodynamic status. Epidural anesthesia causes pain reduction, which is associated with an increased breathing rate; changes of hemoglobin in the mother can have a negative effect on neonatal hemoglobin 5 . Regional blockade may affect uterine contraction through the sympathetic nervous system, although this can be a useful effect in delivery. If the sympathetic nerve block is extended, the blood flow rate of the umbilicus can also be reduced 6 . The amount of dissolved oxygen in the fetus falls by 20 to 96% during labor, which can be considered a threshold associated with fetal distress 7 . A change in the fetal heart rate occurs in 15-24% of cases after performing a painless method of delivery 8 . Given the lack of studies in this field, as well as the lack of policy in the country to promote natural childbirth, this clinical trial study aimed to compare the effects of epidural and spinal anesthesia on newborn babies. METHODS This was a clinical trial study carried out at the Hospital in Hamadan (Iran). The study population consisted of 90 patients who had no prior anesthesia. The inclusion criteria were: first or second pregnancy; an active phase of labor; single pregnancy of more than 37 weeks of gestation (37 to 42 weeks); vertex presentation; lack of any underlying disease; age between 18 and 45 years; and attendance at Fatemieh Hospital (Iran) as a candidate for painless normal vaginal delivery. The exclusion criteria were: use of drugs by the patient during the procedure, unwillingness to participate in the study, and lack of literacy. Prior to the beginning of the study, all participants had reviewed, agreed to, and provided written consent to participate in the trial. Initially, the technique to be performed was explained to each patient, accounting for the culture and the patient's level of education. Then, the patients were randomly divided into two groups. An intravenous line was placed for all patients; at the beginning of the study, 500 to 1000 mL of sodium chloride 0.9% were infused. In the spinal group, the patient was placed in a sitting position, and a 25-gauge spinal needle (Dr. Japan Co Ltd.) was used to inject 2 mL of sufentanil into the subarachnoid space. In the epidural group, an 18-gauge needle (Ogame, Turkiye) was first inserted into the epidural space from the interlaminar space (L3-L4) using the loss-of-resistance method and without a test dose. Then, an epidural catheter (No. 19) was inserted 2-3 cm into the space. After aspiration to ensure that the catheter was in the epidural space, 12 cc of bupivacaine (0.125%) was injected along with 2 cc of sufentanil (as a bolus injection). If a patient requested analgesia again, 8-10 cc of bupivacaine (0.125%) was injected.
The patient was then immediately placed in the supine position, and vital signs were recorded, including blood pressure (systolic and diastolic) and heart rate before the beginning of drug administration, at time zero and every 5 minutes until 15 minutes, and every 15 minutes until the birth. The severity of the patient's pain was recorded according to the Visual Analog Scale (VAS); the patient was asked to rate the pain between 0 and 10, where a grade of '10' was the maximum pain the patient could experience. Immediately after the birth and primary care of the umbilical cord, umbilical cord blood was drawn with a heparinized syringe and sent to the laboratory under standard conditions for blood gas analysis. Randomization For this purpose, we used random blocks of 4, as illustrated in the code sketch at the end of this Methods section. We used 4 paper sheets: two sheets marked '1', representing the spinal option, and two sheets marked '2', representing the epidural option. The papers were mixed and placed in a table drawer; each patient was assigned one of the papers randomly and thus allocated to either the spinal or the epidural group. The sheets were drawn without replacement until all four had been selected; they were then returned to the drawer, and the same procedure was followed for the next four patients. This sampling method (called consecutive sampling) was applied to women who were eligible to enter the study 9 . The study design involved measurement of the hemodynamic status of patients. Based on previous study results 1 , and using the statistical software Stata to assess the sample size, each group consisted of 45 pregnant women with a first or second pregnancy, in an active phase of labor, past 37 weeks of gestation, with a single pregnancy and vertex presentation, and lacking any underlying disease. Contraindications for natural childbirth included placenta previa, breech presentation, transverse presentation, fetal weight more than 4.5 kg, history of cesarean section, and underlying diseases (such as cardiac disease, asthma and/or renal disease). Data analysis Descriptive statistics were carried out; mean and standard deviation were recorded for the quantitative variables, and ratios and percentages were recorded for qualitative variables. The chi-square test was used to compare the variables between groups. SPSS version 16 was used for data analysis. The statistical significance level was set at P < 0.05. Research limitations Limitations included the number of patients participating in this study and the fact that pain intensity was based on the patient's own judgment. This study was done in coordination with the University of Medical Sciences. Informed consent was obtained from patients before their participation in the study. Whether or not the patients were referrals to the treatment services had no effect on their diagnosis or treatment. The study data were collected without listing patient names and individual specifications, and the results were reported in aggregate.
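The permuted-block allocation described in the Randomization subsection can be expressed compactly in code. This is a minimal sketch of the procedure, not part of the original protocol; the fixed seed is only so the demonstration is reproducible:

```python
# Permuted-block randomization with blocks of 4 (two 'spinal', two 'epidural' per block),
# mirroring the paper-slip procedure described above. Illustrative only.
import random

def block_randomize(n_patients: int, block_size: int = 4, seed: int = 42):
    """Allocate consecutive patients to two arms in balanced blocks of `block_size`."""
    rng = random.Random(seed)
    block = ["spinal"] * (block_size // 2) + ["epidural"] * (block_size // 2)
    assignments = []
    while len(assignments) < n_patients:
        rng.shuffle(block)         # draw the four slips in random order...
        assignments.extend(block)  # ...then return them to the drawer for the next block
    return assignments[:n_patients]

groups = block_randomize(90)
print(groups[:8])
# Counts are balanced within each full block; a final partial block can leave
# at most a 2-patient imbalance between the arms.
print(groups.count("spinal"), groups.count("epidural"))
```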
RESULTS In this clinical trial study, 90 pregnant women who were candidates for painless normal vaginal delivery were randomly divided into two groups: epidural and spinal anesthesia. Eleven patients (12.2%) were excluded from the study due to conditions such as deceleration of the fetal heart rate and failure of labor to progress. Therefore, 40 patients in the spinal group and 39 patients in the epidural group were monitored and evaluated (Table 1). It is noteworthy that in the epidural group, 5 patients (due to arrest of dilatation) and 1 patient (due to fetal distress) underwent cesarean section. Also, in the spinal group, 1 patient (due to arrest of dilatation) and 4 patients (due to fetal distress) had cesarean sections. The mean and standard deviation of age in the spinal group and in the epidural group were 23.2 ± 5.3 and 22.8 ± 4.3, respectively. Comparing the two groups, the average age (p = 0.919), gestational age (p = 0.430), parity (p = 0.919), and severity of pain before (p = 0.579) and after anesthesia (p = 0.189) were not significantly different (Table 2). The results of the study showed that before the procedure, hemodynamic variables and all other variables across the two groups were comparable. In the spinal group, the blood pressure reduction following the induction of anesthesia was significant compared with the epidural group, although the changes were not outside the normal range. At other times, the systolic and diastolic blood pressures in patients with spinal anesthesia were lower than those with epidural anesthesia; however, the difference was not significant. In both groups, the fetal heart rate and Apgar scores of the neonates were in the normal range and were comparable (Table 3). The findings indicated that the difference in average pH between the two groups was not statistically significant (p = 0.313) and that the pH of both groups was in the normal range (~7.3). The mean values of partial pressure of oxygen (PO2) in both groups were in the normal range, but the value in the spinal group was lower than that in the epidural group, a significant difference (p = 0.017) (Table 4). The findings indicated that acidosis was observed in only 2 patients, both in the epidural group; this was not statistically significant (p = 0.241) (Tables 3 and 5). DISCUSSION This clinical trial aimed to compare the effects of epidural anesthesia and spinal anesthesia on analgesia and newborn blood gases in a normal delivery method at an educational center. The results of the present study showed that painless delivery does not adversely affect the neonates, as the average Apgar score in both groups in the first and fifth minute was approximately 9 and higher. In the analysis of the umbilical cord blood gases, it was found that the difference in average pH between the two groups was not statistically significant, and the umbilical cord blood pH was within the normal range in both groups. The fetal heart rate (FHR) was also within the normal range after anesthesia, and bradycardia did not occur in neonates. Moreover, there was no adverse effect on the maternal side, and the hemodynamic changes were within the normal range.
Both groups of women were in a desirable condition, and the mean pain intensity on the VAS scale was approximately 2.5 after the anesthetic injection at the delivery stage. In this study, the patients of the two groups were compared in terms of parity, age, Apgar score, hemodynamic status of mothers before and after anesthesia, and class. No significant differences were found between the two groups. Therefore, the results of this study cannot be affected by those confounding variables. The pain of childbirth is one of the hardest pains a woman experiences in her life 1 . Over the years, attempts have been made to relieve the pain with measures such as inhalation of nitrous oxide, acupuncture, hydrotherapy, and the use of opium. However, these measures were usually not satisfactory at reducing pain. The introduction of neuraxial analgesia (epidural and spinal anesthesia, or a combination of these two methods) has been one of the most dramatic developments in the control of labor pain, and it has been accompanied by maternal satisfaction and the safety of the fetus and baby 10 . Currently, approximately 60% of women in the United States use painless methods 11 . After spinal anesthesia, the fetal heart rate may undergo changes such as bradycardia. The rate of FHR changes after spinal anesthesia varies between 15 and 25% 12 . An ideal painless delivery method must provide analgesia in all delivery stages and have no adverse effects on the fetus and neonate; in the present study, analgesia was effective and there were no undesirable effects on the fetus. Different studies have examined the effects of regional anesthesia on childbirth and its consequences, and different results have been reported depending on the drug used. In a case-control study from 2010 13 , the cesarean section rate, umbilical cord blood gases, Apgar scores, and neonatal outcomes in anesthesia patients confirmed the findings of this study. Moreover, the results of another study 14 showed that the status of patients with epidural anesthesia was not abnormal, even in the first half hour after the administration of anesthesia. Another study, with a relatively small sample size, examined the combined epidural-spinal method of painless childbirth; the changes observed after anesthesia (such as in the fetal heart rate and the incidence of bradycardia) align with the findings of the present study. Reynolds et al., in a study, found that epidural analgesia may reduce the blood pressure of the mother, cause fever, prolong the second stage of labor, and increase the use of vacuum extraction for vaginal delivery; however, these drawbacks are negligible in the face of the reduced risk of acidosis in neonates 15 . In the present study, only 5% of patients had acidosis, which did not pose a serious threat to the neonates. It has been shown that if the epidural is not performed at the right time, it may be accompanied by an increased risk of cesarean section 13 . In this study, 10-13% of patients in both groups eventually had cesarean sections; of these, 83% in the epidural group and 20% in the spinal group were due to arrest of dilatation, while fetal distress accounted for the remainder (17% in the epidural group and 80% in the spinal group).
In a study published in the Cochrane database, the epidural was accompanied by a prolonged second stage of labor and an increased risk of cesarean section 16 . Considering that the conventional methods for painless delivery are epidural or spinal anesthesia, most studies have examined these techniques, although the spinal technique may be used less widely due to possible neurological sequelae 17 . One of the advantages of epidural analgesia is that it reduces the need for systemic drugs that may result in neonatal respiratory depression. On the other hand, pain reduction leads to a decrease in endogenous opioid secretion. The advantages of epidural injection are the possibility of a sensory block without a motor block, minimal hemodynamic complications, and reduced catecholamine levels. In the present study, hemodynamic changes in the epidural group were smaller than those in the spinal group. From a health policy perspective, painless delivery can reduce elective cesarean rates and address one of the main worries women have about normal vaginal delivery, since it has been determined that this method is accompanied by pain relief and a lack of serious complications. CONCLUSIONS According to the findings of this study, both the spinal and epidural methods have no adverse effects on infant health. However, the epidural method is preferred, given that it induces fewer changes in maternal hemodynamics and umbilical cord blood gases, although this conclusion is limited by the low sample size. ABBREVIATIONS FHR: Fetal Heart Rate VAS: Visual Analog Scale AUTHORS' CONTRIBUTIONS All authors contributed equally in the study design, interpretation of the data and writing of the final manuscript. COMPETING INTERESTS The author(s) declare that they have no competing interests. This study was supported by Hamadan University of Medical Sciences.
Sinapic Acid Inhibits Group IIA Secretory Phospholipase A2 and Its Inflammatory Response in Mice The human Group IIA secreted phospholipase A2 (sPLA2-IIA) enzyme plays a crucial role in several chronic inflammatory diseases such as asthma, atherosclerosis, gout, and bronchitis. Several studies have shown that antioxidants exert an anti-inflammatory function by inhibiting the sPLA2-IIA enzyme. Hence, the present study evaluated an antioxidant molecule, sinapic acid, for sPLA2-IIA inhibition as an anti-inflammatory function. Initially, the antioxidant efficacy of sinapic acid was evaluated, and it showed greater antioxidant potency. Further, sinapic acid inhibited 94.4 ± 4.83% of sPLA2-IIA activity with an IC50 value of 4.16 ± 0.13 µM. The mode of sPLA2-IIA inhibition was examined by increasing the substrate concentration from 30 to 120 nM and the calcium concentration from 2.5 to 15 mM, which did not change the level of inhibition. Further, sinapic acid altered the intrinsic fluorescence and distorted the far-ultraviolet circular dichroism (UV-CD) spectra of the sPLA2-IIA, indicating a direct enzyme-inhibitor interaction. Sinapic acid reduced the sPLA2-IIA-mediated hemolytic activity from 94 ± 2.19% to 12.35 ± 2.57% and mouse paw edema from 171.75 ± 2.2% to 114.8 ± 1.98%, demonstrating the anti-inflammatory efficiency of sinapic acid by in situ and in vivo methods, respectively. Finally, sinapic acid reduced the hemorrhagic effect of Vipera russelli venom hemorrhagic complex-I (VR-HC-I) as an anti-hemorrhagic function. Thus, the above experimental results revealed the potency of sinapic acid as an antioxidant, anti-inflammatory and anti-hemorrhagic molecule, and therefore, it appears to be a promising therapeutic agent. Introduction Inflammation is a defensive process and a necessary prerequisite to healing the tissue injury that occurs due to physical, chemical, or biological agents. However, if the inflammation remains beyond its defensive role, it leads to serious consequences such as systemic shock, circulatory collapse, and local tissue injury [1]. Studies have shown that secreted phospholipase A2 group IIA (sPLA2-IIA) enzymes play a significant role in oxidative stress [2] and inflammatory diseases [3,4]. In healthy people, the concentration of sPLA2-IIA is minimal (3 ng/mL) but increases significantly (250-500 ng/mL) during infection and injuries [5]. The sPLA2-IIA concentration is elevated in most inflammatory fluids of patients with rheumatoid arthritis [6], asthma [7], atherosclerosis [8], and acute respiratory distress syndrome [9], and it serves as a biomarker for cardiovascular complications [10,11], sepsis [12] and transplant rejection [13]. The sPLA2-IIA enzyme catalyzes membrane phospholipid into arachidonic acid and lysophosphatidic acid. Arachidonic acid is converted into inflammatory mediators such as thromboxane, leukotriene, prostaglandins, and prostacyclins. Lysophosphatidic acid is catalyzed to a platelet activating factor (PAF) that further intensifies the inflammatory condition (Figure 1). Furthermore, the arachidonic acid pathway produces large amounts of reactive oxygen species (ROS), which contribute to the defensive function by destroying inflowing pathogens [14,15]. However, the persistence of ROS after the defensive role causes deleterious complications [16]. Furthermore, they play an important role in several inflammatory diseases such as ARDS, COPD, chronic bronchitis, asthma [17], rheumatoid arthritis [18], and Alzheimer's disease [19].
The arachidonic acid pathway-mediated ROS production modulates the cPLA2 and iPLA2 functions, which enhance the production of arachidonic acid and free radicals [20]. Interestingly, the ROS increase sPLA2-IIA activity and lipid peroxidation, which modulate the downstream reactions and further increase the proinflammatory mediators. Therefore, a single bioactive molecule with both sPLA2-IIA inhibitory and antioxidant activities may become a more effective anti-inflammatory agent. To date, Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) are widely used to control chronic inflammatory disorders [21,22]. NSAIDs limit the COX-1/2 enzymes but have no effect on the generation of leukotrienes and PAF [23], which continue to cause inflammation (Figure 1). Furthermore, the prolonged use of NSAIDs leads to several complications such as hepatotoxicity, renal injury, hypertension, cardiovascular risks, and gastrointestinal toxicity [24][25][26][27]. The specific sPLA2-IIA inhibitors varespladib (LY315920) and varespladib-methyl (LY333013) were examined in clinical trials, where they were used to treat patients with cardiovascular complications [28,29], but they failed to demonstrate therapeutic effects. Drugs such as LY315920NA, ginkgetin and petrosaspongiolide M were not successful even though they limit the sPLA2-IIA activity at nanomolar concentrations. The unsuccessfulness of these sPLA2-IIA inhibitors may be due to problems associated with formulation or their cytotoxic nature [30,31]. As a result, there is an urgent need for safe and effective sPLA2-IIA inhibitors from natural resources with minimal or no adverse effects [32].
Antioxidants such as flavonoids, phenols, and retinoids scavenge ROS and prevent lipid peroxidation, and they further limit the sPLA2-IIA-mediated arachidonic acid cascade [14]. In our initial study on pharmaceutically important bioactive molecules, sinapic acid, an antioxidant found in dietary sources [33], was shown to interfere with the pathways connected to inflammation. Sinapic acid plays a protective role against oxidative stress disorders, as shown in [34], and another study has shown an anti-inflammatory effect through downregulation of the synthesis of iNOS and COX-2 in murine macrophage cell lines [35]. Sinapic acid is also documented for its anti-inflammatory effects by inhibiting IL-1β [36] and NF-κB [35], reducing the risk of inflammatory colitis in mice by suppressing malondialdehyde, TNF-α and myeloperoxidase expression [37], and reducing carrageenan-induced edema [35]. Therefore, we designed our research to evaluate the potency of sinapic acid in neutralizing the sPLA2-IIA enzyme and its inflammatory responses. Animals Swiss albino mice (weighing around 20-25 g, males) were procured from the University Animal House Facility (AHF), Mangalore University, Mangalore, India. Animals were maintained and handled according to the guidelines of the Indian National Regulations for Animal Research. In the present study, we conducted the experiments according to the guidelines of Mangalore University's Institutional Animal Ethical Committee (No: MU/AZ/504(a)/IAEC/2015-2016). Human Biological Fluid The Institutional Human Ethical Committee (IHEC), Mangalore University, Mangalore, India, permitted the usage of human blood samples (IHEC-No. MU/IHEC/2018/7). The blood samples were collected from volunteers after obtaining a consent letter. Purification of sPLA2-IIA The sPLA2-IIA was purified from Vipera russelli venom as per the protocol of Kasturi and Gowda [38]. The purity of sPLA2-IIA was tested by sodium dodecyl-sulfate polyacrylamide gel electrophoresis [39]. The sPLA2-IIA of Vipera russelli venom is generally used to study the mode of action of the human inflammatory sPLA2-IIA because of the simple purification procedure, its availability, and its close structural similarity and catalytic action compared to human sPLA2-IIA [40]. The human and snake venom sPLA2 enzymes share similar functional and biological properties such as edema, pain, muscle injury and leukocyte influx [41]. It was also reported that the binding pattern of a known inhibitor with human and venom phospholipase A2 was very similar [42]. Hence, the use of snake venom PLA2 is suggested as a tool for investigating new pharmacological inhibitors of human sPLA2-IIA [42]. Molecular Docking The structures of phospholipase A2 (PLA2) were downloaded from the Protein Data Bank (PDB ID: 1POE and 3H1X). Structures of sinapic acid and genistein were drawn and analyzed with ChemDraw Ultra 12.0. The three-dimensional coordinates were derived through the PRODRG online server [43]. The potential active pockets of the PLA2 protein were determined and identified from both the CASTp server and reference [44]. During the process, intermediary steps such as grid box creation, energy optimization, and protein and ligand preparation were carried out through the Graphical User Interface program of AutoDock Tools (ADT). AutoDock Tools prepared the data and saved the prepared files in the required PDBQT format.
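For readers who script the docking rather than drive it through the ADT graphical interface, the same run can be expressed with the AutoDock Vina Python bindings (Vina 1.2+). The sketch below is hypothetical: the file names and grid-box coordinates are placeholders, since the actual grid parameters are not reported in this excerpt:

```python
# Hypothetical scripted version of the docking step using the AutoDock Vina 1.2
# Python bindings. File names and grid-box values are placeholders.
from vina import Vina

v = Vina(sf_name="vina")                      # default Vina scoring function
v.set_receptor("1POE_prepared.pdbqt")         # sPLA2-IIA receptor prepared with ADT
v.set_ligand_from_file("sinapic_acid.pdbqt")  # ligand converted to PDBQT

# Grid box centered on the catalytic pocket (placeholder coordinates)
v.compute_vina_maps(center=[10.0, 12.5, 8.0], box_size=[20, 20, 20])

v.dock(exhaustiveness=8, n_poses=9)           # iterated local search optimizer
v.write_poses("sinapic_docked.pdbqt", n_poses=5)
print(v.energies(n_poses=1))                  # best pose's binding affinity (kcal/mol)
```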
Using the available information about the chosen protein and ligand, AutoDock Vina was used for the docking process, with the grid box characteristics given in the configuration file. AutoDock Vina employs an iterated local search global optimizer to process the submitted data [45]. During the docking procedure, the option was selected to consider both the ligand and protein as rigid. Following the completion of the scheduled docking runs, the variable conformations of the protein with ligands were obtained as binding modes with their respective binding affinities. The stable conformation with the optimum interaction, i.e., the lowest binding energy, was selected and aligned alongside the receptor structure for further investigation [46]. Estimation of Antioxidant Activity The antioxidant activity of sinapic acid was estimated by the DPPH radical scavenging assay, as described by Blois [47], the anti-lipid peroxidation assay, as described by Gutteridge [48], and the reducing power assay, as described by Oyaizu [49]. The antioxidant activity was expressed as percent radical scavenging activity. Secreted Phospholipase A2 Assay (sPLA2-IIA) Autoclaved E. coli cells labeled with 14C-oleic acid were used as the substrate for estimation of sPLA2-IIA activity [50,51]. Briefly, the reaction mixture (350 µL) consisted of 3.18 × 10⁹ autoclaved E. coli cells, calcium (5 mM), Tris-HCl buffer (100 mM), enzyme, and water. The 30 µL of E. coli substrate was added and incubated at 37 °C for 60 min. Then, 2 N HCl (100 µL) and 100 µL of fatty acid-free BSA (10%) were added, vortexed and centrifuged at 20,000× g for 5 min. A total of 140 µL of supernatant containing 14C-oleic acid was collected with caution and added to a scintillation cocktail, and the 14C radioactivity was measured. Inhibition of sPLA2-IIA Activity Sinapic acid was dissolved in a small amount of DMSO and made up to the appropriate concentration with Tris-HCl buffer. sPLA2-IIA inhibition was performed with different concentrations of sinapic acid. Genistein was used as the standard molecule as it is a proven sPLA2-IIA inhibitor and an anti-inflammatory molecule [52]. The maximum concentration of DMSO used in the experiment was 0.022%. GraphPad Prism version 5.0 (GraphPad Software, San Diego, CA, USA) was employed to calculate the IC50 value. The Effect of Concentrations of Substrate and Calcium on sPLA2-IIA Inhibition The assay was carried out with and without the IC50 concentration of sinapic acid as described above. The effect of substrate concentration on sPLA2-IIA inhibition was studied by increasing its concentration from 30 to 120 nmoles. The effect of calcium concentration on sPLA2-IIA inhibition was examined by increasing its concentration from 2.5 to 15 mM. Intrinsic Fluorescence Study The fluorescence intensity of the sPLA2-IIA enzyme was measured with and without sinapic acid using a Horiba Jobin Yvon Fluorolog-3 spectrofluorometer. The standard reaction mixture (2.0 mL) in a 1 cm path length cuvette consisted of sPLA2-IIA (20 µg/mL) and sinapic acid concentrations ranging from 0.02 to 0.10 µM. The fluorescence spectra were measured between the wavelengths of 300 and 380 nm. A tryptophan standard was employed to correct the measurements empirically [53]. The Far UV-Circular Dichroism Study The UV-CD spectra of sPLA2-IIA (30 µg/mL) were recorded with/without sinapic acid in a reaction mixture using a Jasco J-810 spectropolarimeter.
A quartz cuvette was used to record the spectra of sPLA2-IIA between 200 and 240 nm at room temperature. The bandwidth was 1 nm, and the response time was set to 2 s. Ten scans in total were carried out to obtain the final spectrum. The spectrum of the blank solution containing the standard reaction mixture was subtracted to correct the protein spectra. The secondary structure of sPLA2-IIA was calculated using the K2D3 software (http://cbdm-01.zdv.uni-mainz.de/ andrade/k2d3/). Study of Reversibility of sPLA2-IIA Inhibition The sPLA2-IIA with the IC50 concentration of sinapic acid in a 350 µL standard reaction mixture was preincubated and then subjected to dialysis (MW cutoff of the bag: 3000-6000) for twenty-four hours with two buffer changes. The sPLA2-IIA activity was determined before and after the dialysis procedure. Neutralization of Indirect Haemolytic Activity The experiment was conducted as per the method of Boman and Kaletta [54]. Human RBCs (1 mL) and egg yolk (1 mL) in 8 mL of PBS were mixed fresh as the substrate for indirect hemolytic activity. The inhibitor (sinapic acid) was preincubated with sPLA2-IIA (30 µg) at 37 °C for 30 min, then 1 mL of substrate was added and the reaction was allowed to proceed for 45 min at 37 °C. Then, 9 mL of ice-cold PBS was added to halt the reaction. The suspension was vortexed and centrifuged for 20 min at 1500× g. The hemolytic activity, in terms of released hemoglobin, was measured at 530 nm. The sPLA2-IIA enzyme without sinapic acid was the positive control. Neutralization of Edema-Inducing Activity of sPLA2-IIA The assay was performed as per the method of Yamakawa et al. [55], slightly modified by Vishwanath et al. [56]. The sPLA2-IIA (5 µg) with different concentrations of sinapic acid, in a total of 20 µL, was injected into the plantar surface of the right hind footpad of mice (weighing 20-25 g). Saline was injected into the respective left hind limb as the negative control. The animals were euthanized after 45 min by administering anesthesia (30 mg/kg of pentobarbitone i.p.), and the hind limbs were amputated at the ankle. Neutralization of Haemorrhagic Activity The hemorrhagic activity of sPLA2-IIA was estimated as described by Kondo and Venkatesh [57,58]. Briefly, 10 µg of the hemorrhagic complex containing a 5:2 ratio of the sPLA2-IIA enzyme and a nonenzymatic peptide (Vipera neurotoxin-II, VNTx-II) was injected subcutaneously (s.c.). The mice were euthanized after three hours, the skin was removed, and the hemorrhagic spots on the dorsal surface were measured. Saline alone was injected as the control. For the inhibition studies, the hemorrhagic complex preincubated with different concentrations of sinapic acid was injected. Statistical Analysis The test results are given as the mean ± standard deviation of three determinations. GraphPad Prism version 5.0 was used to calculate IC50 values. Percent inhibition was calculated from the difference between the control receiving vehicle and the inhibitor-treated animals. Molecular Docking The molecular docking study was carried out to analyze the enzyme-inhibitor interaction. Sinapic acid interacted with the human sPLA2-IIA (1POE) enzyme and showed a binding energy of −7.6 (E-value). Sinapic acid interacted with the active site conserved amino acid Asp48 through hydrogen bonding and showed hydrophobic interactions with Cys124, Val45, Cys49, Thr121, Pro122, Lys52, Gly32, and Gly31 (Figure 2B and Table 1).
Similarly, the binding energy of standard genistein was −7.2; it interacted with the active site residues Asp48 and Lys52 through hydrogen bonding and showed hydrophobic interactions with Val45, Cys49, Thr121, and Cys124 (Figure 2A and Table 1). Antioxidant Activity The antioxidant activity of sinapic acid was evaluated by DPPH radical scavenging, the reducing power assay and anti-lipid peroxidation. Inhibition of sPLA2-IIA Further, sinapic acid was employed to inhibit the inflammatory sPLA2-IIA enzyme. It potentially inhibited the sPLA2-IIA enzyme to the extent of 94.4% ± 4.83 at 16 µM concentration, with an F-statistic value of 0.0031 and a p-value of 0.9969 (Figure 3). The IC50 value of sinapic acid, calculated with GraphPad Prism 5.0, was 4.16 ± 0.13 µM, whereas that of the standard genistein was 11.75 µM (historical IC50 value) (Table 3) [52]. Effect of Calcium and Substrate Concentration on sPLA2-IIA Inhibition The sPLA2-IIA activity was measured with and without sinapic acid (IC50 concentration) while increasing the calcium concentration from 2.5 to 15 mM; the activity of the enzyme increased linearly and maintained a constant inhibition of 49.34% ± 1.35 over the whole range of calcium concentrations (Figure 4). Furthermore, sPLA2-IIA activity was measured with and without the IC50 concentration of sinapic acid while increasing the substrate concentration from 30 to 120 nmoles; the enzyme activity increased linearly and maintained a constant inhibition of 48.43% ± 1.76 over the whole range of substrate concentrations (Figure 5). The inhibition constant (Ki) was determined by fitting the data to the competitive inhibition model in GraphPad Prism 5.0 via nonlinear regression analysis of competitive enzyme kinetics [59] (Figure 6). The Ki of sinapic acid for sPLA2-IIA inhibition was found to be 2.711 ± 1.19. The Ki and IC50 values are often used to compare the relative potency of inhibitors. Smaller Ki values denote tight binding, and if the Ki value is less than the IC50 value, it indicates competitive inhibition [60].
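The nonlinear regression behind the Ki estimate can be reproduced outside GraphPad by fitting the competitive-inhibition rate law v = Vmax·S/(Km·(1 + I/Ki) + S) to velocities measured with and without the inhibitor. The sketch below uses SciPy with invented data points, since the raw velocities are not given in this excerpt:

```python
# Fit the competitive-inhibition model to (substrate, inhibitor) -> velocity data.
# All data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def competitive(X, Vmax, Km, Ki):
    S, I = X
    return Vmax * S / (Km * (1.0 + I / Ki) + S)

S = np.array([30, 60, 90, 120, 30, 60, 90, 120], dtype=float)  # substrate (nmoles)
I = np.array([0, 0, 0, 0, 4.16, 4.16, 4.16, 4.16])             # sinapic acid (µM)
v = np.array([52, 72, 83, 90, 27, 41, 50, 56], dtype=float)    # velocity (arbitrary units)

(Vmax, Km, Ki), _ = curve_fit(competitive, (S, I), v, p0=[120, 40, 3])
print(f"Vmax = {Vmax:.1f}, Km = {Km:.1f} nmoles, Ki = {Ki:.2f} µM")
```

For a competitive inhibitor, the Cheng-Prusoff relation IC50 = Ki·(1 + S/Km) links the two potency measures, which is consistent with the reported Ki (≈2.7 µM) being smaller than the IC50 (≈4.2 µM).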
Figure 5. Effect of substrate concentration on sPLA2-IIA inhibition: sPLA2-IIA activity was measured with substrate concentrations ranging from 30 to 120 nmoles, with (■) and without (□) the IC50 concentration of sinapic acid; the sPLA2-IIA inhibition is shown in the inset. The data are expressed as mean ± standard deviation (n = 3). Intrinsic Fluorescence Study An altered intrinsic fluorescence spectrum of the enzyme indicates structural changes due to interaction with the inhibitor. Sinapic acid altered the relative intrinsic fluorescence of the sPLA2-IIA enzyme in accordance with the inhibitor concentration (0.02 to 0.1 µM). The maximum fluorescence intensity of sPLA2-IIA was noted at 338 nm and shifted to the higher wavelength of 344 nm in the presence of sinapic acid at 0.1 µM concentration (Figure 7I,II). Circular Dichroism (CD) Study A change in the secondary structure of the enzyme implies a direct interaction with the inhibitor. The CD spectrum of sPLA2-IIA with and without the IC50 concentration of sinapic acid was recorded; it exhibited two major negative bands at 210 and 222 nm (red line). In the presence of sinapic acid (IC50 concentration), the negative bands were significantly reduced and abruptly shifted to the longer wavelengths of 220 and 224 nm, respectively (Figure 8). The sPLA2-IIA spectra were corrected by subtracting the spectra of the blank solution containing 100 mM Tris-HCl buffer (pH 7.4) and 5 mM calcium. The K2D3 software was used to determine the secondary structure of the sPLA2-IIA enzyme (Table 4). Determination of Binding Characteristics The reversibility of sPLA2-IIA inhibition was studied by subjecting the preincubated reaction mixture to dialysis. The sPLA2-IIA activity was measured before and after the dialysis.
The percentage of sPLA2-IIA inhibition before and after the dialysis was found to be 50.2% ± 2.3 and 47.8% ± 1.55, respectively. Neutralization of Indirect Haemolytic Activity Sinapic acid was tested for neutralizing the indirect hemolytic activity of the sPLA2-IIA enzyme. Sinapic acid reduced the indirect hemolytic activity of sPLA2-IIA in a concentration-dependent manner. The sPLA2-IIA (30 µg) alone caused erythrocyte lysis of 94% ± 2.19, which was reduced to 12.35% ± 2.57 by sinapic acid at a concentration of 16 µM (Figure 9). Distilled water served as a positive control (100% lysis). Figure 9. Neutralization of sPLA2-IIA induced indirect hemolytic activity: The reaction was initiated by adding 1 mL of substrate to sPLA2-IIA preincubated with the indicated concentrations of sinapic acid and incubated at 37 °C for 30 min. The released hemoglobin was measured by reading the optical density at 540 nm. Data represent the mean ± standard deviation (n = 3). Neutralization of sPLA2-IIA Induced Mouse Paw Edema The different doses of sinapic acid (3-18 µM) were preincubated with sPLA2-IIA and injected into the right hind paw of mice; saline injected into the left hind paw served as the control. Sinapic acid reduced the edema from 171.75% ± 2.2 to 114.8% ± 1.98 at 18 µM concentration, and the percent reduction of sPLA2-IIA-induced edema was 79.12% ± 1.52 (Figure 10). Neutralization of Haemorrhagic Activity This study reveals the synergistic effect of sPLA2-IIA and nonenzymatic peptides. The Vipera russellii sPLA2-IIA and the Vipera russellii neurotoxic nonenzymatic peptide (VNTx-II) in a 5:2 molar ratio, called V. russelli Hemorrhagic Complex-I (VR-HC-I) [58], were administered to mice intradermally. VR-HC-I induced hemorrhage at the injection site (Figure 11c). Neither sPLA2-IIA nor VNTx-II independently showed a hemorrhagic effect (Figure 11a,b, respectively). The mice were injected with VR-HC-I preincubated with sinapic acid (5, 10 and 15 µM), which reduced the hemorrhagic potential (Figure 11d-f, respectively). Sinapic acid significantly neutralized the hemorrhagic activity at 15 µM concentration.
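The percentages reported for the indirect hemolytic assay follow directly from the absorbance readings relative to the distilled-water (100% lysis) control. A minimal sketch with invented absorbance values:

```python
# Percent lysis and percent neutralization from hemoglobin-release absorbance
# (read at 530 nm in the methods above). Absorbance values are invented.
def percent_lysis(a_sample: float, a_blank: float, a_water: float) -> float:
    return 100.0 * (a_sample - a_blank) / (a_water - a_blank)

blank, water = 0.05, 1.25     # PBS blank and distilled-water total-lysis control
enzyme_only = 1.18            # sPLA2-IIA without inhibitor
with_sinapic = 0.20           # sPLA2-IIA preincubated with 16 µM sinapic acid

lysis_ctrl = percent_lysis(enzyme_only, blank, water)   # ~94%
lysis_inh = percent_lysis(with_sinapic, blank, water)   # ~12.5%
print(f"lysis: {lysis_ctrl:.1f}% -> {lysis_inh:.1f}%")
print(f"neutralization: {100 * (1 - lysis_inh / lysis_ctrl):.1f}%")
```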
Discussion Sinapic acid is abundant in fruits such as orange, mango, avocado, strawberries, and raspberries [61][62][63], vegetables such as garlic, onions, and cabbage [64,65], and legumes such as horse gram [66]. Among them, avocado, garlic, and horse gram are well documented for their anti-inflammatory activity [67,68]. Sinapic acid was reported to have no cytotoxic effect on V79 cells [69] and no effect on lactate dehydrogenase activity or serum creatine kinase in broilers, suggesting that it has no adverse effects on the brain, liver, kidneys, or cardiac muscle [70]. Thus, sinapic acid from food sources has been demonstrated to be a non-toxic and therapeutically important molecule. The in silico molecular docking study is important at the early stage of drug discovery because it provides basic knowledge of the binding energy, binding pattern and binding affinity. The docking of sinapic acid with sPLA2-IIA (1POE) exhibited a favorable binding energy (E value −7.6), slightly better than that of the standard genistein (E value −7.2). Most sPLA2-IIA inhibitors interfere with the catalytic site by binding to His47/Asp48 and decrease the catalytic activity by weakening the Ca2+ interaction [71,72]. Sinapic acid interacted with the active site amino acid Asp48 through hydrogen bonding and showed hydrophobic interactions with a few amino acids. Therefore, sinapic acid was predicted to be a potent inhibitor of the sPLA2-IIA enzyme (Table 1). Reactive oxygen species (ROS) and their role in human disease have become an important aspect of disease management. The sPLA2-IIA-mediated ROS generation through activation of NADPH oxidases [73] is an important pathway, as it is known to be involved in the activation of cPLA2 and ERK1/2 [74], leading to the release of arachidonic acid. Furthermore, the hydroxyl radicals formed during inflammation attack membrane glycerophospholipids and initiate lipid peroxidation [73]. Hence, sinapic acid was evaluated for its antioxidant efficacy. Sinapic acid effectively scavenged the DPPH radical, reduced ferric ions, and demonstrated its ability to protect against lipid peroxidation. Thus, it was concluded that sinapic acid, if developed as an anti-inflammatory drug, would limit free radicals and their intermediates released during inflammatory pathologies. The results illustrated that sinapic acid exhibited a favorable binding energy (docking study) and good antioxidant potency and, hence, was further examined for its anti-inflammatory activity. Sinapic acid potentially inhibited the sPLA2-IIA enzyme in a concentration-dependent manner and with a comparatively low IC50 value. Genistein was taken as the standard molecule in this study as it is a well-known anti-inflammatory and antioxidant molecule [75][76][77]. The kinetics study of sPLA2-IIA inhibition was performed using GraphPad Prism 5.0, which suggested that sinapic acid is a competitive inhibitor of sPLA2-IIA. Many sPLA2-IIA inhibitors limit the enzyme activity either by chelating calcium (the metal ion cofactor) or by binding to substrates. Inhibitors such as lipocortin I and II bind sPLA2-IIA enzymes non-specifically and affect the quality of the lipid interface [78].
Therefore, we examined the effect of substrate and calcium concentrations on sPLA2-IIA inhibition. The findings show that the inhibition of sPLA2-IIA by sinapic acid was not dependent on calcium or substrate concentrations. Many sPLA2-IIA inhibitors have been reported to alter fluorescence spectra [79]. The structural change in the enzyme upon interaction with an inhibitor alters its intrinsic fluorescence. The aromatic amino acids of a protein, such as tryptophan, tyrosine, and phenylalanine, are responsible for intrinsic fluorescence. The quantum yield, intensity, and wavelength of maximum fluorescence emission depend upon the microenvironment of these aromatic amino acids. Sinapic acid shifted the maximum fluorescence spectrum of sPLA2-IIA towards shorter wavelengths and increased the fluorescence intensity, as the polarity of the solvent surrounding the aromatic amino acids decreased [80,81]. Sinapic acid alone does not show any fluorescence, indicating that the observed changes arise from a direct interaction of sinapic acid with the sPLA2-IIA enzyme. To substantiate the fluorimetry results, a circular dichroism (CD) study was carried out. Earlier studies showed that significant changes occur in the secondary structure of sPLA2-IIA upon inhibitor binding [82]. In the present study, the interaction of sinapic acid with the sPLA2-IIA enzyme caused significant changes in the secondary structure (Figure 8). Hence, it was concluded that sinapic acid inhibits sPLA2-IIA by irreversibly binding to the active site. The reversibility of sPLA2-IIA inhibition was examined by measuring the percentage of inhibition before and after dialysis of the reaction mixture. The inhibition percentages before and after the dialysis were almost the same. Hence, it is again implied that sinapic acid binds irreversibly to the sPLA2-IIA enzyme. The indirect hemolytic assay is an indirect way of estimating sPLA2-IIA activity using egg yolk phospholipid and washed erythrocytes as substrates [83]. Sinapic acid efficiently neutralized the sPLA2-IIA-mediated hemolysis in a dose-dependent way. Thus, sinapic acid neutralizes sPLA2-IIA enzyme activity irrespective of the nature of the substrate, because sinapic acid binds to the enzyme irreversibly. It is often observed that in vitro experiments show positive results but fail to show efficacy in in vivo studies. This could be due to the heterogeneity of the environment in in vivo models. Animal experiments are important for researchers as they provide knowledge of pharmacodynamics and pharmacokinetics in the early stages of drug discovery [84]. Therefore, the effectiveness of sinapic acid in neutralizing the sPLA2-IIA-induced inflammatory response in Swiss albino mice was evaluated. Sinapic acid reduced the inflammatory edema to a great extent. Thus, sinapic acid demonstrated in vivo efficacy by neutralizing the sPLA2-IIA-mediated inflammatory response. In the living system, protein-protein interactions can lead to pharmacological damage due to synergistic effects [85]. For example, the interaction of human sPLA2-IIA and vimentin (an intracellular protein) further exacerbates inflammatory pathologies. Interestingly, the addition of LY311727 (an sPLA2-IIA inhibitor) causes a substantial structural displacement in the amino terminus of the sPLA2-IIA enzyme, which is sufficient to minimize its interaction with vimentin.
The interaction between sPLA2-IIA and nonenzymatic peptides is synergistic in snake bites and leads to increased hemorrhage [58]. In the present study, sinapic acid significantly reduced the synergistic hemorrhagic effect of V. russelli Haemorrhagic Complex-I (sPLA2-IIA and the V. russelli neurotoxic nonenzymatic peptide) (Figure 11). Conclusions Activated sPLA2-IIA generates proinflammatory lipid mediators and oxygen free radicals that intensify oxidative stress disorders and chronic inflammatory diseases. The present study evaluated sinapic acid from a dietary source for both antioxidant potency and sPLA2-IIA inhibition as an anti-inflammatory function, and the results showed that sinapic acid exhibits both potencies to a considerable extent. Further, the sPLA2-IIA inhibition was not dependent on either calcium or substrate concentration. The altered fluorescence intensity and the shifted negative bands of the circular dichroism spectrum suggest a direct interaction of sinapic acid with the active site of the sPLA2-IIA enzyme. Furthermore, sinapic acid neutralized sPLA2-IIA-induced erythrocyte lysis, mouse paw edema and the hemorrhagic effect. As a result, sinapic acid is a potential therapeutic candidate for both inflammatory diseases and snakebite envenomation. However, more clinical studies are needed before sinapic acid can be claimed as an anti-inflammatory drug.
Measuring Health: A Multivariate Approach We examined the health status of 171 countries by employing factor analysis on various national health indicators for the period 2000-2005 to construct two new measures of health. The first measure is based on the health of individuals and the second on (the quality of) the health services. Our measures differ substantially from indicators used in previous studies on health and also lead to different rankings of countries. As rankings are not that informative without further information, we analyzed the distance between each country and the sample mean. Differences between countries are much more pronounced for our measure of health services than for our measure of the health of individuals. Using cluster analysis, we classified the countries into six homogeneous groups. Nowadays, there is much information available on national health. How should all this information be combined? In other words, what is the appropriate conceptual framework for measuring health (Cutler et al. 1997)? What lessons can be learned from such a framework with respect to cross-country differences in health? In our attempt to answer these questions, we applied factor analysis on various national health indicators for 171 countries over the period 2000-2005 to examine whether health has more than one dimension. Factor analysis is an excellent instrument to identify what different indicators of a latent construct (like health) have in common and to separate common factors from specific factors. We used the outcomes of the factor analysis to construct two new health measures. The first one refers to the health of individuals and the second captures the (quality of) health services. Our measures differ substantially from indicators used in previous studies on health and also lead to different rankings of countries. As rankings are not that informative without further information, we analyzed the distance between each country and the sample mean. Differences between countries are much more pronounced for our measure of health services than for our measure of the health of individuals. Using cluster analysis, we classified the countries into six homogeneous groups. Health differs substantially across these clusters. The remainder of the paper is structured as follows. The next section explains factor analysis, while in Sect. 3 this method is applied to various indicators of health. Sect. 4 presents our rankings and a cluster analysis, while Sect. 5 offers a discussion of some of our findings. The final section presents our conclusions. Model Most previous studies on health employed an arbitrarily chosen one-dimensional indicator of health. The question is whether such indicators represent all dimensions of health. Furthermore, most indicators of health contain measurement errors that may lead to biased estimates (Klitgaard and Fedderke 1995). This is especially the case for samples including developing countries. To come up with a better measure of health and to determine whether health has a multidimensional character, we employed a so-called Exploratory Factor Analysis (EFA). The first step in this analysis is to check whether the data used are suitable for an EFA using the Kaiser-Meyer-Olkin measure of sampling adequacy, which tests whether the partial correlations among variables are low. A test statistic above 0.6 indicates that the data are suitable for an EFA (Kaiser 1970).
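As a rough illustration of this sampling-adequacy check, the KMO statistic can be computed from the ordinary and partial correlations of the indicator matrix. Below is a minimal numpy sketch under the assumption of a complete numeric data matrix; dedicated packages (e.g. factor_analyzer in Python or psych in R) provide equivalent, better-tested routines.

```python
import numpy as np

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy for a data matrix X (n x M)."""
    R = np.corrcoef(X, rowvar=False)             # ordinary correlations
    R_inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    Q = -R_inv / d                               # partial (anti-image) correlations
    off = ~np.eye(R.shape[0], dtype=bool)        # off-diagonal mask
    r2, q2 = (R[off] ** 2).sum(), (Q[off] ** 2).sum()
    return r2 / (r2 + q2)                        # > 0.6 suggests the data suit an EFA

# Toy check on random correlated data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
common = rng.normal(size=(500, 1))
X = common + 0.7 * rng.normal(size=(500, 6))     # six indicators sharing one factor
print(round(kmo(X), 2))
```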
An alternative test is Bartlett's test of sphericity, which checks whether the correlation matrix is an identity matrix, in which case the factor model is inappropriate (Lattin et al. 2003). The objective of an EFA is to identify what different indicators of a latent variable (like health) have in common and to separate common factors from specific factors. Following Wansbeek and Meijer (2000) and Lattin et al. (2003), the EFA model can be written as:

x_i = \Lambda \xi_i + \varepsilon_i,

where x_i is a vector containing the M indicators for observation i (in our case the various indicators of health), \Lambda is a matrix of factor loadings of order M × k, and \xi_i is a vector of k latent variables with mean zero and positive definite covariance. The random error term \varepsilon_i is assumed to be uncorrelated with the latent variables. 1 Under these assumptions, the covariance matrix of x_i is:

\Sigma(\theta) = \Lambda \Phi \Lambda' + \Omega,

where \Sigma(\theta) is the parameterised covariance matrix that can be decomposed into the covariance matrix of the factors \Phi and the diagonal covariance matrix of the error terms \Omega. The model is estimated with the Maximum Likelihood (ML) method. By assuming that the factors and the disturbance term are normally distributed, it follows that the indicators are normally distributed. The log-likelihood fit function can be written as:

F_{ML} = \ln|\Sigma(\theta)| + \mathrm{tr}\left(S\,\Sigma(\theta)^{-1}\right) - \ln|S| - M,

where S represents the sample covariance matrix. Minimizing this fit function means choosing the values for the unknown parameters so that the implied covariance matrix comes as close as possible to the sample covariance matrix. The next step is to decide on the number of factors to represent health on the basis of the scree plot, which plots the number of factors against the eigenvalues of the covariance matrix of the indicators. In general, there are two ways of interpreting the graph. According to Kaiser's Rule, only factors with an eigenvalue exceeding unity should be retained (Kaiser and Dickman 1959). An alternative way is to look for an 'elbow' in the scree plot, i.e., the point after which the remaining factors decline in approximately a linear fashion, and to retain only the factors above the elbow. After deciding on the number of factors, it is possible that the factors of the (standardized) solution of the model are difficult to interpret. In that case, rotating the factor loadings may yield a solution that is easier to interpret because the matrix has a simpler structure. Ideally, each indicator is correlated with as few factors as possible. The rotation technique that we used to interpret the factors is the Oblimin rotation, which allows for correlation among the factors and minimizes the correlation of the columns of the factor loadings matrix. As a result, a typical indicator will have high factor loadings on one factor, while it has low loadings on the other factors (Harris and Kaiser 1964). All observations received factor scores for the various dimensions (factors) identified. These factor scores were computed with the so-called Bartlett predictor, i.e., the best linear unbiased predictor of the factor scores:

\hat{\xi}_i = (\Lambda' \Omega^{-1} \Lambda)^{-1} \Lambda' \Omega^{-1} x_i.

These factor scores were used as indicators of the health status of a country. Data The selection of indicators of health is based on two rules. First, data should be widely available for a large number of countries. Here we faced a trade-off, as some indicators were only available for a limited number of countries. Second, to aggregate the data from the micro level to the macro level, the data should be gathered in a consistent way across countries and over time periods.
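To make the factor-score step concrete, the Bartlett predictor above is a weighted least-squares projection that can be computed directly from an ML factor solution. The sketch below assumes standardized indicators and takes the loadings and unique variances from any ML-type factor fit, here sklearn's FactorAnalysis (whose components_ and noise_variance_ attributes play the roles of \Lambda' and the diagonal of \Omega); it is an illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def bartlett_scores(X, loadings, uniquenesses):
    """Bartlett predictor: xi_hat = (L' O^-1 L)^-1 L' O^-1 x for each row of X."""
    A = loadings.T / uniquenesses          # L' Omega^{-1} (Omega is diagonal)
    W = np.linalg.solve(A @ loadings, A)   # (L' O^-1 L)^-1 L' O^-1
    return X @ W.T                         # one k-vector of factor scores per country

# Hypothetical standardized indicator matrix: 171 countries x 27 indicators
rng = np.random.default_rng(0)
X = rng.normal(size=(171, 27))
fa = FactorAnalysis(n_components=1).fit(X)
scores = bartlett_scores(X, fa.components_.T, fa.noise_variance_)
```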
We used data from the World Development Indicators of the World Bank and from the Statistical Information System of the World Health Organization. We grouped our data on the health of individuals in three broad categories. Our first category contains various indicators on lifetime. It is quite common to proxy the health status of a country by the population's life expectancy or mortality rate. In this category, we also included the number of healthy years that a person has and the prevalence of children with malnutrition, measured by the share of children that are underweight. Our second category refers to the prevalence of various communicable diseases. These include diseases that are transmitted from person to person or through insect bites and that can be fatal. Most diseases in this category can be epidemic and may form a serious threat for the health status of a country, especially in developing countries. Finally, our third category includes various non-communicable diseases. These are not caused by transmission, but by accident or by lifestyle. These diseases are more common in industrialized countries. We applied factor analysis on 27 national indicators of the health of individuals. Table 1 presents the indicators used and their sources. A different measure for the health status of a country is the quality of its health services. Therefore, we also applied factor analysis on 10 indicators of national health services. The indicators used and their sources are given in Table 2. Our first category includes indicators of the availability of health care. The more capacity there is, the earlier a patient will see a doctor and receive care. The second group of variables captures immunization. We argue that the immunization rate is a policy variable decided upon by the government (cf. Lake and Baum 2001). 2 For both measures we used averages over the period 2000-2005 for a sample of 171 countries, giving 4,446 observations for the health of individuals and 1,710 observations for health services. 3 For some countries one or two indicators were not available, yielding 214 missing observations for the health of individuals and 83 for health services, which is in both cases less than 5%. In order not to lose valuable information, we applied the EM algorithm to compute the missing observations. The EM algorithm was suggested by Dempster et al. (1977) to solve maximum likelihood problems with missing data. It is an iterative method: the expectation step involves forming a log-likelihood function for the latent data as if they were observed and taking its expectation, while in the maximization step the resulting expected log-likelihood is maximized. Results The Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test of sphericity indicated that our data could be used for an Exploratory Factor Analysis. First, we analysed individual health. Because our data are measured on an interval or ratio scale and are normally distributed, Table 3 shows Pearson's correlation coefficients. 2 However, the immunization rates may also be considered as an indicator of the health of individuals. We also did the factor analysis with the immunization rates included in the factor analysis for the health of individuals. The correlation between the two factor scores on the health of individuals is 0.95 and between the factor scores on health services the correlation is 0.92. Detailed results are available upon request.
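The imputation step can be sketched as follows. This is a simplified variant that iterates conditional-mean imputation under a multivariate normal model; the full EM of Dempster et al. (1977) additionally carries conditional variances into the covariance update, which this illustration omits. Data and dimensions are hypothetical.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Iterative conditional-mean imputation of NaNs under a multivariate normal model.

    Missing entries start at column means and are repeatedly replaced by their
    conditional expectations given the observed entries and the current
    mean/covariance estimates. Assumes every row has at least one observed value.
    """
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)
        for i in np.where(miss.any(axis=1))[0]:
            m, o = miss[i], ~miss[i]         # missing / observed masks for this row
            # E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o)
            X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                S[np.ix_(o, o)], X[i, o] - mu[o])
    return X
```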
3 We only included countries with a population larger than 200,000. Furthermore, countries were only taken into account if we had three or more observations for all the indicators considered between 2000 and 2005. The countries included in our sample are shown in Table A1 in the Appendix. To extract the right number of factors out of the various indicators, we used the scree plot (see Fig. 1). According to the Kaiser rule, more than six factors should be identified. However, this is probably a so-called Heywood (1931) case, where some solutions of the unique variances of the indicators are smaller than zero. If instead the elbow criterion is used, individual health can be represented as a one-dimensional construct. Both models were compared using a likelihood ratio test. In this case, the multiple-factor model does not fit the data significantly better than the one-factor model. The goodness-of-fit test statistic for the one-factor model is 2795.91, which is χ²(324)-distributed and highly significant (compared to a saturated model) at the five percent significance level, suggesting that the one-factor model is appropriate. Table 4 presents the factor loadings of the various national indicators of the health of individuals and the variance of the indicators explained by the first factor. More than 60% of the variance is explained by the first factor and about 40% of the total variance is unique. The one-factor model can explain about 89% of the total variance of the mortality rate below 5 years, but less than 33% of the age-standardized cancer mortality. Next, we performed a factor analysis on the indicators of health services. Table 5 shows Pearson's correlation coefficients. The results indicate that the correlations between the different indicators are often quite low, although generally significant. The scree plot is shown in Fig. 2. According to the Kaiser rule, two factors should be identified, while the elbow interpretation indicated only one factor. Both models were compared using a likelihood ratio test. The two-factor model does not fit the data significantly better than the one-factor model. The goodness-of-fit statistic of the one-factor model is 438.98, which is χ²(35)-distributed and highly significant at the five percent significance level, suggesting that the one-factor model is appropriate. Table 6 presents the factor loadings of the various indicators and the variance of the indicators explained by the factor. About 60% of the variance is explained by the factor and about 40% of the total variance is unique. Health Ranking and Cluster Analysis We constructed new measures for the health of individuals and health services based on the factor scores as reported in Sect. 3. Table 10 in the Appendix shows the full list of the predicted factor scores and the implied ranking of the various countries. The rankings lead to a number of conclusions. First, not surprisingly, western countries and Japan dominate the top of the rankings, while mostly African countries take the positions at the bottom. Second, in the ranking based on health services Cuba and Belarus score remarkably high. Third, the ranking differs substantially from the most recent ranking on health over almost the same period by Nolte and McKee (2008) for OECD countries (see Table 7). According to the results of Nolte and McKee (2008), France outranks all other countries in the OECD area.
However, in our ranking France is at place eight in the ranking based on the health of individuals and is even number 14 in the ranking based on health services. Another example is Spain, which takes the third place in the ranking of Nolte and McKee (2008), but is on place 13 in our ranking of health services. As rankings are not that informative without further information, Table 7 also presents the distance between each OECD country and the OECD mean. 4 (Because our factor scores are in logarithms, we subtracted a country's value from that of the country with the highest score to obtain the difference in percentage.) This measure gives a much better impression of health differences between countries. The results show that there is a large difference between both health measures. While France scores about 2.5% above the mean on our measure of individual health, it scores about 11% below the mean on our health services measure. Nolte and McKee (2008) report that the United States scores about 27% below the mean. However, according to our measure of individual health, the United States scores only about 13% below the mean, while it scores above the mean according to our measure of health services. In general, Nolte and McKee (2008) report more dispersion compared to our measure of the health of individuals. However, the variance among the countries in our sample for our measure of health services is much higher than that of Nolte and McKee (2008). These results are confirmed if we take the standard deviation of the various measures divided by their mean. Furthermore, if we expand our sample to include not only the OECD countries, we find a similar, but even more pronounced, pattern. The data show that the differences between a country's score and the sample mean are much higher for the measure of health services than for the measure of the health of individuals. The variance of the individual health measure is 1.1, while for the health services measure the variance is 2.4. To sum up, our results indicate that there exist significant differences between our measures. The ranking based on the health of individuals is less dispersed than the ranking based on the quality of health services. This strengthens our conclusion that both measures capture different dimensions of a country's health. So, in contrast to Nolte and McKee (2008), we argue that cross-country comparisons of health should not be based on only one (arbitrarily chosen) variable. To get a better view of health differences across countries, we categorized the countries in our sample on the basis of their similarities and differences using cluster analysis. Cluster analysis is recognized as a useful technique for this purpose and has been employed extensively in the social and economic sciences (Punj and Stewart 1983; Hair et al. 1998). For the cluster analysis we used our two health measures as identified by the factor analysis. We also included some additional health-related variables: public health expenditure as a percentage of GDP, the percentage of the population having access to improved sanitation, the percentage of the population having access to improved water resources, and GDP per capita. 5 The first step is to detect outliers and check for multicollinearity. Outliers distort the true structure of the data and make the derived clusters unrepresentative of the population structure.
To test whether an observation is an outlier we used the Mahalanobis D² (Hair et al. 1998). The Mahalanobis D² estimates the standard deviation of the distances of the sample points from the centre of mass. If the distance between the test point and the centre of mass is more than one standard deviation, it is highly probable that the test point does not belong to the set and can be classified as an outlier. The Mahalanobis D² measure indicated that less than 2% of the observations are outliers. A scatter matrix (not shown, but available on request) confirmed that our dataset contains only a limited number of outliers. As a robustness check, we estimated the cluster analysis with and without the outliers. However, the outliers did not affect our results and these observations were therefore not deleted. Multicollinearity can also be a problem in cluster analysis because it distorts the weighting of variables in the different clusters. We used as a rule of thumb that the correlation between the variables should not exceed 0.8 (Green 2003). The correlation of two variables was higher: the share of people having access to improved water and the share of people having access to improved sanitation (see Table 11 in the Appendix). We therefore dropped the latter variable. 6 The next step is to determine inter-object similarity, which is based on the distance between the objects. As a proxy we used the squared Euclidean distance, which is the square of the length of a straight line drawn between two objects (Hair et al. 1998). A higher value denotes less similarity. Because all variables are measured on different scales, we first standardized the data by computing for each variable the standard scores (also known as Z scores), subtracting the mean and dividing by the standard deviation of each variable. Next, we used Ward's linkage method to cluster countries (Hair et al. 1998). This method joins the two clusters whose merger leads to the smallest increase in the within-cluster sum of squares, instead of joining the two closest clusters. An advantage of this method compared to others (like single linkage or complete linkage) is that Ward's method is not sensitive to small distortions in the data. There is no general rule for determining the number of clusters after the hierarchical clustering procedure. However, there are some rules of thumb. One of these rules is based on the so-called agglomeration coefficient. The agglomeration coefficient is the within-cluster sum of squares and measures the differences within a cluster. Joining two very different clusters results in a large agglomeration coefficient (or a large percentage change in the coefficient). One drawback of this method is that it has the tendency to indicate too few clusters (Hair et al. 1998). The agglomeration coefficients in Table 8 indicate that the largest percentage increase occurs when moving from two clusters to one. After seven clusters, the agglomeration coefficient hardly changes. An alternative rule is to compute the Caliński-Harabasz pseudo-F-index or the Duda-Hart pseudo-T-square (Milligan and Cooper 1985). A large pseudo-F-index and a small pseudo-T-square indicate homogeneous clustering. The results in the second part of Table 8 show that the six-cluster solution has the largest Caliński-Harabasz pseudo-F-index (409.56). The smallest pseudo-T-square value is 19.99 for the five-cluster solution, but notice that the pseudo-T-square value for the six-cluster solution is also low (23.78).
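The standardization and Ward-clustering steps just described map directly onto standard scientific-Python routines. A minimal sketch using random stand-in data rather than the paper's actual variables; the Caliński-Harabasz index printed per candidate solution corresponds to the pseudo-F criterion discussed above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore
from sklearn.metrics import calinski_harabasz_score

# Hypothetical stand-in: 171 countries x 5 standardized variables
rng = np.random.default_rng(0)
X = zscore(rng.normal(size=(171, 5)), axis=0)      # Z scores per variable

Z = linkage(X, method="ward")                      # Ward's minimum-variance linkage
for k in range(2, 8):                              # compare candidate cluster numbers
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, round(calinski_harabasz_score(X, labels), 2))  # larger pseudo-F = better
```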
A more formal test on the number of clusters is given by the Mojena test statistics (Mojena 1977). Mojena test I assumes that the distances of the agglomeration schedule are normally distributed up to a certain step of the fusion process. At each step it is tested whether the distance increase belongs to the assumed normal distribution. Mojena test II verifies whether the distance at a certain step can be predicted with a regression line that is estimated using the distances from the previous steps. If the distance lies outside the 95% confidence interval, a significant increase in the distance is found and the respective step of the fusion process is used as the optimal number of clusters. In the present analysis, the two Mojena tests give the same results. According to test statistic I, the significance threshold is exceeded when moving from seven to six clusters, whereas test statistic II suggests an optimal number of six clusters. This solution is in line with the results on the agglomeration coefficient, the Caliński-Harabasz pseudo-F-index, and the Duda-Hart pseudo-T-square. Therefore, we identified six clusters. The six-cluster solution is also in line with the dendrogram. The dendrogram is a graphical representation of the results of a hierarchical procedure in which each object is arrayed on one axis and the other axis portrays the steps in the hierarchical procedure. The dendrogram shows how the clusters are combined in each step of the procedure until all are contained in a single cluster. (Because the dendrogram was too large to include in the paper, we only summarize it in Table 12 in the Appendix. However, the dendrogram is available upon request.) The dendrogram table indicates that the first cluster solution based on the minimal distance shows 171 clusters with only one country each, while the next informative cluster solution indicates that countries can be categorized in six clusters. Finally, we profiled these six clusters. Table 9 shows the P-value of the F-test that the clusters differ significantly with respect to the health variables (P < 0.05). It is clear that the clusters differ significantly from one another. There are two clusters with poor health, i.e., cluster four and cluster two. In cluster four, on average less than fifty percent of the population has access to improved water facilities and the government spends only about two percent of (low) GDP on health. Compared to cluster four, cluster two includes countries with a population that has somewhat better access to improved water facilities and a somewhat higher level of government health spending, while the average GDP per capita is about twice as high as GDP per capita in cluster four. Clusters one and six have good and very good health outcomes. In these clusters almost the total population has access to improved water facilities and public health spending is more than five percent of GDP. Finally, the remaining two clusters are intermediate but differ in their health outcomes and income. Table 9 shows that the clusters not only differ with respect to health, they also have different economic and demographic characteristics. Countries in clusters two and four are mostly countries with a low income, a low school enrolment rate, and high population growth. Countries in clusters one and six are high-income countries with a high school enrolment rate. Also the geographical dimension differs across clusters. African countries are mainly in clusters two and four, while most European countries can be found in clusters one and six.
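Mojena's first stopping rule can be sketched in a few lines: standardize the fusion heights from the linkage matrix and flag the first step whose height jumps more than a threshold k standard deviations above the mean. The threshold and the mapping to cluster counts below are illustrative assumptions (values of k around 2.75-3.5 have been suggested), not the paper's exact settings.

```python
import numpy as np

def mojena_rule_I(heights, k=2.75):
    """Mojena's stopping rule I on the fusion heights of an agglomeration schedule.

    heights: the n-1 merge distances from hierarchical clustering (e.g. Z[:, 2]
    from scipy's linkage). Returns the suggested number of clusters: the solution
    just before the first standardized height exceeding k, or 1 if none does.
    """
    h = np.asarray(heights, dtype=float)
    z = (h - h.mean()) / h.std(ddof=1)           # standardized fusion heights
    n = len(h) + 1                               # number of objects
    jumps = np.where(z > k)[0]
    return n - jumps[0] if jumps.size else 1     # clusters remaining before the jump
```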
Table 13 in the Appendix shows the composition of the clusters. Discussion On the basis of factor analysis and cluster analysis, this paper tried to offer a better view on cross-country differences in health. Because health is not directly observable and there are many different health indicators available, we used factor analysis to examine the dimensions of health and to come up with better measures for health. Because rankings of countries based on these measures (or any other indicator) do not give information about distances between countries, we focused upon the difference between a country's health vis-à-vis the sample mean. However, like any study, the present study has weaknesses. The main weakness is the availability of the data. One limitation of studies on cross-country differences in health is the limited availability of indicators for a long-term period. Even though we included twenty-seven indicators of the health of individuals and ten indicators of health services, this may not suffice to fully capture the concept of health. Unfortunately, other indicators are only available for a small number of (mostly industrialized) countries or are not constructed in a consistent way. Due to this limitation, it is possible that when more indicators become available for a larger set of countries and longer periods, our two measures of health may turn out to be multi-dimensional instead of one-dimensional. In other words, different data could lead to different results and conclusions. Furthermore, we aggregated the micro-level health data to the macro level. Therefore, we cannot take into account individual (respondent) differences in our cluster analysis. We can only relate the (macro) health outcomes to country averages. Another problem in research on cross-country health differences is the quality of the data, especially for developing countries. Some variables for these countries show large and unrealistic swings and gaps. Also, the data dispersion within a country cannot be addressed in this study because we focus on country-level data. The final weakness is that our two one-dimensional health measures explain on average only between 60 and 70% of the total variance. This means that about one-third of the variance remains unexplained. However, extracting more factors did not give more insight and worsened the interpretation of the results. Conclusions One of the major problems in the economic and social science literature is the measurement of latent constructs. This certainly holds true for cross-country analyses of health. Most previous studies that ranked countries on the basis of their health status used arbitrarily chosen indicators of the health status of a country (cf. the life expectancy or the mortality rate), thereby implicitly assuming that health is a one-dimensional concept. Furthermore, most indicators of health contain some measurement error, which may lead to biased estimates. To come up with better measures for health and to determine whether health has a multidimensional character, a so-called Exploratory Factor Analysis (EFA) was employed on various national health indicators for 171 countries over the period 2000-2005. We used the outcomes of the factor analysis to construct two new national health measures. The first one refers to the health of individuals and the second captures health services.
Our new health measures differ substantially from those reported in earlier studies ranking countries on the basis of their health status. As rankings are not that informative without further information, we focused upon the difference between a country's health vis-à-vis the sample mean. We found that the cross-country variance of our measure for health services is much higher than that of our measure for the health of individuals. Furthermore, we found that health depends mostly on geography and development. The dispersion of the two health measures within OECD countries is much lower than in the full sample of countries. This strengthens our conclusion that both measures capture different dimensions of health and that cross-country comparisons of health should not be based on only one (arbitrarily chosen) variable. Further analysis showed that there are six clusters of countries, ranging from countries with very good health to very bad health. The clusters not only differ with respect to health, they also have different economic and demographic characteristics. Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. Appendix See Tables 10, 11, 12, and 13.
Root exudate composition of grass and forb species in natural grasslands Plants exude a diverse cocktail of metabolites into the soil as a response to exogenous and endogenous factors. So far, root exudates have mainly been studied under artificial conditions due to methodological difficulties. In this study, five perennial grass and five forb species were investigated for polar and semi-polar metabolites in exudates under field conditions. Metabolite collection and untargeted profiling approaches combined with a novel classification method allowed the designation of 182 metabolites. The composition of exuded polar metabolites depended mainly on the local environment, especially soil conditions, whereas the pattern of semi-polar metabolites was primarily affected by species identity. The profiles of both polar and semi-polar metabolites differed between growth forms, with grass species being generally more similar to each other and more responsive to the abiotic environment than forb species. This study demonstrated the feasibility of investigating exudates under field conditions and of identifying the driving factors of exudate composition. [...] deficiency often results from a complex formation with metal ions or absorption by the soil 31,32, which can be overcome by the release of organic acids 33 or phenolic compounds from the plant 30,31. These chelating substances enhance the acquisition of insoluble nutrients and interfere with the nutrient cycles 25. Furthermore, anthropogenic land use is assumed to influence the exudate profiles of plants 19. High amounts of nutrients, such as nitrogen and phosphorus, are introduced into the soil by fertilization and grazing. This modifies the soil nutrient status [34][35][36] and, thus, probably the root exudation. However, this correlation has not yet been fully analysed. Most of the knowledge about exudates was obtained under controlled laboratory conditions, some of which mimic natural ecosystem conditions 24,37, with one- or two-factorial designs 4,16,22,23,38,39. Furthermore, due to the tremendous variety of metabolites in the plant kingdom 3,40,41, these studies have mainly focused on specific metabolites or metabolite classes 4,14,24. As a result, a large part of the exuded plant metabolome remains unconsidered. On the other hand, most studies under field conditions neglect the role of exudates. Thus, both types of strategies deliberately disregard important components of complex natural ecosystems. To fully understand those networks, a comprehensive investigation of the metabolite profile of a plant and the combination of metabolomics and ecological techniques under natural conditions are of great importance 40. So far, there are two studies about root exudation of either polar 19 or semi-polar root metabolites 21 applying the untargeted metabolite profiling approach to field-grown plants. These two studies focus mainly on the impacts of different endogenous factors in a natural ecosystem. In this study, the effects of different exogenous factors such as climate, soil, neighbouring plants and anthropogenic land use, as well as the endogenous factors species and growth form, were investigated for their impact on the composition of polar and semi-polar root-exuded metabolites. Those were analysed by GC-MS (detecting more the polar fraction of metabolites) and C18-RP-LC-MS (targeting more the semi-polar fraction of metabolites), respectively.
A large field experiment was performed in which five grass and five forb species were transplanted into more than 50 different grassland communities in the three sites of the German Biodiversity Exploratories (Schorfheide-Chorin, Hainich and Swabian Alb). These differ in various environmental factors, e.g. soil, climate and land use 36,42,43. After more than one year in the surrounding environment, the root exudates of the transplants were analysed by untargeted metabolite profiling. Moreover, as the identification of semi-polar metabolites is challenging due to their high chemical diversity in plants 14, a novel approach of classifying metabolites into chemical classes was applied 44. The main issues of this paper are: (1) Which of the factors growth form (grass or forb), species identity and site affect the root exudate richness under field conditions significantly? (2) What is the impact of biotic growth conditions, species identity and neighbouring plants on root exudate composition? Chemical richness and composition of metabolites detected by GC-MS (polar metabolites). The untargeted metabolite profiling of the investigated samples revealed an annotation of 285 features (detected monoisotopic signals characterized by their specific retention time and mass-to-charge ratio) (Supplementary Table 1). A total of 66 of these features were identified and classified as metabolites of the classes alcohols (6), aldehydes (1), alkaloids (1), amines (2), amino acids (19), carbohydrates (10), lipids (4), nucleic bases or nucleotides (3) and organic acids (19). Five compounds were classified as unidentified carbohydrates (4) and an unidentified lipid (1), respectively, due to their mass spectral similarity to other compounds of these classes (Table 1, Supplementary Table 1). The compounds identified in this GC-MS analysis were mainly of the primary metabolism and often of polar character. Thus, GC-MS-detected compounds and metabolites will be referred to as polar metabolites in the following sections. Linear mixed-effects models showed that the number of exuded metabolites (chemical richness, Fig. 1) significantly depended on species (p < 0.05), site (p < 0.001; Schorfheide, Hainich or Swabian Alb) and their interaction (p < 0.01; Supplementary Table 2a). In contrast, the two growth forms grass and forb as a whole did not differ in their total number of exuded compounds (chemical richness), whereas forbs and grasses of the different sites differed significantly from each other (p < 0.001; Fig. 1A, Supplementary Table 2a). Chemical richness was highest in the Schorfheide (SCH), in particular for grasses (p < 0.001) and the forb Ranunculus acris (p < 0.01; Fig. 1A, Supplementary Table 2b). The pattern of chemical richness was also reflected in the multivariate analysis of the metabolite composition, as revealed by a Redundancy Analysis (RDA, Fig. 2A) with species and site as constraining variables. Here, the samples of SCH were separated from those of HAI and ALB on the first axis (12.59% of total variance explained, Fig. 2A), while ALB and HAI differed in their scores on the second axis (2.57%; Fig. 2A). In contrast to site, differences among species or growth forms played a subordinate role, and were only apparent on the lower axes (Supplementary Fig. 1). The loadings of the exuded compounds (Fig. 2B) indicated a common set of metabolites in the exudate profiles of the transplants.
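RDA, the constrained ordination used above, is essentially a PCA of the part of the feature matrix explained by the constraining variables. Below is a minimal numpy sketch under the assumption of centered inputs; dedicated implementations (e.g. vegan's rda in R) add scalings, weights and permutation tests.

```python
import numpy as np

def rda(Y, X):
    """Minimal redundancy analysis: PCA of the fitted values of Y ~ X.

    Y: (n x p) centered response matrix (metabolite features),
    X: (n x q) centered constraining variables (e.g. site/species dummies).
    Returns sample scores and the % of total variance in Y per constrained axis.
    """
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)    # multivariate least squares
    Y_hat = X @ B                                # constrained (fitted) part of Y
    U, s, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    scores = U * s                               # sample scores on the RDA axes
    explained = 100 * s**2 / (Y**2).sum()        # % of total variance, per axis
    return scores, explained
```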
Those comprised metabolites of the classes alcohols, amino acids, carbohydrates, lipids, nucleic bases, organic acids, and also unidentified compounds. Some of the class members, however, showed a higher probability to occur in specific sample groups. Grasses exuded the highest number of group-discriminating metabolites compared to forbs (Supplementary Table 3a). The highest number of those metabolites was observed in plots of the ALB, such as N-acetylglucosamine, preferentially exuded by Poa pratensis, and succinate, especially exuded by Lolium perenne, as well as a number of unidentified compounds (Supplementary Table 3). Discriminating exudates of grass plants grown in SCH or HAI plots showed no species-specificity, but the number of those metabolites was higher in SCH than in HAI. Instead, forb exudate profiles revealed the highest number of discriminating metabolites in plots of the HAI. Most of those could be related to a specific species, e.g. 3-Caffeoyl-trans quinic acid, preferentially exuded by Galium mollugo, and many unidentified compounds (Supplementary Table 3). Forbs grown on SCH and ALB plots exuded nearly the same number of discriminating metabolites. In all sample groups, many metabolites were preferentially exuded by a specific growth form in a specific site, but not by a specific species. Simultaneously, some compounds were preferentially exuded by plants of a specific growth form or species without an influence of the site factor. For instance, octadecatrienoic acid is exuded by Plantago lanceolata in all plots, whereas 2-aminoadipate is preferentially exuded by A. elatius plants in all plots (Supplementary Table 3). polar metabolite composition and exogenous factors. The importance of the growth location (e.g. the site) compared to species is also reflected in the variance partitioning analysis, both for forbs and grasses (Fig. 3). Here, plot explained most of the variance (forbs: 23.8%, grasses: 24.4%; Fig. 3A, B). While in forbs (Fig. 3A) the second most important factor was species (8.8%), in grasses it was the interaction of the local neighbouring plants (LNH) and the plot (7.0%) together. The effect of LNH on polar metabolite composition was mainly brought about by the covered area of the neighbouring plants (Cover, Supplementary Table 4). The highest amount of shared variation between plot and single variables was brought about by the combination of plot and total carbon content of the soil (TC; 2.85% and 7.38% in the case of forbs and grasses, respectively; Supplementary Table 4). The correlation of single metabolites to the different environmental drivers revealed that 65.14% and 69.72% of metabolites in forb and grass exudate samples, respectively, responded significantly to the environment (Supplementary Table 5a,c). Soil variables showed the highest number of impacted metabolites, followed by LNH and climate, whereas the lowest number of metabolites was linked to LUI effects in forbs and grasses (Supplementary Fig. 3, Table 5a). In accordance with the variance partitioning results, among all LUI variables grazing affected primary metabolites to the greatest extent. Semi-polar metabolites detected by LC-MS occur in a species-dependent manner in exudates. Untargeted metabolite profiling by LC-MS (focussing on the semi-polar metabolite fraction) revealed 2,947 features.
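The variance partitioning above splits explained variation into unique and shared fractions of predictor sets. A minimal two-set sketch based on redundancy-style R² values; real analyses (e.g. vegan's varpart in R) use adjusted R², which this illustration omits.

```python
import numpy as np

def r2(Y, X):
    """Fraction of total variance of centered Y explained by centered predictors X."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return ((X @ B) ** 2).sum() / (Y ** 2).sum()

def partition(Y, X1, X2):
    """Two-set variance partitioning: unique fractions of X1 and X2, plus overlap."""
    r1 = r2(Y, X1)                    # X1 alone (unique + shared)
    r2_ = r2(Y, X2)                   # X2 alone
    r12 = r2(Y, np.hstack([X1, X2]))  # both sets together
    unique1 = r12 - r2_
    unique2 = r12 - r1
    shared = r1 + r2_ - r12
    return unique1, shared, unique2
```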
The chemical richness of the transplant exudate profiles was independent of the growth form (p = 0.630), but significantly depended on site and species (p < 0.001, Supplementary Table 2). This was mainly driven by the grass species, which displayed a higher chemical richness in SCH plots than in all other exploratory plots (Fig. 4B, Supplementary Table 2b). Moreover, exudates from grasses and forbs grown on SCH plots showed a significantly higher chemical richness than in the other exploratories (Fig. 4A, Supplementary Table 2b). For forbs, this was mainly driven by the Galium species. The RDA of semi-polar features partly reflected these results (Fig. 5). Although a discrimination of the plant samples by site was not observed, a species-specific pattern occurred (Fig. 5A). While the two Galium species were separated from the other species on axis one (8.72%), P. lanceolata was separated from the other species on axis three (2.42%, Supplementary Fig. 4). Furthermore, A. millefolium samples were discriminated from the other species on axis four (1.71%, Supplementary Fig. 4), whereas the separation of R. acris samples occurred on axis six (1.05%, Supplementary Fig. 4). Axis five (1.44%, Supplementary Fig. 4) was the only dimension in which A. elatius, a grass, was separated from all other species, while all other grass species always clustered together. The loadings of the exuded semi-polar metabolites (Fig. 5B) indicated a common set of exuded metabolites of all species, but also a higher degree of metabolite diversity in the exudate patterns of forbs than of grasses. This is reflected by the calculation of the significant specific features per species, or for the genus Galium (Galium spp.) as a whole, respectively. 229 of these significant species-specific features were observed in metabolite profiles of forbs (A. millefolium: 40, G. mollugo: 69, G. verum: 2, P. lanceolata: 89, R. acris: 29), whereas 47 were observed in grass profiles. A further 76 significantly specific features were discovered in Galium spp. samples. chemical classification of species-specific exuded semi-polar metabolites. Tandem mass spectrometry (MS/MS) provided fragment mass spectra of 217 of the 352 significant species-specific features (Supplementary Table 6). Their chemical classification revealed 116 compounds grouped into seven chemical classes with different subclasses (Table 2): glycosides (20) with different residues (acid (2), sulfate (2), hydroxycarbonic acid (1)), jasmonate derivatives (2), phenylpropanoids such as a coumarin derivative (1), flavonoids (15) (glycosylated (3), kaempferol derivatives (3)), hydroxycinnamic acids (39) (glycosylated (15), non-glycosylated (21), amide residues (3)), polyketides (5), terpenes (12) and compounds which could not be assigned to one of these semi-polar metabolite families but carried different chemical residues (aliphatic (7), imine residues (2), methoxy groups (3), sulfate or phosphate groups (10)). Hierarchical clustering of the compounds according to their mass spectral fragment similarities resulted in a dendrogram with nine main branches (Fig. 6). Seven of those corresponded to the substance classes listed above, whereas two branches contained members of all chemical families and unclassified compounds (Fig. 6, Supplementary Figs. 5-12). Furthermore, the spectra of species-specific compounds showed a clustering due to species identity. Sulfated or phosphorylated compounds clustered together in branch one and were predominantly exuded by P. lanceolata (Fig. 6, Supplementary Fig. 5).
The majority of glycosylated compounds and glycosides clustered in main branch two and were exuded by A. millefolium, G. mollugo, G. verum, P. lanceolata, A. elatius, A. pratensis, and R. acris plants, respectively (Fig. 6, Supplementary Fig. 6). The annotated polyketides and some potential flavonoids released by G. mollugo, G. verum, Galium spp. and A. millefolium roots clustered in branch four (Fig. 6, Supplementary Fig. 8). There were two branches containing substances that resembled terpenes (Fig. 6, Supplementary Figs. 9 and 11), mainly exuded by Galium spp. (branch 5) or A. elatius (branch 8). The latter also contained one of the two annotated jasmonate derivatives. Compounds of phenylpropanoid-, flavonoid- and hydroxycinnamic acid-like structures were predominantly clustered in branches six and seven, whereas branch six contained fragment spectra of Galium spp.-, P. lanceolata- and A. pratensis-specific compounds and branch seven fragment spectra of R. acris- and P. lanceolata-specific compounds (Fig. 6, Supplementary Fig. 10). Branches three and eight instead contained compounds of different classes that were chemically unrelated or unclassified (Fig. 6, Supplementary Figs. 7 and 12). These branches are heterogeneous in their species origin. Two compounds (931.2829 m/z at 3.67 min, 501.1253 m/z at 4.07 min) were exclusively found in the exudates of P. lanceolata roots and might represent iridoid glycosides (Table 2). The exudation of semi-polar metabolites is differentially affected by the environment in the case of forbs and grasses. The results of variance partitioning of the semi-polar metabolites strongly differed between the two growth forms. In sum, the predictors explained less of the variation in semi-polar exudate profiles of grasses than in those of forbs (up to 15.9% and up to 24.9% for grasses and forbs, respectively, Fig. 7). For grasses the largest proportion of variance was explained by plot; in forbs most of the variation was accounted for by species identity (Fig. 7A,C,E, Supplementary Fig. 2C). The predictors LNH, Climate, Soil and Env did not have any explanatory power, whereas single environmental variables explained the variability in semi-polar metabolite profiles to a minor extent (Supplementary Table 8). The inclusion of LUI as predictor resulted in a minor amount of explained variance (0.34 and 0.96% for forbs and grasses, respectively). This is caused by the effect of fertilization and grazing on the exudation of grasses and forbs (Supplementary Table 8). The correlation of semi-polar metabolite profiles with single environmental variables revealed a strong environmental impact on the exudation of many secondary metabolites (Supplementary Table 5b,c). 21.90% and 17.49% of compounds detected in forb and grass exudate samples, respectively, could be linked to one of the environmental variables (Supplementary Fig. 13). Soil variables such as moisture and soil texture, but also the climate variables precipitation and T(200), were significantly correlated to semi-polar compounds (Supplementary Fig. 13, Supplementary Table 5b,c). In general, LUI and LNH variables had a similar effect on metabolite exudation.
In particular, mowing was the LUI variable with the highest number of affected features (forbs: 89, grasses: 80), whereas Cover was involved in the exudation of 106 features and Shannon in 101 features in forbs and grasses, respectively (Supplementary Fig. 13, Supplementary Table 5b,c). Interestingly, there are species-specific compounds among these correlated compounds (Supplementary Table 5b). For instance, LUI traits could be linked to compounds of the phenylpropanoid metabolism and glycosides of various species. Furthermore, compounds of hydroxycinnamic acid-like character exuded by A. millefolium (619.1862 m/z at 3.12 min) were among these correlated compounds. Table 2. Putative classification of species-specific semi-polar compounds. a Chemical classes contain compounds classified on the basis of one identifier fragment. b The annotation of compounds as kaempferol derivatives is based on identifier fragments of kaempferol and spectral similarity; this has to be confirmed with analytical standards. The table contains the total number of compounds (in brackets) of each class as well as the occurrences in the samples of the ten different species. Numbers in brackets behind the species represent the total amount of specific compounds per species. Discussion Root exudation is a complex process in which a diverse chemical cocktail of substances is released into the rhizosphere. Most studies focus on the investigation of either single substances or specific chemical families 14,20,30,31,38 and thereby neglect the complexity of exudate profiles. The untargeted metabolite profiling approach presented here allowed not only the detection of 3,185 features but also the classification of 182 substances into various chemical families. Thus, this represents a highly comprehensive exudate analysis of plant species that had not been characterized in such detail so far. Furthermore, the semi-polar metabolites were designated by chemical classification and grouped according to spectral and fragment similarities 44. With this, the time-consuming bottlenecks of traditional substance identification and categorization in almost all untargeted metabolomics investigations 14,39,40,44, namely the lack of appropriate analytical standards and the large gap in the knowledge of the majority of metabolites 3,14,40, were overcome. A clustering by fragment similarity and classification by indicative shared fragments can help to overcome this obstacle. Thus, this method provides a basis for the further elucidation of such metabolites and their characterization. The overall composition of the metabolite profiles of the investigated transplants showed a quite common set of compounds in all ten species, but also differences due to various impacting factors. Moreover, it mattered whether the metabolites were categorised into the polar or the semi-polar metabolite profile. The chemical composition of polar metabolites was qualitatively similar between the species and less affected by growth form (issue 1 of this study). Semi-polar metabolite compositions, however, showed major differences between forbs and grasses (issue 1 of the current study). Here, forbs had a higher diversity in their profiles. This can be linked to the high impact of species identity and the tendency of forbs to exude more species-specific metabolites than grasses. The high importance of this factor for the exudate composition of forbs could be explained by the phylogenetic distance between the species of both growth forms.
It was shown that the genus of a species can impact the diversity of the metabolite profiles 45. Thus, the larger phylogenetic distance between the forbs compared to the Poaceae grasses could affect the result. This assumption is consistent with the results of Herz et al. 19 and Dietz et al. 21. Both studies investigated the impact of endogenous factors on plant root exudation of the same ten target species. In contrast to Herz et al. 19, the present study revealed the factor growth form to be of minor importance for polar metabolites (issue 1 of this study). This might be due to the different exposure times in the field. The transplants of Herz et al. 19 and Dietz et al. 21 grew three months in the field, whereas the plants analysed here were exposed to field conditions for more than one year. Another explanation for the differences in the role of growth form in polar metabolite exudation might be the inclusion of further aspects of the experiment, e.g. the site (issue 1 of the current study). The German Biodiversity Exploratories were set up along environmental gradients 42, in which Schorfheide represents a special site. This was particularly obvious at the level of soil, nutrient cycles and organismic interactions 35,43. The Schorfheide-Chorin exhibits a higher soil moisture and lower pH as well as higher nitrogen and carbon content than the Swabian Alb and Hainich-Dün 35,43. Low pH and high soil moisture trigger the exudation of alcohols, amino acids and organic acids 27,28 to overcome the acidification of the plant cells 43,46, which is caused by anaerobic soil conditions. Previous studies also showed that the release of nitrogen-containing metabolites, such as amino acids, contributes to the nitrogen content of the soil 47,48, which in turn triggers the increased release of carbohydrates, organic acids 33 and phenylpropanoids 31. Those mediate the uptake of nutrients by enhancing their absorption or interacting with decomposing organisms 49. This might explain the occurrence of some of the amino acids, carbohydrates, organic acids and phenylpropanoids in the exudate profiles of the plants investigated here. The inclusion of site might also be the reason for the divergence in the impact of growth form between this study and Herz et al. 19, who omitted this further sub-classification. The present study revealed that grasses showed a higher chemical richness in exuded polar metabolites in SCH compared to forbs. This implies that grasses exhibit a higher potential for environmental adjustment than forbs 5,8, which is supported by further results of this study. As already described by Herz et al. 19 and Dietz et al. 21, plot characteristics are also the main drivers of semi-polar metabolite exudation of grasses, but in this study also of the polar metabolites released by both growth forms (issue 1 of this study). Figure 6. Hierarchical clustering of species-specific semi-polar exudates. Hierarchical clustering was performed on the tandem mass spectra of the significant species-specific compounds. Clusters were calculated on spectral similarity based on Jaccard dissimilarity and fragment-count-weighted value rating. The numbers represent the clusters, which are shown in Supplementary Figs. 5-12 in more detail. The classification of metabolites is given in the legend.
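The clustering described in the Figure 6 caption can be approximated with standard scipy routines. A minimal sketch on a hypothetical binary fragment matrix; the paper's fragment-count weighting is omitted here for brevity.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical binary matrix: rows = compounds, columns = fragment m/z bins,
# 1 if a fragment was observed in the compound's MS/MS spectrum.
rng = np.random.default_rng(1)
spectra = (rng.random((116, 200)) < 0.1).astype(int)

D = pdist(spectra, metric="jaccard")        # Jaccard dissimilarity between spectra
Z = linkage(D, method="average")            # hierarchical clustering on the distances
branches = fcluster(Z, t=9, criterion="maxclust")  # cut into nine main branches
```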
In addition, the present study further resolved the impact of the individual environmental traits to shed more light on their influence (issue 2 of this study). Single variables of soil and climate altered the exudate composition. Soil variables, such as soil moisture and the soil characteristics of the WRB database (soil texture, soil type), had a high relevance for the exudation of polar and semi-polar compounds in both growth forms. The relation of soil moisture and pH to polar and semi-polar compounds such as hydroxycinnamic acids or terpenes is remarkable and has not been described so far. The relation of climatic drivers such as aboveground temperature to polar and semi-polar compounds also needs further investigation. In previous exudate studies 19,21 of the ten target species in grasslands, a minor impact of neighbouring plants and no impact of land use was found. In contrast, the results of the present study revealed a contribution of single variables of these predictors to the variance in plant metabolite exudation. The impact of LNH underlines the suggestion that a longer residence time could play a role in the exudation of plants in a field plant community 19,21 . Therefore, a better adaptation to and a stronger interaction with their locally neighbouring plants, as described for other species 4,22,50,51 , is highly likely. An impact of the plant neighbourhood on a plant after a long residence time was already investigated for belowground root development and plant fitness by Ravenek et al. 7 . The impact of the LUI also matches the observations of other studies on the impact of land use on ecosystems 8,36 . The results would support the functions of semi-polar metabolites as mediators of interaction with neighbouring plants 4,22,23 , but also of polar and semi-polar metabolites as adaptive agents to abiotic factors 27,28,33,47,48 such as LUI. So far, the nature of these interactions between exudates and LUI or LNH factors, respectively, is not clear. Thus, further investigation of the correlated exudates with variables of the predictors LUI and LNH might expand the knowledge of plant-plant interaction and land-use impact. It has to be noted that the relations between environmental factors and exudates presented here are for the most part not the result of the typical relation of cumulated predictors to the dataset of interest (here, the exudates) 19,21,36 (issue 2 of this study). The findings discussed here are the result of the impact of single variables on compound composition and on single compounds. Quenching effects might be the reason. Those effects might result either from the lower explanatory power of single variables or from the number of compounds that were not correlated with the endogenous and exogenous factors. This might reduce the overall explanatory power of the predictors. On the other hand, the higher impact of logistic models compared with variance partitioning is also plausible. In logistic models, the non-linear character of the analysis can result in a significant correlation of two variables, here an exuded metabolite and an environmental factor, much earlier than in variance partitioning. However, variance partitioning is affected by the presence of a metabolite in comparison to the overall metabolite profile, whereas logistic models compare each compound individually with the specific environmental variable. Further statistical methods could help to clarify this point.
Moreover, the compounds with a linkage to different factors are of specific interest, and this study provides a large set of those compounds. They have not been identified so far. Thus, it would be of great interest to reveal their chemical identity by the identification approaches provided by different analytical techniques 31,52,53 . Although plot together with different individual variables of the neighbouring plants, land use, climate and soil could partially explain the polar and semi-polar exudate profiles of the ten species (up to 31.6% and 24.9%, respectively), an unexplained variance of 68.4% to 68.9% in polar metabolites and of 75.1% to 84.7% in semi-polar metabolites remained. This points to further unrevealed variables influencing the exudation. For instance, single aboveground events (trampling and erosion) or further root-surrounding organisms such as bacteria, fungi and herbivores might explain the appearance of certain exuded metabolites and the exudate composition. This is of particular interest for some polar metabolites, such as amino acids, organic acids and carbohydrates, which are known to be released in response to the microbial community surrounding the root 20,54 . Several semi-polar metabolites are also described as interaction mediators between plants and as defence agents against bacteria or fungi 17,55-57 . They also act as inhibitors of the growth of plants 58 and as toxins for herbivores 18,59,60 . Endogenous factors, e.g. the plant functional traits 19-21 and plant age 20 , can also alter the exudate composition in the rhizosphere. Herz et al. (2018) 19 and Dietz et al. (2019) 21 demonstrated for the same ten plant species that plant functional traits such as root biomass and the C content of the roots have an impact on both the polar and the semi-polar exuded metabolite profiles of plant roots. Such investigations were not possible in the current study due to detection limitations of different plant functional traits of the ten phytometer species (data not shown); however, they should be part of future investigations. Aulakh et al. (2001) presented qualitative and quantitative changes in the metabolite profile of organic acids of rice cultivars in dependence on the plant developmental stage. The experimental design of the current study did not allow the investigation of such relations, which might also play a role in the grassland rhizosphere. In conclusion, this study demonstrates the diversity of exudate profiles of polar and semi-polar metabolites of different forb and grass species in the field. Only the combined investigation of a broad set of metabolites and different ecosystem components can help to find the most probable explanation of why plants release a part of their metabolome into the soil and thereby might provide information about the potential biological function of exudates in the rhizosphere.

Table 9 presents the number of samples per site and species.

Environmental factors and data collection. Climate data of the precipitation (in %) and the temperature at 10 cm and 2 m height (T(10) and T(200), in °C) as well as soil moisture (moisture, in %) were provided by the Biodiversity Exploratories local management teams 42 . Soil pH (pH) and the total carbon (TC, in %) and total nitrogen content (TN, in %) of the soil of each site were measured on bulk soil samples of each target plant. The soil was collected, sieved (2 mm mesh size), dried at 105 °C and ground. pH was determined by mixing 10 g soil powder and 25 ml demineralized water.
1.86 g KCl was added and pH was measured with a glass electrode. TC and TN were determined by weighing 10 mg soil powder into tin capsules and analysing them using a C/N analyser (vario EL cube; Elementar).

Exudate sample collection. The sample collection took place from June to August 2015. A field exudate collection method was adapted from those of Herz et al. 19 and Dietz et al. 21 to collect the polar and semi-polar metabolites. A wash step of the roots in 0.5% sodium chloride solution (NaCl) for 10 min was inserted between wash steps one and two to remove rhizosphere microorganisms from the root surface. The exudate collection was performed in deionised water of HPLC quality from the complete root. Water samples without root exudation were used as process controls ("water blanks"). An internal standard stock solution containing 20 µg/mL 2,4-dichlorophenoxyacetic acid and 10 µM ribitol was added immediately after exudate collection in the field. The exudate solution was purified using the approach described in Herz et al. 19 and Dietz et al. 21 and measured with two different non-targeted plant metabolite profiling approaches. Aliquots of 100 µL of each sample were analysed by LC-MS according to Dietz et al. 21 . Aliquots of 200 µL of each sample were derivatized as described in Herz et al. 19 and subjected to GC-MS analysis.

GC-MS analysis and data processing. Derivatized exudates and water controls were analysed by non-targeted plant metabolite profiling with a gas chromatograph (6890N GC; Agilent Technologies, Santa Clara, USA) equipped with a ZB-5 Zebron Guardian TM Capillary GC column (30 m + 10 m Zebron TM, iD 0.25 mm, df 0.25 µm; Phenomenex, Torrance, USA) and coupled to a mass spectrometer (5975 MSD; Agilent Technologies). Settings and method of measurement as well as data processing were applied as described in Herz et al. 19 .

LC-MS and MS/MS analysis and data processing. Exudate samples as well as water controls were analysed by ultra-performance liquid chromatography coupled to electrospray ionisation quadrupole time-of-flight mass spectrometry (UPLC/ESI-Q-ToF-MS). An ultra-performance liquid chromatograph (ACQUITY UPLC; Waters, Eschborn, Germany) equipped with an Acquity UPLC® HSS T3 column (ACQUITY UPLC HSS T3 Column, 100 Å, 1.8 µm, 1 mm × 100 mm; Waters) coupled to a MicrOTOF-Q II hybrid quadrupole time-of-flight mass spectrometer equipped with an Apollo II electrospray ion source (Bruker Daltonics) was used for MS mode. To obtain CID mass spectra (MS/MS) of exuded compounds, UPLC/ESI-Q-ToF-MS with an ultra-performance Acquity UPLC platform (ACQUITY UPLC; Waters) equipped with an Acquity UPLC® HSS T3 column (ACQUITY UPLC HSS T3 Column, 100 Å, 1.8 µm, 3 mm × 100 mm, 1/pkg; Waters) and a MicrOTOF-Q I hybrid quadrupole time-of-flight mass spectrometer equipped with an Apollo II electrospray ion source (Bruker Daltonics) was used. A detailed description is provided in the publication of Dietz et al. 21 .

Compound classification and identification. Data were processed as described in Herz et al. 19 and Dietz et al. 21 . The identification of GC-MS-measured compounds was based on the National Institute of Standards and Technology (NIST) database, the Golm metabolome database (GMD) and analytical standards measured on the same instrument. The classification of LC-MS-measured compounds was based on the comparison of the qualifier ions of the MS/MS spectrum of each compound with a fragment library of measured reference standards (see also Dietz et al. 21 ).
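To make the library-matching idea concrete, here is a toy R sketch of fragment-based classification in the spirit of the approach just described; the fragment masses, class names, and mass tolerance are fabricated stand-ins, not the actual reference library used in this study.

```r
# Toy fragment library: each putative class is defined by identifier
# fragment m/z values (fabricated placeholders, not real reference data).
frag_library <- list(
  kaempferol_like      = c(285.04, 153.02, 133.03),
  hydroxycinnamic_like = c(163.04, 145.03, 117.03)
)

# Classify one MS/MS spectrum by counting matched identifier fragments
# within a mass tolerance and returning the best-matching class.
classify <- function(ms2_mz, tol = 0.01) {
  hits <- sapply(frag_library, function(frags)
    sum(sapply(frags, function(f) any(abs(ms2_mz - f) < tol))))
  if (max(hits) == 0) return("unclassified")
  names(which.max(hits))
}

classify(c(285.041, 153.021, 100.000))  # -> "kaempferol_like"
```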
The specific identifier ions are given in Supplementary Table 7. The hierarchical clustering of mass spectra was done according to their fragment spectra similarity with the MetFamily tool 44 . The coloration of the endpoints was done manually in accordance with family classes and fragments.

Statistical analysis. The statistical analysis was adapted from Dietz et al. 21 and was performed either with Excel 2010 or with R (version 3.4.4 63 ). First, metabolites and compounds occurring in 50% of water controls as well as in 50% of chemical blanks (GC-MS analysis) were regarded as artefacts and excluded from the metabolite dataset. For the investigation of the overall metabolite profile, data were transformed to a presence/absence matrix. This allowed the observation of metabolite composition without the heterogeneity already described in former investigations 19,21 . The chemical richness as well as the significance of differences were calculated from the mean number of measured metabolites per group, here site and species or site and plant, using ANOVA (function aov 63 ) and a Scheffé post hoc test (function scheffe.test, package agricolae 64 ). The dependency on traits such as species, site and growth form was calculated by linear mixed effects models (function lmer, package lmerTest 65 ), including site with plot nested in species, or species and plot nested in growth form, as random factors. Results were visualized using boxplots (function qplot, package ggplot2 66 ). The exudate composition was analysed by redundancy analysis (function rda, package vegan 67 ) of the presence/absence matrix of metabolite composition against a presence/absence matrix of species and site together. GC-MS-measured exudates were also analysed for their quantitative occurrence in growth form, site, and growth form and site, as well as species, and species and site. First, the counts of each substance over all samples of the group members were summed up and divided by the number of samples per group. This percentage of occurrence of one group member, e.g. ALB, was divided by the sum of the percentages of occurrence of all other group members, e.g. SCH and HAI. Metabolites with a ratio of at least 2 were accepted as linked to this group member. LC-MS-measured exudates were analysed for significant species-specific compounds by calculating the mean of the compound composition of each species and subjecting it to a binomial test (function binom.test 63 ) with the alternative hypothesis that a compound does not occur in one out of ten species. In a second test, Galium mollugo and Galium verum data were combined as Galium species, due to their phylogenetic origin and the similarity of their compound composition, and investigated with the alternative hypothesis that a compound occurs in at least two species. Polar and semi-polar metabolite matrices were subjected to variance partitioning (function varpart, package vegan 67 ) to calculate the variance explained by target species identity (Species), the plot as local impact (Plot), parameters of the neighbouring plants around each target plant (LNH), and the environmental conditions (either as cumulated predictors of soil and climate parameters (Env), or as soil conditions (Soil) and climate conditions (Climate) separately).
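As an illustration of the variance-partitioning step described above, the following is a minimal, self-contained R sketch; the simulated data and all object and column names are illustrative assumptions rather than the authors' original script.

```r
library(vegan)
set.seed(1)

# Toy stand-ins for the real inputs: a samples x compounds presence/absence
# matrix and a per-sample predictor table (all names are placeholders).
n <- 60
exudates <- matrix(rbinom(n * 30, 1, 0.4), nrow = n)
meta <- data.frame(
  species  = factor(sample(letters[1:10], n, TRUE)),
  plot     = factor(sample(1:15, n, TRUE)),
  cover    = runif(n, 20, 100),   # LNH variables
  richness = rpois(n, 12),
  shannon  = runif(n, 0.5, 3),
  pH       = runif(n, 4.5, 7.5),  # soil variables
  moisture = runif(n, 10, 60)
)

# Partition the variance in compound composition among species identity,
# plot, neighbourhood (LNH) and soil, analogous to the analysis above.
vp <- varpart(exudates, ~ species, ~ plot,
              ~ cover + richness + shannon,
              ~ pH + moisture, data = meta)
vp        # table of adjusted R^2 fractions per predictor set
plot(vp)  # Venn diagram of the explained-variance fractions
```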
The single variables contained in LNH (Cover, Richness and Shannon), LUI (mowing, grazing, fertilization), Soil (pH, soil moisture, soil texture, soil type, TC, TN) and Climate (precipitation, T(10) and T(200)) were investigated for their explanatory power on exudate composition in the same way as the cumulated predictors. Furthermore, logistic regression models (function glmer, package lme4 68 ) were fitted with the particular metabolite as dependent variable and a single environmental variable as predictor, using plot and species as random factors. Correlations with an alpha error below 0.05 in ANOVA and correlation analysis were considered significant. The correlated compounds were summed up for each variable in a bar plot (Excel 2010) and divided by the total number of compounds found in the dataset of forb or grass exudates, respectively, to calculate the percentage of affected compounds per cumulated predictor.

Data availability. The mass spectrometric data are available from the MetaboLights database (LC-MS permanent link: https://www.ebi.ac.uk/metabolights/mtbls865; GC-MS permanent link: https://www.ebi.ac.uk/metabolights/mtbls866) and from www.bexis.uni-jena.de. Information concerning environmental data, land use intensity data, and plant community data that support the findings of this study are available from www.bexis.uni-jena.de.
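To make the per-compound logistic models described under Statistical analysis concrete, here is a minimal R sketch of one such fit; the simulated data and all column names are illustrative assumptions, not the original analysis code.

```r
library(lme4)
set.seed(2)

# Toy data: presence/absence of one exuded compound per sample, one LUI
# variable, and the two random-effect grouping factors (all placeholders).
n <- 120
d <- data.frame(
  present = rbinom(n, 1, 0.5),
  mowing  = runif(n, 0, 3),
  plot    = factor(sample(1:20, n, TRUE)),
  species = factor(sample(letters[1:10], n, TRUE))
)

# One logistic model per compound and environmental variable, with plot
# and species as random intercepts, as in the text above.
fit <- glmer(present ~ mowing + (1 | plot) + (1 | species),
             data = d, family = binomial)
summary(fit)  # the fixed-effect p-value is compared against alpha = 0.05
```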
Identification of Three Early Phases of Cell-Fate Determination during Osteogenic and Adipogenic Differentiation by Transcription Factor Dynamics

Age-related skeletal degeneration in patients with osteoporosis is characterized by decreased bone mass and occurs concomitant with an increase in bone marrow adipocytes. Using microarray expression profiling with high temporal resolution, we identified gene regulatory events in early stages of osteogenic and adipogenic lineage commitment of human mesenchymal stromal cells (hMSCs). Data analysis revealed three distinct phases when cells adopt a committed expression phenotype: initiation of differentiation (0-3 hr, phase I), lineage acquisition (6-24 hr, phase II), and early lineage progression (48-96 hr, phase III). Upstream regulator analysis identified 34 transcription factors (TFs) in phase I with a role in hMSC differentiation. Interestingly, expression levels of the identified TFs did not always change, indicating additional post-transcriptional regulatory mechanisms. Functional analysis revealed that forced expression of IRF2 enhances osteogenic differentiation. Thus, IRF2 and other early-responder TFs may control the osteogenic cell fate of MSCs and should be considered in mechanistic models that clarify bone-anabolic changes during clinical progression of osteoporosis.

INTRODUCTION
Mesenchymal stem/stromal cells (MSCs) are an excellent biological source for bone regenerative therapies, tissue engineering, and treatment of post-menopausal osteoporosis (Murphy et al., 2013; Steinert et al., 2012). Ex vivo expansion of autologous bone marrow stromal cells and systemic administration of hMSCs was proposed 20 years ago to treat patients with osteoporosis (Bruder et al., 1997). To date, treatment of osteoporosis using bone marrow-derived MSCs is not yet standard clinical practice. Donor variation among patients and unpredictable capacity for differentiation are among the current shortcomings of hMSCs for their application in regenerative cell therapies (Murphy et al., 2013). In MSCs, transcription factors (TFs) such as RUNX2, SP7/Osterix, and SOX9 have been shown to play critical roles in the differentiation of MSCs into osteoblasts or chondrocytes. Overexpression of RUNX2 in non-osteoblastic cells or in adipose-tissue-derived MSCs increases expression of osteoblastic markers and enhances osteoblast differentiation and mineralization (Ducy et al., 1997; Otto et al., 1997; Zhang et al., 2006). Moreover, homozygous RUNX2 mutant mice lack mature osteoblasts and mineralized bone, indicating that a single TF is important for bone development and osteoblast differentiation. Besides differentiation into osteoblasts, MSCs can differentiate into other cell lineages such as adipocytes and chondrocytes (Pittenger et al., 1999). The identification of regulators of lineage commitment is therefore an essential step toward our understanding and control of human MSC differentiation into osteoblasts. The balance between osteoblast and adipocyte differentiation is of specific interest. Increased adipose tissue volume is observed in the bone marrow cavity of osteoporotic people, where increased bone resorption is not sufficiently compensated by an increase in bone formation by osteoblasts (Justesen et al., 2001; Yeung et al., 2005).
The hypothesis underlying the current study is that detailed analysis of gene expression changes upon induction of osteogenic and adipogenic differentiation of MSCs enables identification of TFs that change activity during early differentiation in both lineages. While previous gene expression studies identified important regulatory pathways and processes involved in MSC differentiation, the results are limited by differentiation into a single mesenchymal lineage, a later stage of differentiation, and/or a low temporal density at early time points (Hung et al., 2004; Kulterer et al., 2007; Ng et al., 2008; Piek et al., 2010). Because key lineage decisions are made during the early stages of mesenchymal differentiation, a high density of early time points of differentiation is critical for the identification of important regulators within the initial differentiation phases. Here, we systematically investigated gene expression changes upon differentiation of human MSCs into adipocytes and osteoblasts with high temporal resolution. One of our key findings was the identification of three distinct sequential phases of differentiation in both lineages. Furthermore, we characterized genes and regulatory programs controlling the early stages of mesenchymal lineage commitment. These findings provide opportunities for designed engineering of hMSCs for applications in both personalized and regenerative medicine.

RESULTS
Differentiation of hMSCs into Osteoblasts and Adipocytes
To analyze dynamic transcriptional networks in early differentiating human MSCs (hMSCs), we generated gene expression profiles with a high temporal density during the first 4 days of osteogenic and adipogenic differentiation. Histological staining for calcium and lipids in osteogenic and adipogenic differentiation cultures shows that, respectively, the extracellular matrix (ECM) is mineralized and cells accumulate intracellular lipid vesicles, as shown at day 25 (Figure 1A). Biochemical analyses of samples during differentiation show that total protein and alkaline phosphatase (ALP) activity transiently increase during osteogenic differentiation prior to a decrease of ALP upon mineralization (Figure 1B). ALP activity is similarly increased during adipogenic differentiation (Figure 1B). ECM mineralization is observed after 21 days of osteogenic differentiation, as demonstrated by increased calcium deposition in the matrix (Figures 1A and 1B). Together, our observations establish that hMSCs differentiate into both lineages, consistent with their expected multi-lineage potential. Analysis of gene expression dynamics during hMSC differentiation reveals that transcript levels change immediately upon induction of differentiation (Figure 1C). To gain insight into the robustness of the gene expression changes, we selected genes that were specifically upregulated during osteogenic or adipogenic differentiation. We validated their change in expression in MSCs from an additional donor as well as during osteogenic differentiation of a committed human osteoblast cell line (NHOst). These analyses supported the observations obtained by the microarray gene expression analysis (Figure S1A). During induction of osteogenic differentiation, at least 44 gene probes are significantly different at 30 min, and this increases further to 351 after 2 hr.
The number of significantly modulated probes gradually increases and begins to level off after 2 days, with a maximum of 3,178 probes after 3 days of osteogenic differentiation. Comparable results were obtained during adipogenic differentiation, where the number of differentially expressed probes was 46 after 30 min, 470 after 2 hr, and 3,863 by day 3. Remarkably, within the first 2 hr of differentiation, most of the modulated genes are upregulated (276 of 351 probes for osteogenic and 301 of 470 probes for adipogenic induction) (Figures S1B and S1C). Yet, the numbers of up- and downregulated genes are about the same in the two subsequent phases. These results suggest that transcription exceeds median mRNA degradation during the initial induction of differentiation. Gene expression at later stages of differentiation may be controlled by transcriptional repression combined with non-specific mRNA decay and/or constitutive transcription with enhanced mRNA destabilization. Next, we calculated the number of significantly differentially expressed probes at each time point compared with the preceding time point and divided the differences by the elapsed time (Figure 1D, right panel). Adipogenic differentiating cells showed a similar number of differentially expressed probes during the first 2 hr, but thereafter the number of gene expression changes per hour was almost two times higher than in the osteogenic differentiating cells, in agreement with recent studies suggesting a default preference of bone marrow-derived MSCs for osteogenic differentiation (Meyer et al., 2016). In both conditions, the number of probes that changed per hour decreased drastically after 48 hr of differentiation, and only minor gene expression changes were evident between days 2 and 4 (Figure 1D). More than 20% of genes change expression in the early phase (first 3 hr) of osteogenic differentiation (Figure 1C). We assessed whether this dramatic change is mostly due to uncoordinated gene activation and repression or proceeds in a more organized and sequential manner that reflects a well-defined single differentiation program in which the number of modulated mRNAs per time period increases during early lineage commitment and progression.
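To illustrate the "probes changed per hour" metric described above, here is a minimal R sketch; all counts are made-up placeholders (in the study they came from the significance tests of each time point against the preceding one, q < 0.001).

```r
# Sketch of the "differential probes per hour" metric (R). The counts are
# fabricated placeholders standing in for the per-interval limma results.
hours       <- c(0, 0.5, 1, 2, 3, 6, 12, 24, 48, 72, 96)
sig_vs_prev <- c(44, 80, 250, 300, 500, 700, 900, 800, 300, 100)  # one per interval

# Number of significant probes in each interval divided by elapsed time.
rate <- sig_vs_prev / diff(hours)
names(rate) <- paste0(head(hours, -1), "-", hours[-1], " hr")
round(rate, 1)
```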
Figure 1 (legend fragments recovered from the page layout): (C) Number of significant differential probes (q < 0.001) relative to undifferentiated cells (t = 0), based on three independent experiments with 10-20 technical replicate measurements in a single donor. (D) Number of significant differential probes (q < 0.001) relative to the previous time point; the number of differentially expressed probes was divided by the time difference between the two time points (based on the same replicated samples as in C). (E) Principal component analyses of the osteogenic (1), adipogenic (2), and all (3) gene expression profiling experiments using the 15,795 probes that were detected as expressed; replicated probe intensities (based on the same replicated samples as in C) were averaged for each time point (blue, osteogenic; red, adipogenic); gray circles indicate differentiation phases as described in the text.

Figure 2 (legend fragments recovered from the page layout): (A) Replicated probe intensities (based on the same replicated samples as in Figure 1C) were averaged for each time point. (B) Analyses of enriched functional categories among the probes that were differentially expressed in adipogenic and osteogenic differentiating hMSCs; the cluster diagram depicts the -log10(p value) of the enrichment; only significant (p < 0.05) instances are shown; colors on the side of the cluster diagram depict the similarly associated functional categories. (C) Significance of enrichment of the functional categories TF activity and cell cycle during differentiation. (D) Venn diagram of the functional categories that are similarly enriched, TF activity and cell cycle, and extracellular matrix proteins, a functional category that is enriched earlier in osteoblasts; the numbers inside the Venn diagram are the numbers of significantly expressed probes at the time point indicated; the numbers of up- or downregulated probes are shown in gray type; the numbers of probes that are oppositely regulated in the two conditions are shown below the Venn diagrams. (E) Significance of enrichment of the functional categories extracellular matrix proteins and oxidoreductase activity, which were enriched in osteoblast- or adipocyte-differentiating hMSCs, respectively.

Principal component analysis (PCA) was applied to examine whether expression produces a single dominant principal component (PC), and whether there is co-linearity between time and progression along that component. The PCA indeed shows a rather dominant first PC that encompasses two-thirds of the variation in expression (Figure 1E, 1 and 2). In this dimension, both lineages differentiate away from the undifferentiated MSCs. Gene expression (and its inherent variation) at a given biological time point generates time stamps. Since these time stamps occur in a precise order within the diagram, it appears that the process of differentiation of both lineages resembles an ordered program rather than an uncoordinated set of events. By performing a single PCA on the data of both lineages, we addressed similarities and differences in the differentiation programs of the two lineages. We found that both lineages display a unique series of time stamps that occur in sequence on a single line (Figures 1E, 3, and S1D). The time stamp lines for the two lineages move in the direction of the main PC and diverge steadily over time, consistent with the early adoption of two distinct mesenchymal phenotypes. The osteogenic and adipogenic lineages already diverge within 2-3 hr upon induction of differentiation, and three phases can be discerned in each lineage (Figure 1E, 3). Global gene expression analyses at high temporal resolution and unsupervised clustering of the microarray data define three distinct sub-stages (phase I, 0-3 hr; phase II, 6-24 hr; phase III, 48-96 hr) and dynamic transcriptional responses during differentiation of hMSCs into either osteogenic or adipogenic lineages (Figure 1E). Hence, mesenchymal differentiation into the cellular lineages that produce mature bone or fat tissue is a multi-stage process. Furthermore, phenotype commitment is initiated quite rapidly following induction.

Gene Ontology Analysis Reveals General and Lineage-Specific Functional Processes during Differentiation
To understand mechanistic cellular changes upon induction of differentiation, we used the gene expression profiles to assess the functional categories that were significantly enriched at the different time points after induction of differentiation into both lineages. Differentially expressed probes at each individual time point (Figure 2A) were subjected to a gene ontology analysis, and the most significantly enriched functional categories were visualized in a hierarchical clustering diagram (Figure 2B and Table S1). The clustering of enriched functional categories illustrates that modulated genes in the first phase are highly different from those in the second and third phases. Importantly, the gene expression programs of both lineages are enriched in similar gene ontology terms, consistent with classical models for cellular differentiation in which changes in cell proliferation accompany the acquisition of lineage-committed cellular phenotypes.
These functional categories include transcriptional regulation (enriched in phase I), apoptosis (enriched in phases I and II), as well as cell-cycle/DNA replication and mitosis (enriched in phases II and III) (Figures 2B and 2C). Because similar functional categories are enriched in both lineages, we investigated for each lineage which genes are differentially expressed. Expression of 45 probes linked to transcriptional control is modulated in both lineages within 3 hr of differentiation (phase I), and these genes may represent a class of common early-responder TFs in phase I (Figure 2D). We also observed 22 and 46 TF activity-linked probes that are specifically regulated within the initiation phase of osteogenic and adipogenic differentiation (phase I), respectively. Furthermore, 53 and 99 probes linked to cell-cycle mechanisms are differentially expressed in the osteogenic and adipogenic lineages at 48 hr, respectively, when cells progress from the lineage-acquisition phase (phase II) into the lineage-progression phase (phase III) (Figure 2D). The situation is clear but paradoxical: the two lineages are largely similar in terms of functional categories, yet distinct; the differences between the two lineages set in early (second PC in Figure 1E and detailed differences of the differentially expressed TFs in Figure 2D, upper panel). We note that many probes (n = 188) associated with the cell cycle are differentially expressed in both lineages after 48 hr, and about 85% of these probes are downregulated (Figure 2D). This result indicates that induction of differentiation in both lineages coincides with downregulation of the expression program that mediates orderly progression through the mitotic cell cycle. More importantly, the modulated expression of 185 common probes may define a shared mesenchymal cell-cycle program of downregulated cell-cycle stimulatory factors (CCNA2, CCNB1, CCNB2, CCND1, CCNE1, CCNE2, and CCNF; and E2Fs: E2F2 and E2F7) and upregulated cell-cycle inhibitors (CDKN1C and CDKN2C) that is coordinately controlled in each lineage. Besides functional categories that were enriched in both lineages (Figure S2A), we identified a number of gene
Within 12 hr of differentiation, the expression data show that 23 of 51 and 22 of 50 modulated ECM genes are either osteoblast or adipocyte related, respectively ( Figures 2D and 2E). Taken together, gene ontology analysis establishes that phenotype acquisition is initiated in phase I and continues into phases II and III. Differential Expression of Transcription Regulators within 3 hr after Induction of Differentiation in Phase I The main class of genes that is activated during phase I in both osteogenic and adipogenic differentiation is associated with regulation of transcription ( Figure 3A). This interpretation is based on significant enrichment of functional categories such as homeobox TFs (IPR: 001356), basic-leucine zipper TFs (IPR: 004827), TF activity (GO: 0003700), DNA binding (GO: 0003677), as well as positive and negative regulation of transcription (GO: 0045941 and GO: 0016481) ( Figure 2B and Table S1). Within 1 hr, more than 20% (osteoblast) and 30% (adipocyte) of the regulated genes are related to TF activity (GO: 0003700) and decrease below 10% in both lineages after 6 hr ( Figure 3A). Hence, changes in mesenchymal phenotypes upon induction of differentiation appear to be mediated by rapid changes in the expression of TFs, which can function as inducers of secondary responses to sustain either osteogenic or adipogenic phenotype commitment. Within the first phase of osteogenic and adipogenic differentiation, there are in total 133 probes (corresponding to 114 genes) related to transcriptional activity that are acutely regulated (Figures 3B and 3C). Most of the probe sets differ in expression in both lineages (i.e., 78%, 69 of 88 upon osteogenic induction; 61%, 69 of 114 upon adipogenic induction) (Figures 3B and 3C). A small number of the regulated transcription-related probes were specifically regulated in one of the two lineages and may represent lineage-specific regulators (i.e., 22%, 19 of 88 in osteogenic medium; 39%, 45 of 114 in adipogenic medium). Transient upregulation of early-responder TFs in phase I is occasionally followed by a return to basal levels of expression in phases II and III. This biphasic modulation indicates that induction of differentiation initiates a primary transcriptional program that may serve to induce a secondary program of phenotype-specific genes. GeneMania network analyses illustrates that the 88 transcription-related probes (80 genes) regulated upon osteogenic differentiation are linked to established signaling pathways such as transforming growth factor b (TGF-b) receptor signaling, regulation of nuclear Smad2/3 signaling, and AP1 TF signaling ( Figure 3D), perhaps indicating activation of a TGF-b-Smad2/3-AP1 network. Similar signaling pathways are evident among the 114 probes (97 genes) annotated for TFs that are controlled during adipogenic induction ( Figure 3E). The TFs regulated in both lineages are associated with similar pathways, yet these pathways are linked to TFs with lineage-specific changes in expression. For example, expression of MEF2A and VDR is modulated only during osteogenesis and adipogenesis, respectively, and functionally related to TGF-b receptor signaling (Figure 3D,arrow). Other TFs regulated during osteogenic differentiation of MSCs include five TFs with well-known roles in skeletal morphogenesis (DLX5, FOXC2, IRX5, SOX9, and TWIST1) ( Figure 3D). 
Furthermore, there are many homeobox-domain-containing TFs that are prominently regulated during either osteogenic (12 TFs) or adipogenic differentiation (19 TFs) (Figures 3D and 3E). Taken together, lineage-specific transient stimulation and suppression of early-responder TFs in phase I may represent a pre-commitment stage that drives activation of the target (B) Venn diagram of all probes annotated as TF activity (GO: 0003700) that are differentially expressed within the first 3 hr during adipocyte and osteoblast differentiation. (C) Hierarchical clustering of 133 probes of the gene expression study and annotated as TF activity (depicted in B). Red, upregulated; green, downregulated relative to undifferentiated cells; yellow, significant differentially expressed within 3 hr of osteoblast differentiation; blue, significant differentially expressed within 3 hr of adipocytes differentiation. (D) GeneMania network analyses of the 77 (88 probes) TFs that were up-or downregulated during osteoblast differentiation. Gray and yellow edges illustrate consolidated pathways and protein domains with edges shared, respectively. Blue nodes are pathways; known osteoblast TFs are marked in red; and green nodes depict the TFs that are specifically regulated during osteogenesis. Arrow see text. (E) GeneMania network analyses of the 95 (104 probes) TFs that were up-or downregulated during adipocyte differentiation. Edges are colored as in (D). Green nodes are TFs that are specifically regulated during adipogenesis. genes to establish the specialized mesenchymal phenotypes that support bone formation or fat metabolism. Analysis of Upstream Regulators Reveals a Coordinately Controlled TF-Gene Network The regulated probes at each individual time point were selected and used for upstream regulator analyses (URA) in ingenuity pathway analyses ( Figure 4A) to assess if the modulated genes correlate with TFs that change activity immediately upon differentiation. URA generated a pattern for 147 TFs of which the activities are modulated during osteogenic or adipogenic differentiation ( Figure S3A). Two notable genes activated in both lineages are CDKNA2 and NR3C1. CDKN2A is a major regulator of cellular quiescence and senescence, and its activation blocks cell proliferation by ensuring that cell-cycle stimulating regulatory E2F factors remain sequestered by pRB/p105. NR3C1/GR encodes a key nuclear receptor that controls gene expression in response to glucocorticoids (e.g., dexamethasone), a known stimulant of both osteoblastogenesis and adipogenesis that is included as an inducer of differentiation in our cell-culture experiments. Interestingly, the change in TF activity of NR3C1 exemplifies that additional post-transcriptional regulations should occur since its expression level did not change. URA also identified two other proteins (TP53 and EZH2) that have direct gene regulatory functions specifically during hMSC differentiation. TP53 encodes the tumor suppressor protein p53 that is required for normal osteoblast differentiation, while EZH2 is a histone methyltransferase that is known to suppress CDKN2A and control mesenchymal differentiation during skeletal development (Dudakovic et al., 2015). Beyond CDKN2A and NR3C1, there are eight other gene regulators (ELF4, FOSB, MYC, NFKBIA, SMARCB1/BAF45, STAT5A, NR1I3, and THRB/NR1A2) that are identified by URA in phase I of adipogenic differentiation ( Figure S3A) and have direct connections with adipogenic differentiation based on well-established pathways. 
Collectively, our high-density temporal analysis combined with URA validates the known osteogenic and/or growth-inhibitory functions of four gene regulatory proteins (i.e., CDKN2A, NR3C1, TP53, and EZH2) during differentiation of hMSCs. Many TFs have lineage-independent activation/inhibition patterns. For example, 71% (phase I), 60% (phase II), and 59% (phase III) of TFs modulated during osteogenesis are activated or inhibited during the same phases of adipogenesis (Figure S3B). These results show that initiation of either osteogenic or adipogenic differentiation is mediated by shared regulatory pathways, and a select number of critical cell-fate-determining proteins may coordinate activation of mesenchymal lineage-specific gene expression programs. Interestingly, URA predicts that many TFs change their activity without a change in their expression level. During osteoblast differentiation, only 14%, 18%, and 23% of the TFs identified by URA exhibit a change in gene expression in each of the three different phases, while 30%, 29%, and 33% of these TFs change during adipocyte differentiation (Figure S3A). Thus, the mRNA-independent activation and inhibition of TFs predicted by URA suggest phase transitions during differentiation that may occur via post-transcriptional mechanisms, including translation, protein modification, and/or subcellular translocation.

Analyses of Early Activated and Inhibited TFs
We subsequently focused our studies on TFs that change activity within the first phase (0-3 hr), when cells are postulated to make important lineage decisions (Figures 2 and 4B). Functional annotation illustrated that nearly all identified TFs are involved in cellular differentiation (30 of 34) (Figure 4C). More specifically, 7 and 11 of the 34 TFs have been linked to differentiation of adipocytes and osteoblasts, respectively (Figure 4C). Consolidated pathway analyses illustrated that the identified TFs are involved in well-known osteoblast and adipocyte signaling pathways such as GR, AP1, TGF-β, and interferon γ (IFN-γ) signaling (Figure 4D) (Augello and De Bari, 2010).

Figure 4 (legend fragments recovered from the page layout): (B) Cluster diagram of all TFs with a Z-score of >2 or <-2 in either lineage within the first 3 hr of osteogenic and adipogenic differentiation; next to the activity patterns is the relative expression of the TFs during differentiation; expression patterns of genes shown in gray were not expressed or not present on the array, and therefore no expression data were available. (C) All TFs that changed activity (blue, Z-score <-2; red, Z-score >2) within the first 3 hr of differentiation of osteoblasts (OS), adipocytes (AD), or both (OS and AD). (D) GeneMania network analyses of the 34 identified TFs; the connected gray edges illustrate the consolidated pathways (blue); red, the osteoblast-specific TFs; green, the adipocyte-specific TFs; gray, the TFs that we identified in both lineages. (E) Alizarin red staining of hMSCs transduced with pLenti6.3-DsRED or pLenti6.3-IRF2, or non-transduced (NT), and cultured for 21 days under osteogenic differentiating conditions. (F) ALP levels of transduced cells after 7 days of osteogenic differentiation; n = 3 independent experiments with three replicates per experiment, mean ± SEM; statistical significance using a one-way ANOVA; **p < 0.01, ***p < 0.001; ns, not significant.

Interestingly, the
downstream analyses of four TFs that changed activity only during osteogenic differentiation (i.e., TP53, FOSL1, ESR1, and HOXA9) suggest that they may regulate 36.6% of the genes that were differentially expressed within this first phase (Table 1). This number increased to 52% and 58% of the differentially expressed genes in phases II and III. Importantly, this analysis demonstrates that activation or inhibition of only four TFs within the first 3 hr is linked to the regulation of more than 50% of the genes at later stages. Taken together, our analyses validate known regulators of osteoblast differentiation and identify candidates that may regulate many downstream genes involved in osteoblast differentiation. One of the TFs we identified is interferon regulatory factor 2 (IRF2). This conclusion was based on the direction of regulation of five IRF2 target genes: IRF1, IL6, CDKN1B, SOCS1, and PTGS2. To assess the robustness of this conclusion, we studied the expression of these five target genes in MSCs obtained from four different donors. As shown in Figure S4, the change in expression was identical in all donors at 3 or 6 hr (Figures S4A and S4B) after the start of osteogenic differentiation. Forced expression of IRF2 in MSCs resulted in increased ALP activity and ECM mineralization (Figures 4E and 4F). Thus, IRF2 is functionally rate limiting for osteogenic lineage commitment and progression of hMSC differentiation and mineralization.

DISCUSSION
The present study provides key insights into mechanistic events that direct osteoblast and adipocyte differentiation, which occur before the time windows examined by previous studies (Hung et al., 2004; Kulterer et al., 2007; Ng et al., 2008; Piek et al., 2010). The high-density temporal dynamic gene expression profiles we generated permit a mechanistic description of functional processes that change during the first hours and days of osteoblast and adipocyte differentiation. Among the main findings of this study are the definition of at least three distinct early differentiation phases, each with its own mRNA dynamics, and the identification of candidates for early regulation of hMSC differentiation. We used isolated MSCs, which represent a heterogeneous cell population. However, by using dexamethasone as inducer of both osteogenic and adipogenic differentiation, we synchronized the clock of the cells (So et al., 2009), which is coupled to the cell cycle (Feillet et al., 2015), thereby limiting cell-cycle heterogeneity; this also explains the identification of cell-cycle-related genes. Although we have investigated the differentiation of bone marrow-derived MSCs in detail, further in-depth studies are needed to establish whether MSCs from adipose tissue or different anatomical locations undergo a similar cascade of transcriptional events. Although adipose-derived MSCs could function as a source of hMSCs for regenerative medicine, previous studies illustrated that MSCs isolated from different adipose depots respond very differently with respect to their clonogenic potential, doubling time, and differentiation (Russo et al., 2014). Therefore, the gene signature identified here may aid in the selection of cells from adipose tissue or other anatomical sites with increased regenerative capacity for the treatment of skeletal disorders. Gene expression analyses revealed an increased number of genes differentially regulated in osteoblasts and adipocytes.
Yet, the enriched biological processes and the TFs that change activity are mostly identical between the lineages within the first 4 days, which suggests that it is the timing of expression modulation and the specific types of genes within each functional category that differ between the two lineages. Based on the number of regulated genes and the unsupervised cluster analyses, we discriminated three different phases within the first 4 days of differentiation. The third and last phase (early lineage progression, 48-96 hr) is characterized by a stabilization of the number of differentially expressed genes and suggests that the differentiating cells have reached a stable phenotype. The second phase (lineage acquisition, 6-24 hr) represents a transition phase in which the two differentiating lineages begin to deviate, as reflected by many transcriptional changes that are a direct result of the changes in the first phase. Downstream analyses of the four TFs that were identified during the first phase of osteogenic differentiation showed that these are capable of regulating more than 50% of the regulated genes in the second and third phases. Within the third phase of osteogenic and adipogenic differentiation, most of the genes associated with the cell cycle were downregulated. This illustrates the transition from proliferation to differentiation and reflects the inverse correlation between these processes, as has been described in various other differentiating cells. The SWI/SNF chromatin remodeling complex is important for this regulation (Ruijtenberg and van den Heuvel, 2016). Moreover, since RUNX2-dependent skeletal gene expression requires SWI/SNF, this agrees with the sequential progression from proliferation to differentiation of hMSCs (Young et al., 2005). Nevertheless, it remains to be investigated whether small molecules that inhibit proliferation are sufficient for the induction of differentiation of hMSCs. In the regulatory model derived from our data, the first phase (0-3 hr) represents the initiation stage of the differentiation program of hMSCs. This phase is characterized by expression changes of many transcription-related genes that regulate lineage commitment and set the stage for further differentiation toward a stable phenotype. We identified many TFs with homeobox and bZip domains that were regulated in both lineages. Homeobox TFs have been extensively studied and are generally important for their regulatory effects during development as well as MSC differentiation (Stains and Civitelli, 2003). Strikingly, the homeobox TF DLX5 is upregulated within the first hours of differentiation. Dlx5 is epigenetically unlocked during DMSO-induced osteogenic differentiation (Thaler et al., 2012), activates the osteoblast TF Runx2, and is required for mesenchymal cell proliferation and differentiation (Samee et al., 2008). Interestingly, seven homeobox TFs (e.g., HOXA10, HOXB2, IRX3, SATB2, SIX2, SIX4, and ZFHX4) are only regulated upon adipocyte differentiation. Apart from HOXA10, these TFs have not yet been described to be involved in adipocyte differentiation and hold potential as early regulators of lineage specificity and commitment. Consolidated pathway analyses of the immediately early-regulated TFs identify Smad2/3-TGF-β and AP1 signaling. TGF-β and AP1 signaling are also enriched in the URA of the regulated genes in the first phase.
Because most of the TFs within these pathways are regulated in both differentiating osteoblasts and adipocytes, we hypothesize that the initiation of hMSC differentiation is similarly activated in both lineages and that changes in the combination of these signaling pathways are necessary to exit the immature multi-potent cell stage (loss of stemness) and allow acquisition of a specialized mesenchymal phenotype. Indeed, the dominant PC of expression changes may correspond to this loss of stemness. The TGF-β family member Activin A inhibits differentiation and bone formation of committed osteoblasts (Eijken et al., 2007) by altering the ECM composition (Alves et al., 2013). Because most TFs associated with TGF-β signaling (e.g., ESR1, FOSB, HOXA9, JUNB, MEF2A, and MYC) appear to be inhibited in our URA and because the Activin antagonist follistatin enhances osteoblast differentiation (Eijken et al., 2007), we hypothesize that inhibition of TGF-β signaling is essential for the early initiation of osteogenic differentiation of hMSCs. Our studies also identified IRF2 as a regulator of osteoblastogenesis in hMSCs. Irf2 is an antagonist of Irf1, inhibits transcriptional activation by IFN-α and -β (Zhang et al., 2015), and has a separate function in cell proliferation (Vaughan et al., 1995). In addition, we found that IFN-γ-pathway-associated TFs (e.g., CEBPB, CREB1, HOXA10, STAT5A, STAT5B, and STAT6) change activity within the first phase. Consistent with these findings, IFNs do not affect the induction of osteogenic differentiation in hMSCs, but they inhibit mineralization when administered after lineage commitment (2 days after initiating osteogenic differentiation) (Woeckel et al., 2012) or to pre-committed immortalized human fetal osteoblasts. Taken together, these findings indicate that regulation of IFN signaling is important for the osteogenic differentiation of hMSCs. In conclusion, our data show that a stable osteoblast or adipocyte phenotype is established within the first 2 days upon induction of lineage commitment in hMSCs. Three distinct early phases with characteristic cellular responses and differentially expressed TFs are evident during both adipogenic and osteogenic differentiation. We observed that adipogenic differentiation of MSCs derived from young healthy individuals required a higher number of genes to change in expression than osteogenic differentiation. This observation, together with the known shift in the balance between adipocyte and osteoblast differentiation with aging (Li et al., 2016), motivates further studies to investigate the extent of transcriptional changes as a function of age or gender. Interestingly, changes in TF activity that occur within the first 3 hr may control the regulation of subsequent later phases of mesenchymal differentiation. Upstream regulator analyses identified TFs in both canonical and less explored signaling pathways. The latter finding opens up possibilities for studies on small molecules that target early regulators to efficiently induce osteoblast and adipocyte differentiation, as part of a bone-anabolic strategy for osteoporosis.

DNA, Protein, Alkaline Phosphatase Activity, and Mineralization Assays
DNA, protein, ALP activity, and calcium measurements were performed as previously described (Bruedigam et al., 2011).

RNA Isolation
For each time point, nine individual wells with MSCs were induced to differentiate into the osteogenic as well as the adipogenic lineage.
To obtain enough RNA for the gene expression profiling analyses, we pooled three individual cultures in TRIzol (Life Technologies), resulting in a total of three experimental samples per time point (11 time points) per lineage (osteogenic or adipogenic) to be used for gene expression profiling (66 samples in total). RNA was isolated as previously described (Bruedigam et al., 2011). The quality of isolated RNA was assessed on a 2100 Bioanalyzer (Agilent Technologies).

Illumina Gene Chip-Based Gene Expression Profiling
Illumina HumanHT-12 v3 BeadChip (Illumina) human whole-genome expression arrays were used. RNA from three biological replicates for each condition and time point was analyzed. Total RNA (100 ng) of each sample was amplified using an Illumina TotalPrep RNA Amplification Kit (Ambion). cRNA (750 ng) was hybridized using the standard Illumina protocol and scanned on an iScan.

Microarray Analysis
Raw data were background subtracted using Illumina GenomeStudio (V2010.1, Illumina) and further processed using the Bioconductor R2.10.1 lumi package (Du et al., 2008). Data were transformed using variance stabilization and quantile normalized. Probes that were present at least once in the experiments (detection p value <0.01) were considered to be expressed (15,795 probes) and further analyzed. Differentially expressed probes (q < 0.001) were identified using the Bioconductor package limma (Smyth, 2004).

Enriched Functional Categories, Ingenuity Pathway Analysis, and GeneMania Network Generation
For each time point, we selected the significantly regulated probes (q value <0.001 versus undifferentiated hMSCs), and enriched functional categories were calculated using DAVID v6.7 (http://david.abcc.ncifcrf.gov/; Huang et al., 2009). The 100 most significant functional categories (based on p value) per phase (0-3 hr, 6-24 hr, 48-96 hr) and lineage (adipogenic, osteogenic) were selected. The enrichment p values for each time point and treatment were added to the functional categories (285 in total, see Table S1). The p values were -log10 transformed and visualized in a cluster tree using JavaTreeView. Consolidated pathway analyses were performed with GeneMania (www.genemania.org). The URA for each time point was performed using ingenuity pathway analysis (www.ingenuity.com). TFs with a Z score below -2 or above 2 were considered inhibited or activated, respectively. For the downstream analyses depicted in Table 1, we counted the number of genes in phases II and III that can be regulated by the TFs in the first phase. The cluster analyses of the Z scores were visualized using JavaTreeView.

Generation of Overexpression Constructs
Overexpression constructs were generated using Gateway cloning (Invitrogen). Constructs containing the gene of interest (pCMV-SPORT6-IRF2: 3920890) were ordered from Open Biosystems. PCR products without a stop codon were generated from these constructs (primers: IRF2-for, caccatgccggtggaaaggatgc; IRF2-rev, acagctcttgacgcgggcctg) and ligated into pENTR-topo (Invitrogen) using the supplier's protocol. The generated Entry clone was sequenced, and the insert was transferred into the pLenti6.3/v5-DEST (Invitrogen) destination vector using an LR Gateway reaction according to the supplier's protocol. Virus production was carried out as previously described using ViraPower lentiviral packaging constructs (Drabek et al., 2011). Subsequently, virus-containing media were concentrated by ultracentrifugation at 22,800 rpm in an SW32Ti rotor (Beckman Coulter) at 4 °C.
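For orientation, the following is a minimal R/Bioconductor sketch of the normalization and differential-expression steps described under Microarray Analysis; the input file name and the simple two-group design are illustrative assumptions, not the authors' original script (which compared each time point against t = 0 across 11 time points).

```r
library(lumi)
library(limma)

# Read the background-subtracted GenomeStudio export (hypothetical file name),
# apply variance stabilization and quantile normalization as in the text.
raw  <- lumiR("GenomeStudio_probe_profile.txt")
vst  <- lumiT(raw, method = "vst")
norm <- lumiN(vst, method = "quantile")

# Keep probes detected at least once (detection p value < 0.01).
keep <- detectionCall(norm, Th = 0.01) > 0
expr <- exprs(norm)[keep, ]

# Illustrative two-group contrast, e.g. 0 hr vs day 3, three replicates each.
groups <- factor(rep(c("t0", "t72"), each = 3))
design <- model.matrix(~ groups)

fit <- eBayes(lmFit(expr, design))
tab <- topTable(fit, coef = 2, number = Inf, adjust.method = "BH")
sig <- rownames(tab)[tab$adj.P.Val < 0.001]  # analogous to the q < 0.001 cutoff
```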
ACCESSION NUMBERS
The gene expression data analyzed here are publicly available and can be retrieved from the Gene Expression Omnibus (GEO) at the NCBI under accession number GEO: GSE80614.

SUPPLEMENTAL INFORMATION
Supplemental Information includes four figures and one table and can be found with this article online at http://dx.doi.org/10.1016/j.stemcr.2017.02.018.