Monetarism and the Battle with the Global Economic Crisis
The purpose of this article is to examine and verify the hypothesis that the current crisis is the result of monetarist doctrine. It concludes that the battle against the international financial crisis has so far been ineffective in its policy effects and unsatisfactory in its methods. After a period of growth and prosperity, we are now in the final stage of an era of monetarism that has lasted more than thirty years. The system can be observed defending itself from collapse, yet at the same time the fight against the crisis is dominated by monetarist instruments. The low assessment of the effectiveness of this fight leads to the conclusion that it is impossible to repair the economy using the same methods that triggered the outbreak of the crisis. Six years after the outbreak, the condition of the economy demonstrates how ineffective the response has been and how the crisis factors continue to intensify. The end of the monetarist age signals the need to seek systemic solutions, new approaches to quality management, and a changed way of thinking, all of which seem necessary for the Western economy to emerge from recession.
It is not accurate to call the crisis global. Against this background, the successful development strategy of the Far East is striking. The crisis touches this region only in the sense that declining demand in Western countries reduces economic growth in the East, which nevertheless remains impressively high. China, which holds first place in global production, is the real tycoon, 'showering' its goods all over the world. It is hard not to notice the dominance of Chinese banks, the accumulation of foreign exchange reserves, export expansion, technological advancement, the momentum of public projects, and the scale of the luxury goods, art, and automotive markets. (Note 2) Its status as the most dynamically developing region is promoting the Far East from the imitation phase to an era of economic and technological dominance. Its financial opportunities, accumulated surpluses, domestic demand, high rate of accumulation, scale of public investment, and scientific and technical progress are thoroughly underestimated.
Depreciating these achievements breeds a lack of interest in the many innovations in the economies of this region that could be implemented for the benefit of the West. There is no doubt that the main barriers to understanding the causes of the Far East's achievements are ideological struggle and the monetarist way of thinking. Unfortunately, the "brake" on the world economy is the West.
Monetarist Thinking as a Brake on Overcoming the Crisis
In fact, many people, including scientific authorities, and especially "Chicago School" enthusiasts, simply do not see the connection between monetarism and the crisis. The issue is not theoretical questions or a model of the economy, but the long-term effects of the monetarist economic system. The increasingly visible collapse of the economy, the accumulation of debt, and living at the expense of future generations mean that the link between the theory of monetarism and economic reality is the same as the relationship between the theory of socialism and the socialist economy as practiced.
Monetarist theory appeared in publications in the 1950s, owing to Milton Friedman. Initially conceived as a negation of Keynesianism, it focused on the study of the effects of monetary policy on national income. Further theoretical work developed the school into an alternative to the Keynesian welfare state, whose implementation had exhausted its possibilities by the end of the 1970s.
In the early 1980s, the doctrine of monetarism became the basis of the reforms of Ronald Reagan (1981) and Margaret Thatcher, and then spread to most Western countries. Thus began the more than thirty-year era of monetarism in the economy. During the boom of the 1990s, monetarism created a dynamic business environment based on liberal principles and deregulation of the capital market. This system rests on a specific philosophy of thinking dominated by the pursuit of money, rooted in the conviction that "cash counts" and "real money is earned on the stock market." It is worth noting that the systemic transformation of the Polish economy in the 1990s coincided with the flowering phase of monetarism; the transformation carried out in Poland through the reforms of Leszek Balcerowicz and Jeffrey Sachs thus consisted in constructing a monetarist economic system and way of thinking, which led to fundamental redefinitions and new conflicts. Walters, D. Friedman, K. Ohmae, and others are associated with the analysis of the impact of the money supply on the economy and national income through the control of interest rates and Treasury bonds. This concept presumes the effectiveness of market mechanisms and rejects the direct participation of the state in economic processes.
Economic Doctrine
The doctrine uses the money supply as the determinant of economic activity within the mechanisms of the money and capital markets, treats capital as the major factor in creating value, and rejects Keynesian solutions as ineffective in an open, globalized market economy; it is thus a doctrine opposed to state intervention.
Economic Policy
Policy instruments are chosen to implement a policy of economic growth through control of the money supply, with the state monitoring the functioning of the money market, in conjunction with a liberal economic policy assuming currency convertibility, removal of restrictions on the movement of capital, strengthening of free trade, deregulation of the economy, and promotion of individual entrepreneurship.
Financial System
This covers the general characteristics of monetarism's regulatory and financial institutions with regard to the mechanism of money supply, the functioning of money in financial markets, and its impact on business processes. A key role in this system is performed by international banks, investment funds, insurance companies, and other financial corporations, which act as intermediaries between supply and demand on the individual financial markets. The state's impact on the financial system is indirect, exercised through legal standards, regulators, and supervisors.
Economic System
The monetarist approach to modern economic relations is based on the functioning of capital market institutions and money market funds that support the non-financial sphere, i.e., the real economy of businesses and households. The center is the capital market, which joins the financial sphere to the real one. Its dynamics illustrate economic development, cash flows, the distribution of capital, investment trends, and the level of return on and cost of capital. In a liberal economy, the state's economic policies, including fiscal and budgetary policy, lose their importance.
Economic Age
Initiated by R. Reagan, the US president, in 1980, the monetarist age is expressed in the 'financialization' of the economy, which subordinates economic processes to the financial decision-making mechanism. The distinctness of this era lies in a characteristic way of thinking about the economy that interprets business processes from the perspective of the functioning of financial markets and monetary developments. On the assumption that capital is the dominant source of value, and according to the belief that "money makes money on the stock market," the inflow of capital determines economic development while its outflow causes recession. The era is characterized by a growth rate of the capitalization of the economy that exceeds that of GDP. As a result of the detachment of the financial sphere from the real economy, excessive money supply, the accumulation of debt, and financial imbalances, the global financial crisis erupted in 2008, announcing the twilight of the era. (Source: own elaboration.)
Currently, the sense of monetarism differs slightly from its original theoretical assumptions, in light of the experience of economic policy, financial market regulation, and the supervision of financial institutions. The solutions adopted in different countries show that monetary policy can take different forms, ranging from conservative, cost-effective policies aimed at a "strong currency" effect, as in Switzerland, to active support of the economy through the budget deficit, as in the US and Japan.
What unites all monetarists is an aversion to state intervention and a drive to reduce the state's presence in the economy, together with a belief in the efficiency of the market mechanism and economic liberalism, and the assignment of a special role to monetary processes and financial capital. In the long term, such thinking resulted in a buildup of imbalances until the outbreak of the crisis in 2008. (Note 3) In effect, monetarism has contributed to a cult of money and the desire to get rich. According to this idea, any good should be commercialized and reduced to monetary value, because striving for wealth surpasses all other values. Monetarist thinking is an expression of the commercialization of social services, arts, education, health, welfare, safety, prisons, etc., which impoverishes the social content of these spheres of activity. Monetarism, as the ideology of radical capitalism, is inherently inconsistent with the principles of the social market economy, the idea of solidarity, and the balance between public and private interests rooted in the values and traditions of Europe. Awareness of these properties, which have depreciated monetarism, can be helpful in the search for ways out of the crisis.
Monetarism as an Economic System
The theory of monetarism assumes, in general, that in an open market economy, controlling currency stability and preventing inflationary processes are effective methods of maintaining economic balance. Stimulating the economy with the instruments of monetary policy ensures economic growth in the long run. It is assumed that an increase in the money supply raises consumer spending and investment, which enlarges national income, contributing in turn to growing needs, increased expenditure, and a higher standard of living.
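The formal anchor of this reasoning, not spelled out in the article, is the quantity-theory equation of exchange on which Friedman's monetarism builds:

$$ M V = P Y, $$

where $M$ is the money supply, $V$ the velocity of circulation, $P$ the price level, and $Y$ real output. Monetarists assume that $V$ is stable, so that controlling the growth of $M$ controls nominal income $PY$; the dispute with Keynesians concerns how much of a change in $MV$ shows up in real output $Y$ rather than in prices $P$.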
Monetary policy comes down to the proper correlation between money supply growth and economic growth. Highly leveraged banks, basing themselves on treasury debt instruments, deliver capital to corporations striving for the highest possible rate of return. The mechanism of capital supply to the private sector grants entrepreneurs access to capital at the lowest possible cost. The key here is maintaining attractive, investment-encouraging interest rates for both investors and borrowers (Note 4).
A crucial instrument for putting this theory into practice is the supply of treasury bonds, which affects the level of interest rates, that is, the price of money on the financial market. In a liberal state, this mechanism drives the development of the financial sector. The areas of capital supply and interest rates, through which the state stimulates economic growth, remain under control. The development of the capital market becomes the aim of monetary policy, whose primary task is to maintain interest rates that ensure an attractive return on investment, which encourages continued investment and keeps demand on the capital market at a reasonable level. As a result, the issue volume and interest rate of treasury instruments are the main regulators of the capital market.
Winding up the business cycle solves a number of economic problems, which is why the various tasks of budgetary institutions connected with increasing wealth and the standard of living in society matter less to the government. The operation of the international financial market based on monetarist doctrine has a systemic character. In the borderless economy, the crucial role is played by financial institutions that function as the center of the international financial system, servicing the peripheral countries attached to it. The center of the system issues the world currency and controls both the creation and the distribution of capital. Within the world financial center operate various institutions such as investment banks, stock exchanges, rating agencies, consultancy corporations, and research and opinion-forming agencies.
Each new issue of debt securities stimulates the capital supply, helps reduce interest rates, and accelerates the economy. The placement of new issues on the market stimulates the export of capital, which helps overcome limited demand in internal markets. The introduction of treasury debt instruments recognized as risk-free into the world circulation of money becomes a form of subsidization of the issuing country's economy by its foreign creditors. A country with monetary sovereignty may benefit from this low-cost source of financing for budgetary expenditure, provided that excessive debt is prevented. The condition for the continued operation of this mechanism is keeping the supply of instruments under control in line with the demand of the peripheral countries.
Long-term Effects of Monetary Policy
The long-term effects of increasing the money supply are less obvious. Who can predict the cumulative effects of 'empty money' and 'shaken relationships' in the real economy? The implementation of the doctrine of monetarism in the 1980s and 1990s was a period of rapid worldwide expansion of the capital market. A comparison of the long-term trends in money supply and GDP shows a downward trend, reflecting the adverse consequences of this policy. It continues in declining capital productivity, rising operating costs, the progressive and uncontrolled accumulation of debt, and the deterioration of other macro-indicators.
Initially, with high demand for capital and the international role of the US dollar, the external funding opportunities for budgetary expenditure seemed unlimited. Under this doctrine, in the 1990s, the high supply of financial assets stimulated stock exchange capitalization. International capital flows ran significantly ahead of the dynamics of international trade and industrial production. The rapid growth of financial assets, seen in the 'bloom' stage, resulted in a giant enlargement of savings in fast-growing and rich societies. This is when the period of relocating industrial production to the Far East began, along with capital-intensive infrastructure programs, defense, and social services. The broad stream of capital revived stock exchanges, the expansion of multinational corporations, and individual and public consumption on credit.
These processes create systemic risks for local economies (e.g., the Swiss franc). With an excessive money supply, easy access to capital and its speculative flows can seriously distort the market mechanism, disorganizing valuations and cost calculations and destroying entire sectors of the economy. Sense fades and risk becomes elusive, leading to what happened in Greece. Monetarists tend not to see the effect of increasing the quantity of money on its quality, nor the recurring tendency to push state budget expenditure above a level safe for long-term economic security. This is what is happening in most EU countries, where budget deficits financed by rapidly rising debt will lead to a decline in GDP.
A reduction in the demand for capital due to falling productivity does not mean that banks become less active on the money supply side; moreover, it does not prevent them from engaging in high-risk investments. The limited demand of domestic markets for issues of treasury debt instruments leads to the distribution of these instruments in international financial markets. Banks manage their money-creation potential, driven by magnified deposits, through the internationalization of operations. All this leads to an increase in market capitalization. Finally, low interest rates attract foreign expansion, and although it seems that cheaper capital makes the economy more competitive, this is not always true.
Cheap capital and easy access to sources of financing, by encouraging borrowing, have placed many countries, companies, and households on the brink of insolvency. An increase in bank debt does not mean that debtors are able to earn on the leverage. Against this background there were a number of spectacular financial market collapses, in the nineties in Russia, Argentina, Brazil, Mexico, and Chile, and then in Iceland, Greece, Cyprus, Ireland, Portugal, Spain, and Italy. As a result, capital markets shrank, securities prices fell significantly, debt refinancing collapsed, and enterprise activity decreased. These phenomena led to deep recession, higher unemployment, and a hindered inflow of external capital.
The Limited Effect of the Monetarists' Approach to Fighting the Crisis
In general, the fight against the crisis has not brought any significant effect. What remains is an unstable equilibrium threatened by numerous factors, including fluctuations in exchange rates and interest rates, which could in consequence trigger another snowball effect. This is why governments show little willingness to take more significant measures to stimulate their economies. It is symptomatic that the approach to fighting the crisis is limited to the monetarist array of instruments, which confirms the prevailing way of thinking about the economy. Postponed fiscal reforms, budget savings, and high unemployment breed dissatisfaction among the electorate and destabilize the political situation. The lack of attractive alternatives leads investors to purchase treasury bonds, which, in turn, creates no effects in the real economy.
According to the IMF's forecast, the crisis will last until 2018, the period needed to reduce the indebtedness of the public sector, which slows down the economy. However, nothing is being done in this dimension; on the contrary, the IMF recommends an increase in spending. The United States was the first to introduce a 'quantitative easing' program and also began reducing interest rates to fight the recession. The Fed was buying huge amounts of debt securities each month (officially USD 80 billion a month, though in practice the scale of the operation was much larger). The growth of US stock indexes had for many months been indicating the end of the recession, so the policy of quantitative easing was tapered. According to certain press releases, the era of crisis has been overcome and economic growth is forecast, yet this boom does not affect any area of the economy apart from the financial sector. In the six years since the beginning of the crisis, the indebtedness of the US doubled while almost no real benefit was achieved; unnecessary bureaucracy developed, along with numerous regulations, and the tax burden doubled.
Exactly the same approach was adopted by the Bank of Japan, with one difference: the scale of the operation is at the level of 17% of the country's GDP. Time has shown that the easing effect is short-lived and its costs are high. According to a wire from the Japanese stock exchange, 'the miraculous influence of printed money on the indexes is starting to lose its strength.' Although the central bank continues to supply money, the Japanese stock exchange has noted a significant decline in stock prices, compared in its effect to the Fukushima nuclear power plant catastrophe or the tsunami disaster.
A similar situation is taking place in the European Union, where investors do not perceive the current economic situation in a positive light. What is more, the low level of economic activity in various sectors does not presage any breakthrough, although analysts seem to be looking forward to the moment when this trend reverses. Everyone is waiting for the EU's support, yet this is likely to be too little to increase the profitability of the financial sector. It is postulated that "money will be dropped from helicopters": according to John Muellbauer, the Oxford professor, governments could stimulate demand by sending a check for 500 euros to every adult citizen. A similar treatment for Europe is recommended by J. Stiglitz, who notes the correlation between the difficulty of overcoming the crisis and the reduction of state spending, the policy of austerity, and the penalization of excessive spending. The economist claims that the EU's approach to increasing public spending is too tentative.
The helplessness of the monetarists extends to the point of contradicting common sense. The anecdotal idea of Milton Friedman to drop money from helicopters might be modified by the author of this article, who suggests that an injection of counterfeit money into the market would boost demand without increasing the level of debt. The point is that Friedman's idea is not about increasing demand or employment in factories. Monetarists dream about increasing the capitalization of financial assets. They believe that they are in possession of the "philosopher's stone," metaphorically speaking: a patent for increasing the value of financial capital by increasing the amount of money in circulation. As a consequence, they print more and more, entirely forgetting about the future side effects.
While the IMF operates according to this strategy, doubts are rising as to whether anyone is responsible for the whole situation and how far the financial system can go. This ECB policy is encouraged above all by the IMF's chief economist, O. Blanchard: "… We need to do everything to increase demand and maintain growth, having in mind accommodative fiscal policy, recapitalization of banks, and structural reforms… adjustments in the euro zone require lower prices in the indebted countries of the South and simultaneously higher prices in the core countries of the EU… it is recommended that German inflation be higher and that purchasing power be strengthened by an increase in real wages." Without conviction, the ECB decided to introduce a quantitative easing program, even though its effectiveness is hardly proven. The easing program is to be carried out by the ECB over a period of 18 months, until September 2016. Within this program, 60 billion euros will be printed per month and supplied to the market. In this way the ECB intends to increase its monetary base by 7.6% of GDP, which might stimulate bank credit. The increased money supply is expected to raise the profitability and attractiveness of financial instruments, enhance the attractiveness of investment in euros, and protect the euro zone from an external inflow of capital, though this seems to be only a theoretical assumption. This QE policy, however, originates in the will to prevent the euro from appreciating. It seems that the IMF's encouragement of QE has only brought more trouble to the EU. In practice, deflation and negative interest rates have become a nightmare for the EU, Switzerland, and Scandinavia. It is estimated that the number of debt instruments with negative yields on the European market has doubled, and the new program will only aggravate the situation.
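A quick arithmetic check, using only the figures quoted above, conveys the announced scale of the program:

$$ 18 \text{ months} \times 60 \text{ bn EUR/month} \approx 1{,}080 \text{ bn EUR}, $$

that is, roughly 1.1 trillion euros of total purchases over the life of the program.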
Deutsche Bank is of exactly the same opinion regarding the low efficiency of the easing policy, and it is thought that this tendency will be extremely difficult to stop in the future. The money created by printing increases demand and stimulates investment only to a very small degree. As researchers have shown, a multiplier effect in the EU is out of the question, since in the more developed countries people hold more savings than liabilities. As bond yields diminish, more funds go to wealthier people, whose assets increase in value as a consequence of the easing program. This means that the additional money turns into assets on the financial market, which, like a "black hole," attracts capital.
Deflation and negative interest rates are a totally new phenomenon. Low interest rates do not stimulate credit but lead to a situation in which business entities try to avoid financial services, so that demand for money and financial services diminishes. Excess liquidity is reinvested in the speculative market, where anything might happen. The excess of financial instruments is not accompanied by inflation, which means that the monetary and capital markets have merged, and the warning signals and the barometer of the quality of money have been lost.
Everyone has forgotten about the concept of a two-speed EU. Extremely low interest rates and the lack of a clear vision for the future of the EU, which penalizes Hungary's experiments, discourage others from taking initiatives. From the perspective of integration enthusiasts such as Poland, the actions taken by the EU are unclear and strongly discourage them from joining the European Monetary Union. Turning points are the right time to evaluate the strategy of the EU and to define the role of the ECB, because the belief is spreading that the reputation of the Union's institutions is diminishing and that these institutions are ever more often seen as a cause of disintegration.
The Need for a Broader Perspective on the Crisis
Admittedly, the condition of the Polish economy is relatively good, promising GDP growth above 3%; accordingly, the Polish National Bank, like Polish economists, approaches the problem of quantitative easing with common sense and moderation. It can be seen at a glance that temporary measures have limited effect, so the structural determinants of the economic situation matter more (Note 5).
As Prof. R. Szewczyk has demonstrated, a monetary policy of increasing the amount of money on the market destroys money's basic functions, and the setting of interest rates has little in common with a market economy. The proponents of short-term balance deplore the negative effects of stagnation on the market while remaining completely blind to other scenarios. They encourage the loosening of monetary policy without worrying about the consequences of the accumulating debt: how much the bonds will be worth, how to secure the pension funds, or how to issue more debt securities with no chance of repurchase. (Note 6) In the context of the crisis, Professor J. Zielonka, a political scientist at Oxford, draws attention to the compounding of the financial crisis by the crisis of the market economy and of democracy. The state and its economic functions are vanishing. Financial markets are not conducive to socio-economic development but have become an arena of speculation, as well as of criminal activity. Political parties no longer serve the development of democracy; instead, they widen the distance between public authority and the citizen. Related to this are the social processes of the crisis: unemployment, demography, the depreciation of work, the "rat race," consumerism, etc. (Note 7) The crisis cannot be reduced to its financial dimension alone, if only because of its effect of unbalancing and shrinking the economy. Stimulating the economy through the supply of money, debt, public spending, and fiscal policy induces only limited effects in the real economy. Therefore, all recovery programs written from the perspective of money supply and state expenditure are unoriginal and bring limited results (Note 8).
Conclusion
In their approach to the economy, monetarists tend to operate on a limited number of variables while never doubting the efficiency of the market, which gives them an illusion of predictability and ease of forecasting. The veracity of the assumptions presented in the introduction matters for the consequences for the global economy and for the EU's prospects. The monetarist approach to economic problems shows no signs of ending the economic crisis in the Western world, only of creating an illusion of economic and social stability. The arguments presented in this paper lead to the following conclusions: 1) As a result of the "financialization" of the economy, which expresses the dominance of the financial sector over the non-financial sector, there is a direct causal link carrying the crisis from the financial sector into the real economy.
2) The crisis is falsely interpreted as an objective, global, and homogeneous phenomenon to which one should surrender, leaving the initiative for solving it to the strongest.
3) Monetarist financial policy tools fail against the crisis because they fight it with the very methods that originally led to it. Everything seems to indicate that the majority of countries are consciously following in the footsteps of Greece. The outlook is pessimistic.
4) Monetarist ideas for combating the crisis, which protect the financial sector at the expense of the real economy, are futile. It is therefore impossible to restore equilibrium in the real economy under the slogans of the fight against the financial crisis. 5) So far the financial crisis has not breached the financial pyramid, which is growing at an alarming pace.
The possible effects of its collapse, on a larger scale than in the case of Greece, are frightening, as there is no possibility of state aid of the required magnitude.
6) In this context, the most dangerous threat is global speculation fueled by the supply of money. The uncontrolled market of short-term investments and derivatives, originally designed to secure transactions, has incalculable destructive potential on a macro scale: it disorganizes local markets and, when provoked, can at any time capsize the financial pyramid.
As studies confirm, increasing the amount of money, whether through deficits, public spending, quantitative easing, bank recapitalization, or dropping cash from "helicopters," brings effects that are short-term and disproportionate to the costs, which are deferred in time; it accelerates the accumulation of debt, the insolvency of states, the depreciation of pension funds, living at the expense of future generations, and the debasement of money. Holders of debt assets and investment and pension funds are participants in a giant pyramid scheme, benefiting for now from fictitious wealth that can dematerialize in the blink of an eye.
Adding money into circulation as a cure for all diseases enlarges the gigantic debt pyramid. Money without backing is like a drug: it gives short-term relief, is addictive, has diminishing effects, and, in addition, accumulates in the organism. An overdose turns the medicine into a poison with unpredictable results. One gets the impression that the sick economy has been entrusted not to doctors but to alchemists who could cause an explosion at any moment.
Comparing the Electrophysiology and Morphology of Human and Mouse Layer 2/3 Pyramidal Neurons With Bayesian Networks
Pyramidal neurons are the most common neurons in the cerebral cortex. Understanding how they differ between species is a key challenge in neuroscience. We compared human temporal cortex and mouse visual cortex pyramidal neurons from the Allen Cell Types Database in terms of their electrophysiology and dendritic morphology. We found that, among other differences, human pyramidal neurons had a higher action potential threshold voltage, a lower input resistance, and larger dendritic arbors. We learned Gaussian Bayesian networks from the data in order to identify correlations and conditional independencies between the variables and compare them between the species. We found strong correlations between electrophysiological and morphological variables in both species. In human cells, electrophysiological variables were correlated even with morphological variables that are not directly related to dendritic arbor size or diameter, such as mean bifurcation angle and mean branch tortuosity. Cortical depth was correlated with both electrophysiological and morphological variables in both species, and its effect on electrophysiology could not be explained in terms of the morphological variables. For some variables, the effect of cortical depth was opposite in the two species. Overall, the correlations among the variables differed strikingly between human and mouse neurons. Besides identifying correlations and conditional independencies, the learned Bayesian networks might be useful for probabilistic reasoning regarding the morphology and electrophysiology of pyramidal neurons.
In terms of electrophysiology, Gilman et al. (2017) found that mouse visual cortex pyramidal neurons have a lower action potential threshold voltage, shorter action potential rise time, and longer fall time than those of the rhesus monkey, yet found no significant difference in subthreshold features such as time constant and input resistance. Kalmbach et al. (2018), on the other hand, found differences in input resistance and membrane resting potential between human and mouse L2/3 pyramidal neurons, with the degree of difference varying with the somatic distance from the pia. Human cortical neurons have lower membrane capacitance (Eyal et al., 2016) and higher onset rapidity of action potentials than those of adult mouse pyramidal neurons (Testa-Silva et al., 2014). Like morphology, the electrophysiology of pyramidal cells differs across cortical areas (Amatrudo et al., 2012) and age (Zhang, 2004; Elston and Fujita, 2014) and may also vary with somatic distance from the pia within L2/3. In particular, Kalmbach et al. (2018) found such an effect on subthreshold features, including membrane potential and input resistance, in both human and mouse neurons, while Deitcher et al. (2017) found no effect of cortical depth on electrophysiology in human cells (they did not consider the electrophysiology of mouse neurons).
It is well-established that dendritic geometry strongly affects the action potential firing pattern of neurons. For example, given an identical distribution of ion channels over different cortical neuron types, smaller cells tend to spike, whereas larger ones tend to burst (Mainen and Sejnowski, 1996), while computational models suggest that such spiking versus bursting behavior depends on the ratio of somatic surface to dendritic surface (Mason and Larkman, 1990). Also, action potential onset is accelerated in neurons with larger dendritic surface area (Eyal et al., 2014), which is a likely explanation for the differences in spike onset between human and mouse, given that human dendrites are larger. Computational modeling by Amatrudo et al. (2012) showed that morphological differences between primary visual and prefrontal cortex cells can largely account for differences in passive properties but not in action potential firing nor in the synaptic response, thus suggesting differences in active channel conductances. Indeed, the RNA for HCN1, a major pore-forming subunit of h-channels, is ubiquitous in human but not in mouse L2/3 (Zeng et al., 2012), and Kalmbach et al. (2018) found that h-channels contribute more prominently to the physiological properties of human pyramidal neurons than to those of the mouse. The differences in h-channel expression, however, could not explain the strong dependence of these electrophysiological properties on cortical depth (Kalmbach et al., 2018), suggesting that one would need to account for other factors, including morphology, in order to explain some of the observed cortical depth dependence and inter-species differences.
There have, nonetheless, been relatively few quantitative analyses of how the different electrophysiological and morphological variables correlate with each other and of how these correlations vary between species. An exception is Gilman et al. (2017), who found that larger neurons had a lower input resistance. Those analyses were limited to estimating the linear correlation between pairs of variables, which ignores the effect of covariates as well as the conditional (in)dependencies among variables. In other words, relationships often involve more than two variables and thus require a multivariate model; for example, dendritic diameter may be independent of spiking behavior, but when modeled as an exponential function of the distance from the soma, its decay rate is significantly lower for spiker neurons than for bursters and plateauers (Washington et al., 2000). Pairwise analyses can thus be complemented by using multivariate graphical models (Whittaker, 2009) and by quantifying conditional (in)dependencies with partial correlation coefficients. One type of graphical model that is useful for modeling conditional independencies is the Bayesian network (Pearl, 1988; Koller and Friedman, 2009). These models, based on directed acyclic graphs, let us visualize the probabilistic relationships between the variables and are thus useful for exploratory analyses (Bhushan et al., 2019). Their applications in neuroscience (Bielza and Larrañaga, 2014, 2020) include interneuron classification (Mihaljević et al., 2014, 2015, 2019) and the generation of synthetic dendritic branches (López-Cruz et al., 2011).
In this paper, we compare layer 2/3 human temporal cortex and mouse visual cortex pyramidal neurons from the Allen Cell Types Database (http://celltypes.brain-map.org/) in terms of their electrophysiology and dendritic morphology, while assessing the effect of cortical depth on their features. The Allen Cell Types Database cells are unique in that they have been quantified in terms of electrophysiology and morphology with a standardized procedure for both species. We learn Gaussian Bayesian networks from the data in order to identify correlations and conditional independencies between the variables. We learn these networks from three different subsets of our data: (a) from electrophysiological variables alone; (b) from morphological variables alone; and (c) from electrophysiological and morphological variables combined. For each data subset, we learn a Bayesian network per species, which yields a total of six networks; for subset (c), we also show correlation networks (see section 2.6).
The rest of this paper is structured as follows. Section 2 describes the data set, the variables, and analysis methodology. Section 3 provides the results. We discuss our findings in section 4.
Data
We used adult human and adult mouse neurons from the Allen Cell Type Database. Human cells were acquired from donated ex vivo brain tissue. We used all excitatory (spiny) cells from layers 2 and 3 of the temporal (human) and visual (mouse) cortex that had a reconstructed morphology. Our sample consisted of 42 human cells from the temporal cortex and 21 mouse cells from the visual cortex.
Electrophysiological Variables
The Allen Cell Type Database provides pre-computed electrophysiological features. These features were derived from high temporal resolution data on membrane potential measurements (in current-clamp mode) obtained with a standardized patch clamp protocol. We used 11 electrophysiological features provided by the Allen Cell Type Database, covering subthreshold and suprathreshold features of the cells, including those relating to action potentials. Below we list these features along with brief descriptions (see also Table 1 for their mean values), while we refer the reader to the technical white paper by the Allen Cell Type Database for details (http://help.brain-map.org/download/attachments/8323525/CellTypes_Ephys_Overview.pdf).
Subthreshold features were computed as follows:
- resting potential (rest): average pre-stimulus membrane potential across all the long square responses;
- input resistance (resistance): the slope of a linear fit of minimum membrane potentials during the responses onto their respective stimulus amplitudes, for long square sweeps with negative current amplitudes that did not exceed 100 pA;
- time constant (tau): an exponential curve was fit between 10% of the maximum voltage deflection (in the hyperpolarizing direction) and the minimum membrane potential during the response, and the time constants of these fits were averaged across steps to estimate the membrane time constant of the cell.
All action potential waveforms were evoked by a long square (1 s) current step stimulus. The waveforms of the first action potentials were collected from each cell and aligned on the time of their thresholds. Action potential features were computed as follows:
- threshold (threshold): the level of injected current at threshold;
- peak (peak): maximum value of the membrane potential during the action potential;
- amplitude (amplitude): difference between the action potential trough and the action potential peak, where the trough is the minimum value of the membrane potential between the peak and the next action potential;
- upstroke/downstroke ratio (up down ratio): the ratio between the absolute values of the action potential peak upstroke and the action potential peak downstroke, where the upstroke is the maximum value of dV/dt between the threshold and the peak, and the downstroke is the minimum value of dV/dt between the peak and the trough;
- rise time (rise time): time from threshold to the peak;
- fall time (fall time): time from peak to the trough.
Additional suprathreshold features were computed as follows:
- latency (latency): time between the start of the stimulus and the first spike;
- f-i curve (f-i curve): slope of a straight line fit to the suprathreshold part of the curve of the cell's frequency response versus stimulus intensity, for long square responses.
Morphological Variables
The Allen Cell Type Database provides 3D neuron morphology reconstructions. These were obtained by filling the cells with biocytin and serially imaging them to visualize their morphologies. A detailed description of the reconstruction protocol is provided in the Allen Cell Type Database morphology overview technical whitepaper (http://help.brain-map.org/download/attachments/8323525/CellTypes_Morph_Overview.pdf).
We computed nine features of both basal and apical dendrites. Of these features, four are arbor-level features, whereas five are branch- or bifurcation-level features. We computed the features with the open-source NeuroSTR library (https://computationalintelligencegroup.github.io/neurostr/). Below we list these features along with brief descriptions (see also Table 2 for their mean values). The variable names provided in parentheses correspond to basal dendrite variables; the corresponding variable of the apical dendrite is denoted with an a prefix: for example, a.distance instead of distance.
The branch-level features were averaged across all bifurcations or branches of an arbor and were computed as follows:
- average branch length (length): sum of the lengths of all compartments of a branch, averaged over all bifurcation points;
- average path distance (distance): sum of the lengths of all compartments from the dendrites' insertion point into the soma up to the bifurcation point, averaged over all bifurcation points;
- average branch tortuosity (tortuosity): ratio of branch length to the length of the straight line between the beginning and the end of a branch, averaged over all branches;
- average remote bifurcation angle (angle): shortest planar angle between the vectors from the bifurcation to the endings of the daughter branches, averaged across all bifurcations;
- average branch diameter (diameter).
Arbor-level features were computed as follows:
- height (height): difference between the maximum and minimum values of the Y-coordinates of the dendrites;
- width (width): difference between the maximum and minimum values of the X-coordinates of the dendrites;
- depth (depth): difference between the maximum and minimum values of the Z-coordinates of the dendrites;
- total length (totallength): sum of the branch lengths of all the branches of the dendrites.
The Allen Cell Type Database provided the depth of each cell's soma (rel depth) relative to pia and white matter. There were both superficial and deep cells in both species, although deep cells were better represented in the mouse sample, and superficial ones in the human sample (see Figure 1).
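As a small illustration (ours, not from the paper), the arbor-level extent features defined above amount to coordinate ranges over the reconstruction points. A minimal R sketch, assuming a hypothetical matrix xyz of dendritic 3D coordinates with named columns:

# Hypothetical sketch: arbor-level extents from a matrix of 3D dendritic
# coordinates with columns "x", "y", "z", following the definitions above.
arbor.extents <- function(xyz) {
  c(width  = diff(range(xyz[, "x"])),   # max - min of X-coordinates
    height = diff(range(xyz[, "y"])),   # max - min of Y-coordinates
    depth  = diff(range(xyz[, "z"])))   # max - min of Z-coordinates
}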
Bayesian Networks
A Bayesian network (BN) (Koller and Friedman, 2009) $\mathcal{B}$ allows us to compactly encode a joint probability distribution over a vector of $n$ random variables $\mathbf{X}$ by exploiting conditional independencies among triplets of sets of variables in $\mathbf{X}$ (e.g., $X$ is independent of $Y$ given $Z$). A BN consists of a directed acyclic graph (DAG) $\mathcal{G}$ and a set of parameters $\theta$, so that $\mathcal{B} = (\mathcal{G}, \theta)$. The vertices (i.e., nodes) of $\mathcal{G}$ correspond to the variables in $\mathbf{X}$, while its directed edges (i.e., arcs) encode the conditional independencies among $\mathbf{X}$. A joint probability density $f_{\mathcal{G}}(\mathbf{x})$ encoded by $\mathcal{B}$, where $\mathbf{x}$ is an assignment to $\mathbf{X}$, factorizes as a product of local conditional densities,

$$f_{\mathcal{G}}(\mathbf{x}) = \prod_{i=1}^{n} f(x_i \mid \mathbf{pa}_{\mathcal{G}}(x_i)),$$

where $\mathbf{pa}_{\mathcal{G}}(x_i)$ is an assignment to $\mathbf{Pa}_{\mathcal{G}}(X_i)$, the set of parents of $X_i$ in $\mathbf{X}$ according to $\mathcal{G}$. $\mathcal{G}$ induces conditional independence constraints for $f_{\mathcal{G}}(\cdot)$, derivable from the basic constraints that each $X_i$ is independent of its non-descendants in $\mathcal{G}$ given $\mathbf{Pa}_{\mathcal{G}}(X_i)$. For example, for any pair of variables $X, Y$ in $\mathbf{X}$ that are not connected by an arc in $\mathcal{G}$ there exists a set of variables $\mathbf{Z}$ in $\mathbf{X}$ (disjoint from $\{X\}$ and $\{Y\}$) such that $X$ and $Y$ are independent conditionally on $\mathbf{Z}$. Similarly, for any pair of variables $X, Y$ in $\mathbf{X}$ that are connected by an arc in $\mathcal{G}$ there is no set $\mathbf{Z}$ such that $X$ and $Y$ are independent conditionally on $\mathbf{Z}$. These constraints extend to triplets of sets of variables, and the structure $\mathcal{G}$ thus lets us identify conditional independence relationships among any triplet of sets of variables $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$. For example, in the DAG $X \rightarrow Y \rightarrow Z$ we only have one independence: $X$ is independent of $Z$ conditional on $Y$; $X$ and $Y$, $X$ and $Z$, and $Y$ and $Z$ are not marginally independent. The Markov blanket of $X_i$ is the set of variables $\mathbf{MB}(X_i)$ such that $X_i$ is independent of $\mathbf{X} \setminus \mathbf{MB}(X_i)$ conditional on $\mathbf{MB}(X_i)$. The Markov blanket of $X_i$ is easily determined from $\mathcal{G}$, as it corresponds to the parents, the children, and the spouses (other parents of the children) of $X_i$. The parameters $\theta$ specify the local conditional densities $f(x_i \mid \mathbf{pa}_{\mathcal{G}}(x_i))$ for each variable $X_i$. When $\mathbf{X}$ contains only continuous variables, as in our case, a common approach is to let $f_{\mathcal{G}}(\mathbf{x})$ be a multivariate normal density. The local conditional density for $X_i$ is then linear Gaussian,

$$f(x_i \mid \mathbf{pa}_{\mathcal{G}}(x_i)) = \mathcal{N}\left(x_i \mid \beta_{i0} + \boldsymbol{\beta}_i^{T} \mathbf{pa}_{\mathcal{G}}(x_i),\ \sigma_i^2\right),$$

with a different vector of coefficients $(\beta_{i0}, \boldsymbol{\beta}_i, \sigma_i^2)$ for each $X_i$. Two or more DAGs can encode the same set of conditional independencies. A set of such equivalent DAGs can be uniquely represented with a completed partially directed graph (CPDAG). An edge between $X$ and $Y$ is directed in the corresponding CPDAG only if it is identically oriented in every equivalent DAG; it is undirected otherwise.
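To make the linear Gaussian parameterization concrete, here is a minimal sketch (ours, not the paper's; the variable names are illustrative) using the bnlearn R package that we employ for structure learning (see section 2.8):

library(bnlearn)

# Hypothetical three-variable network X -> Y -> Z.
dag <- model2network("[X][Y|X][Z|Y]")

# Simulate Gaussian data consistent with the DAG.
set.seed(1)
X <- rnorm(200)
Y <- 2 * X + rnorm(200)
Z <- -Y + rnorm(200)
d <- data.frame(X, Y, Z)

# Fit the linear Gaussian local densities f(x_i | pa_G(x_i)).
fit <- bn.fit(dag, d)
fit$Z  # prints the intercept beta_0, the coefficient on Y, and sigma

In this toy network, X and Z are dependent but become independent once Y is known, which is exactly the single conditional independence encoded by the chain X → Y → Z discussed above.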
Learning Bayesian Networks From Data
Learning a Bayesian network $\mathcal{B}$ from a data set $D = \{\mathbf{x}^1, \ldots, \mathbf{x}^N\}$ of $N$ observations of $\mathbf{X}$ involves two steps: (a) learning the DAG $\mathcal{G}$ and (b) learning $\theta$, the parameters of the local conditional distributions. There are two main approaches to learning $\mathcal{G}$ from $D$ (Koller and Friedman, 2009): (a) by testing for conditional independence among triplets of sets of variables (the constraint-based approach); and (b) by searching the space of DAGs in order to optimize a score such as penalized likelihood (the score-based approach). While seemingly very different, conditional independence tests and network scores are related statistical criteria (Scutari et al., 2019). For example, when considering whether to include the arc $Y \rightarrow X$ into a graph $\mathcal{G}$, the likelihood-ratio test of conditional independence of $X$ and $Y$ given $\mathbf{Pa}_{\mathcal{G}}(X)$ and the Bayesian information criterion (BIC) score (Schwarz, 1978) are both functions of

$$\log \frac{P(X \mid \mathbf{Pa}_{\mathcal{G}}(X), Y)}{P(X \mid \mathbf{Pa}_{\mathcal{G}}(X))}.$$

They differ in computing the threshold for determining independence: the former relies on the distribution of the statistic under the null model (i.e., conditional independence), whereas the latter is based on an approximation to the Bayes factor between the null and alternative models. Besides using different criteria, the constraint-based and score-based approaches also differ in model search, that is, in terms of the sets $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ for which they choose to test conditional independence. The score-based approaches tend to be more robust (Koller and Friedman, 2009), as they may reconsider previous steps in the search by removing or reversing previously added arcs. We thus followed a score-based approach in this paper.

FIGURE 1 | Histograms and density plots of the cells' relative (to pia and white matter) cortical depth. Depths close to 0 denote superficial cells. Vertical lines denote rough estimates of the boundaries of L4 in the two species, given by the cortical depth of the most superficial L4 cell that we observed among Allen Cell Type Database neurons.
Confidence in Network Structure
A difficulty in inferring the network structure is that the number of data instances is relatively small compared to the number of variables, especially for the mouse data set. This might result in many different high-scoring structures and thus reduces the confidence in any particular learned structure. In order to palliate this, we use the bootstrap-based (Efron, 1979) approach of Friedman et al. (1999) to filter out arcs that are likely to be spurious. In particular, we begin by taking $B$ samples from the empirical distribution and apply our learning algorithm to each sample to produce $B$ Bayesian networks. The confidence in the arc $X \rightarrow Y$, $p(X \rightarrow Y)$, is then estimated as the fraction of times that $X \rightarrow Y$ appears in the $B$ networks. We then consider all arcs with $p(X \rightarrow Y) < t$, where $t$ is some threshold, to be spurious, and thus blacklist them when learning the definitive network structure.
A reasonable threshold $t$ might be 0.5, so that we discard all arcs that we find more likely to be spurious than not. By experimenting with synthetic data, we found, indeed, that the confidence estimates of non-spurious arcs were never below 0.48. On the other hand, the confidence estimates for spurious arcs tended to be inflated, with a maximum of 0.86 and the third quartile around 0.5. We thus used $t = 0.7$, as it produced no or few false positives in our experiments while yielding reasonably few false negatives.
We found that the above procedure filtered out most of the possible arcs in each of the six networks, leaving few candidate arcs for the definitive structure learning. Note that we considered both X → Y and X ← Y in order to compute the confidence in a direct link between X and Y. This is because we found that arc directions were rarely established with confidence and we thus filter out arcs that have insufficient confidence in the directions combined.
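A sketch of this bootstrap filtering using bnlearn's built-in functions follows; the exact scripts are not given in the paper, so the calls below are our reconstruction of the procedure described here and in section 2.8 (d stands for one species' data frame):

library(bnlearn)

# Arc confidence from B = 2000 bootstrap networks learned with tabu search
# and the Gaussian BIC score. With cpdag = TRUE (the default), each network
# is converted to its CPDAG first, so an arc's strength counts X -> Y and
# X <- Y combined, as in the text above.
arcs <- boot.strength(d, R = 2000, algorithm = "tabu",
                      algorithm.args = list(score = "bic-g", tabu = 30))

# Blacklist arcs whose confidence falls below the t = 0.7 threshold.
weak <- arcs[arcs$strength < 0.7, c("from", "to")]

# Learn the definitive structure, excluding the blacklisted arcs.
dag <- tabu(d, blacklist = weak, score = "bic-g", tabu = 30, max.tabu = 30)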
Marginal and Partial Correlation Coefficients
For Gaussian variables, the partial correlation coefficient $\rho_{XY \mid \mathbf{Z}}$ of $X$ and $Y$ given all other variables $\mathbf{Z} = \mathbf{X} \setminus \{X, Y\}$ equals the correlation between the residuals $R_X = X - f_X(\mathbf{Z})$ and $R_Y = Y - f_Y(\mathbf{Z})$, where $f_X(\mathbf{Z})$ is a linear regression of $X$ onto $\mathbf{Z}$, and likewise for $f_Y(\mathbf{Z})$. The $\rho_{XY \mid \mathbf{Z}}$ can be computed directly from the inverse of the joint covariance matrix $\Sigma$,

$$\rho_{XY \mid \mathbf{Z}} = -\frac{\Omega_{XY}}{\sqrt{\Omega_{XX} \Omega_{YY}}}, \quad \text{where } \Omega = \Sigma^{-1}.$$

By estimating $\Omega$ from data, we estimate pairwise conditional independencies, since $\rho_{XY \mid \mathbf{Z}} = 0$ (and thus $\Omega_{XY} = 0$) if and only if $X$ and $Y$ are independent conditional on $\mathbf{Z}$. One way to estimate $\Omega$ is by learning a Bayesian network from the data. Namely, for standardized variables $\mathbf{X}$,

$$\Omega = (I - B) S^{-1} (I - B)^{T}$$

(Aragam and Zhou, 2015), where $B$ is the matrix containing the network's parameters with each $\boldsymbol{\beta}_i$ in one column, $B = [\boldsymbol{\beta}_1 \mid \cdots \mid \boldsymbol{\beta}_n]$, and $S$ is the diagonal matrix containing the variances of the local conditional distributions, $S_{ii} = \sigma_i^2$. The estimate is then $\hat{\Omega}_B = (I - \hat{B}) \hat{S}^{-1} (I - \hat{B})^{T}$, where the hat denotes the empirical estimate. Note that the Bayesian network provides an estimate of $\Omega$ even when the empirical correlation matrix $\hat{\Sigma}$ is not invertible (e.g., when $n > N$).
The heavy regularization of Bayesian networks with bootstrap blacklisting shrinks many marginal correlation coefficients $\rho$ in the correlation matrix associated with the Bayesian network, $\hat{\Sigma}_B = \hat{\Omega}_B^{-1}$, to 0. We thus report correlation coefficients derived from the empirical $\hat{\Sigma}$, rather than those derived from $\hat{\Sigma}_B$. Note that marginal correlations are easily seen on a correlation network, an undirected graph that has an edge between $X$ and $Y$ if the absolute value of their correlation is above some threshold.
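The precision-matrix route to partial correlations is short in practice; a minimal sketch (ours, for illustration), assuming a hypothetical data frame d of Gaussian variables with an invertible empirical covariance:

# Matrix of partial correlations rho_{XY|Z} from Omega = Sigma^{-1}.
partial.cor <- function(d) {
  omega <- solve(cov(d))                                 # precision matrix
  pcor <- -omega / sqrt(outer(diag(omega), diag(omega)))
  diag(pcor) <- 1                                        # by convention
  pcor
}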
Comparing Bayesian Networks
We used the Hellinger distance (Pardo, 2018) in order to quantify the difference between the Bayesian network structures of the two species. This is a bounded metric for probability distributions, with a value of 0 for identical distributions and a maximum distance of 1. As such, it also depends on the parameters of the Bayesian network; for example, it can be high for two normal distributions with identical structures but very different means, meaning that we could obtain a large distance simply due to inter-species differences in the variables' magnitudes (Tables 1, 2). We thus isolated the effect of inter-species differences in the means by re-fitting the parameters of one of the distributions before the comparison. Namely, we re-fit the parameters of the human Bayesian network on the mouse data before comparing it to the mouse Bayesian network; likewise, we re-fit the parameters of the mouse Bayesian network on the human data before comparing it with the original human Bayesian network. This means that we report two Hellinger distance values, one from each (human and mouse) data set. Note that the means of the compared distributions are always the same, as they are estimated from the same data set.
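For reference (the paper does not state it), the Hellinger distance between two multivariate Gaussians $P = \mathcal{N}(\mu_1, \Sigma_1)$ and $Q = \mathcal{N}(\mu_2, \Sigma_2)$, such as those encoded by two Gaussian Bayesian networks, has the closed form

$$H^2(P, Q) = 1 - \frac{\det(\Sigma_1)^{1/4} \det(\Sigma_2)^{1/4}}{\det\left(\frac{\Sigma_1 + \Sigma_2}{2}\right)^{1/2}} \exp\left(-\frac{1}{8} (\mu_1 - \mu_2)^{T} \left(\frac{\Sigma_1 + \Sigma_2}{2}\right)^{-1} (\mu_1 - \mu_2)\right),$$

which makes clear why re-fitting the means on a common data set, as described above, removes the contribution of inter-species differences in the variables' magnitudes: with equal means, the exponential factor vanishes and only the covariance structures contribute.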
Settings
We used B = 2,000 bootstrap samples for estimating arc confidence and blacklisted all arcs with an estimated confidence below 0.7. We then learned network structures by using the tabu algorithm (Glover and Laguna, 2013), implemented in the bnlearn R package (Scutari, 2010; R Core Team, 2015), to optimize the BIC score. The tabu algorithm is a local search that efficiently allows for score-degrading operators by avoiding those that undo the effect of recently applied operators; we used a tabu list of size 30 and allowed for up to 30 iterations without improving the network score.
RESULTS
We first look at electrophysiological (section 3.1) and morphological features (section 3.2) separately, and then at joint Bayesian networks and correlation networks for both electrophysiological and morphological features (section 3.3).
Electrophysiology
All variables except for threshold, up down ratio, and fall time differed significantly between the species (Table 1).
Human neurons had a lower resistance, a higher time constant (tau), resting potential, peak action potential voltage, amplitude, and latency, and a longer action potential rise time.
The human and mouse BNs uncovered relevant correlations and independencies among the variables. In the human BN (Figure 2A), the Markov blanket of rel depth consisted of threshold and up down ratio, while it was marginally correlated with all variables except latency, fall time, and rise time. In particular, rel depth had a strong positive marginal (0.59) and partial (0.53) correlation with up down ratio and a strong negative one with threshold (−0.40). This is contrary to the results of Deitcher et al. (2017), who found that human electrophysiological features such as input resistance and membrane time constant were independent of depth in human L2/3 pyramidal neurons of the temporal cortex; on the other hand, it is partially consistent with the results of Kalmbach et al. (2018) (see section 4). Variables fall time, rise time, and latency were each uncorrelated with all other variables; f-i curve was independent of all other variables given resistance, as were tau given threshold and amplitude given peak. All other variables had Markov blankets of size two or larger, the largest being that of threshold, with five variables. The strongest partial correlations were those between peak and amplitude (0.78) and between resistance and f-i curve (0.64). See Figure 2A for all non-zero partial correlation coefficients.
In the mouse BN (Figure 2B), rel depth was correlated with up down ratio, peak, amplitude, and fall time, while its Markov blanket contained only up down ratio. Contrary to the human BN, its marginal (−0.84) and partial (−0.54) correlation with up down ratio was strongly negative. rise time was uncorrelated with other variables, and resistance and threshold were independent of all other variables given f-i curve. The remaining variables had Markov blankets of size two or larger, with the largest being that of f-i curve with four variables. The strongest partial correlations were those between peak and amplitude (0.94), and latency and rest (−0.70). See Figure 2B for all non-zero partial correlation coefficients.
Overall, the human and mouse BNs were strikingly different, with only two common arcs in their CPDAGs (resistance-f-i curve and peak-up down ratio). No variable had an identical Markov blanket in the two graphs, and the Hellinger distances on the human and mouse data sets were 0.44 and 0.61, respectively. While the magnitudes of threshold, fall time, and up down ratio did not differ significantly between the species (Table 1), the BNs show that their correlations with other variables did. A rare common feature of the two BNs was the strong positive partial correlation between amplitude and peak.
Morphology
All variables, except for tortuosity, differed significantly between the two species (Table 2). Human dendrites were larger, had longer and thicker branches and, especially in apical dendrites, sharper bifurcation angles. Deitcher et al. (2017), on the contrary, report similar branch diameter in human and mouse neurons. The average human apical arbor was 3.6 times longer than the mouse one, while the average human basal arbor was 2.7 times longer. This is more pronounced than the 3.2-fold and 2.1-fold differences that Mohan et al. (2015) observed for apical and basal dendrites, respectively, of human and mouse temporal cortex pyramidal neurons.

FIGURE 2 | Completed partially directed graphs (CPDAGs) of the Bayesian networks for electrophysiological features of human (A) and mouse (B) cells. Arc width is proportional to the absolute value of the partial correlation (shown next to the arc) between the nodes. Arcs corresponding to negative partial correlations are plotted with dashed lines. Proximity between two nodes is unrelated to the magnitude of partial correlation.
In the human BN (Figure 3A), rel depth had only a.height in its Markov blanket, while it was also correlated with a.distance, a.length, and length but independent of the remaining variables, including a.totallength (marginal correlation coefficient ρ = 0.34) and totallength (ρ = 0.03). Thus, while the height of the apical arbor increased significantly with depth from the pia, total arbor length did not. These results are contrary to those of Deitcher et al. (2017), who found strong correlations between depth from the pia and a number of apical and basal variables, including basal dendrites' total length (ρ = 0.50) and apical arbor width (0.48). We found that most basal dendrites' variables were positively correlated with the corresponding apical variable, with the exceptions being the bifurcation angles and the distance from soma. The diameter was particularly consistent, with ρ = 0.94 between diameter and a.diameter.
In the mouse BN (Figure 3B), the Markov blanket of rel depth contained totallength, a.totallength, and a.height, while it was marginally correlated also with diameter, a.height, a.width, a.distance, and a.diameter. This is contrary to the results that Deitcher et al. (2017) observed on temporal cortex mouse cells, as they found no significant change in morphological features with increasing depth. While a.height increased with rel depth, a.totallength decreased strongly with rel depth, both marginally (ρ = −0.83) and conditionally on all other variables (ρ_XY|Z = −0.66). We observed the same, yet slightly weaker, effect for basal dendrites (ρ = −0.73 and ρ_XY|Z = −0.40 with totallength). Thus, deeper mouse cells had smaller apical and basal arbors and, perhaps surprisingly, this was in spite of them having higher apical arbors. As in human cells, basal variables were often correlated with the corresponding apical variables. Unlike in the human, cells with larger basal dendrites tended to have thicker branches (ρ = 0.43), while a.angle had a negative partial correlation with a.diameter.
Overall, the human and mouse BNs were strikingly different, with only one common arc in their CPDAGs (tortuosity-a.tortuosity). Only a.tortuosity had an identical Markov blanket in the two graphs. The Hellinger distances were larger than for electrophysiological variables, with a value of 0.87 on the human data set and 0.75 on the mouse data set.
Electrophysiology and Morphology
The correlation networks (Figure 4) and the Bayesian networks (Figure 5) show many correlations between electrophysiological and morphological variables.
In human cells, all electrophysiological variables except for latency, fall time, and rest were marginally correlated with at least one morphological variable (Figures 4A, 5A). Besides features related to arbor size, electrophysiological variables were also correlated with branch-level features such as the mean bifurcation angle. While up down ratio was strongly correlated with features of apical arbor size (e.g., ρ = 0.53 with a.totallength), these correlations were explained away by the cortical rel depth; hence, in the BN, up down ratio was independent of all morphological variables conditional on its Markov blanket. Interestingly, peak decreased strongly (ρ = −0.52) with a.tortuosity and this effect persisted after conditioning on the remaining variables (ρ_XY|Z = −0.17). Input resistance was negatively correlated with basal and apical arbor size (e.g., ρ = −0.50 with a.totallength and ρ = −0.44 with totallength). While it is already known that resistance decreases with dendritic size (Gilman et al., 2017), we found that it decreased additionally (ρ_XY|Z = −0.30) with basal arbor width after accounting for totallength. As in Figure 2A, rise time was independent of all electrophysiological variables; it was, however, correlated with morphological ones. In particular, rise time decreased with a.totallength (ρ = −0.48, ρ_XY|Z = −0.49) and increased with basal bifurcation angle (ρ = 0.37, ρ_XY|Z = −0.32). The Markov blanket of rel depth contains up down ratio and threshold, as in Figure 2A, as well as a.height, as in Figure 3A. Since rel depth is not independent of the electrophysiological variables given the morphological ones, Figure 5A shows that the correlation of rel depth with the electrophysiological variables cannot be explained as an indirect effect of the differences in morphology with respect to cortical depth; instead, it corresponds to an effect of cortical depth on electrophysiology that is not explained by our morphological variables.

FIGURE 3 | Completed partially directed graphs (CPDAGs) of the Bayesian networks for morphological features of human (A) and mouse (B) cells. Basal nodes are in green and apical nodes are in dark green. Arc width is proportional to the absolute value of the partial correlation (shown next to the arc) between the nodes. Arcs corresponding to negative partial correlations are plotted with dashed lines. Proximity between two nodes is unrelated to the magnitude of partial correlation.
In mouse cells, there were also many marginal correlations between electrophysiological and morphological variables, with 16 arcs between electrophysiological and morphological features in the correlation network (Figure 4B) and 3 in the Bayesian network (Figure 5B). Overall, electrophysiological variables were correlated with features of arbor size but not with branch-level features such as bifurcation angles and tortuosity. In particular, the strongest marginal correlations were those between latency and a.totallength (ρ = 0.64), a.width and peak (ρ = 0.61), length and resistance (ρ = 0.56), and a.width and amplitude (ρ = 0.58). While many electrophysiological variables strongly decreased with rel depth (e.g., ρ = −0.72 with peak), these variables were independent of rel depth conditional on up down ratio. As in the human BN, the Markov blanket of rel depth included a.height and up down ratio. Thus, as in human cells, the effect of cortical depth on the electrophysiology was not explained by depth-related differences in morphology. While resistance did decrease with apical and basal arbor size, the effect was somewhat weaker than in human cells (ρ = −0.48 with a.totallength).
Overall, the two BNs were different, with only four common arcs in their CPDAGs. No variable had an identical Markov blanket in the two Bayesian networks. The Hellinger distances were 0.91 and 0.85 on the human and mouse data sets, respectively.

FIGURE 4 | Showing only arcs between morphological and electrophysiological variables as well as arcs to/from rel depth, and with an absolute correlation above 0.4 for human cells and 0.5 for mouse cells. These threshold values were well above the 0.05 significance level and thus correspond to strong correlations. Morphological nodes are shown in green, with apical nodes in dark green; electrophysiological nodes in orange.
FIGURE 5 | Completed partially directed graphs (CPDAGs) of the Bayesian networks for electrophysiological and morphological features of human (A) and mouse (B) cells. Morphological nodes and the arcs between them shown in green, with apical nodes in dark green; electrophysiological nodes and the arcs between them in orange. Arc width is proportional to the absolute value of the partial correlation (shown next to the arc) between the nodes. Arcs corresponding to negative partial correlations plotted with dashed lines. Proximity between two nodes is unrelated to the magnitude of partial correlation.
Dependence on Cortical Depth
We found that the negative correlation of rel depth and a.totallength in mouse neurons can be explained by the difference in length between cells located below a rel depth of 0.28 and those above it, as the deep cells had notably shorter apical arbors. In particular, a.totallength actually increased slightly with rel depth in both subgroups (ρ = 0.16 among deep cells and ρ = 0.17 among the non-deep cells, Figure 6), while the combined correlation was negative (ρ = −0.83). Likewise, rel depth was weakly correlated with a.width, diameter, resistance, threshold, and f-i curve within the subgroups yet strongly correlated overall (Figure 6). Thus, rather than varying smoothly with cortical depth, the observed dependences were fully or partially explained by the difference between deep and non-deep cells. On the contrary, rel depth was negatively correlated with latency in both subgroups yet not globally (Figure 6). For length as well as most action potential variables, the overall correlation was slightly stronger than among non-deep cells and notably stronger than among deep cells. For morphological variables unrelated to arbor size (e.g., angle and tortuosity), the correlation was rather consistent between the subgroups as well as globally. The correlation coefficients were largely similar between deep and non-deep cells, with a mean absolute difference of 0.26 and a maximum of 0.47 among electrophysiological variables (latency) and 0.65 among morphological variables (a.angle). Note that the subgroup estimates have high variance, as there were only 10 deep and 11 non-deep cells. Kalmbach et al. (2018) found that the cross-species differences in rest and resistance were depth dependent, with more difference among the most superficial L2 cells and the deepest L3 cells and less in the middle of L2/3. We did not formally test for such an effect, as there were too few mouse cells to bin them into groups according to cortical depth. However, visual inspection did not suggest such a dependence for the electrophysiological variables; instead, for most variables we observed a consistent difference across L2/3 (one example is rest, Figure 7). An exception is up down ratio, which indeed differed only in the superficial and deep sections (Figure 7). In particular, up down ratio was higher among superficial mouse cells than among superficial human cells; it then decreased with rel depth for mouse cells yet increased for human ones, and thus did not differ between the two species toward the middle of L2/3 and was higher for human cells in the deep part of L2/3. Note that the means of up down ratio in the two species are similar and thus the t-test found no significant difference (Table 1).

FIGURE 6 | Electrophysiological (left) and morphological (right) variables' correlation with somatic cortical depth for all mouse cells (red), those located below a rel depth of 0.28 (deep) and those above it (superficial). Variables arranged by increasing overall correlation with rel depth, with horizontal lines at −0.5 and 0.5 separating the strong correlations that are shown in Figure 4.
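A minimal R sketch of this subgroup comparison, assuming a hypothetical data frame `mouse` with columns `rel.depth` and `a.totallength`:

```r
# Cells physically below a relative depth of 0.28 form the "deep" subgroup.
deep <- mouse$rel.depth > 0.28

cor(mouse$rel.depth, mouse$a.totallength)                # overall (strongly negative)
cor(mouse$rel.depth[deep], mouse$a.totallength[deep])    # deep cells only
cor(mouse$rel.depth[!deep], mouse$a.totallength[!deep])  # superficial cells only
```

The overall coefficient can be strongly negative even when both within-subgroup coefficients are slightly positive, the Simpson's-paradox-like pattern described above.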
DISCUSSION
We found strong differences between the electrophysiology and morphology of human and mouse pyramidal neurons, both in terms of the variables' magnitudes and in terms of correlations between the variables, as evidenced by the differences in their Bayesian networks. In particular, the Hellinger distances ranged from 0.44 on electrophysiological variables to 0.91 on combined morphological and electrophysiological variables, while the maximal distance between two distributions is 1. We note that we compared Gaussian distributions with identical means.
We found strong correlations between electrophysiological and both apical and basal morphological variables in both species. In human cells, electrophysiological variables were correlated not only with morphological variables that are directly related to dendritic arbor size or diameter, but also with branch-level variables such as mean bifurcation angle and mean tortuosity. We also found a strong effect of cortical depth on both morphology and electrophysiology in both species and, for some variables, an opposite effect of cortical depth in the two species. In particular, the upstroke/downstroke ratio (up down ratio) increased with normalized cortical depth in human cells (ρ = 0.59) yet strongly decreased in mouse cells (ρ = −0.84). Likewise, while the length of the basal and apical arbors increased or stayed constant with cortical depth in human cells, it decreased strongly in mouse cells (ρ = −0.83 with a.totallength and ρ = −0.74 with totallength); notably, this was in spite of the apical height increasing with depth in mouse cells (ρ = 0.58). While Kalmbach et al. (2018) reported an effect of cortical depth on rest and resistance, we also report it for action potential properties such as up down ratio. We also showed that the correlation of electrophysiological features with cortical depth could not be explained in terms of the morphological variables. Overall, the effect of cortical depth differed between the two species, perhaps reflecting differences in the laminar organization of layers L2 and L3 between the two species. Our results suggest that, except regarding up down ratio, the cross-species differences are not depth dependent and that they hold across the depth of L2/3.
Our results regarding the effect of cortical depth are largely contrary to those by Deitcher et al. (2017), who found that electrophysiological features such as input resistance and membrane time constant were independent of depth in the human L2/3 pyramidal neurons of the temporal cortex (they did not assess the effect of cortical depth on electrophysiology in the mouse). Regarding morphology, they found that the size of the dendritic arbor increases with cortical depth in human pyramidal neurons but found no effect in mouse pyramidal neurons. Our results are, on the other hand, partially consistent with the results of Kalmbach et al. (2018). They found a positive correlation between rest and the rel depth in both species and a positive correlation between resistance and rel depth among mouse cells yet a negative one among human cells, albeit they could not confirm it in subsequent experiments, with a fixed membrane potential, for mouse cells. We confirmed the positive correlation with rest (ρ = 0.37 in both species), albeit weaker and only significant for the human cells, as well as the significant positive correlation with resistance in mouse cells (ρ = 0.49), yet only found a nonsignificant positive correlation in human cells (ρ = 0.13).
A possible explanation for our differences with the results by Deitcher et al. (2017) is that we had more electrophysiologically characterized human cells (42 vs. 25) and more morphologically characterized mouse cells (22 vs. 14), thus probably covering a wider range of somatic cortical depths and including the most superficial and deepest cells (Figure 1); indeed, this is the explanation proposed by Kalmbach et al. (2018) regarding a similar discrepancy with Deitcher et al. (2017) in terms of the cortical depth dependence of electrophysiology. Another difference in mouse cells is that we studied the visual cortex while Deitcher et al. (2017) and Kalmbach et al. (2018) studied the temporal cortex. We note also that the patch clamp protocols were not identical in the three studies; however, this would not explain the differences with Deitcher et al. (2017) in the observed effect of cortical depth on morphology.
Our Bayesian networks are representative as long as the two samples are homogeneous, in the sense that the dependencies among variables are consistent across the cells of each sample. This may not be the case for mouse cells; for example, the correlation of latency and a.angle with rel depth varied between deep and non-deep L2/3 neurons, although that might be due to chance given the small sample sizes. Nonetheless, most deep cells indeed had distinctly smaller arbors and it is possible that at least some of them are star pyramidal neurons (Staiger et al., 2004); some of these L4 cells are also found in deep L2/3 in the Allen Cell Type Database. This depth-related difference in size could also be related to the distinction between profuse-tufted and slim-tufted neurons: Deitcher et al. (2017) noted that slim-tufted neurons tend to be located deeper in L2/3, although the separation was not as clear-cut as in our case. Nonetheless, when looking for two clusters with k-means and hierarchical clustering, we obtained nothing similar to the distinction between deep and non-deep mouse cells.
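The two-cluster search can be reproduced along these lines (a sketch, assuming the mouse morphological features in a hypothetical data frame `morph` and the `deep` indicator from the earlier sketch):

```r
set.seed(1)
feat <- scale(morph)                           # standardize features first

km <- kmeans(feat, centers = 2, nstart = 25)   # k-means, two clusters
hc <- cutree(hclust(dist(feat)), k = 2)        # hierarchical clustering, cut at two

table(km$cluster, deep)                        # compare clusters with the deep/non-deep split
table(hc, deep)
```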
Provided that our assumption of a multivariate Gaussian distribution of the variables holds, the learned Bayesian networks can be useful beyond identifying the independencies and correlations between variables. For example, they allow for probabilistic reasoning regarding the morphology and electrophysiology of pyramidal neurons: we could set the morphological variables to particular values and study the conditional distribution of electrophysiological variables. One might also use them for multi-output regression (Borchani et al., 2015), for example to predict the values of electrophysiological variables from those of the morphological variables.
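For instance, with the fitted network `fit` from the earlier sketch, one could fix hypothetical morphological values and examine the induced distribution of an electrophysiological variable via likelihood weighting:

```r
library(bnlearn)

# Weighted samples of resistance given fixed (illustrative) morphological values.
sim <- cpdist(fit, nodes = "resistance",
              evidence = list(a.totallength = 5000, totallength = 3000),
              method = "lw")

# Likelihood-weighted posterior mean of the queried variable.
weighted.mean(sim$resistance, attr(sim, "weights"))
```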
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. These data can be found in the Allen Cell Type Database.
AUTHOR CONTRIBUTIONS
BM designed and conducted the analysis and wrote the manuscript. All authors substantially reviewed the manuscript.
FUNDING
This work has been partially supported by the Spanish Ministry of Science and Innovation through the PID2019-109247GB-I00 project and by the BBVA Foundation (2019 Call) through the project "Score-based non-stationary temporal Bayesian networks. Applications in climate and neuroscience." This work has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2) and the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).
Cerulenin inhibits unsaturated fatty acid synthesis in Bacillus subtilis by modifying the input signal of the DesK thermosensor
Bacillus subtilis responds to a sudden decrease in temperature by transiently inducing the expression of the des gene encoding a lipid desaturase, Δ5-Des, which introduces a double bond into the acyl chain of preexisting membrane phospholipids. This Δ5-Des-mediated membrane remodeling is controlled by the cold-sensor DesK. After cooling, DesK activates the response regulator DesR, which induces transcription of des. We show that inhibition of fatty acid synthesis by the addition of cerulenin, a potent and specific inhibitor of the type II fatty acid synthase, results in increased levels of short-chain fatty acids (FA) in membrane phospholipids that lead to inhibition of the transmembrane-input thermal control of DesK. Furthermore, reduction of phospholipid synthesis by conditional inactivation of the PlsC acyltransferase causes significantly elevated incorporation of long-chain FA and constitutive upregulation of the des gene. Thus, we provide in vivo evidence that the thickness of the hydrophobic core of the lipid bilayer serves as one of the stimuli sensed by the membrane-spanning region of DesK.
Introduction
When bacteria are exposed to temperatures below those of their normal growth conditions, the lipids of their membrane become rigidified, leading to suboptimal functioning of cellular activities (Mansilla and de Mendoza 2005; Mansilla et al. 2008). These organisms can acclimate to such new conditions by decreasing the transition temperature of their membrane lipids, that is, the temperature at which membrane lipid bilayers undergo a reversible change of state from a liquid-crystalline (disordered) to a gel (ordered) array of the fatty acyl chains.
In most bacteria, the role of introducing acyl chain disorder is fulfilled by unsaturated fatty acids (UFA), which have much lower transition temperatures than saturated fatty acids (Cronan and Gelmann 1973). Desaturation of the acyl chains of membrane phospholipids results in an increase in the membrane lipid bilayer fluidity, with restoration of normal cell function at the lower temperature.
Cold shock imposes severe constraints on the biophysical properties of the Bacillus subtilis cytoplasmic membrane. In laboratory settings, a sudden temperature downshift, from 37 to 25 or 20°C, is used to trigger in B. subtilis a transient transcriptional induction of the des gene coding for its sole lipid desaturase, Δ5-Des. This enzyme introduces double bonds in Δ5 positions of the acyl chain of preexisting membrane phospholipids (Aguilar et al. 1998; Altabe et al. 2003). This short-term membrane adaptation requires a canonical two-component regulatory system comprising the histidine kinase DesK and the response regulator DesR (Aguilar et al. 2001) (see Fig. S1). Upon cooling, DesK phosphorylates DesR, which stimulates the expression of Δ5-Des (Cybulski et al. 2004). By introducing a double bond into saturated lipids, Δ5-Des induces a kink in the fatty acids (FA) that increases membrane disorder, offsetting the fluidity decrease that otherwise accompanies cooling. This DesK-dependent desaturation of membrane phospholipids enhances survival of B. subtilis at low temperatures (Weber et al. 2001). Although the structure of full-length DesK has not yet been solved, structural studies of the catalytic core of DesK highlight the plasticity of the central Dimerization and Histidine phosphotransfer domain and suggest an important role of the transmembrane (TM) sensor domain in catalysis regulation, either by modifying the mobility of the ATP-binding domains for autokinase activity or by modulating binding of DesR to sustain the phosphotransferase and phosphatase activities (Albanesi et al. 2009). A model in which the TM domain of DesK promotes these structural changes through conformational signals transmitted by the membrane-connecting two-helical coiled-coil was postulated (Albanesi et al. 2009).
DesK is a multipass TM sensor and its activation upon a decrease in the ambient temperature appears intimately related to an increase in the order of the acyl chains of membrane phospholipids (Cybulski et al. 2002). However, the mechanism that allows DesK to discriminate the lipid environment to promote membrane remodeling upon a drop in environmental temperature remains only fragmentarily understood. Reconstitution of full-length DesK into proteoliposomes showed that, whereas the structure of the lipid head group does not affect thermosensing, the length of the acyl chains, which determines the thickness of the hydrophobic core of the lipid bilayer, exerts a profound regulatory effect on kinase domain activation at low temperatures (Martín and de Mendoza 2013). Thus, a likely hypothesis is that at low temperature the membrane becomes thicker due to an increase in lipid order, and this change in bilayer thickness could be sensed by the TM surface of DesK, favoring its autokinase activity. However, this hypothesis is challenged by the fact that the reconstitution experiments were performed in phosphatidylcholine (PC) vesicles containing straight-chain monounsaturated FA of different chain length (Martín and de Mendoza 2013). Nevertheless, PC is absent in B. subtilis, which instead contains phosphatidylethanolamine and phosphatidylglycerol with acyl chains mainly composed of branched-chain FA.
In this paper, we have investigated the mechanism by which the antibiotic cerulenin, a specific inhibitor of FA synthesis (Fig. 1), abolishes the cold-induced UFA production. We found that the key change lies in the production of an excess of short-chain FA, which are incorporated into membrane phospholipids. This is sufficient to alter the lipid-protein interaction required for activation of DesK at low temperature. Furthermore, artificially increasing the synthesis and incorporation of long-chain FA into B. subtilis membranes results in constitutive expression of des at high temperature. Our results strongly suggest that the thickness of the bilayer is an important parameter regulating the signaling state of DesK associated with its native plasma membrane. These findings accord with previous in vitro studies aimed at understanding how the compositional and functional diversity of the surrounding membrane modulates DesK sensor function.
Bacterial strains and growth conditions
Bacterial strains and plasmids used in the present study are listed in Table 1. Escherichia coli and B. subtilis strains were routinely grown in Luria Bertani (LB) broth at 37°C (Sambrook et al. 1989). Spizizen salts (Spizizen 1958), supplemented with 0.5% glucose, 0.01% each tryptophan and phenylalanine, and trace elements (Harwood and Cuttings 1990), were used as the minimal medium for B. subtilis. This medium was designated MM. Antibiotics were added to media at the following concentrations: erythromycin (Erm), 0.5 µg mL−1; lincomycin (Lm), 12.5 µg mL−1; chloramphenicol (Cm), 5 µg mL−1; kanamycin (Km), 5 µg mL−1; ampicillin (Amp), 100 µg mL−1; spectinomycin (Sp), 100 µg mL−1. For the experiments involving desKC and plsC expression under the control of the inducible promoters PxylA and Pspac, 0.01% xylose and 1 mmol/L isopropyl β-D-1-thiogalactopyranoside (IPTG) were added, respectively.
Genetic techniques
Escherichia coli competent cells were transformed with supercoiled plasmid DNA by the calcium chloride procedure (Sambrook et al. 1989). Transformation of B. subtilis was carried out by the method of Dubnau and Davidoff-Abelson (1971). The amy− phenotype was assayed with colonies grown during 48 h on LB starch plates, by flooding the plates with 1% I2-KI solution (Sekiguchi et al. 1975). amy+ colonies produced a clear halo, while amy− colonies gave no halo.

Figure 1. Pathway of lipid synthesis in Bacillus subtilis. Elongation of fatty acids (FA) is catalyzed by the type II fatty acid synthase (FASII) via a repeated cycle of condensation, reduction, dehydration, and a second reduction of carbon-carbon bonds, giving rise to acyl-acyl carrier protein (acyl-ACP) with two additional methylene groups at the end of each cycle. Generation of malonyl-CoA by acetyl-CoA carboxylase (ACC) is required to start the cycle of chain elongation by the complex. Phospholipid synthesis (shaded) initiates by the action of PlsX, which converts acyl-ACPs to acyl-PO4. Then PlsY transfers the acyl moiety to the 1 position of glycerol-3-P (G3P) to form acyl-G3P. Acylation of the 2 position to form phosphatidic acid (PtdOH) is catalyzed by PlsC. The fungal toxin cerulenin inhibits the elongation condensing enzyme FabF, precluding not only FA but also phospholipid synthesis. The last step of the elongation cycle, catalyzed by enoyl-ACP reductases (FabI and FabL), is inhibited by triclosan. Expression of the genes coding for enzymes surrounded by ellipses is repressed by FapR, whose activity is, in turn, antagonized by malonyl-CoA.

Table 1 (plasmids; recovered portion):
pCm::Spc (Steinmetz and Richter 1994)
pDH88: integrative plasmid containing the IPTG-inducible Pspac promoter; Cmr (Henner 1990)
pMUTIN4: integrative plasmid containing the IPTG-inducible Pspac-oid promoter; Ermr Lmr (Vagner et al. 1998)
pAR11: contains Pdes cloned into the EcoRI-BamHI sites of pJM116 (Aguilar et al. 2001)
pCM9: PxylA-desKC cloned into pHPKS (Albanesi et al. 2004)
pLUP30: Pspac-oid of pMUTIN4 cloned into the EcoRI-HindIII sites of pDH88 (this study)
pLUP32: 5′ end of the plsC gene cloned into the HindIII-SphI sites of pLUP30 (this study)
pLUP124: PxylA-desK cloned into pHPKS (this study)
Cmr, Spr, Kmr, Ermr, Lmr, and Ampr denote resistance to chloramphenicol, spectinomycin, kanamycin, erythromycin, lincomycin, and ampicillin, respectively.
Plasmid and strains construction
In all cases, DNA fragments were obtained by PCR using the oligonucleotides described in the text (restriction sites underlined). Chromosomal DNA from strain JH642 was used as the template. The PCR products of the expected sizes were cloned into pCR-Blunt II-Topo (Promega, Madison, WI) and transformed into E. coli DH5α (Sambrook et al. 1989). Plasmid DNA was prepared using the Wizard DNA purification system (Promega Life Science) and sequenced to corroborate the identity and correct sequence of the cloned fragments.
To generate an integrative vector containing a tightly regulated IPTG-inducible Pspac-oid promoter, but without a transcriptional fusion to lacZ, we decided to replace the Pspac promoter of plasmid pDH88 (Henner 1990) with the Pspac-oid promoter of pMUTIN4 (Vagner et al. 1998). A 603-bp DNA fragment containing the Pspac-oid promoter from pMUTIN4, generated by PCR using primers TerPspacOid Up (CGTGAGGAATTCAATAAAACGAAAGGCTCAGTCGAAAGA) and TerPspacOid Lw (CTGGGATCCGCATGCTGTACATCAAGCTTAATTGTGAG), was digested with EcoRI and HindIII and cloned into the integrative plasmid pDH88, previously digested with the same enzymes to release its Pspac promoter, yielding plasmid pLUP30.
The plsC isogenic conditional mutant was constructed as follows: a 489-bp DNA fragment, corresponding to the ribosome binding site and a 5′ portion of the plsC gene, was obtained by PCR amplification using the oligonucleotides PyhdOHindIII (AATCAAAGCTTACGACAAAGGAAGTGCGAT) and YhdOSphI (TTTTTTGCATGCTTCTTTTCCGCTTGAA). The fragment was cloned into the HindIII and SphI sites of vector pLUP30, which allows expression of the plsC gene under the control of Pspac-oid. The resulting plasmid was named pLUP32. This construct was then integrated by a single-crossover event at the plsC locus of B. subtilis JH642, yielding strain BLUP34. This approach results in the conditional inactivation of the plsC gene, whose expression can be controlled by Pspac-oid. This strain was checked by PCR to ensure that the plasmid had integrated at the correct site.
To allow the introduction of the reporter fusion contained in the plasmid pJM116 (Cmr) (Perego 1993), the chloramphenicol cassette present in the plsC locus of BLUP34 was exchanged for a spectinomycin resistance cassette through transformation and homologous recombination using the plasmid pCm::Spc (Steinmetz and Richter 1994). The resulting strain was named BLUP102. To introduce the transcriptional fusion of lacZ to the promoter region of the desaturase gene (Pdes-lacZ) into the plsC conditional mutant, the plasmid pAR11 (Aguilar et al. 2001) was linearized with ScaI and introduced by a double crossover event at the amyE locus of the BLUP102 chromosome, yielding strain BLUP103.
To ectopically express the Pdes-lacZ fusion in a B. subtilis JH642 cerulenin-resistant strain, plasmid pAR11 (Aguilar et al. 2001) was linearized with ScaI and introduced by a double crossover event at the amyE locus of strain GS77 (Schujman et al. 1998) giving rise to strain BLUP87.
To construct plasmid pLUP124, a 1227 bp DNA fragment containing desK was obtained by PCR amplification using the oligonucleotides DesKB33-Up (AGTAA CATGGATCCCAGAAAATGAGGTAAGATC) and desKP-Dw (GCTGATCTTCTGCAGTAAATATACTAATC). The fragment was cloned into the BamHI and PstI sites of vector pARD7 (pHPKS replicative vector containing the Pxyl promoter; M. C. Mansilla, pers. comm.).
β-galactosidase assays
Bacillus subtilis cells harboring a Pdes-lacZ chromosomal fusion were grown in MM at 37°C to an OD525 of 0.35, then were split, and half of the culture was treated with either 2.5 µg mL−1 of cerulenin (MIC 5 µg mL−1; Schujman et al. 2001), 0.4 µg mL−1 of triclosan (MIC 2 µg mL−1; Heath et al. 2000), or 0.01% xylose, as indicated in each experiment. Cultures were then transferred to 25°C or 37°C, as indicated. B. subtilis BLUP103 cells were grown overnight in MM at 37°C supplemented with 0.2 mmol/L IPTG to allow the expression of plsC. Cells were washed twice, resuspended in MM to OD525 values of 0.03 in the presence or absence of 1 mmol/L IPTG, and incubated at 37°C to an OD525 of 0.35; cultures were then split and incubated at 25 or 37°C. After each treatment, samples were taken at 1-h intervals and assayed for β-galactosidase activity as described previously (Mansilla and de Mendoza 1997). The specific activity was expressed in Miller units (Miller 1972). The results shown are the average of three independent experiments.
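For reference, Miller units are conventionally computed from the assay read-outs as follows (the standard Miller 1972 definition; the read-out wavelengths below are those of the classical assay and are not restated in this excerpt):

$$\text{Miller units} = 1000 \times \frac{OD_{420} - 1.75 \times OD_{550}}{t \times v \times OD_{600}}$$

with t the reaction time in minutes and v the culture volume assayed in mL.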
FA analyses
For the measurement of UFA biosynthesis, cultures of strains AKP3 (wild-type) and BLUP87 (cerR) were grown in MM at 37°C to an OD525 of 0.25 and then half of each culture was supplemented with 2.5 µg mL−1 of cerulenin for 45 min. When the strains reached an OD525 of 0.35, 2 mL of these cultures were labeled with 0.2 µCi of [14C]palmitate (specific activity, 58 mCi/mmol) and further shifted to 25°C for 5 h. Following incubation, cells were collected and lipids were prepared according to the method of Bligh and Dyer (1959). The FA methyl esters were prepared by transesterification of glycerolipids with 0.5 mol/L sodium methoxide in methanol (Christie 1989) and separated into saturated FA and UFA fractions by chromatography on 10% silver nitrate-impregnated Silica Gel G plates (0.5-mm thickness; Analtech Inc., Newark, DE). About 11,000 cpm of radioactivity were loaded into each lane. Chromatographic separation was achieved in a toluene solvent system at −20°C and detected by using a Typhoon 9200 PhosphorImager screen (STORM840; GE Healthcare Argentina S.A., Buenos Aires, Argentina). The radioactivity levels of the spots were quantified by ImageQuant 5.2 (GE Healthcare Argentina S.A.).
Analysis of FA by GC-MS
To determine the FA composition, AKP3 cells were grown in MM to an OD525 of 0.35 at 37°C. Cultures were treated or not with 2.5 µg mL−1 cerulenin and then shifted to 25°C for 5 h. Total lipids were extracted and transesterified to yield FA methylesters as described above. The FA methylesters were analyzed in a Perkin-Elmer Turbo Mass gas chromatograph-mass spectrometer on a capillary column (30 m by 0.25 mm in diameter; Varian) of 100% dimethylpolysiloxane (PE-1; Perkin-Elmer, Waltham, MA). Helium at 1 mL/min was used as the carrier gas, and the column temperature was programmed to rise by 4°C/min from 100 to 320°C. Branched-chain FA, straight-chain FA, and UFA used as reference compounds were obtained from Sigma Chemical Co (St. Louis, MO).
Bacillus subtilis strain BLUP103 was grown overnight at 37°C in MM supplemented with 0.2 mmol/L IPTG. On the following day, fresh cultures were started by washing the cells twice, resuspending them in MM, and growing them at 37°C in the presence or absence of 1 mmol/L IPTG. After 6 h of growth in the absence of IPTG, cells stopped growing because of PlsC depletion. At this point, 50 mL culture samples were collected and lipids were prepared and analyzed as described above.
Cerulenin inhibits des transcription and UFA synthesis at low temperatures
During previous work aimed at testing whether preexistent lipids were able to regulate the expression of Δ5-Des (Cybulski et al. 2002; S. G. Altabe, pers. comm.), we found that cerulenin, a specific inhibitor of FabF (Fig. 1), the sole condensing enzyme accomplishing acyl chain elongation in B. subtilis (Schujman et al. 2001), repressed the induction of the des gene during cold shock. This was an unexpected observation, as previous results of our laboratory showed that inhibition of FA synthesis by cerulenin significantly increases transcription of at least 10 genes, contained in five different operons, encoding key enzymes involved in FA and phospholipid biosynthetic pathways (Schujman et al. 2001, 2003). As this intriguing preliminary observation could indicate a new level of control in the well-studied Des signal transduction pathway, we decided to examine in detail the mechanism of des promoter regulation by cerulenin. We first assayed the effect of cerulenin on expression of the des gene using strain AKP3 (Aguilar et al. 2001). This strain contains the lacZ reporter gene under the control of the des promoter, integrated ectopically at the nonessential amyE locus. AKP3 was grown until early exponential phase at 37°C and then transferred to 25°C. Half of the culture was treated with a sublethal concentration of cerulenin (2.5 µg mL−1). As shown in Figure 2A, the β-galactosidase levels of AKP3 cells growing with cerulenin were five times lower than those of the untreated cultures. Consistently, after a temperature downshift, AKP3 cells treated with cerulenin synthesized lower levels of UFAs than cells growing in the absence of the antibiotic (Fig. 2B and C). These data showed that cerulenin strongly inhibits des gene induction at low temperature.
The TM segments of DesK are essential for cerulenin repression of des transcription

Repression of des transcription by cerulenin could be caused by alternative mechanisms that would block different steps in the transmission of the cold signal (Fig. 2D). It follows that, by a direct action or through inhibition of de novo lipid synthesis, cerulenin could affect the sensing properties of DesK, its autophosphorylation, and/or the flux of phosphate from DesK-P to the DesR transcription factor. Alternatively, cerulenin could promote dissociation of DesR-P from the des promoter. To distinguish between these possibilities, we used strain CM21, which carries a desK null mutation, expresses desR from the xylose-inducible Pxyl promoter, and contains a Pdes-lacZ fusion integrated at the amyE locus. This strain was transformed either with plasmid pCM9, expressing an N-terminal truncated form of DesK, named DesKC, which lacks the complete TM region but retains the catalytic core of DesK (Fig. 3A), or with plasmid pLUP124, which expresses full-length DesK. It has been previously shown that when DesKC is expressed in strain CM21, the Pdes-lacZ fusion is constitutively expressed, even at 37°C, and is not influenced by the addition of exogenous UFAs. This behavior probably takes place because DesKC, which is unable to respond to membrane signals, remains locked in a kinase-dominant state. As shown in Figure 3B, when CM21 is complemented with wild-type DesK, cerulenin addition represses des transcription; conversely, the β-galactosidase activity of CM21/pCM9 was not repressed by cerulenin. These results indicate that the antibiotic does not inhibit the autokinase or the phosphotransferase activities of the truncated DesKC protein, or the activation of the des promoter by phosphorylated DesR. These data support the notion that the TM domain of DesK is essential to sense the inhibitory effect of cerulenin on des transcription. As it is well established that the TM domain of DesK discriminates the surrounding lipid environment to adjust the signaling state of the sensor kinase (Mansilla and de Mendoza 2005), possible changes in membrane lipid composition induced by cerulenin treatment could be responsible for shutting off the cold-induced DesK autokinase activity.
Downregulation of des expression is linked to inhibition of FA synthesis
It should be noted that at this stage of the work, we were unable to distinguish whether inhibition of transcription of the des gene mediated by cerulenin was due to inhibition of its target enzyme, FabF, or to a side effect of the antibiotic, such as its insertion into the membrane. To answer this question, des transcription was evaluated in BLUP87, a cerulenin-resistant mutant of B. subtilis that contains a Pdes-lacZ transcriptional fusion in the amyE locus. Resistance to cerulenin in this strain is conferred by the I108F substitution of FabF (fabF1 allele), which introduces a residue in the hydrophobic acyl chain-binding pocket that hampers the optimal interaction between the enzyme and the acyl chain of cerulenin (Schujman et al. 2001, 2008). As shown in Figure 4A, after a cold shock, cerulenin produces only a slight inhibition of des transcription in this strain (see Fig. 2A). Moreover, the levels of UFAs synthesized by BLUP87 at low temperatures were not diminished by addition of cerulenin (Fig. 4B and C). These results rule out that cerulenin itself could modify the membrane microenvironment of DesK, leading to downregulation of des expression. Thus, this result strongly suggests that repression of des transcription is due to specific inhibition of FA synthesis by cerulenin. To confirm this idea, we tested des expression in the presence of triclosan, an antibiotic that inhibits other enzymes of the fatty acid biosynthetic pathway, FabI and FabL, which catalyze the NADPH-dependent reduction of enoyl-ACP to acyl-ACP during the elongation cycle of FA synthesis (Fig. 1; McMurray et al. 1998; Heath et al. 1998, 2000). Addition of a sublethal concentration of triclosan (0.4 µg mL−1) to AKP3 (cerS) or BLUP87 (cerR) cells repressed des transcription in both strains (Fig. S2). These results demonstrate that downregulation of des expression is correlated with the specific inhibition of FA synthesis and suggest that this response would be observed when any step in the pathway is blocked.
Changes in membrane FA composition in response to cerulenin
The FA composition of strain AKP3, treated or not with cerulenin after a temperature downshift, was determined by gas chromatography-mass spectrometry (GC-MS). In agreement with the experiments shown in Figure 2B, cerulenin treatment markedly reduced the UFA content of membrane lipids (3.4%, Table 2). Moreover, and surprisingly, we found that the addition of cerulenin increases the production of shorter FA, as the ratio of short-chain FA (chain lengths C13-C15) to long-chain FA (chain length C16 or longer) rose about twofold (from 1.1 to 2.2, Table 2). Finally, cerulenin seems not to affect the amount of straight-chain FA nor the iso/anteiso-branched-chain FA ratio. These results strongly suggest that inhibition of des expression by cerulenin is due to shortening of the acyl chains of membrane lipids, which would be sensed by the TM domain of DesK, acquiring a phosphatase-dominant state.
Effect of long-chain FA on des transcription
As transcriptional repression of des by cerulenin seems to be a consequence of the shortening of the membrane FA chains, it is conceivable that the opposite effect can be obtained in cells whose membranes are enriched in long-chain FA. If this is the case, cells overproducing long-chain FA could activate des transcription even at 37°C. It has been described that the blockade of phospholipid synthesis in B. subtilis by depletion of PlsC, the enzyme that acylates acyl-glycerol phosphate, leads to a very significant accumulation of free FA (Paoletti et al. 2007). Nevertheless, the accumulation of the 1-acyl-glycerol-3-P intermediate was not observed (Paoletti et al. 2007). Thus, we used strain BLUP103, which contains the plsC gene under the control of the IPTG-inducible Pspac promoter and a Pdes-lacZ fusion integrated at the amyE locus. In this strain, removal of the inducer does not result in the immediate inactivation of the protein or cessation of cell proliferation; rather, cell growth continues until the preexisting protein is diluted out by subsequent cell divisions, while the synthesis of FA continues at a very significant rate. Thus, during the transition from log phase to growth stasis of BLUP103, abnormally long-chain FA should be incorporated into membrane lipids. To test our hypothesis, we determined the FA composition of phospholipids of BLUP103 cells grown at 25 or 37°C in the presence or in the absence of the inducer. To this end, the extracted FA were subjected to transesterification with sodium methoxide. This procedure allows analyzing by GC-MS only membrane FA esterified to glycerol. Thus, the FA percentages mostly reflect the composition of complex lipids rather than the content of free FA. As expected, the mass spectrum of PlsC-depleted cells displayed a significant increase (5.6-fold) in the proportion of long-chain FA (Table 3), including the synthesis of FA with chain lengths of C19-C22. This effect was more pronounced at 37°C. In addition, depletion of PlsC at 37°C led to a threefold increase in the amount of straight-chain FA when compared with cells grown with the inducer (45.9 vs. 13.1%). To determine whether these changes in membrane lipid composition affect des expression, we analyzed the activity of the Pdes-lacZ transcriptional fusion in BLUP103, with or without IPTG addition, in cells grown at 37°C and after a shift to 25°C. As shown in Figure 5, similar levels of des transcription were observed in plsC cells growing at 25°C in the presence or in the absence of the inducer. However, at 37°C the activity of the des promoter in PlsC-depleted cells reached induction levels about eightfold higher than those found in the presence of IPTG. These data support the notion that the presence of higher amounts of long-chain FA in the membrane of plsC-depleted cells stabilizes DesK in a constitutive kinase state.
Discussion
Bacillus subtilis is a typical mesophile that can be found in the upper layers of the soil, which are subjected to temperature changes both during the course of the day and over longer time periods, as a consequence of seasonal changes. Rapid and severe temperature downshifts elicit genetic and cellular adaptive reactions that are collectively known as the cold shock stress response (Weber and Marahiel 2002). As cold shock imposes marked changes on the biophysical properties of the B. subtilis cytoplasmic membrane, temperature sensing is important to optimize membrane fluidity in this organism. The cold signal induces transcription of the des gene, leading to the introduction of double bonds into the acyl chains of phospholipids. In this paper, we show that the fungal antibiotic cerulenin represses des induction during cold shock. As cerulenin has been extensively used as a tool to understand several aspects of lipid metabolism (Furukawa et al. 1993; Heath and Rock 1995; Loftus et al. 2000; Schujman et al. 2003), the main objective of this work was to uncover the mechanism by which this antibiotic causes inhibition of des transcription.

Table 2 legend: Cells were grown in MM at 37°C to an OD525 of 0.35. Cultures were split and cerulenin (2.5 µg mL−1) was added to one half. Cultures were further transferred to 25°C and cells harvested after 6 h of growth. Total lipids were extracted, transesterified to yield FA methylesters and subjected to GC-MS analysis. Values are representative of three experiments.
Cerulenin inhibits lipid synthesis in B. subtilis by the covalent active site-directed inactivation of the FabF condensing enzyme, the enzyme that catalyzes the condensation of malonyl-CoA with acyl-ACP (Fig. 1). Cessation of FA synthesis caused by cerulenin also inhibits the production of phosphatidic acid (PtdOH), the precursor of membrane phospholipids (Fig. 1). However, we have shown here that inhibition of phospholipid synthesis by PlsC depletion does not impair des transcription after cold shock (Fig. 5). So, inhibition of des expression by cerulenin is linked to inhibition of FA synthesis rather than to phospholipid synthesis. Besides, we showed that the addition of sublethal levels of cerulenin, which do not inhibit growth of B. subtilis wild-type strains, alters the length of the acyl chains of membrane phospholipids. In fact, we found that the membrane FA composition of B. subtilis treated with cerulenin is clearly biased toward shorter-chain fatty acyl groups (Table 2). An important question raised by this work is: how could the FA chain length of B. subtilis phospholipids be decreased by cerulenin? FabF forms part of the fap regulon of B. subtilis (comprising almost all the proteins that catalyze the later steps of the FASII cycle as well as the earlier steps of phospholipid synthesis, Fig. 1), which is transcriptionally regulated by the FapR repressor. Binding of the repressor to its target sequences is modulated by the levels of malonyl-CoA (Schujman et al. 2006). Antibiotics that specifically inhibit the FASII cycle augment the intracellular levels of malonyl-CoA, which in turn releases FapR from its binding sites, increasing the expression of the fap regulon (Schujman et al. 2003). In B. subtilis, two genes involved in the acyl transfer step of phospholipid synthesis, plsX and plsC, are upregulated by inactivation of the FASII cycle (Schujman et al. 2003). Although the experiments shown here suggest that FabF is responsible for the decrease in the acyl chain length of FA of cells exposed to cerulenin, it should be noted that this characteristic is dependent upon the competition between the elongation activity of the FASII and the rate of incorporation of FA by the acyltransferase system (Yao and Rock 2013). We envision that in cerulenin-treated cultures, the combined effect of a decrease in the relative rate of acyl chain elongation by FabF and overproduction of PlsX and PlsC is responsible for the increased incorporation of FA of shorter chain length. This proposal agrees with the observation that cells with a normal rate of FA synthesis but decreased PlsC activity accumulate long-chain FA (Table 3).
Reconstitution of DesK into bilayers of PC containing acyl chains of different length showed that the longer the FA (the thicker the bilayer), the greater its kinase activity (Martín and de Mendoza 2013). Nevertheless, these data were obtained with vesicles made of phospholipids that are not normally present in B. subtilis membranes. In this paper, we demonstrate that modulation of DesK kinase activity by the thickness of the bilayer indeed takes place in vivo under isothermal conditions, using B. subtilis native phospholipids. But how could the acyl-chain length of membrane phospholipids influence DesK regulation? Changes in membrane thickness can alter the activity of membrane proteins by modifying the orientation or conformation of TM regions (Lee 2003; Cybulski and de Mendoza 2011). Chill stress, among other effects, causes an increase in membrane thickness generated by the decrease in the disorder of the acyl chains of phospholipids that accompanies cooling (Rafael Oliveira, personal communication).
Periplasmic-sensing histidine kinases comprise the largest group of membrane-bound sensor kinases. They contain a significantly large extracytoplasmic input domain, which generally detects signals by direct interaction with chemically defined small molecules (Mascher et al. 2006). On the other hand, DesK belongs to a group of histidine kinases with the sensing mechanism linked to the TM regions. The molecular basis by which this group of histidine kinases senses environmental signals is largely unknown. We have recently reported that the multimembrane-spanning domain of DesK could be simplified into a chimerical single-membrane-spanning minimal sensor (MS)-DesK that fully retains in vivo and in vitro the cold-sensing properties of the parental system (Cybulski et al. 2010). Mutational and biochemical analysis of this membrane-bound chimera showed that two hydrophilic residues near the N-terminus of DesK's first TM segment are critical for its cold-activation (Cybulski et al. 2010). This region has been named the "buoy," as its hydrophilicity drives it toward the lipid/water interface, while the hydrophobicity of surrounding residues anchors the buoy to the membrane and can potentially pull it into the membrane interior. The "sunken-buoy" model of thermosensing posits that, as the membrane thickens upon cooling, the hydrophilic buoy is pulled into the hydrophobic membrane, an energetically unfavorable situation that elicits conformational changes within the DesK protein that increase the activity of its histidine kinase domain (Cybulski et al. 2010). While the precise structural changes within the TM region remain uncertain, the results described here, obtained by manipulating in vivo the chain length of B. subtilis phospholipids either by inhibiting the elongation activity of FASII or the rate of incorporation by the acyl transfer system, suggest that DesK regulation is indeed linked to changes in membrane thickness that could trigger buoy-dependent conformational changes in this integral membrane cold-sensor.
Supporting Information
Additional Supporting Information may be found in the online version of this article:

Figure S1. The Des signaling pathway for regulation of UFA synthesis in Bacillus subtilis. DesK could assume different signaling states in response to changes in membrane fluidity. An increase in the order of the acyl chains of membrane lipids (less fluid membrane) promotes a kinase-dominant state of DesK, which autophosphorylates and transfers the phosphate group to DesR. DesK-mediated phosphorylation of DesR enables interaction of DesR-P with the des promoter and RNA polymerase, resulting in transcriptional activation of des. Then Δ5-Des is synthesized and desaturates the acyl chains of membrane phospholipids. These newly synthesized UFAs cause a decrease in the order of membrane lipids (more fluid membrane), favoring a phosphatase-dominant state of DesK, leading to dephosphorylation of DesR and thus turning off des transcription.

Figure S2. Effect of triclosan on des transcription in cerulenin-sensitive or cerulenin-resistant strains. Cells of B. subtilis AKP3 (fabF cerS, amyE::Pdes-lacZ) (A) or BLUP87 (fabF1 cerR, amyE::Pdes-lacZ) (B) were grown in LB medium at 37°C to an OD525 of 0.35 and then were treated with 0.4 µg mL−1 triclosan (white circles) or left untreated (black circles). Cultures were further transferred to 25°C. β-galactosidase specific activities (in Miller units, MU) were determined at the indicated time intervals. Dotted lines: OD525; solid lines: β-galactosidase specific activities. Values are representative of three independent experiments.
Synthesis of positive plasmas with known chromosomal abnormalities for validation of non-invasive prenatal screening
Non-invasive prenatal screening (NIPS) is a DNA sequencing-based screening test for fetal aneuploidies and possibly other pathogenic genomic abnormalities, such as large deletions and duplications. Validation and quality assurance (QA) of this clinical test require plasmas from pregnant women with and without the targeted chromosomal abnormalities as positive and negative controls. However, positive plasma controls may not be available to many laboratories that are planning to establish NIPS. Limited synthetic positive plasmas are commercially available, but the types of abnormalities and the number/quantity of synthetic plasmas for each abnormality are insufficient to meet the minimal requirements for the initial validation. We report here a method of making synthetic positive plasmas by adding cell-free DNA (cfDNA) isolated from culture media of prenatal cells with chromosomal abnormalities to the plasmas of non-pregnant women. Thirty-eight positive plasmas with various chromosomal abnormalities, including autosomal and sex chromosomal aneuploidies and large deletions and duplications, were synthesized. The synthetic plasmas were characterized side-by-side with real positive plasmas from pregnant women and commercially available synthetic positive plasmas using the Illumina VeriSeq NIPT v2 system. All chromosomal abnormalities in the synthetic plasmas were correctly identified, with the same testing sensitivity and specificity as in the real and commercial synthetic plasmas. The findings demonstrate that the synthetic positive plasmas are excellent alternatives to real positive plasmas for validation and QA of NIPS. The method described here is simple and straightforward, and can be readily used in clinical genetics laboratories with access to prenatal cultures.
Introduction
The discovery of cell-free DNA (cfDNA) of fetal origin in blood plasma of pregnant women paved a new way for non-invasive prenatal screening (NIPS) (Lo et al., 1997). With advances in next-generation sequencing (NGS) technology, tens of millions of short sequence tags can be generated from cfDNA in a single maternal plasma sample. By counting the number of sequence tags mapped to each chromosome, fetal aneuploidies can be correctly detected. This accurate and reliable genomic screening for common fetal aneuploidies clearly outperforms the traditional serum protein screening (Chiu et al., 2008; Fan and Quake, 2010; Norton et al., 2015). NIPS has transformed prenatal care in countries and regions where it is available (Norton, 2022).
As a screening test, NIPS is routinely offered to women as early as 10 weeks' gestation. This test can be established in clinical genetics laboratories using commercially available platforms, for example the Illumina VeriSeq NIPT v2 system, or laboratory-developed sequencing and bioinformatic pipelines. In either case, clinical validation and continuous monitoring of NIPS performance using both negative and positive plasma controls are required to ensure the test is performed appropriately. Negative plasmas can be obtained from female donors with normal pregnancies following appropriate protocols. However, positive plasmas that carry fetal cfDNA with targeted chromosomal abnormalities are usually very difficult to collect in a timely manner, in particular for laboratories new to this test, due to the limited availability of such positive specimens. Although synthetic positive plasmas are commercially available, they are usually insufficient for the initial validation due to limited abnormality types and sample quantities. Therefore, development of reliable alternatives to positive plasmas is needed to facilitate the validation, QA, and wider adoption of NIPS. We describe here a simple method of making synthetic positive plasmas that are reliable and excellent alternatives to positive maternal plasmas for validating and monitoring NIPS performance.
Materials and equipment

Materials
Thirty-eight de-identified culture media were collected from backup cultures of chorionic villus cells or amniocytes that were submitted for prenatal diagnosis at the University of California San Francisco (UCSF) Clinical Cytogenetics Laboratory after reporting.
Twenty de-identified remaining plasmas from phenotypically normal non-pregnant females (aged 20-42 years) were collected after pathogen testing for infectious diseases at the UCSF Clinical Microbiology Laboratory. These samples, which would otherwise have been discarded, were used as donor plasmas to make synthetic positive plasmas.
Two maternal blood samples from pregnancies with fetal aneuploidies were collected in Cell-Free DNA BCT tubes (Streck, Nebraska, United States) after obtaining the consent of each individual.
In addition, six synthetic positive plasmas, including two with trisomy 21, two with trisomy 18, and two with trisomy 13, were purchased from SeraCare Life Sciences (SeraCare Life Sciences, Massachusetts, United States).
Two hundred negative control plasmas with normal fetal cfDNA for NIPS system validation and training were provided by Illumina (Illumina, California, United States).
Microlab STAR liquid handling system (Hamilton, Nevada, United States).
Isolation of plasma
Approximately 10 mL of blood collected in a Cell-Free DNA BCT tube was centrifuged at 1,000 g for 10 min with the centrifuge brake off (Avanti J-15R centrifuge). The supernatant was then transferred to four 1.5 mL centrifuge tubes (1.1 mL plasma/tube).
Each tube with 1.1 mL plasma was further centrifuged at 5,600 g for 10 min (Eppendorf MiniSpin plus centrifuge), and 1.0 mL supernatant was transferred to a new centrifuge tube.
Isolated plasma could be stored at 4°C for up to 10 days, or at −80°C for up to 2 years.
Extraction of cfDNA from culture media and from donor plasmas
Chorionic villus cells or amniocytes were first cultured to about 90% confluence in a T25 flask following a standard protocol (Segeritz and Vallier, 2017). The culture was then fed with 5 mL of fresh AmnioMAX complete medium. Three to five days after feeding (depending on cell growth), 3.0 mL of culture medium was transferred from the flask into a 15 mL centrifuge tube and centrifuged at 1,000 g for 10 min with the centrifuge brake off (Avanti J-15R centrifuge).
Approximately 2.2 mL of supernatant was transferred to two 1.5 mL centrifuge tubes (1.1 mL supernatant/tube) (Eppendorf) and then centrifuged at 5,600 g for 10 min (Eppendorf MiniSpin plus centrifuge).
Two mL of supernatant (1.0 mL from each tube) was used for cfDNA extraction. The cfDNA was extracted using the QIAamp MinElute ccfDNA Kit following the manufacturer's instructions, eluted into 25.0 µL of nuclease-free water provided in the kit, and checked for fragment size and quantity on a Bioanalyzer using the Agilent high sensitivity DNA kit following the kit instructions.
cfDNA from six donor plasmas was also extracted and measured in the same way to estimate the average concentration of background cfDNA in the donor plasmas.
Synthesis of positive plasmas
Approximately 1.0 ng short cfDNA (130-190 bp) with targeted chromosomal abnormalities from a culture medium was added to 1.0 mL normal female donor plasma collected through step 3.1 to make a synthetic positive plasma. The expected average fraction of the cfDNAs from culture media in the synthetic positive plasmas is approximately 7%.
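As a quick sanity check on the 7% figure (an illustrative calculation, using the average background cfDNA concentration of 13.7 ng/mL reported in the Results), the expected fraction of abnormal cfDNA is

\[ \frac{1.0\ \text{ng}}{1.0\ \text{ng} + 13.7\ \text{ng/mL} \times 1.0\ \text{mL}} \approx 0.068 \approx 7\% \]

so a donor plasma at the low end of the background range (3.9 ng/mL) would yield a correspondingly higher mimicked fetal fraction, and one at the high end (27.8 ng/mL) a lower one.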
Characterization of synthetic positive plasmas for detecting targeted abnormalities
The synthetic positive plasmas were characterized using the Illumina VeriSeq NIPT v2 system according to the manufacturer's instructions. Briefly, cfDNA was extracted and the sequencing library was prepared using the VeriSeq NIPT Extraction and Library Prep kits (Illumina) on a Microlab STAR liquid handling system (Hamilton). The sample libraries were pooled and paired-end sequenced (2 × 36 cycles) on a NextSeq550 (Illumina). The sequencing data were analyzed by VeriSeq NIPT software v2 (www.illumina.com/NIPTsoftware). This software aligns the sequencing reads to the human reference genome GRCh37/hg19 and uses a counting-based algorithm to generate log-likelihood ratio (LLR) scores for chromosomes, as well as NCV_X and NCV_Y scores for sex classification. LLR thresholds for calling a sample at high or low risk of specific chromosome abnormalities were internally validated. Fragment length and coverage data were used by the software to estimate fetal fraction.
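The VeriSeq LLR model itself is proprietary; the sketch below (in Python, with made-up numbers) illustrates only the simpler normalized chromosome value (z-score) idea that underlies counting-based NIPS, not the vendor's actual algorithm.

import numpy as np

def chromosome_fraction(read_counts, chrom):
    # Fraction of all aligned reads that map to the chromosome of interest.
    return read_counts[chrom] / sum(read_counts.values())

def ncv(sample_fraction, reference_fractions):
    # Z-score of the sample's chromosome fraction against a euploid reference set.
    mu = np.mean(reference_fractions)
    sigma = np.std(reference_fractions, ddof=1)
    return (sample_fraction - mu) / sigma

# Illustrative values only: euploid chr21 fractions cluster near 1.30% of reads.
reference = [0.0130, 0.0129, 0.0131, 0.0130, 0.0128, 0.0131, 0.0130, 0.0129]
sample = chromosome_fraction({"chr21": 136_000, "other": 9_864_000}, "chr21")
print(f"chr21 fraction = {sample:.4f}, NCV = {ncv(sample, reference):.1f}")
# A score above a validated cutoff (often around 3-4) would flag high risk of trisomy 21.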
NIPS data visualization
The LLRs of the synthetic positive plasmas with trisomy 21, trisomy 18, and trisomy 13, as well as the fetal fractions from the VeriSeq NIPT supplementary reports were plotted in RStudio (2021.09.2) using ggplot2 (3.3.6) for data visualization.
Results
A total of 38 cfDNA samples with targeted chromosomal abnormalities were extracted from cell culture media of chorionic villus cells or amniocytes. The quantity and size of the cfDNA were determined on a Bioanalyzer using the Agilent high sensitivity DNA kit, which showed a size range from 100 bp to >1 kb in discontinuous clusters, including a major cluster of short sizes (130-190 bp) (Figure 1A). The average concentration of the cluster of short cfDNA was approximately 60 ng/mL in culture medium. This cluster of cfDNA was used to make synthetic plasmas, since its size range is most representative of the size range of fetal cfDNA in maternal plasmas (Kim et al., 2015; Jiang and Lo, 2016). The average background cfDNA concentration measured by the same method in six donor plasmas was 13.7 ng/mL, ranging from 3.9 to 27.8 ng/mL. This range was in line with the findings of a broad survey of cfDNA from healthy donors (Raymond et al., 2017). Therefore, adding 1.0 ng of abnormal cfDNA to 1.0 mL of donor plasma resulted in an average abnormal cfDNA fraction of approximately 7%, which would mimic the fetal fraction in the synthetic plasmas. This percentage is common in maternal plasmas based on data reported in the literature. Notably, a wide range of fetal fractions (1%-15%) was estimated by the VeriSeq v2 system (Supplementary Table S1), most likely due to the varying concentrations of background cfDNA in the donor plasmas. In fact, this range of fetal fractions appeared to be consistent with a reported range (Canick et al., 2013; Artieri et al., 2017). We further analyzed the detectability of targeted chromosome abnormalities in synthetic positive plasmas with different fetal fractions to determine the sensitivity of testing with the Illumina VeriSeq NIPT v2 system.

Figure 1. Evaluation of synthetic plasmas. (A) cfDNA size distribution. From left to right: cfDNA from a normal female plasma (female), and culture media from a chorionic villus specimen with a 45,X karyotype collected at day 3, 5, 7, and 9 of culture, respectively (cm-3d, cm-5d, cm-7d, and cm-9d). (B-D) Log-likelihood ratios (LLRs) of synthetic plasmas (fetal fraction estimate ≤8%) with trisomy 21, trisomy 18, and trisomy 13, respectively. Red dot, synthetic positive plasma; black dot, negative maternal plasma; gray dotted line, LLR cutoff.
The abnormalities in the 38 synthetic positive plasmas included eighteen trisomy 21, six trisomy 18, four trisomy 13, four sex chromosomal aneuploidies (45,X and 47,XXY), one trisomy 7, one trisomy 16, two trisomy 20, one combined 10.5 Mb terminal deletion of chromosome 7p and 26.5 Mb terminal duplication of chromosome 9p, and one 26.3 Mb terminal duplication of chromosome 15q. All abnormalities in these synthetic positive plasmas were correctly detected by the Illumina VeriSeq NIPT v2 system (Supplementary Table S1). Figures 1B-D show the LLRs of the synthetic plasmas with trisomy 21, trisomy 18, and trisomy 13, respectively, in comparison with those of the negative plasmas. Chromosomal abnormalities could be detected in synthetic plasmas with fetal fractions as low as 1% (Supplementary Table S1).
We also tested two real positive maternal plasmas with fetal trisomy 21 and trisomy 18, respectively, and six commercial positive plasmas, including two trisomy 21, two trisomy 18, and two trisomy 13 (SeraCare Life Sciences), in parallel with synthetic plasmas made in this study (Supplementary Table S1). There were no noticeable differences in sensitivity, specificity and other testing parameters between these samples and our synthetic plasmas.
Discussion
Short fetal cfDNA in maternal plasma is most likely derived from apoptosis (Jiang and Lo, 2016; Rostami et al., 2020). We noticed that cell culture media of prenatal specimens contain short cfDNA fragments that are probably derived from cell apoptosis during culture. The sizes of such short cfDNA fragments are within the reported size range of fetal cfDNA in plasmas of pregnant women (Kim et al., 2015; Jiang and Lo, 2016). Therefore, it is possible to use this type of short cfDNA to make synthetic positive plasmas that mimic maternal plasmas carrying fetal cfDNA with chromosomal abnormalities. Our study demonstrated that synthetic positive plasmas can be readily and reliably used in clinical validation and QA of NIPS. The synthetic positive plasmas described in this study have been successfully used to validate and monitor the NIPS system in our laboratory, as required by national and state regulations. Negative synthetic plasma could also be synthesized using normal cfDNA as needed, although this may not be necessary since negative maternal plasmas are not difficult to collect.
Clinical laboratories that provide prenatal cytogenetic tests have a unique advantage in making synthetic positive plasmas. In the United States, laboratories are required to maintain backup cultures for 2 weeks after reporting cytogenetic findings for all prenatal specimens; other countries may have similar requirements. Therefore, these laboratories can readily collect culture media of targeted abnormal cells from the backup cultures. The cfDNA from the culture media can be used directly to make synthetic plasma after extraction, without further treatment. Synthetic positive plasmas may also be made using abnormal genomic DNA, but additional processes, such as fragmentation of long genomic DNA and isolation of short DNA, would be needed, and those processes could be challenging.
The best time to collect short cfDNA from the culture medium of chorionic villus cells or amniocytes appears to be on day 3-5 after feeding cells growing at high confluency (~90%) with fresh culture medium (Figure 1A). A shorter culture time might not yield enough cfDNA; a longer culture time might increase the background of large DNA fragments, probably due to increased cell death and reduced apoptotic activity.
De-identified remaining plasmas after pathogen testing from phenotypically normal non-pregnant females, which would otherwise be discarded, are readily collected from clinical microbiology or immunology laboratories under appropriate protocols. It is unlikely that a phenotypically normal non-pregnant female donor would carry aneuploidies, which are usually associated with abnormal phenotypes. To guard against donor aneuploidy, each non-pregnant plasma was used to synthesize two positive plasmas with different abnormalities whenever possible. An abnormality of donor origin would be indicated if the same abnormality appeared in both synthetic plasmas.
While the synthetic plasmas can be used as controls on the Illumina VeriSeq NIPT v2 system, they have not been tested on other NIPS systems for validation of different methodologies, such as single nucleotide polymorphism (SNP)-based NIPS, cfDNA size selection, and targeted sequencing. We did not test cfDNA from culture media of other cell types. In addition, abnormal prenatal cell cultures may not be accessible to every laboratory that needs to make synthetic positive plasmas.
In conclusion, we reported a practical strategy of making synthetic positive plasmas that could be used for NIPS validation and QA. This method could be especially helpful for clinical genetics laboratories that plan to implement NIPS testing.
Data availability statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Institutional Review Board of the University of California San Francisco. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
ZQ and JY designed the study. ZQ analyzed the data and drafted the manuscript. JY edited and revised the manuscript. Both authors have read and approved the final version of this manuscript.
Preparation, Characterization, and Oral Bioavailability of Solid Dispersions of Cryptosporidium parvum Alternative Oxidase Inhibitors
The phenylpyrazole derivative 5-amino-3-[1-cyano-2-(3-phenyl-1H-pyrazol-4-yl)vinyl]-1-phenyl-1H-pyrazole-4-carbonitrile (LN002), which was identified through high-throughput molecular docking against the AOX target, exhibits promising efficacy against Cryptosporidium. However, its poor water solubility limits its oral bioavailability and therapeutic utility. In this study, solid dispersions were prepared using HP-β-CD and Soluplus® and characterized by differential scanning calorimetry, Fourier transform infrared spectroscopy, powder X-ray diffraction, and scanning electron microscopy. Physical and chemical characterization showed that the crystalline form of LN002 transformed into an amorphous state, forming a solid dispersion of LN002. The solid dispersion prepared with an LN002/HP-β-CD/Soluplus® mass ratio of 1:3:9 (w/w/w) exhibited significantly increased solubility and cumulative dissolution. Meanwhile, LN002 SDs showed good storage stability under accelerated conditions of 25 °C and 75% relative humidity. The complexation of LN002 with HP-β-CD and Soluplus® significantly improved its water solubility, pharmacological properties, absorption, and bioavailability.
Introduction
Cryptosporidium is a protozoan parasite that infects various vertebrate hosts and causes gastroenteritis syndrome [1]. The most common symptom of cryptosporidiosis is watery diarrhea, which may progress to dehydration and shock if untreated [2,3]. Other encountered symptoms may include myalgia, weakness, headache, and anorexia [4]. Cryptosporidium infection not only affects human health, but also seriously threatens the development of animal husbandry [5].
Nitazoxanide demonstrates remarkable efficacy as a broad-spectrum agent against parasitic, bacterial, and fungal infections in animals and humans [6]. It is the sole pharmaceutical approved by the Food and Drug Administration for the treatment of Cryptosporidium [7]. However, its use is limited to immunocompetent individuals because it lacks effectiveness in immunodeficient patients [8].
Cryptosporidium lacks the tricarboxylic acid cycle and the cytochrome-based oxidative phosphorylation pathway typical of traditional energy metabolism, and instead relies primarily on an alternative oxidative pathway for oxidative phosphorylation [9]. The alternative oxidation pathway is a branch of the ubiquinone respiratory chain, a non-phosphorylated mitochondrial electron transport chain with alternative oxidase (Cryptosporidium parvum alternative oxidase, CpAOX) as the terminal oxidase [10,11]. AOX plays a crucial role in the life cycle of Cryptosporidium because this protein is absent in mammals [12]. Given that AOX has become a crucial target for treating Cryptosporidium infection, identifying and developing CpAOX inhibitors is essential.
Pyrazole, a five-membered heterocyclic ring containing two ortho nitrogen atoms, has important biological and pharmaceutical activities in the medical field [13,14]. For example, it is used in the treatment of depression [15] and rheumatism [16] and has antilipemic [17] and antitumor biological functions [18]. Phenylpyrazoles can interact with the γ-aminobutyric acid (GABA) receptors of insects and block the chloride channels controlled by GABA, thus interfering with the normal function of the central nervous system and leading to death [19].
In this study, solid dispersions of LN002 were prepared through freeze-drying with HP-β-CD and Soluplus® as carriers. Differential scanning calorimetry (DSC), Fourier transform infrared (FT-IR) spectroscopy, powder X-ray diffraction (PXRD), nuclear magnetic resonance spectroscopy (1H NMR), and scanning electron microscopy (SEM) were utilized to characterize LN002 SDs. The results showed that the solid dispersions were successfully prepared. The saturated solubility, dissolution rate, stability, and pharmacokinetics of LN002 and LN002 SDs in rats were evaluated.
Preparation and Optimization of LN002 SDs
LN002 SDs were prepared through freeze-drying to enhance solubility, dissolution rate, and bioavailability. The solubility and dissolution rate of hydrophobic drugs in solid dispersions depend on the properties of the polymers [29]. Polymers with different molecular weights and surface activities have been found to improve solubility and dissolution rates. In this study, different polymer mass ratios were evaluated to improve the solubility of LN002. In the single-factor experiment, the drug/carrier mass ratios ranged from 1:6 to 1:12, while the solid dispersion carriers HP-β-CD and Soluplus® were added at a fixed mass ratio of 1:1. The single-factor analysis revealed that the optimal drug/carrier mass ratio was 1:12. As with solid dispersions in general, solubility increased with the polymer ratio rather than decreasing [30]. In orthogonal trials, solid dispersions with various effects were prepared by changing the mass ratio of the solid dispersion carriers (HP-β-CD/Soluplus® mass ratios of 1:1, 3:1, and 1:3), reaction temperature (30-50 °C), stirring speed (300-500 rpm), and reaction time (1-4 h). Compared with the solid dispersions prepared at HP-β-CD/Soluplus® mass ratios of 1:1 and 3:1, the solid dispersion prepared at a mass ratio of 1:3 had a larger drug load and better solubility. In conclusion, the ratio of drug to polymer is crucial for increasing LN002 solubility. The LN002 SDs with the highest inclusion yield of 89.37% and the highest solubility of 1.124 mg/mL were obtained under the following optimal conditions: HP-β-CD/Soluplus® mass ratio of 1:3, stirring speed of 500 rpm, reaction temperature of 30 °C, and reaction time of 2 h.
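The excerpt does not define how inclusion yield was computed; a common definition, assumed here purely for clarity, is

\[ \text{Inclusion yield (\%)} = \frac{m_{\text{drug recovered in the solid dispersion}}}{m_{\text{drug initially added}}} \times 100, \]

i.e., the 89.37% figure would mean that nearly all of the added LN002 was incorporated into the carrier matrix.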
Optimal Physicochemical Properties of LN002 SDs

FT-IR
The FT-IR spectra of pure LN002, HP-β-CD, Soluplus®, LN002 SDs, and the physical mixture are shown in Figure 2. The characteristic absorption peaks of LN002 were found at 3321 (N-H stretching vibration of the primary amine group), 2219 (C≡N stretching), 1640 (N-H bending vibrations), and 762 (N-H bending vibrations) cm−1 (Figure 2a). The spectral peaks of Soluplus® were observed at 3397 (O-H stretching), 1645 (H-O-H bending), 1421 (C-H bending vibrations), and 1287 (C-N stretching and N-H bending vibrations) cm−1 (Figure 2b). The wide absorption band of HP-β-CD at 3000-3700 cm−1 was attributed to intermolecular O-H stretching vibrations. Other absorption bands of HP-β-CD appeared at 2930 (C-H bending), 1651 (H-O-H bending), and 1032 (C-O-C stretching vibrations) cm−1 (Figure 2c). The FT-IR spectra of the physical mixture were similar to the spectra of the LN002, HP-β-CD, and Soluplus® monomers, indicating the absence of chemical interactions in the physical mixture (Figure 2d). The absorption peaks of Soluplus® and HP-β-CD moved from 3397 cm−1 to 3391 cm−1, and the absorption peak of LN002 at 3321 cm−1 shifted to 3393 cm−1. This wide wavenumber displacement indicates the presence of hydrogen bonds between LN002 and the solid dispersion carriers. The 5-NH2 group of LN002 is the hydrogen donor, while the secondary hydroxyl groups of HP-β-CD are stronger hydrogen bond acceptors than the 5-NH2 group. The hydrogen atoms in the NH2 group can form hydrogen bonds with the negatively charged oxygen atoms, while the oxygen atoms in the hydroxyl groups are partially negatively charged, attracting the hydrogen atoms of the NH2 group to form hydrogen bonds [31]. The absorption peaks of the drug in the LN002 SD formulation at 3321, 2219, 1640, 762, and 695 cm−1 disappeared (Figure 2e), suggesting that LN002 changed from the crystalline state into the amorphous state during encapsulation.
DSC

The thermograms of pure LN002, HP-β-CD, Soluplus®, LN002 SDs, and the physical mixture are displayed in Figure 3. The pure LN002 thermogram showed a single sharp endothermic peak at 234.99 °C, which corresponded to the melting point of pure LN002 (Figure 3a). Soluplus® did not exhibit any peaks in the range of 25-350 °C (Figure 3b). HP-β-CD presented broad endothermic peaks at 300-350 °C that corresponded to its melting point (Figure 3c). The DSC curves of the physical mixture showed no peaks in the endothermic range of 25-350 °C (Figure 3d). The absence of the sharp endothermic peak of LN002 in the thermogram of LN002 SDs is a clear indication of the phase transformation of LN002, indicating that the drug was highly dispersed in the solid dispersion and changed from a crystalline state into an amorphous state (Figure 3e) [32,33]. Generally, when crystallinity is lower than 2%, the melting peaks of a drug cannot be detected with DSC [34]. Therefore, powder X-ray diffraction (PXRD) was used to detect the degree of crystallization of LN002 in the solid dispersions.
PXRD
The PXRD results of pure LN002, HP-β-CD, Soluplus®, LN002 SDs, and the physical mixture are shown in Figure 4. The spectrum of LN002 had characteristic diffraction peaks at diffraction angles (2θ) of 6.26°, 12.55°, 13.59°, 18.97°, and 24.98°, confirming its crystalline nature (Figure 4a). The PXRD spectra of HP-β-CD and Soluplus® lacked crystalline peaks, revealing that the two polymer materials were essentially amorphous (Figure 4b,c). The PXRD pattern of LN002 SDs exhibited a wide halo pattern resembling that of Soluplus®, and the characteristic peaks of raw LN002 could not be observed (Figure 4d), indicating that LN002 transitioned from a crystalline into an amorphous state. The flat PXRD pattern of the physical mixture differed from that of the polymer materials, likely due to grinding (Figure 4e). This result further revealed that the drug was amorphous in the solid dispersion.
SEM

The SEM images of pure LN002, Soluplus®, HP-β-CD, the physical mixture, and LN002 SDs are displayed in Figure 5. LN002 exhibited irregular granular crystals and compact structures (Figure 5a). Meanwhile, Soluplus® was uniformly dispersed and in an amorphous state (Figure 5b). SEM revealed that HP-β-CD particles presented an amorphous spherical shape with cavities (Figure 5c). The physical mixture exhibited needle-like crystals, indicating that the crystal structure of LN002 did not disappear as a result of physical mixing (Figure 5d). LN002 crystals were not detected in the solid dispersion (Figure 5e). These findings confirmed that LN002 was well encapsulated in the polymer as a result of freeze-drying.
Solubility and In Vitro Release Study
Saturated solutions of LN002, the physical mixture, and LN002 SDs were prepared and analyzed using HPLC. Raw LN002 exhibited a low solubility of 0.0236 mg/mL in water. Compared with that of raw LN002, the solubility of LN002 SDs significantly increased to 1.124 mg/mL. The dissolution curves of pure LN002, LN002 SDs, and the physical mixture are provided in Figure 6. The dissolution test results showed that the dissolution rates of LN002, the physical mixture, and LN002 SDs after 180 min were 0.35% ± 0.02%, 4.62% ± 0.21%, and 47.05% ± 1.72%, respectively. Soluplus® is a polymer with micellar properties, which can be used to encapsulate drugs when preparing solid dispersions [35]. In the in vitro release study, micelle formation and stabilization in the solid dispersion reduced the release of LN002 within the first 20 min. After 20 min, the dissolution rate of the drug increased, which may be due to the stable formation of micelles or increased wettability. These results demonstrated that the solid dispersion significantly improved the dissolution and release of LN002 (p < 0.01), indicating that the drug completely transformed from a crystalline state into an amorphous state. This finding was consistent with the XRD and DSC results.
In general, solid dispersions can enhance the dissolution of drugs through the following mechanisms. First, with the preparation of the solid dispersion using a hydrophilic carrier, the drug becomes more wettable and dispersible [36]. In addition, solid dispersions reduce particle size, increasing surface area and dissolution [37,38]. Most importantly, compared with drug powder, solid dispersions can effectively improve the solubility and dissolution rate of poorly water-soluble drugs by adjusting the crystallinity [39].
Stability Study
The stability study results of LN002 SDs are provided in Figure 7. PXRD data showed that LN002 SDs remained amorphous over 90 days, likely because the solid dispersion provides physical or steric hindrance. It is well known that Soluplus® can reduce supersaturation by increasing the equilibrium solubility of drugs, thereby inhibiting crystallization [40,41]. Soluplus® and HP-β-CD, as carriers of the solid dispersion, facilitated the preservation of LN002 in its amorphous state for a long time, which is important for inhibiting the crystallization of poorly soluble drugs in the amorphous state.
In Vivo Pharmacokinetics Study
The mean blood concentration-time curves and main pharmacokinetic parameters of LN002 and LN002 SDs in rats are provided in Figure 8 and Table 1. At almost all time points, the plasma drug concentration of LN002 from LN002 SDs was higher than that of raw LN002, indicating that the solid dispersion could effectively increase the plasma drug concentration of LN002. The drug-time curves of LN002 SDs and pure LN002 showed that oral administration of the drug involved two distinct absorption processes. The first absorption phase of LN002 SDs occurred 0-0.5 h after administration, and that of LN002 occurred 0-1 h after administration, which is the conventional absorption process in oral administration. The second absorption phase occurred 1-4 and 3-4 h after administration, respectively. The mean blood concentration-time curves exhibited double peaks due to enterohepatic circulation, delayed gastric emptying, or reabsorption in various parts of the gastrointestinal tract [42,43]. Compared with the LN002 suspension, oral administration of LN002 SDs showed significantly increased Cmax and AUC0-t values. Statistical analysis revealed that the Cmax values of LN002 SDs and LN002 were 1.97 ± 0.11 and 0.85 ± 0.19 µg/mL, respectively, and the Cmax of the SDs was 2.32 times that of the LN002 suspension (p < 0.01). Specifically, the AUC0-t of LN002 released from LN002 SDs (7.72 ± 0.50 µg/mL·h) increased 3.38-fold compared with that of the LN002 suspension (2.28 ± 0.23 µg/mL·h) (p < 0.01). The Cmax and AUC of LN002 SDs were higher than those of LN002, consistent with the in vitro dissolution results. The improved bioavailability of LN002 can be attributed to its increased solubility when incorporated into the solid dispersion carrier, thereby facilitating drug dissolution and absorption in the gastrointestinal tract [44,45].
Solid Dispersion Preparation
LN002 solid dispersions were prepared through freeze-drying. Briefly, raw LN002 and the solid dispersion material (HP-β-CD and Soluplus®) were co-dissolved in 8 mL of DMF at a ratio of 1:6, 1:9, or 1:12 (w/w). The HP-β-CD/Soluplus® (w/w) ratios were 1:1, 1:3, or 3:1. The mixed DMF solution was stirred at the designated speed for the designated time and stored in a refrigerator at −80 °C. The frozen samples were freeze-dried to obtain solid dispersions. A physical mixture consisting of LN002 mixed with Soluplus® and HP-β-CD was also prepared.
FT-IR Spectroscopy
Each sample was homogeneously mixed with KBr powder at a mass ratio of 1:100 and pressed. Raw LN002, HP-β-CD, Soluplus®, the physical mixture, and LN002 SDs were scanned with a Nicolet iS50 FT-IR spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). FT-IR spectra were collected within the wavenumber range of 4000-400 cm−1. The resulting images were created using Origin 2021 (OriginLab Corporation, Northampton, MA, USA).
DSC
A TA-Q20 differential scanning calorimeter (TA Instruments, New Castle, DE, USA) was used to monitor thermal behavior and verify the physical state of the prepared samples. The heating rate was set at 10 °C/min, nitrogen was used as the carrier gas, and the temperature range was 25-350 °C. Origin 2021 software (OriginLab Corporation, Northampton, MA, USA) was used to organize the data.
X-ray Powder Diffraction
X-ray diffraction (XRD) data for each sample were acquired using a SmartLab diffractometer (Rigaku, Japan) at 40 kV and 40 mA with Cu Kα radiation (λ = 1.54056 Å). The 2θ scanning range was set from 5° to 50° with a step size of 0.01° and a counting time of 1 s per step. The results were plotted using Origin 2021 (OriginLab Corporation, Northampton, MA, USA).
SEM
The surface morphologies of LN002, HP-β-CD, Soluplus®, LN002 SDs, and the physical mixture were acquired using a Zeiss Sigma 300 scanning electron microscope (Carl Zeiss, Oberkochen, Germany) equipped with image acquisition software (version 7.0.5). The samples were affixed to an aluminum sample holder and coated with a layer of gold. Subsequently, the samples were observed under the scanning electron microscope at an acceleration voltage of 10 kV.
Saturation Solubility Analysis
Excess amounts of LN002 and LN002 SDs were added to distilled water. All samples were shaken for 48 h at 37 °C in a thermostatic oscillator (Sunkun, China). After 15 min of centrifugation, the supernatants were collected and filtered through a 0.22 µm membrane filter. After appropriate dilution, the samples were analyzed by HPLC. The analytes were detected using a UV detector at 333 nm. The chromatographic conditions used an isocratic mobile phase combining methanol (solvent A) and deionized water (solvent B) at a ratio of 75:25 (v/v), with a constant flow rate of 1 mL/min to ensure consistent separation. An injection volume of 20 µL was used, and the column temperature was maintained at 40 °C. Experiments were conducted in triplicate to minimize deviations.
In Vitro Dissolution Studies
The in vitro dissolution kinetics experiments on LN002, the physical mixture, and LN002 SDs were performed with a USP type II dissolution apparatus. In brief, pure LN002 (100 mg), LN002 SDs (containing 100 mg of LN002), and the physical mixture (containing 100 mg of LN002) were accurately weighed and placed in 200 mL of phosphate buffer (PBS, pH 6.8). The PBS was prepared in accordance with published protocols [46]. During dissolution, the dissolution medium temperature was maintained at 37 °C ± 0.5 °C, and the stirring speed was controlled at 100 rpm. At predetermined time points (5, 10, 20, 30, 45, 60, 90, 120, and 180 min), precisely measured aliquots of the dissolution medium were withdrawn and replaced with the same volume of prewarmed fresh dissolution medium. The collected solution was filtered through a 0.22 µm membrane filter, appropriately diluted, and then detected at 333 nm with an LC-20A UV detector (Shimadzu, Japan). All experiments were performed in triplicate, and the standard regression curve equation was used to calculate the dissolution rate, as sketched below.
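The sampling volume and the exact regression workup are not given in this excerpt; the Python sketch below shows, under stated assumptions (a 5 mL aliquot and a calibration curve already converting absorbance to concentration), how cumulative dissolution is typically computed while correcting for the drug removed with each withdrawn sample.

def cumulative_dissolution(concs_mg_per_ml, dose_mg=100.0, vessel_ml=200.0, sample_ml=5.0):
    # Percent of the dose dissolved at each time point, in chronological order.
    # Each withdrawn aliquot carries drug out of the vessel, so the drug removed
    # by earlier samples must be added back to the current amount in solution.
    released, removed_mg = [], 0.0
    for c in concs_mg_per_ml:
        dissolved_mg = c * vessel_ml + removed_mg
        released.append(100.0 * dissolved_mg / dose_mg)
        removed_mg += c * sample_ml
    return released

# Made-up concentrations (mg/mL) for the nine sampling points of an SD run:
print(cumulative_dissolution([0.02, 0.05, 0.09, 0.13, 0.16, 0.19, 0.21, 0.225, 0.232]))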
Stability Study
LN002 SD samples were maintained in a controlled environment at a constant temperature of 25 °C ± 0.5 °C and a relative humidity of 75% ± 5% for 90 days. PXRD analysis was performed at 0, 30, 60, and 90 days to monitor potential crystallization of the solid dispersion.
In Vivo Pharmacokinetic Studies
Twelve rats were randomly assigned to two groups (n = 6 per group) for the pharmacokinetic studies [47]. Rats (200 ± 20 g) were fasted for 12 h before the experiment with free access to water. The rats in the LN002 and LN002 SD groups were intragastrically administered LN002 or LN002 SDs at a dosage of 100 mg/kg. Blood samples were collected from the tail vein at 0.17, 0.5, 0.75, 1, 2, 3, 4, 6, 8, 12, and 24 h after administration. The collected blood samples were centrifuged at 3,000 rpm for 10 min at 4 °C to isolate plasma. Subsequently, 0.2 mL of plasma was mixed with 0.8 mL of acetonitrile, vortex-mixed, and centrifuged at 12,000 rpm for 10 min. The resulting supernatants were filtered through a 0.22 µm membrane filter and subjected to analysis. Plasma pharmacokinetic parameters in the single-dose studies were evaluated using noncompartmental analysis in Phoenix WinNonlin® 8.2 software (Certara, L.P., Princeton, NJ, USA).
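Phoenix WinNonlin performed the actual analysis; the following Python sketch, with made-up concentrations, shows only the core noncompartmental quantities (Cmax, Tmax, and AUC0-t by the linear trapezoidal rule) for readers unfamiliar with the method.

def nca(times_h, concs_ug_ml):
    # Cmax and Tmax from the highest observed concentration; AUC0-t by the
    # linear trapezoidal rule over the sampled profile.
    cmax = max(concs_ug_ml)
    tmax = times_h[concs_ug_ml.index(cmax)]
    auc = sum((t2 - t1) * (c1 + c2) / 2.0
              for t1, t2, c1, c2 in zip(times_h, times_h[1:], concs_ug_ml, concs_ug_ml[1:]))
    return cmax, tmax, auc

# Sampling schedule from the study, prefixed with a zero predose point;
# the concentrations are illustrative values, not the study's data.
times = [0, 0.17, 0.5, 0.75, 1, 2, 3, 4, 6, 8, 12, 24]
concs = [0.0, 0.4, 1.9, 1.6, 1.2, 0.8, 0.9, 1.0, 0.6, 0.4, 0.2, 0.05]
cmax, tmax, auc0_t = nca(times, concs)
print(f"Cmax = {cmax:.2f} ug/mL at Tmax = {tmax} h; AUC0-t = {auc0_t:.2f} ug*h/mL")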
Conclusions
LN002 SDs, prepared using an optimized composition of HP-β-CD and Soluplus®, enhanced the solubility of LN002 in water by 47.50-52.13 times relative to LN002 alone. DSC, FT-IR, SEM, and PXRD revealed that the drug existed in an amorphous state in the SDs, indicating that LN002 was stably dispersed in the solid dispersions. Furthermore, the pharmacokinetic analysis of LN002 SDs in rats demonstrated that the absorption (Cmax: 0.85 µg/mL vs. 1.97 µg/mL) and bioavailability (AUC0-t: 2.28 µg·h/mL vs. 7.72 µg·h/mL) of LN002 were significantly improved.
Table 1. Pharmacokinetic parameters of LN002 and its solid dispersions.
Fast Eating Is Associated with Increased BMI among High-School Students
Fast self-reported eating rate (SRER) has been associated with increased adiposity in children and adults. No studies have been conducted among high-school students, and SRER has not been validated vs. objective eating rate (OBER) in such populations. The objectives were to investigate (among high-school student populations) the association between OBER and BMI z-scores (BMIz), the validity of SRER vs. OBER, and potential differences in BMIz between SRER categories. Three studies were conducted. Study 1 included 116 Swedish students (mean ± SD age: 16.5 ± 0.8, 59% females) who were eating school lunch. Food intake and meal duration were objectively recorded, and OBER was calculated. Additionally, students provided SRER. Study 2 included students (n = 50, mean ± SD age: 16.7 ± 0.6, 58% females) from Study 1 who ate another objectively recorded school lunch. Study 3 included 1832 high-school students (mean ± SD age: 15.8 ± 0.9, 51% females) from Sweden (n = 748) and Greece (n = 1084) who provided SRER. In Study 1, students with BMIz ≥ 0 had faster OBER vs. students with BMIz < 0 (mean difference: +7.7 g/min or +27%, p = 0.012), while students with fast SRER had higher OBER vs. students with slow SRER (mean difference: +13.7 g/min or +56%, p = 0.001). However, there was “minimal” agreement between SRER and OBER categories (κ = 0.31, p < 0.001). In Study 2, OBER during lunch 1 had a “large” correlation with OBER during lunch 2 (r = 0.75, p < 0.001). In Study 3, fast SRER students had higher BMIz vs. slow SRER students (mean difference: 0.37, p < 0.001). Similar observations were found among both Swedish and Greek students. For the first time in high-school students, we confirm the association between fast eating and increased adiposity. Our validation analysis suggests that SRER could be used as a proxy for OBER in studies with large sample sizes on a group level. With smaller samples, OBER should be used instead. To assess eating rate on an individual level, OBER can be used while SRER should be avoided.
Introduction
Obesity is associated with severe health problems such as heart disease [1], diabetes mellitus type 2 [1], depression [2], and some cancers [3]. Together, they are among the leading causes of premature death in the world, while at the same time being preventable [4]. An estimated 2 billion people now suffer from obesity [5] and the associated economic costs have been compared to the cost of smoking, armed violence, war and terrorism combined, or approximately three percent of the global GDP [6]. The direct cause of obesity is overconsumption of energy from food in relation to the energy demands of the body [7]. However, the causes of overconsumption of energy are multifactorial [8,9].
Interestingly, various eating behaviors have been shown to increase energy intake and adiposity in humans [10-12]. For example, experimental studies suggest that a fast eating rate causes increased short-term food intake vs. a slow eating rate (random-effects standardized mean difference: 0.45 [13]). Moreover, objectively measured eating rate is a stronger explanatory variable for how much food high-school students eat during school lunch than subjective variables, such as changes in perceived fullness and tastiness [14]. Objectively measured eating rate has also been shown to be associated with increased BMI and body fat among 4.5-year-old Singaporean children [15], as well as to moderate the association between important childhood obesity risk factors and adiposity outcomes at the age of 6 years [16].
A growing number of epidemiological studies have found a consistent association between self-reported fast eating rate and obesity (pooled odds ratio of 2.15 vs. slow eaters), as well as increased BMI (pooled mean difference in BMI = +1.78 kg/m2 vs. slow eaters [17]). Additionally, a randomized controlled trial in young people with obesity has shown benefits in training participants to eat slower and to take smaller food portions in treatment outcomes (i.e., 0.24 standard deviation score lower for both BMI and body fat vs. standard care after 18 months of follow-up [18]), suggesting that the association between faster eating rate and increased BMI might be causal. It should also be noted that disturbed eating speed is associated with eating disorders, either with excessively slow eating rates reported in anorexia nervosa or faster than normal eating rates in binge-eating disorder patients [19]. Furthermore, it has been proposed that long-term disturbance in eating behavior pattern is one of the causal factors for the development of eating disorders [20], with various clinical programs currently providing eating behavior training during treatment.
However, criticism has been raised against self-reported measures of dietary intake [21][22][23], and similar concerns exist regarding self-reported measures of eating rate. In the epidemiological literature related to eating rate and health outcomes, simple questionnaires asking people to estimate how fast they eat in comparison to "others" are used. Surprisingly, few studies have been conducted to investigate the concurrent validity of such questionnaires vs. objectively measured eating rate.
To date, three studies with relatively small sample sizes (n ≈ 60-80) have validated self-reported eating rate categories vs. objectively measured eating rate [24-26]. The results of these studies indicate that self-reported eating rate categories can differentiate groups with high vs. low objectively measured eating rates, while categorization on an individual level should be avoided due to "minimal" categorical agreement. All three of the above-mentioned studies were conducted in laboratory environments. No comparisons between objective and subjective eating rates have been performed in "real-life" situations, nor among high-school students. Therefore, the generalizability of these results can be questioned, and proper validation of self-reported eating rate is needed in such contexts. Additionally, no studies have been conducted in Swedish or Greek populations. The epidemiological self-reported literature on eating rate is also location-specific, as it has previously included predominantly Japanese populations (for example, 18-year-old women [27], 29- to 39-month-old children [28] and middle-aged men and women [29]), as well as middle-aged women in New Zealand [30], Dutch adults [24], Singaporean adults [31] and South Korean adults [32]. Thus, data across additional regions and target populations are also needed to clarify the generalizability of past findings.
Aims
The aims of our three studies (Study 1: the single school lunch study; Study 2: the repeated school lunch study; Study 3: the BigO cohort study) were:

Study 1. To investigate the association between objective eating rate, BMI z-scores, food intake and weight categories among high-school students, as well as to determine whether self-reported eating rate categories can distinguish groups of different objectively measured eating rates.

Study 2. To assess the concurrent validity of self-reported eating rate categories vs. objective eating rate categories, and to assess the test-retest reliability of both subjective and objective eating rates across repeated measures from the same high-school students.

Study 3. To distinguish differences in BMI z-scores among self-reported eating rate categories in larger populations of Swedish and Greek high-school students and to estimate its relation to BMI z-scores.
Study Design
A cross-sectional study design was used in studies 1 (the single school lunch study) and 3 (the BigO cohort study), while a repeated-measures, within-subject study design was used in Study 2 (the repeated school lunch study).
Setting
Studies 1 and 2 were conducted in the school lunch canteen environment at Internationella Engelska Gymnasieskolan Södermalm (IEGS) (Stockholm, Sweden). The studies were part of the multinational EU project SPLENDID, with the aim of developing systems for early detection of and interventions for childhood obesity in school children [33,34]. IEGS is a high school located in the central area of Stockholm, Sweden. Recruitment took place during February 2015, December 2015, and April 2017 (see Figure 1 for the complete timeline for studies 1 to 3). Study 2 included a subset of Study 1 subjects coming back for a repeated meal in the same school environment during February and March 2016 (2-3 months after their initial meal). The third study used data from a separate multinational EU project, BigO (Big data against childhood obesity [35]). In the BigO project, a smartphone application was developed for school children and adolescents to gather data related to the development of childhood obesity. Data collection took place through school-supported actions between March 2018 and June 2020. Presentations of the BigO project were conducted in selected schools in Sweden and in Greece, in collaboration with the local school personnel at each school.
Participants
In Study 1, students from six classes (187 students) at the IEGS high school were invited to participate in monitored school lunches. The outcome dataset with unique participants (non-repeated meals) included 15 students from 2015 (pilot meals), 97 students from December 2015 and early 2016 (this was the year when most of the data collection took place) and another 2 students from 2017 (students that did not participate in the preceding years). It is important to note that Study 1 only included unique participants, i.e., for students that participated in more than one year, only one meal (mainly the late 2015/early 2016 meal, secondly the early 2015 meal, and lastly the 2017 meal) was included in the dataset for Study 1.
In Study 2, students from four out of the six previously invited classes (in the late 2015 data collection) were invited to participate once again, with a subset of 50 students finally providing repeated meals two to three months later (February and March 2016).
For Study 3, the BigO project [35] supported the non-discriminative, large-scale, spontaneous recruitment of students (in this case aged 15-18 years) from different European countries, with the main volume of data collection taking place in Sweden and Greece. In Sweden, data collection included 748 students from two high schools, one in Stockholm (IEGS, n = 613) and one in Uppsala (NTI Gymnasiet, n = 135). In Greece (included students in total = 1084), data collection focused on three cities: Athens (Ellinogermaniki Agogi high school, n = 230), Larissa (Ekpaideutiria Mpakogianni, n = 111) and Thessaloniki (across 16 public and private high schools, n = 439). An additional 304 high-school students who participated in a multidisciplinary, personalized intervention program for the management of overweight and obesity (Out-patient Clinic for the Prevention and Management of Overweight and Obesity in Childhood and Adolescence, First Department of Pediatrics, "Aghia Sofia" Children's Hospital, Athens, Greece) also contributed data.
In all cases, irrespective of the country where the data collection took place, local school administrators, teachers and clinicians handled the consent process and final study recruitment, with the remote support of the BigO researchers. All data collection in schools was supported through school-sponsored projects with subjects relevant to lifestyle monitoring, as well as to citizen science [36]. As a rule, the recruitment efforts in schools targeted the whole school population through school-wide project advertisements, but the specifics differed across schools due to local educational requirements and schedules. Out of these populations, 1909 of the consented students activated the BigO app in their personal smartphones (Android and iOS platforms were supported). Furthermore, 96% (1832) provided answers to the self-reported eating rate questionnaire that is analyzed in Study 3, as well as provided self-reported estimates about their weight and height that were the basis of their BMI z-score calculation (using 2007 World Health Organization reference charts).
All the presented studies aimed at "real-life", all-inclusive student population analysis. Thus, recruitment took place in a non-discriminatory fashion, meaning that there were no inclusion/exclusion criteria other than being part of the included schools, being willing to take part in the study procedures, and providing informed consent.
Participants
In Study 1, students from six classes (187 students) at the IEGS high school were invited to participate in monitored school lunches. The outcome dataset with unique participants (non-repeated meals) included 15 students from 2015 (pilot meals), 97 students from December 2015 and early 2016 (this was the year when most of the data collection took place) and another 2 students from 2017 (students that did not participate in the preceding years). It is important to note that Study 1 only included unique participants, i.e., for students that participated in more than one year, only one meal (mainly the late 2015/early 2016 meal, secondly the early 2015 meal, and lastly the 2017 meal) was included in the dataset for Study 1.
In Study 2, students from four out of the six previously invited classes (in the late 2015 data collection) were invited to participate once again, with a subset of 50 students finally providing repeated meals two to three months later (February and March 2016).
For Study 3, the BigO project [35] supported the non-discriminative, large-scale, spontaneous recruitment of students (in this case aged 15-18 years) from different European countries, with the main volume of data collection taking place in Sweden and Greece. In Sweden, data collection included 748 students from two high schools, one in Stockholm (IEGS, n = 613) and one in Uppsala (NTI Gymnasiet, n = 135). In Greece (1084 included students in total), data collection focused on three cities: Athens (Ellinogermaniki Agogi high school, n = 230), Larissa (Ekpaideutiria Mpakogianni, n = 111) and Thessaloniki (across 16 public and private high schools, n = 439). An additional 304 high-school students who participated in a multidisciplinary, personalized intervention program for the management of overweight and obesity (Out-patient Clinic for the Prevention and Management of Overweight and Obesity in Childhood and Adolescence, First Department of Pediatrics, "Aghia Sofia" Children's Hospital, Athens, Greece) also contributed data.
In all cases, irrespective of the country where the data collection took place, local school administrators, teachers and clinicians handled the consent process and final study recruitment, with the remote support of the BigO researchers. All data collection in schools was supported through school-sponsored projects with subjects relevant to lifestyle monitoring, as well as to citizen science [36]. As a rule, the recruitment efforts in schools targeted the whole school population through school-wide project advertisements, but the specifics differed across schools due to local educational requirements and schedules. Out of these populations, 1909 of the consented students activated the BigO app in their personal smartphones (Android and iOS platforms were supported). Furthermore, 96% (1832) provided answers to the self-reported eating rate questionnaire that is analyzed in Study 3, as well as provided self-reported estimates about their weight and height that were the basis of their BMI z-score calculation (using 2007 World Health Organization reference charts).
All the presented studies aimed at "real-life", all-inclusive student population analysis. Thus, recruitment took place in a non-discriminatory fashion, meaning that there were no inclusion/exclusion criteria other than being part of the included schools, being willing to take part in the study procedures, and providing informed consent.
Data Sources/Measurements
BMI z-scores calculations: During studies 1 and 2, study personnel weighed each student and measured their height (using weight and height scales; Seca, Chino, CA 91710, USA) before they took part in the school lunch measurements. These measurements, together with the students' age and sex, enabled calculation of BMI z-scores. The BMI z-scores were derived from an online calculator (https://apps.cpeg-gcep.net/who2007_cpeg/ [37], accessed on 8 March 2021) that used the WHO reference charts for child growth [38]. Students were later categorized into two groups: (a) students with BMI z-scores < 0, and (b) students with BMI z-scores ≥ 0, in order to compare the results to those obtained in a previous study among children that used a similar categorization scheme [15].
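The z-score computation behind such calculators follows the standard WHO LMS transformation; as a rough illustration only, a minimal Python sketch is shown below. The LMS values in the example are placeholders, not actual entries from the WHO 2007 charts.

```python
import math

def bmi_zscore(weight_kg, height_cm, L, M, S):
    """Convert a measured BMI to a z-score with the LMS method.

    L, M and S are the age- and sex-specific reference parameters
    (skewness, median, coefficient of variation) from a growth chart.
    """
    bmi = weight_kg / (height_cm / 100) ** 2
    if L != 0:
        return ((bmi / M) ** L - 1) / (L * S)
    return math.log(bmi / M) / S

# Hypothetical LMS values for a 16-year-old boy (illustrative only):
print(bmi_zscore(weight_kg=70, height_cm=175, L=-1.4, M=20.9, S=0.13))
```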
Meal procedure at school: Upon arrival at the school lunch canteen environment, students were equipped with a portable food scale (Mandometer version 4 during early 2015 and Mandometer version 5 in the data collections that took place in late 2015, early 2016 and 2017 [39]), a mobile phone, as well as a questionnaire in paper format. The food scale was used to record food mass intake in grams (accounting for food additions and leftovers). In the lunchroom environment, digital cameras (GoPro) were placed in each corner of the room to assess each student's meal duration. The food mass intake (g) was later divided by the meal duration in minutes (established from the video recordings) to calculate the objective eating rate (g/minute) of each individual student. The questionnaires were used to assess the subjective eating rate of each student. The groups of slow, medium, and fast self-reported eating rate categories were used in accordance with previous literature (see the paragraph below for the specific questions used as well as the available answers) [17].
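A minimal sketch of the eating-rate computation, together with the tertile split used later in the analysis, might look as follows; the array names and values are illustrative, not the study data.

```python
import numpy as np

def objective_eating_rate(food_mass_g, duration_min):
    """Objective eating rate (g/min) per student."""
    return np.asarray(food_mass_g, float) / np.asarray(duration_min, float)

def tertile_labels(rates):
    """Split students into slow/medium/fast tertiles by eating rate."""
    lo, hi = np.quantile(rates, [1 / 3, 2 / 3])
    return np.where(rates < lo, "slow", np.where(rates < hi, "medium", "fast"))

rates = objective_eating_rate([310, 520, 240], [11.5, 12.0, 9.0])
print(rates, tertile_labels(rates))
```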
The BigO data: During Study 3, each participant was asked to provide an estimate of their current weight and height during the initial registration in the BigO app. Afterwards, they were requested to fill in an eating rate questionnaire item: "How fast do you eat in comparison to others?". The user could then select one of five options: "Much slower than others", "Slower than others", "Similar to others", "Faster than others" or "Much faster than others". Students who self-reported eating "Much slower than others" or "Slower than others" were later merged into the eating rate category "Slow", and those reporting eating "Faster than others" or "Much faster than others" were categorized as "Fast", in a similar fashion as in previous literature [17], while students reporting eating similar to others were labeled as "Intermediate".
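The merging of the five answer options into the three analysis categories is a simple lookup; the sketch below mirrors the scheme just described (the function and constant names are our own, not part of the BigO app).

```python
# Mapping from the five BigO answer options to the three analysis categories
MERGE = {
    "Much slower than others": "Slow",
    "Slower than others": "Slow",
    "Similar to others": "Intermediate",
    "Faster than others": "Fast",
    "Much faster than others": "Fast",
}

def merge_eating_rate(answer: str) -> str:
    """Collapse a raw questionnaire answer into Slow/Intermediate/Fast."""
    return MERGE[answer]
```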
Served Food
During studies 1 and 2, students were offered a standardized buffet lunch meal in their natural school canteen environment that included potatoes, beef patties, celery patties, fish (pollock), cream sauce, vegetables (such as sliced carrots, cucumber, lettuce, sprouts, olives), crisp bread, cottage cheese and jam, all ad libitum. Water and milk were also available ad libitum during the lunch. This type of buffet meal is typical for Swedish schools and is served in IEGS daily. The study setup has previously been described in greater detail; see [14,40].
Study Size
Since both EU projects (SPLENDID and BigO) were novel health-technology projects, the sample sizes in studies 1-3 were determined by the students available during each round of recruitment for the development of the technology. Therefore, post hoc power analyses were conducted with the different aims and main outcomes in mind. In Study 1, 100% power was achieved for the primary ANOVA outcome analysis of group-level differences in objective eating rate between the three groups of self-reported eating rate (effect size f = 0.742, alpha error probability = 0.05, total sample size = 114, and three groups). In Study 3, 98.7% power was achieved in the primary ANOVA outcome analysis of differences in BMI z-scores among the self-reported eating rate categories (effect size f = 0.105, alpha error probability = 0.05, total sample size = 1832, and three groups).
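Such post hoc power figures can be reproduced with any tool implementing Cohen's f-based ANOVA power; the sketch below uses statsmodels with the Study 1 parameters as one possible option, not necessarily the software used here.

```python
from statsmodels.stats.power import FTestAnovaPower

# Post hoc power for the Study 1 ANOVA: three self-reported eating-rate
# groups, Cohen's f = 0.742, alpha = 0.05, total n = 114
power_study1 = FTestAnovaPower().power(
    effect_size=0.742, nobs=114, alpha=0.05, k_groups=3)
print(f"achieved power: {power_study1:.3f}")
```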
Statistics
For Study 1, Pearson's correlation was used to assess the association between objective eating rate and objective food mass intake. One-way analysis of variance (ANOVA) was used to assess differences in objective food mass intake across both objective and subjective eating rate categories (slow, intermediate, and fast). Bonferroni post hoc tests were conducted to assess specific group-level differences. An independent-samples t-test was used to assess group-level differences in objective eating rate between the two weight categories (BMI z-scores < 0 vs. BMI z-scores ≥ 0).
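As an illustration, the core of these tests can be reproduced with SciPy; the data below are synthetic stand-ins, not the study measurements.

```python
import numpy as np
from scipy import stats

# Synthetic objective eating rates (g/min) for three eating-rate groups
rate_slow, rate_mid, rate_fast = np.random.default_rng(0).normal(
    [25, 32, 39], 6, size=(40, 3)).T

# One-way ANOVA across the three groups
f_stat, p_overall = stats.f_oneway(rate_slow, rate_mid, rate_fast)

# Bonferroni-corrected pairwise comparisons (three tests -> p * 3, capped at 1)
pairs = [(rate_slow, rate_mid), (rate_slow, rate_fast), (rate_mid, rate_fast)]
p_adj = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs)) for a, b in pairs]
print(f_stat, p_overall, p_adj)
```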
Cohen's weighted kappa analysis was used for categorical eating rate agreement [42], i.e., for agreement between self-reported vs. objective eating rate categories in Study 1, for agreement between self-reported eating rate categories during lunch 1 vs. self-reported eating rate categories during lunch 2 (in Study 2) and for objective eating rate categories during lunch 1 vs. objective eating rate categories during lunch 2 (in Study 2). Furthermore, Pearson's correlation was used for test-retest reliability analysis in Study 2 (eating rate during repeated school lunch meals among the same subjects), while a publicly available spreadsheet was used for the calculation of other test-retest reliability measures (i.e., the systematic change in mean and the typical error of measurement) [43,44].
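For reference, the agreement and test-retest quantities above can be computed as sketched below with made-up data: the weighted kappa comes from scikit-learn, and the typical error of measurement is the standard deviation of the difference scores divided by the square root of 2, following [43,44]; the study reports it as a percentage, which additionally involves a log-transformation in the referenced spreadsheet.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Illustrative labels and rates for the same students at two lunches
labels_lunch1 = ["slow", "fast", "medium", "fast", "slow"]
labels_lunch2 = ["slow", "medium", "medium", "fast", "slow"]
rate_lunch1 = np.array([21.0, 38.5, 30.2, 41.0, 19.4])
rate_lunch2 = np.array([24.1, 40.0, 33.5, 44.2, 22.0])

# Categorical agreement between the two lunches
kappa = cohen_kappa_score(labels_lunch1, labels_lunch2, weights="linear")

# Test-retest measures for the continuous eating rate (raw units)
diff = rate_lunch2 - rate_lunch1
change_in_mean = diff.mean()                    # systematic change in the mean
typical_error = diff.std(ddof=1) / np.sqrt(2)   # typical error of measurement
print(kappa, change_in_mean, typical_error)
```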
In Study 3 (BigO cohort), ANOVA was used for all statistical tests between the different groups of self-reported eating rate categories (slow, intermediate, and fast), with Bonferroni post hoc tests conducted to assess specific group level differences when the overall ANOVA model was significant.
Bias
BMI z-scores were used in all analyses instead of BMI to account for the natural growth occurring among adolescents (i.e., BMI z-scores take into account age and sex in addition to weight and height [47]). In Study 3, in addition to the total population-level analysis, students were also split into separate groups based on their country (Sweden and Greece) to assess potential differences in self-reported eating rate within each country. A similar procedure was also applied to the students who were participating in a personalized multidisciplinary management of obesity in Athens, Greece (n = 304, mean BMI z-score: 2.19, standard deviation: 0.89).
Participants
For specifics on the number of students at each stage of studies 1-3, see Figure 2 below.
Descriptive Data
For descriptive statistics related to the included students in studies 1-3, see Table 1.
Study 1. Single School Lunch Study
Association between Objective Eating Rate and Objective Food Mass Intake (g)
There was a significant "large" correlation (Pearson's r = 0.667, p < 0.001) between eating rate (g/min) and food mass intake (g) during the school lunch (see Figure 3A).
Association between Objective Eating Rate (g/min) and BMI z-Scores
There was also a significant "moderate" correlation (Pearson's r = 0.310, p = 0.001) between objective eating rate (g/min) and BMI z-scores during the school lunch (see Figure 3B).
Objective Food Mass Intake among Objectively Established Eating Rate Categories
There was a significant difference in objectively measured food mass intake (g) between the objectively established eating rate categories [F(2, 111) = 30.578, p < 0.001].

Objective Eating Rate among BMI z-Score Based Weight Categories
Students with a BMI z-score below 0 had a significantly lower eating rate (g/min) vs. students with a BMI z-score equal to or above 0 (28.2 g/min vs. 35.9 g/min, respectively; mean difference = −7.7 g/min, p = 0.012, 95% CI: −14 g/min to −1.8 g/min; see Figure 5).

Figure 5. Objective eating rate among students with BMI z-score below 0 (n = 52) vs. students with BMI z-score equal to or above 0 (n = 62). * = significantly higher vs. BMI z-score < 0 group. Error bars represent 95% confidence intervals.
Objective Eating Rate among Self-Reported Slow, Intermediate, and Fast Eaters
There was a significant difference in objectively measured eating rate (g/min) between the self-reported eating rate categories [F(2, 111) = 7.104, p = 0.001, partial η² = 0.113]. Post hoc comparisons using Bonferroni revealed a statistically significant difference between slow and fast eaters [mean difference = −13.7 g/min, 95% CI = −22.5 g/min to −4.84 g/min; p = 0.001]. However, the differences between slow and intermediate and between intermediate and fast were not significant (Figure 6).
Categorical Agreement: Self-Reported vs. Objective Eating Rate Categories
The weighted kappa value for self-reported eating rate categories vs. objectively established eating rate categories in Study 1 was 0.31 (p < 0.001).
Study 2. Repeated School Lunch Study
Categorical Agreement: Self-Reported vs. Objectively Established Eating Rate Categories, Lunches 1 and 2
The weighted kappa value for self-reported vs. objectively established eating rate categories during lunch 1 was 0.36 (p = 0.009), while the corresponding value during lunch 2 was 0.30 (p = 0.036).
Categorical Agreement: Self-Reported Eating Rate Categories, Lunch 1 vs. Lunch 2
The weighted kappa value for self-reported eating rate categories during lunch 1 vs. lunch 2 was 0.62 (p < 0.001).
Test-Retest Reliability of Objective Eating Rate: Lunch Meal 1 vs. Lunch Meal 2
Objectively measured eating rate during lunch 1 was significantly correlated with objectively measured eating rate during lunch 2 (Pearson's correlation = 0.75, 95% CI: 0.59-0.85); see Figure 7. There was a systematic change in the mean objectively measured eating rate from lunch 1 to lunch 2 (+4.4 g/min, 95% CI: 0.7-8.1 g/min), and the typical error of measurement for eating rate from lunch 1 to lunch 2 was 24.9% (95% CI: 20.4-31.9%). For more information on individual changes in objectively measured eating rate (g/min) from lunch 1 to lunch 2, see Figure 8.
Study 3. BigO Cohort Study
The BigO Cohort: BMI z-Scores among Self-Reported Eating Rate (Categorical)
There was a significant difference in BMI z-scores between the three groups of self-reported eating rate. When dividing the BigO cohort population into groups of Swedish (n = 748) and Greek (n = 1084) students, there were also significant differences in BMI z-scores between the self-reported eating rate groups within each country. When analyzing students who were treated at the obesity clinic in Athens (n = 304) separately, there were no significant differences between the three groups of self-reported eating rate.
Discussion
This study assessed the concurrent validity of self-reported eating rate compared to objectively measured eating rate and the test-retest reliability of such measurements in a "real-life" school lunch context. Additionally, the association between eating rate and indices of obesity (i.e., BMI z-scores and increased food mass intake during school lunch) was investigated among Swedish and Greek high-school student populations.
The finding of a "large" [45] correlation between objectively measured eating rate and short-term food intake is in line with a study in 4.5-year-old Singaporean children [15], as well as experimental studies on effects of eating rate on short-term energy intake [13]. Interestingly, when dividing students into tertiles based on their objectively measured speed of eating, students who were in the "fast" eating rate tertile ate 247 g more food vs. students in the "slow" eating rate tertile (~117% increase). This finding is in line with results obtained in the study that included Singaporean children [15]. Furthermore, students with BMI z-scores ≥ 0 had~27% higher eating rate during school lunch vs. students with BMI z-scores < 0. Taken together, these results corroborate the suggestion of a "obesogenic" fast eating style that has previously been reported in younger children [48]. In other words, students who eat faster than their peers might be at greater risk of developing obesity, most likely due to a combination of genetic and environmental factors [49]. However, these results should be interpreted with caution, since students with increased BMI z-scores most likely have higher resting energy expenditure due to their larger bodies [50]. In other words, students with BMI z-scores > 0 might need to eat faster vs. students with BMI z-scores < 0 given the same amount of time to eat (here pre-defined by the time allocated for lunch by the school), to cover their energy needs [50,51].
We could also confirm the results observed among older populations in laboratory settings [24][25][26], namely that self-reported eating rate categories (slow, intermediate, and fast) could distinguish differences in objectively measured eating rate on a group level. More specifically, students who reported eating faster than others had ~56% higher objectively measured eating rate vs. students who self-reported eating slower than others, on a group level. Self-reported eating rate categories could explain ~11% of the variance in objectively measured eating rate in our study (partial η² = 0.113). We are the first to confirm such findings in a "real-life" setting (i.e., the school canteen environment). However, in accordance with previous results [24,26], there was "minimal" agreement [46] between self-reported eating rate categories and objectively established eating rate categories on an individual level.
The above findings support the notion that self-reported eating rate could be used in larger-scale epidemiological studies to differentiate populations with different objective eating rates. On the other hand, self-reported eating rate should not be used on an individual level, i.e., in the context of clinical management of obesity in childhood or adolescence. It should be noted that in practice, due to monetary and time cost requirements, when it comes to monitoring of larger populations, the use of self-reported measures is often the only option. However, we argue that objective measures of eating rate should be used when eating rate is considered on an individual level.
Furthermore, when we tested the same individuals at consecutive time points (3 months apart), there was "moderate" agreement between self-reported eating rate categories during lunch 1 vs. lunch 2, as well as between objective eating rate categories during lunch 1 vs. lunch 2. These results suggest that both objective and self-reported eating rate categories are moderately stable on an individual level. Additionally, on a group level, self-reported eating rate categories could distinguish group level differences in objectively measured eating rate during both lunch 1 and lunch 2 (i.e., subjects who self-reported eating slow had lower objective eating rate vs. subjects who self-reported eating fast on a group level).
When assessing the test-retest reliability of the more granular measurement of objective eating rate (expressed as grams of food eaten/minute), there was a "large" correlation between repeated measures of objective eating rate during lunch 1 and lunch 2, indicating that objective eating rate in one lunch meal could be used to predict the rank of students eating rate during the second lunch meal with high accuracy. On the other hand, there was a systematic bias (~15%) between the two lunches (students were eating their meals 4.4 g/min faster during lunch 2 vs. lunch 1) although the same students (n = 50) participated in both lunch 1 and lunch 2 (i.e., within subject design) and the study setting was almost identical (i.e., with the same time duration available to eat, identical food choices, same time of the day, etc.). However, in Study 2, n = 37 students were eating their lunch without a food scale (i.e., the food was weighed at the food buffet before/after taking food, instead of continuously under the plate when eating food), while in Study 1, all (n = 50) students were eating their food on a food scale. A potential "observer effect" [52] could therefore be expected. However, food scale use at the lunch table vs. non-use at the lunch table during lunch 2 could explain 0% (R2 = 0.000) of the variation in the change in eating rate from lunch 1 to lunch 2. We therefore argue that the systematic change in eating rate must have been caused by another unmeasured environmental factor. Additionally, the typical error of measurement was similar to what has been observed for food mass intake during repeated school lunches, 24.9% in the current study vs. 26.1% for food intake in our previous study [14].
In our larger sample of students who self-reported their eating rate (i.e., the BigO cohort), we could show that students who self-reported eating faster than others had 0.4 units higher BMI z-scores vs. students who reported eating slower than others. This finding was significant both among Swedish and Greek students and was expected based on the overall epidemiological literature [17]. However, the explanatory power of self-reported eating rate categories was low in our study: it could explain ~1% of the variance in BMI z-scores. These results are in line with the results that we obtained in the single school lunch study (Study 1), in which self-reported eating rate categories could explain ~1% of the variance in BMI z-scores. On the other hand, objectively established eating rate categories could explain approximately 5% of the variance in BMI z-scores, and objective eating rate expressed as grams eaten per minute could explain ~10% of the variance in BMI z-scores in Study 1. These results suggest that self-reported eating rate cannot capture the full size of the association between "real" eating rate and BMI z-scores among student populations.
Additionally, in our clinical sample of Greek students with overweight/obesity (n = 304), the difference in BMI z-scores among the self-reported eating rate categories did not reach statistical significance, although it showed a similar tendency to our larger sample, i.e., patients with slow self-reported eating rate had lower BMI z-scores vs. patients with fast self-reported eating rate (not significant). However, the sample size was too small for such a comparison, since the expected effect size of eating rate categories on BMI z-scores was low. Therefore, a larger-scale study (at the scale of thousands) would be needed in a clinical student population to investigate such an association. Alternatively, objective measurements of eating rate could be utilized in a clinical high-school population to increase the power and reduce the need for a larger sample size.
Our results, combined with previous studies in the area of eating rate [17,[24][25][26], support the idea of including self-reported questionnaire items on eating rate in population-level investigations of dietary intake among students. For example, the national food agency in Sweden conducts population-level surveys about self-reported dietary intake during school lunches [53] and could include an additional question about self-reported eating rate. Such population-level information would give a more detailed understanding of the association between eating rate and BMI z-scores in the total Swedish student population, as well as enable investigations of potential regional differences in eating rates among high schools. Similar organizations in other countries could conduct similar research without much added cost. Additionally, since objective measures are needed to assess eating rate on an individual level, large-scale studies that incorporate modern technological tools with such capability (i.e., food scales [39], off-the-shelf smartwatches [54] or algorithm-assisted video recordings [55]) should be considered in order to investigate the prospective association between fast eating rate and risk of long-term disease development. Such population-level data could give real-time policy advice to better manage environmental contributors to fast eating rate (i.e., school-level interventions to increase time available for eating school lunch) [14].
It is important to acknowledge that the data collection in studies 1 and 3 was observational in nature. Therefore, there is uncertainty about the direction of the relationship between eating rate and BMI z-scores in our studies. Large-scale prospective studies are the logical next step, and experimental studies that manipulate environmental modifiers of eating rate could also be helpful in the school lunch context. Furthermore, the students who were included in studies 1 and 2 were eating their school meals under observation (i.e., video cameras recording their meals as well as their food being measured by a kitchen scale). This addition to their normal school lunch context might have contributed to some form of "observer effect" [52,56]. Therefore, the discrepancy between objectively measured eating rate and self-reported eating rate categories might have been affected by this setup. Covert measures of objective eating rate would be needed in the school lunch context to evaluate such an effect (i.e., a study setup similar to what was used in [52]). Additionally, the questionnaire that was used to assess self-reported eating rate did not refer to a specific time point (i.e., it was framed in more general terms about habitual eating rate), while the objective eating rate test was time- and context-specific (i.e., one school lunch). Also, students who had already participated in Study 1 were invited to participate in Study 2, which might have introduced self-selection bias into the results obtained from the reliability analysis. The observed bias between lunch 1 and lunch 2 (+4.4 g/min higher eating rate during lunch 2) might have been related to the dates of the two lunches: lunch 1 was conducted at the end of the semester (December 2015), while lunch 2 was conducted at the beginning of the next semester (February/March 2016), and students might have been more stressed at the end of the semester. The measurement of students' body weight in close connection to the school lunch might also have modified their eating behavior (perhaps making the students more conscious of how they were eating). It is also important to mention that self-reported measurements of height and weight were used to calculate BMI z-scores in Study 3, and potential bias might therefore have been introduced due to this as well [57]. Furthermore, students were eating their school lunch together with other students, and some form of "social facilitation" of eating (i.e., increased food intake) might have occurred since they were eating with their peers [58]. However, since students usually eat their school lunch together with their classmates, this study setup is most likely preferable to having students eat their lunch in isolation (i.e., in a laboratory-like setting). Lastly, the objective measures of eating rate in studies 1 and 2 were conducted with Swedish high-school students, meaning that the results might not be fully generalizable to the Greek student population included in Study 3 (or to students from other countries). Additionally, IEGS is a privately owned high school in central Stockholm, and its student population might not be representative of the overall Swedish high-school student population or of student populations in areas of lower socioeconomic status in Sweden (such as, perhaps, the NTI Gymnasiet in Uppsala included in Study 3).
Conclusions
Objectively measured eating rate was associated with the weight status of students, and students with a fast eating rate consumed more food during school lunch vs. students with a slow eating rate. Furthermore, self-reported eating rate could distinguish student populations of different eating rates (i.e., slow eaters had lower objective eating rate vs. fast eaters). However, the agreement between self-reported eating rate categories (slow, intermediate, and fast) vs. objectively measured eating rate categories was "minimal". Our results suggest that when objective measures of eating rate are available, those should be used instead of self-reported measures. However, in studies with large sample sizes, when the behavior of the population is of interest, self-reported measures of eating rate categories could be used as a proxy of real eating rate on a group level. Furthermore, it should be emphasized that when the aim is to assess eating rate on an individual level, objectively measured eating rate should be used and self-reported eating rate categories should be avoided, regardless of the sample size. Lastly, in our larger population of Swedish and Greek high-school students, students with fast self-reported eating rate had higher BMI z-scores vs. students who self-reported a slow eating rate, and the results were similar among both Swedish and Greek students.
Institutional Review Board Statement: The procedures followed during studies 1-3 were approved by the Swedish Ethical Review Board regarding the data collection from the Swedish students (DNR 2014.2100-31.2, 2015.1824-31, 2017/339-31 and 2018/1921-31/5), and by the Committee for Proper Practice in Research at the Aristotle University of Thessaloniki (DNR: 132649/2017) regarding the data collection from Greek students during Study 3. Additionally, Study 3 was approved by the Committee on the Ethics of Human Research of 'Aghia Sophia' Children's Hospital (DNR: 26660/16-11-17). All study procedures were in accordance with the standards set by the Helsinki Declaration.
Informed Consent Statement: Informed consent was obtained from all subjects involved in studies 1-3.
Data Availability Statement: The data presented in studies 1-3 are available on request from the corresponding author.
Scalable Tucker Factorization for Sparse Tensors - Algorithms and Discoveries
Given sparse multi-dimensional data (e.g., (user, movie, time; rating) for movie recommendations), how can we discover latent concepts/relations and predict missing values? Tucker factorization has been widely used to solve such problems with multi-dimensional data, which are modeled as tensors. However, most Tucker factorization algorithms regard and estimate missing entries as zeros, which triggers a highly inaccurate decomposition. Moreover, few methods focusing on an accuracy exhibit limited scalability since they require huge memory and heavy computational costs while updating factor matrices. In this paper, we propose P-Tucker, a scalable Tucker factorization method for sparse tensors. P-Tucker performs an alternating least squares with a gradient-based update rule in a fully parallel way, which significantly reduces memory requirements for updating factor matrices. Furthermore, we offer two variants of P-Tucker: a caching algorithm P-Tucker-CACHE and an approximation algorithm P-Tucker-APPROX, both of which accelerate the update process. Experimental results show that P-Tucker exhibits 1.7-14.1x speed-up and 1.4-4.8x less error compared to the state-of-the-art. In addition, P-Tucker scales near linearly with the number of non-zeros in a tensor and number of threads. Thanks to P-Tucker, we successfully discover hidden concepts and relations in a large-scale real-world tensor, while existing methods cannot reveal latent features due to their limited scalability or low accuracy.
I. INTRODUCTION
Given a large-scale sparse tensor, how can we discover latent concepts/relations and predict missing entries? How can we design a time and memory efficient algorithm for analyzing a given tensor? Various real-world data can be modeled as tensors or multi-dimensional arrays (e.g., (user, movie, time; rating) for movie recommendations). Many real-world tensors are sparse and partially observable, i.e., composed of a vast number of missing entries and a relatively small number of observable entries. Examples of such data include item ratings [1], social networks [2], and web search logs [3], where most entries are missing. Tensor factorization has been used effectively for analyzing tensors [4], [5], [6], [7], [8], [9], [10]. Among tensor factorization methods [11], Tucker factorization has received much interest since it is a generalized form of other factorization methods like CANDECOMP/PARAFAC (CP) decomposition, and it allows us to examine not only latent factors but also relations hidden in tensors.
While many algorithms have been developed for Tucker factorization [12], [13], [14], [15], most methods produce highly inaccurate factorizations since they assume and predict missing entries as zeros, even though the values of those missing entries are unknown. Moreover, existing methods focusing only on observed entries exhibit limited scalability since they exploit tensor operations and singular value decomposition (SVD), leading to heavy memory and computational requirements. In particular, tensor operations generate huge intermediate data for large-scale tensors, a problem called intermediate data explosion [16]. A few Tucker algorithms [17], [18], [19], [20] have been developed to address the above problems, but they fail to solve the scalability and accuracy issues at the same time. In summary, the major challenges for decomposing sparse tensors are 1) how to handle missing entries for an accurate and scalable factorization, and 2) how to avoid intermediate data explosion and high computational costs caused by tensor operations and SVD.

TABLE I: Scalability summary of our proposed method P-TUCKER and competitors (TUCKER-WOPT [18], TUCKER-CSF [20], S-HOTSCAN [17], and P-TUCKER, compared on Scale, Speed, Memory, and Accuracy). A check-mark of a method indicates that the algorithm is scalable with a particular aspect. P-TUCKER is the only method scalable with all aspects of tensor scale, factorization speed, memory requirement, and accuracy of decomposition; on the other hand, competitors have limited scalability for some aspects.
In this paper, we propose P-TUCKER, a scalable Tucker factorization method for sparse tensors. P-TUCKER performs an alternating least squares (ALS) with a gradient-based update rule, which focuses only on observed entries of a tensor. The gradient-based approach considerably reduces the amount of memory required for updating factor matrices, enabling P-TUCKER to avoid the intermediate data explosion problem. In addition, to speed up the update procedure, we provide its time-optimized versions: a caching method P-TUCKER-CACHE and an approximation method P-TUCKER-APPROX. P-TUCKER fully employs multi-core parallelism by carefully allocating rows of a factor matrix to each thread considering independence and fairness. Table I summarizes a comparison of P-TUCKER and competitors with regard to various aspects.
Our main contributions are the following:
• Algorithm. We propose P-TUCKER, a scalable Tucker factorization method for sparse tensors. P-TUCKER not only enhances the accuracy of factorization by focusing on observed values but also achieves higher scalability by utilizing a gradient-based ALS rather than tensor operations and SVD for updating factor matrices.
• Theory. We suggest a row-wise update rule for factor matrices and prove its correctness and convergence. Moreover, we analyze the time and memory complexities of P-TUCKER and other methods, as summarized in Table III.
• Performance. P-TUCKER provides the best performance across all aspects: tensor scale, factorization speed, memory requirement, and accuracy of decomposition. Experimental results demonstrate that P-TUCKER achieves 1.7-14.1× speed-up with 1.4-4.8× less error for large-scale tensors, as summarized in Figures 6, 7, and 11.
• Discovery. P-TUCKER successfully reveals hidden concepts and relations in a large-scale real-world tensor, the MovieLens dataset, while the state-of-the-art methods cannot identify latent features due to their limited scalability or low accuracy (see Tables V-VI).
The source code of P-TUCKER and the datasets used in this paper are publicly available at https://datalab.snu.ac.kr/ptucker/ for reproducibility. The rest of this paper is organized as follows. Section II explains preliminaries on tensors, their operations, and factorization methods. Section III describes our proposed method P-TUCKER. Section IV presents experimental results of P-TUCKER and other methods. Section V describes our discovery results on the MovieLens dataset. After introducing related work in Section VI, we conclude in Section VII.
II. PRELIMINARIES
In this section, we describe the preliminaries of a tensor in Section II-A, its operations in Section II-B, and its factorization methods in Section II-C. Notations and definitions are summarized in Table II.
A. Tensor
Tensors, or multi-dimensional arrays, are a generalization of vectors (1-order tensors) and matrices (2-order tensors) to higher orders. As a matrix has rows and columns, an N-order tensor has N modes, whose lengths (also called dimensionalities) are denoted by I_1 through I_N, respectively. We denote tensors by boldface Euler script letters (e.g., X), matrices by boldface capitals (e.g., A), and vectors by boldface lowercases (e.g., a). An entry of a tensor is denoted by the symbolic name of the tensor with its indices in subscript. For example, a_{i_1 j_1} indicates the (i_1, j_1)th entry of A, and X_{(i_1,...,i_N)} denotes the (i_1, ..., i_N)th entry of X. The i_1-th row of A is denoted by a_{i_1:}, and the i_2-th column of A is denoted by a_{:i_2}.
B. Tensor Operations
We review some tensor operations used for Tucker factorization. More tensor operations are summarized in [11].
TABLE II: Table of symbols.

Symbol | Definition
X | input tensor (∈ R^{I_1×...×I_N})
N | order of X
I_n, J_n | dimensionality of the nth mode of X and G
A^{(n)} | nth factor matrix (∈ R^{I_n×J_n})
a^{(n)}_{i_n j_n} | (i_n, j_n)th entry of A^{(n)}
Ω | set of observable entries of X
Ω^{(n)}_{i_n} | set of observable entries whose nth mode's index is i_n
|Ω|, |G| | number of observable entries of X and G
λ | regularization parameter for factor matrices
||X|| | Frobenius norm of tensor X
T | number of threads
α | an entry (i_1, ..., i_N) of the input tensor X
β | an entry (j_1, ..., j_N) of the core tensor G
P_res | cache table (∈ R^{|Ω|×|G|})
p | truncation rate

Definition 1 (Frobenius Norm): Given an N-order tensor X (∈ R^{I_1×...×I_N}), the Frobenius norm of X is denoted by ||X|| and defined as follows:

$\|X\| = \sqrt{\sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} X_{(i_1,...,i_N)}^2}$   (1)

Definition 2 (Matricization/Unfolding): Matricization transforms a tensor into a matrix. The mode-n matricization of a tensor X ∈ R^{I_1×I_2×···×I_N} is denoted as X_{(n)}. The mapping from an element (i_1, ..., i_N) of X to an element (i_n, j) of X_{(n)} is given as follows:

$j = 1 + \sum_{k=1,\, k \neq n}^{N} \left[ (i_k - 1) \prod_{m=1,\, m \neq n}^{k-1} I_m \right]$   (2)

Note that all indices of a tensor and a matrix begin from 1.

Definition 3 (n-Mode Product): The n-mode product enables multiplication between a tensor and a matrix. The n-mode product of a tensor X ∈ R^{I_1×I_2×···×I_N} with a matrix U ∈ R^{J_n×I_n} is denoted by X ×_n U (∈ R^{I_1×···×I_{n-1}×J_n×I_{n+1}×···×I_N}). Element-wise, we have

$(X \times_n U)_{i_1 \cdots i_{n-1}\, j_n\, i_{n+1} \cdots i_N} = \sum_{i_n=1}^{I_n} X_{(i_1,...,i_N)}\, u_{j_n i_n}$   (3)

Our proposed method P-TUCKER is based on Tucker factorization, one of the most popular decomposition methods. More details about other factorization algorithms are summarized in Section VI and [11].
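To make Definitions 2 and 3 concrete, the following NumPy sketch implements a mode-n unfolding and the n-mode product. It uses one common unfolding convention (mode n moved to the front before flattening), which may order columns differently from (2) but yields the same n-mode product.

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: bring mode n to the front, flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: restore the tensor of the given shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(X, U, n):
    """n-mode product X x_n U of Definition 3, computed via unfolding."""
    shape = list(X.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(X, n), n, shape)

# Sanity check on a random 3-order tensor
X = np.random.rand(4, 5, 6)
U = np.random.rand(3, 5)
assert mode_n_product(X, U, 1).shape == (4, 3, 6)
```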
C. Tensor Factorization Methods
Definition 4 (Tucker Factorization): Given an N-order tensor X (∈ R^{I_1×...×I_N}), Tucker factorization approximates X by a core tensor G (∈ R^{J_1×...×J_N}) and factor matrices {A^{(n)} ∈ R^{I_n×J_n} | n = 1...N}. Figure 1 illustrates a Tucker factorization result for a 3-way tensor. The core tensor G is assumed to be smaller and denser than the input tensor X, and the factor matrices A^{(n)} are normally assumed to be orthogonal. Regarding interpretations of factorization results, each factor matrix A^{(n)} represents the latent features of the object related to the nth mode of X, and each element of the core tensor G indicates the weight of the relation composed of columns of factor matrices. Tucker factorization with tensor operations is presented as follows:

$\min_{G,\, A^{(1)}, \dots, A^{(N)}} \left\| X - G \times_1 A^{(1)} \times_2 A^{(2)} \cdots \times_N A^{(N)} \right\|$   (4)

Note that the loss function (4) is calculated over all entries of X, and all missing values of X are regarded as zeros. Concurrently, an element-wise expression is given as follows:

$X_{(i_1,...,i_N)} \approx \sum_{\forall (j_1,...,j_N) \in G} G_{(j_1,...,j_N)} \prod_{n=1}^{N} a^{(n)}_{i_n j_n}$   (5)

Equation (5) is used to predict values of missing entries after G, A^{(1)}, ..., A^{(N)} are found. We define the reconstruction error of a Tucker factorization of X by the following rule, where X̃_α denotes the approximation (5) and Ω is the set of observable entries of X:

$\mathrm{error}(X) = \sqrt{\sum_{\forall \alpha \in \Omega} \left( X_{\alpha} - \tilde{X}_{\alpha} \right)^2}$   (6)
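A direct reading of the element-wise form (5) is sketched below in NumPy: a single entry is reconstructed by contracting each mode of the core with the corresponding factor row. Names are illustrative.

```python
import numpy as np

def predict_entry(index, G, factors):
    """Element-wise Tucker reconstruction (Eq. (5)) of one tensor entry:
    the sum over all core entries of G[j1..jN] * prod_n A^(n)[i_n, j_n]."""
    val = G
    for n, i_n in enumerate(index):
        # contract the current leading core mode with the i_n-th factor row
        val = np.tensordot(factors[n][i_n, :], val, axes=([0], [0]))
    return float(val)

# Example: a 3-order tensor with core size 2x2x2
G = np.random.rand(2, 2, 2)
factors = [np.random.rand(4, 2), np.random.rand(5, 2), np.random.rand(6, 2)]
print(predict_entry((0, 3, 2), G, factors))
```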
Definition 5 (Tucker Factorization for Partially Observable Tensors): Given a tensor X with a set of observable entries Ω, the goal is to find a core tensor G and factor matrices A^{(n)} that minimize

$L(G, A^{(1)}, \dots, A^{(N)}) = \sum_{\forall \alpha \in \Omega} \left( X_{\alpha} - \tilde{X}_{\alpha} \right)^2 + \lambda \sum_{n=1}^{N} \left\| A^{(n)} \right\|^2$   (7)

Note that the loss function (7) only depends on observable entries of X, and L2 regularization is used in (7) to prevent overfitting, which has been generally utilized in machine learning problems [21], [22], [23].

Definition 6 (Alternating Least Squares): To minimize the loss functions (4) and (7), an alternating least squares (ALS) technique is widely used [11], [14], which updates one factor matrix or the core tensor while keeping all others fixed.
Algorithm 1 describes a conventional Tucker factorization based on ALS, which is called the higher-order orthogonal iteration (HOOI) (see [11] for details).

Algorithm 1: Tucker-ALS
Input: Tensor X ∈ R^{I_1×I_2×···×I_N}, and core tensor dimensionality J_1, ..., J_N.
Output: Updated factor matrices A^{(n)} ∈ R^{I_n×J_n} (n = 1, ..., N), and updated core tensor G ∈ R^{J_1×J_2×···×J_N}.
1: initialize all factor matrices A^{(n)}
2: repeat
3:   for n = 1, ..., N do
4-5:   compute Y^{(n)} by tensor operations and update A^{(n)} by SVD
6: until the max. iteration or the reconstruction error converges
7: update core tensor G

The computational and memory bottleneck of Algorithm 1 is updating the factor matrices A^{(n)} (lines 4-5), which requires tensor operations and SVD. Specifically, Algorithm 1 requires storing a full-dense matrix Y^{(n)}, and the amount of memory needed for storing Y^{(n)} is O(I_n ∏_{m≠n} J_m). The required memory grows rapidly when the order, the dimensionality, or the rank of a tensor increases, and ultimately causes intermediate data explosion [16]. Moreover, Algorithm 1 computes an SVD of the given Y^{(n)}, where the complexity of exact SVD is O(min(I_n ∏_{m≠n} J_m², I_n² ∏_{m≠n} J_m)). The computational costs for SVD likewise increase rapidly for a large-scale tensor. Notice that Algorithm 1 assumes the missing entries of X to be zeros during the update process (lines 4-5), while the core tensor G (line 7) is uniquely determined by the input tensor and the factor matrices and is relatively easy to compute.
In summary, applying the naive Tucker-ALS algorithm to sparse tensors generates severe accuracy and scalability issues. Therefore, Algorithm 1 needs to be revised to focus only on observed entries and, at the same time, to scale to large-scale tensors. In that case, a gradient-based ALS approach is applicable to Algorithm 1, as has been utilized for partially observable matrices [23] and CP factorizations [24]. The gradient-based ALS approach is discussed in Section III.
Definition 7 (Intermediate Data): We define intermediate data as the memory requirements for updating A^{(n)} (lines 4-5 in Algorithm 1), excluding memory space for storing X, G, and A^{(n)}. The size of intermediate data plays a critical role in determining which Tucker factorization algorithms are space-efficient, as we will discuss in Section III-E2.
III. PROPOSED METHOD
In this section, we describe P-TUCKER, our proposed Tucker factorization algorithm for sparse tensors. As described in Definition 6, the computational and memory bottleneck of the standard Tucker-ALS algorithm occurs while updating factor matrices. Therefore, it is imperative to update them efficiently in order to maximize scalability of the algorithm. However, there are several challenges in designing an optimized algorithm for updating factor matrices.
1) Exploit the characteristic of sparse tensors. Sparse tensors are composed of a vast number of missing entries and a relatively small number of observable entries. How can we exploit the sparsity of given tensors to design an accurate and scalable algorithm for updating factor matrices?
2) Maximize scalability. The aforementioned Tucker-ALS algorithm suffers from intermediate data explosion and high computational costs while updating factor matrices. How can we formulate efficient algorithms for updating factor matrices in terms of time and memory?
3) Parallelization. It is crucial to avoid race conditions and adjust workloads between threads to thoroughly employ multi-core parallelism. How can we apply data parallelism to updating factor matrices in order to scale up linearly with respect to the number of threads?
To overcome the above challenges, we suggest the following main ideas, which we describe in later subsections.
1) Gradient-based ALS fully exploits the sparsity of a given tensor and enhances the accuracy of a factorization (Figure 3 and Section III-B).
2) P-TUCKER-CACHE and P-TUCKER-APPROX accelerate the update process by caching intermediate calculations and utilizing a truncated core tensor, while P-TUCKER itself provides a memory-optimized algorithm by default (Section III-C).
3) Careful distribution of work assures that each thread has independent tasks and balanced workloads when P-TUCKER updates factor matrices (Section III-D).
We first give an overview of how P-TUCKER factorizes sparse tensors using the Tucker method in Section III-A. After that, we describe details of our main ideas in Sections III-B~III-D, and we offer a theoretical analysis of P-TUCKER in Section III-E.
A. Overview
P-TUCKER provides an efficient Tucker factorization algorithm for sparse tensors. After initialization, P-TUCKER updates factor matrices in a fully parallel way. When the reconstruction error converges, P-TUCKER performs QR decomposition to make the factor matrices orthogonal and updates the core tensor. Figure 2 and Algorithm 2 describe the main process of P-TUCKER.

Algorithm 2: P-TUCKER for Sparse Tensors
Input: Tensor X ∈ R^{I_1×I_2×···×I_N}, core tensor dimensionality J_1, ..., J_N, and truncation rate p (P-TUCKER-APPROX only).
Output: Updated factor matrices A^{(n)} ∈ R^{I_n×J_n} (n = 1, ..., N), and updated core tensor G ∈ R^{J_1×J_2×···×J_N}.
1: initialize factor matrices A^{(n)} (n = 1, ..., N) and core tensor G
2: repeat
3:   update factor matrices A^{(n)} (n = 1, ..., N) by Algorithm 3
4:   calculate the reconstruction error using (6)
5:   if P-TUCKER-APPROX then
6:     remove "noisy" entries of G by Algorithm 4
7: until the maximum iteration or ||X − X̃|| converges
8-11: perform QR decomposition on all A^{(n)} and update core tensor G

First, P-TUCKER initializes all A^{(n)} and G with random real values between 0 and 1 (step 1 and line 1). After that, P-TUCKER updates the factor matrices (steps 2-3 and line 3) by Algorithm 3, explained in Section III-B. When all factor matrices are updated, P-TUCKER measures the reconstruction error using (6) (step 4 and line 4). In the case of P-TUCKER-APPROX (step 5 and lines 5-6), "noisy" entries of G are removed by Algorithm 4, explained in Section III-C. P-TUCKER stops iterating if the error converges or the maximum iteration is reached (line 7). Finally, P-TUCKER performs QR decomposition on all A^{(n)} to make them orthogonal and updates G (step 6 and lines 8-11). Specifically, QR decomposition [25] on each A^{(n)} is defined as

$A^{(n)} = Q^{(n)} R^{(n)}$   (8)

where Q^{(n)} ∈ R^{I_n×J_n} is column-wise orthonormal and R^{(n)} ∈ R^{J_n×J_n} is upper-triangular. Therefore, by substituting Q^{(n)} for A^{(n)}, P-TUCKER succeeds in making the factor matrices orthogonal. The core tensor G must be updated accordingly in order to maintain the same reconstruction error. According to [26], the update rule of the core tensor G is given as follows:

$G \leftarrow G \times_1 R^{(1)} \times_2 R^{(2)} \cdots \times_N R^{(N)}$   (9)

B. Gradient-based ALS for Updating Factor Matrices

Fig. 3: An overview of updating factor matrices. P-TUCKER performs a gradient-based ALS method which updates each factor matrix A^{(n)} in a row-wise manner while keeping all the others fixed.

P-TUCKER adopts a gradient-based ALS method to update factor matrices, which concentrates only on the observed entries of a tensor. From a high-level point of view, as most ALS methods do, P-TUCKER updates one factor matrix at a time while keeping all others fixed. However, when all other matrices are fixed, there are several approaches [24] for updating a single factor matrix. Among them, P-TUCKER selects a row-wise update method; a key benefit of the row-wise update is that all rows of a factor matrix are independent of each other in terms of minimizing the loss function (7). This property enables applying multi-core parallelism to updating factor matrices. Given a row of a factor matrix, P-TUCKER updates the row by a gradient-based update rule. To be more specific, the update rule is derived by computing the gradient of the loss function (7) with respect to the given row and setting it to zero. The update rule for the i_n-th row of the n-th factor matrix A^{(n)} (see Figure 4) is given in Equation (10) below; the proof of Equation (10) is in Theorem 1.
Since all rows of a factor matrix are independent of each other in terms of minimizing the loss function (7), P-TUCKER fully exploits multi-core parallelism to update all rows of A^{(n)}. First, all rows are carefully distributed to all threads to achieve a uniform workload among them. After that, all threads update their allocated rows in a fully parallel way. Within a single thread, the allocated rows are updated sequentially. Finally, P-TUCKER aggregates all updated rows from all threads to update A^{(n)}. P-TUCKER iterates this update procedure for all factor matrices one by one.

The update rule for the i_n-th row of A^{(n)} is

$a^{(n)}_{i_n:} \leftarrow c^{(n)}_{i_n:} \left[ B^{(n)}_{i_n} + \lambda I_{J_n} \right]^{-1}$   (10)

where $B^{(n)}_{i_n} = \sum_{\forall \alpha \in \Omega^{(n)}_{i_n}} \delta^{(n)}(\alpha)\, \delta^{(n)}(\alpha)^{T}$ is a J_n × J_n matrix, $c^{(n)}_{i_n:} = \sum_{\forall \alpha \in \Omega^{(n)}_{i_n}} X_{\alpha}\, \delta^{(n)}(\alpha)^{T}$ is a length-J_n vector, and δ^{(n)}(α) is a length-J_n vector whose j_n-th entry is $\sum_{\forall \beta \in G,\, \beta_n = j_n} G_{\beta} \prod_{m \neq n} a^{(m)}_{i_m j_m}$. Here, Ω^{(n)}_{i_n} indicates the subset of Ω whose nth mode's index is i_n, λ is a regularization parameter, and I_{J_n} is a J_n × J_n identity matrix. As shown in Figure 4, the update rule for the i_n-th row of A^{(n)} requires three types of intermediate data: δ^{(n)}(α), B^{(n)}_{i_n}, and c^{(n)}_{i_n:}. Thus, the computational costs of updating factor matrices are proportional to the number of observable entries, which lets P-TUCKER fully exploit the sparsity of given tensors. Moreover, P-TUCKER predicts the missing values of a tensor using (5), not as zeros. Equation (5) is computed with the updated factor matrices and core tensor, which are learned from the observed entries of the tensor. Hence, P-TUCKER not only enhances the accuracy of factorizations but also reflects the latent characteristics of the observed entries of a tensor. Note that the matrix [B^{(n)}_{i_n} + λI_{J_n}] is positive-definite and invertible; a proof of the update rule is summarized in Section III-E1.
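The following NumPy sketch illustrates the row-wise rule (10) under a simple data layout; it is a minimal illustration under our own naming conventions, not P-TUCKER's actual implementation.

```python
import numpy as np

def update_row(n, i_n, entries, G, factors, lam):
    """Sketch of the row-wise update rule (10) for the i_n-th row of A^(n).

    `entries` plays the role of Omega^(n)_{i_n}: a list of (index_tuple,
    value) pairs for the observed entries whose n-th index equals i_n;
    `G` is the dense core tensor and `factors` the list of factor matrices.
    """
    J_n = G.shape[n]
    B = lam * np.eye(J_n)           # accumulates B^(n)_{i_n} + lambda * I
    c = np.zeros(J_n)               # accumulates c^(n)_{i_n:}
    for index, x_val in entries:
        delta = np.moveaxis(G, n, -1)       # put mode n last: shape (..., J_n)
        for m in (m for m in range(len(index)) if m != n):
            # contract mode m of the core with the i_m-th row of A^(m)
            delta = np.tensordot(factors[m][index[m], :], delta, axes=([0], [0]))
        B += np.outer(delta, delta)
        c += x_val * delta
    # solving the symmetric system is equivalent to c [B + lambda*I]^{-1}
    return np.linalg.solve(B, c)
```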
Algorithm 3 describes how P-TUCKER updates factor matrices. First, in the case of P-TUCKER-CACHE (lines 1-4), it computes the values of all entries in a cache table P_res (∈ R^{|Ω|×|G|}), which caches intermediate multiplication results generated while updating factor matrices. This memoization technique makes P-TUCKER-CACHE a time-efficient algorithm. Next, P-TUCKER chooses a row a^{(n)}_{i_n:} of a factor matrix A^{(n)} to update (lines 5-6). After that, P-TUCKER computes the intermediate data B^{(n)}_{i_n} and c^{(n)}_{i_n:} using (11) and (12) (lines 7-14) and updates the row as c^{(n)}_{i_n:}[B^{(n)}_{i_n} + λI_{J_n}]^{-1} using (10) (line 15). In the case of P-TUCKER-CACHE, it recalculates P_res using the existing and updated A^{(n)} (lines 16-19) whenever A^{(n)} is updated. Note that α and β indicate an entry of X and G, respectively.

P-TUCKER by default provides a memory-optimized algorithm. We can further optimize P-TUCKER in terms of time with a caching algorithm (P-TUCKER-CACHE) and an approximation algorithm (P-TUCKER-APPROX).
The crucial difference between P-TUCKER and P-TUCKER-CACHE lies in the computation of the intermediate vector δ^{(n)}(α): P-TUCKER recomputes it from the core tensor and factor matrices for every observed entry, while P-TUCKER-CACHE assembles it from the cached products in P_res.

The main intuition of P-TUCKER-APPROX is that there exist "noisy" entries in a core tensor G, and we can accelerate the update process by truncating these "noisy" entries of G. Then, how can we determine whether an entry of G is "noisy" or not? A naive approach could be to treat an entry (j_1, ..., j_N) ∈ G with a small G_{(j_1,...,j_N)} value as "noisy", like the truncated SVD [27]. However, in this case, small-value entries are not always negligible, since their contributions to minimizing the error (6) can be larger than those of large-value ones. Hence, we propose a more precise criterion, which regards an entry β = (j_1, ..., j_N) ∈ G with a high R(β) value as "noisy". R(β) indicates the partial reconstruction error produced by an entry β, derived as the sum of the terms of (6) that are related to β; R(β) is given in Equation (14), where we use the α, β, and γ symbols to simplify the equation. R(β) suggests a more precise guideline for "noisy" entries, since R(β) is a part of (6), while the naive approach estimates the error based only on the value G_{(j_1,...,j_N)}.

Figure 5 illustrates the distribution of R(β) and a cumulative function of the relative reconstruction error on the latest MovieLens dataset (J = 10). As expected by our intuition, only 20% of the entries of G generate about 80% of the total reconstruction error.

Algorithm 4: P-TUCKER-APPROX
Input: Tensor X ∈ R^{I_1×I_2×···×I_N}, factor matrices A^{(n)} ∈ R^{I_n×J_n} (n = 1, ..., N), core tensor G ∈ R^{J_1×J_2×···×J_N}, and truncation rate p (0 < p < 1).
Output: Truncated core tensor G ∈ R^{J_1×J_2×···×J_N}.
1: for β = ∀(j_1, ..., j_N) ∈ G do
2:   compute the partial reconstruction error R(β) by (14)
3: sort R(β) in descending order together with their indices
4: remove the p|G| entries of G whose R(β) values rank within the top p|G| among all R(β) values

Algorithm 4 describes how P-TUCKER-APPROX truncates "noisy" entries in G. It first computes R(β) for all entries in G (lines 1-2), sorts R(β) in descending order along with their indices (line 3), and finally truncates the top-p|G| "noisy" entries of G (line 4). P-TUCKER-APPROX performs Algorithm 4 at each iteration (lines 2-7 in Algorithm 2), which reduces the number of non-zeros in G step by step. Therefore, the elapsed time per iteration also decreases, since the time complexity of P-TUCKER-APPROX depends on the number of non-zeros |G|. Note that we can find an optimal approximation point whose speed-up over accuracy loss is maximized (see Figure 9).
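A minimal sketch of the truncation step is shown below, assuming the partial errors R(β) have already been computed into an array of the same shape as G; function and variable names are our own.

```python
import numpy as np

def truncate_core(G, R, p):
    """Sketch of Algorithm 4: zero out the top-p fraction of core entries,
    ranked by their partial reconstruction error R(beta).

    `R` is assumed to be an array with the same shape as G that already
    holds R(beta) for every core entry beta.
    """
    n_remove = int(p * G.size)
    if n_remove == 0:
        return G.copy()
    # indices of the top-p|G| "noisy" entries in the flattened core
    noisy = np.argpartition(R.ravel(), -n_remove)[-n_remove:]
    G_trunc = G.copy()
    np.put(G_trunc, noisy, 0.0)   # "removed" entries become structural zeros
    return G_trunc
```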
With the above optimizations, P-TUCKER becomes the most time- and memory-efficient method from both theoretical and experimental perspectives (see Table III).
D. Careful Distribution of Work
There are three sections where multi-core parallelization is applicable in Algorithms 2 and 3. The first section (lines 2-4 and 17-19 in Algorithm 3) is for P-TUCKER-CACHE when it computes and updates the cache table P_res. The second section (lines 6-15 in Algorithm 3) is for updating factor matrices, and the last section (line 4 in Algorithm 2) is for measuring the reconstruction error. For each section, P-TUCKER carefully distributes tasks to threads while maintaining the independence between them. Furthermore, P-TUCKER utilizes a dynamic scheduling method [28] to ensure that each thread has a balanced workload. The details of how P-TUCKER parallelizes each section are as follows; note that T indicates the number of threads used for parallelization.
• Section 1: Computing and Updating the Cache Table P_res (Only for P-TUCKER-CACHE). All rows of P_res are independent of each other when they are computed or updated. Thus, P-TUCKER distributes all rows equally over T threads, and each thread computes or updates its allocated rows independently using static scheduling.
• Section 2: Updating Factor Matrices. All rows of A^(n) are independent of each other with regard to minimizing the loss function (7). Therefore, P-TUCKER distributes all rows uniformly to each thread and updates them in parallel, as sketched below. Since |Ω^(n)_{i_n}| differs for each row, the workload of each thread may vary considerably; thus, P-TUCKER employs dynamic scheduling in this part.
• Section 3: Calculating the Reconstruction Error. All observable entries are independent of each other in measuring the reconstruction error. Thus, P-TUCKER distributes them evenly over T threads, and each thread computes the error separately using static scheduling. At the end, P-TUCKER aggregates the partial errors from each thread.
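As an illustration of the row-level parallelism of Section 2, here is a minimal Python sketch (the real implementation uses OpenMP in C; names are hypothetical, and Python threads only approximate the scheduling behavior):

```python
from concurrent.futures import ThreadPoolExecutor

def update_rows_parallel(rows, update_row, n_threads=20):
    """Rows of A^(n) are independent, so a thread pool can update them
    concurrently. Handing out rows one at a time mimics dynamic
    scheduling: a thread takes the next row as soon as it finishes its
    current one, balancing the skewed per-row workloads |Omega^(n)_{i_n}|."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        # map() hands each finished worker the next pending row
        list(pool.map(update_row, rows))
```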
E. Theoretical Analysis

1) Convergence Analysis: In this section, we theoretically prove the correctness and convergence of P-TUCKER.
Theorem 1 (Correctness of P-TUCKER): The proposed row-wise update rule (15) minimizes the loss function (7) regarding the updated parameters.
Note that the full proof of Theorem 1 is in the supplementary material of P-TUCKER [29].
Theorem 2 (Convergence of P-TUCKER): P-TUCKER converges.
Proof: According to Theorem 1, the loss function (7) never increases, since every update in P-TUCKER minimizes it, and (7) is bounded below by 0. Thus, P-TUCKER converges.
2) Complexity Analysis: In this section, we analyze the time and memory complexities of P-TUCKER and its variants. For simplicity, we assume I_1 = ... = I_N = I and J_1 = ... = J_N = J. Table III summarizes the time and memory complexities of P-TUCKER and other methods. As expected in Section III-C, P-TUCKER presents the best memory complexity among all algorithms. While P-TUCKER-CACHE shows better time complexity than P-TUCKER, P-TUCKER-APPROX exhibits the best time complexity thanks to the reduced number of non-zeros in G. Note that we calculate time complexities per iteration (lines 2-7 in Algorithm 2), and we focus on the memory complexities of intermediate data, not of all variables.
Theorem 3 (Time complexity of P-TUCKER): The time complexity of P-TUCKER is O(NIJ^3 + N^2|Ω|J^N).
Proof: Given the i_n-th row of A^(n) (lines 5-6 in Algorithm 3), the bound follows by counting the operations needed to compute the intermediate values and the row update; refer to the supplementary material [29] for the full derivation. The analogous complexity results for P-TUCKER-CACHE and P-TUCKER-APPROX are likewise proved in [29]; in particular:

Theorem 8 (Memory complexity of P-TUCKER-APPROX): The memory complexity of P-TUCKER-APPROX is O(J^N).
Proof: Refer to the supplementary material [29].
IV. EXPERIMENTS
In this section, we present experimental results of P-TUCKER and other methods. We focus on answering the following questions.
1) Data Scalability (Section IV-B). How well do P-TUCKER and competitors scale up with respect to the following aspects of a given tensor: 1) the order, 2) the dimensionality, 3) the number of observable entries, and 4) the rank?
2) Effectiveness of P-TUCKER-CACHE and P-TUCKER-APPROX (Section IV-C). How successfully do P-TUCKER-CACHE and P-TUCKER-APPROX suggest the trade-offs between time-memory and time-accuracy, respectively?
3) Parallelization Scalability (Section IV-D). How well does P-TUCKER scale up with respect to the number of threads used for parallelization?
4) Real-World Accuracy (Section IV-E). How accurately do P-TUCKER and other methods factorize real-world tensors and predict their missing entries?
We describe the datasets and experimental settings in Section IV-A, and answer the questions in Sections IV-B to IV-E.

A. Experimental Settings

1) Datasets: We use both real-world and synthetic tensors to evaluate P-TUCKER and competitors. Table IV summarizes the tensors we used in experiments, which are available at https://datalab.snu.ac.kr/ptucker/. For real-world tensors, we use Yahoo-music¹, MovieLens², Sea-wave video, and the 'Lena' image. Yahoo-music is music rating data which consist of (user, music, year-month, hour, rating). MovieLens is movie rating data which consist of (user, movie, year, hour, rating). The Sea-wave video and 'Lena' image are 10%-sampled tensors from the original data. Note that we normalize all values of real-world tensors to numbers between 0 and 1. We also use 90% of observed entries as training data and the rest as test data for measuring the accuracy of P-TUCKER and competitors. For synthetic tensors, we create random tensors, which we describe in Section IV-B.
2) Competitors: We compare P-TUCKER and its variants with three state-of-the-art Tucker factorization (TF) methods. Descriptions of the methods are given as follows:
• P-TUCKER-APPROX: the approximation variant of P-TUCKER, which shows a trade-off between time and accuracy by truncating "noisy" entries of a core tensor.
• TUCKER-WOPT [18]: the accuracy-focused TF method utilizing a nonlinear conjugate gradient algorithm for updating factor matrices and a core tensor.
• TUCKER-CSF [20]: the speed-focused TF algorithm which accelerates a tensor-times-matrix chain (TTMc) by a compressed sparse fiber (CSF) structure.
• S-HOT SCAN [17]: the TF method designed for large-scale tensors, which avoids intermediate data explosion [16] by on-the-fly computation.
Note that other TF methods (e.g., [19], [30]) are excluded since they present similar or more limited scalability than the competitors mentioned above, and some factorization models (e.g., [31], [32]) that are not directly applicable to tensors are not considered either.
3) Environment: P-TUCKER is implemented in C, with the OPENMP and ARMADILLO libraries utilized for parallelization and linear algebra operations; the source code of P-TUCKER is publicly available at https://datalab.snu.ac.kr/ptucker/. From a practical viewpoint, P-TUCKER does not automatically choose which optimizations to use. Hence, users ought to select a method from P-TUCKER and its variations in advance. For competitors, we use the original implementations provided by the authors (S-HOT SCAN³, TUCKER-CSF⁴, and TUCKER-WOPT⁵). We run experiments on a single machine with 20 cores/20 threads, equipped with an Intel Xeon E5-2630 v4 2.2GHz CPU and 512GB RAM.

4 https://github.com/ShadenSmith/splatt
5 http://www.lair.irb.hr/ikopriva/Data/PhD Students/mfilipovic/
The default values for P-TUCKER parameters λ and T are set to 0.01 and 20, respectively; for P-TUCKER-APPROX, the truncation rate per iteration is set to 0.2; for TUCKER-CSF, we set the number of CSF allocations to 1 and choose a LAPACK SVD routine. We set the maximum running time per iteration to 2 hours and the maximum number of iterations to 20. In reporting running times, we use the average elapsed time per iteration, not the total running time.
B. Data Scalability
We evaluate the data scalability of P-TUCKER and other methods using both synthetic and real-world tensors.
1) Synthetic Data: We generate random tensors of size I_1 = I_2 = ... = I_N with real-valued entries between 0 and 1, varying the following aspects: tensor order, tensor dimensionality, the number of observable entries, and tensor rank. We assume that the core tensor G is of size J_1 = J_2 = ... = J_N.
Order. We increase the order N of an input tensor from 3 to 10, while fixing I_n = 10^2, |Ω| = 10^3, and J_n = 3. As shown in Figure 6(a), P-TUCKER exhibits the fastest running time with respect to the order. Although S-HOT SCAN and TUCKER-CSF can decompose up to the highest-order tensor, they run 11× and 7.1× slower than P-TUCKER, respectively. TUCKER-WOPT runs 60000× slower than P-TUCKER (when N = 4) and shows O.O.M. (out-of-memory error) when N ≥ 5. The enormous speed gap between P-TUCKER and TUCKER-WOPT is explained by their time complexities: the speed of TUCKER-WOPT mainly depends on the dimensionality term I^N, while P-TUCKER relies on the rank term J^N, where I ≫ J.
Dimensionality. We increase the dimensionality I_n of an input tensor from 10^2 to 10^7, while setting N = 3, |Ω| = 10 × I_n, and J_n = 10. As shown in Figure 6(b), P-TUCKER consistently runs faster than the other methods across all dimensionalities. TUCKER-WOPT runs 20000× slower than P-TUCKER (when I_n = 10^3) and presents O.O.M. when I_n ≥ 10^4. The speed gap between P-TUCKER and TUCKER-WOPT is explained in a similar way to that of the order case. Though S-HOT SCAN and TUCKER-CSF scale up to the largest tensor as well, they run 13.8× and 10.7× slower than P-TUCKER, respectively.
Number of Observable Entries. We increase the number of observable entries |Ω| from 10^3 to 10^7, while fixing N = 3, I_n = 10^7, and J_n = 10. As shown in Figure 6(c), P-TUCKER, S-HOT SCAN, and TUCKER-CSF scale up to the largest tensor, while TUCKER-WOPT shows O.O.M. for all tensors. P-TUCKER presents the fastest factorization speed across all |Ω| and runs 14.1× and 44.3× faster than S-HOT SCAN and TUCKER-CSF on the largest tensor with |Ω| = 10^7, respectively. Note that P-TUCKER scales near-linearly with respect to the number of observable entries.
Rank. We increase the rank J_n from 3 to 11 with an increment of 2, while fixing N = 3, I_n = 10^6, and |Ω| = 10^7. As shown in Figure 6(d), P-TUCKER, S-HOT SCAN, and TUCKER-CSF successfully factorize the input tensors for all ranks. P-TUCKER is the fastest in all cases; in particular, it runs 12.9× and 13.0× faster than S-HOT SCAN and TUCKER-CSF when J_n = 11, respectively. TUCKER-WOPT causes O.O.M. errors for all ranks.
2) Real-world Data: We measure the average running time per iteration of P-TUCKER and other methods on the real-world datasets introduced in Section IV-A1. Due to the large scale of real-world tensors, TUCKER-WOPT shows O.O.M. for two of them, which are left blank in Figure 7. Notice that P-TUCKER and P-TUCKER-APPROX succeed in decomposing the large-scale real-world tensors and run 1.7-275× faster than the competitors.

[Figure 8: Comparison results of P-TUCKER and P-TUCKER-CACHE. P-TUCKER-CACHE runs up to 1.7× faster than P-TUCKER for higher-order tensors, while P-TUCKER decomposes the highest-order tensor with 29.5× less memory than P-TUCKER-CACHE.]
C. P-TUCKER-CACHE and P-TUCKER-APPROX

To investigate the effectiveness of P-TUCKER-CACHE, we vary the tensor order N from 6 to 10, while fixing I_n = 10^2, |Ω| = 10^3, and J_n = 3. Figure 8 shows the running time and memory usage of P-TUCKER and P-TUCKER-CACHE. P-TUCKER uses 29.5× less memory than P-TUCKER-CACHE for the largest order N = 10. However, P-TUCKER-CACHE runs up to 1.7× faster than P-TUCKER, and the gap between the running times grows as the tensor order N grows, since the running times of P-TUCKER-CACHE and P-TUCKER are mainly proportional to N and N^2, respectively.
In the case of P-TUCKER-APPROX, we measure the running time and fit = 1 − ‖X − X̂‖/‖X‖ for each iteration, while fixing N = 3, I_n = 10^6, |Ω| = 10^7, and J_n = 10. Figures 9(a) and 9(b) illustrate the effectiveness of P-TUCKER-APPROX. P-TUCKER decomposes a given tensor with an almost perfect fit. Meanwhile, P-TUCKER-APPROX runs faster than P-TUCKER (when iteration ≥ 8), but presents lower accuracy as a trade-off. Note that one iteration corresponds to lines 2-7 in Algorithm 2, and we select iteration 14 as the optimal approximation point since it maximizes speed-up over fit loss.

[Figure 9: Comparison results of P-TUCKER and P-TUCKER-APPROX. P-TUCKER-APPROX gets faster at every iteration and eventually runs quicker than P-TUCKER (when iteration ≥ 8). However, P-TUCKER-APPROX shows lower accuracy than P-TUCKER as a trade-off. Note that we can choose an optimal approximation point (when iteration = 14) whose speed-up over accuracy loss is maximized.]
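A minimal sketch of the fit computation, restricted to the observed entries (a hedged reading of the formula above; variable names are ours):

```python
import numpy as np

def fit_metric(x_obs, x_hat):
    """fit = 1 - ||X - X_hat|| / ||X||, with the norms taken over the
    observed entries; x_obs and x_hat are 1-D arrays of the observed
    values and their reconstructions."""
    return 1.0 - np.linalg.norm(x_obs - x_hat) / np.linalg.norm(x_obs)
```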
D. Parallelization Scalability
We measure the speed-ups (Time_1/Time_T, where Time_T is the running time using T threads) and memory requirements of P-TUCKER by increasing the number of threads from 1 to 20, while fixing N = 3, I_n = 10^6, and |Ω| = 10^7. Figure 10 shows the near-linear speed-up and memory requirements of P-TUCKER with respect to the number of threads. The linear speed-up implies that our parallelization works successfully, and the linearity of memory usage demonstrates that our theoretical memory complexity of P-TUCKER matches the empirical results well.
E. Real-World Accuracy
We evaluate the accuracy of P-TUCKER and other methods on the real-world tensors. The evaluation metrics are reconstruction error and test root mean square error (RMSE); the former describes how precisely a method factorizes a given tensor, and the latter indicates how accurately a method estimates missing entries of a tensor, a metric widely used by recommender systems. As shown in Figure 11, P-TUCKER factorizes the tensors with 1.4-4.8× less reconstruction error and predicts missing entries of given tensors with 1.4-4.3× less test RMSE compared to the state-of-the-art. In Figure 11, we present S-HOT SCAN and TUCKER-CSF with the same bar since they have similar accuracy, and an omitted bar indicates that the corresponding method shows O.O.M. while decomposing the dataset. Note that P-TUCKER-APPROX records a lower or similar test RMSE compared to that of P-TUCKER. This corroborates our intuition about P-TUCKER-APPROX, which asserts that there exist "noisy" entries in a core tensor and that they are unnecessary for estimating the values of missing entries.

V. DISCOVERY

In this section, we present discovery results on the latest MovieLens dataset introduced in Section IV-A. Existing methods cannot detect meaningful concepts or relations owing to their limited scalability or low accuracy. For instance, S-HOT SCAN and TUCKER-CSF produce factor matrices mostly filled with zeros, which trigger highly inaccurate clustering. In contrast, P-TUCKER successfully reveals the hidden concepts and relations, such as a 'Thriller' concept and a relation between a 'Drama' concept and hours (see Tables V and VI).
Concept Discovery. Our intuition for concept discovery is that each row of a factor matrix represents the latent features of the corresponding entity. Thus, we can apply the K-means clustering algorithm [33] to factor matrices to discover hidden concepts. In the case of the movie-associated factor matrix, each row represents the latent features of a movie. Therefore, by analyzing the clustered rows, P-TUCKER excavates diverse movie genres, such as 'Thriller', 'Comedy', and 'Drama', and all the movies belonging to those genres are closely related (see Table V).
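A minimal sketch of both discovery procedures, the K-means concept discovery above and the relation discovery described in the next paragraph (the cluster count and names are hypothetical, not the paper's settings):

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_concepts(A, n_concepts=3):
    """Cluster the rows of a factor matrix A^(n) (one latent-feature
    vector per entity) to reveal hidden concepts such as movie genres."""
    labels = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(A)
    return {c: np.flatnonzero(labels == c) for c in range(n_concepts)}

def top_relations(G, k=3):
    """Return the indices (j_1, ..., j_N) of the k largest-magnitude core
    entries; each marks a strong relation among the factor columns it indexes."""
    flat = np.argsort(np.abs(G), axis=None)[::-1][:k]
    return [np.unravel_index(i, G.shape) for i in flat]
```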
Relation Discovery. The core tensor G plays an important role in discovering relations. An entry (j_1, ..., j_N) of G is associated with the j_n-th column of A^(n), and it implies that those columns are related to each other with strength G_(j_1,...,j_N). Hence, examining large values in G gives us clues for finding strong relations in a given tensor (see the sketch above). For instance, P-TUCKER succeeds in revealing relations between the year and hour attributes, such as (2015, 2pm), by investigating the top-3 largest values of a core tensor. In a similar way, P-TUCKER finds strong relations between the movie, year, and hour attributes, as summarized in Table VI.

VI. RELATED WORK

In this section, we review related work on CP and Tucker factorizations, and applications of Tucker decomposition.
CP Decomposition (CPD). Many algorithms have been developed for scalable CPD. GigaTensor [16] is the first distributed CP method running on the MapReduce framework. Park et al. [34] propose a distributed algorithm, DBTF, for fast and scalable Boolean CPD. In [35], Papalexakis et al. present a sampling-based, parallelizable method named ParCube for sparse CPD. AdaTM [36] is an adaptive tensor memoization algorithm for CPD of sparse tensors, which automatically tunes algorithm parameters. Kaya and Uçar [37] propose distributed memory CPD methods based on hypergraph partitioning of sparse tensors. Those algorithms are based on ALS, similarly to the conventional Tucker-ALS.
Since the above CP methods predict missing entries as zeros, tensor completion algorithms using CPD have gained increasing attention in recent years. Tomasi et al. [38] and Acar et al. [39] address CPD models for tensor completion problems. Shin et al. [24] propose CDTF and SALS, which are distributed CPD methods for partially observable tensors. Smith et al. [40] explore three optimization algorithms for high performance, parallel tensor completion: alternating least squares (ALS), stochastic gradient descent (SGD), and coordinate descent (CCD++). Karlsson et al. [41] discuss parallel formulations of ALS and CCD++ for tensor completion in the CP format. Note that [24] and [40] offer a gradient-based update rule and row-wise parallelization for CPD as P-TUCKER does for Tucker decomposition.

Tucker Factorization (TF). Several algorithms have been developed for TF. [12] presents an early work on TF, which is known as HOSVD. De Lathauwer et al. [13] propose Tucker-ALS, described in Algorithm 1. As the size of real-world tensors increases rapidly, there has been a growing need for scalable TF methods. One major challenge is the "intermediate data explosion" problem [16]. MET (Memory Efficient Tucker) [14] tackles this challenge by adaptively ordering computations and performing them in a piecemeal manner. HaTen2 [15], [42] reduces intermediate data by reordering computations and exploiting the sparsity of real-world tensors in MapReduce. However, both MET and HaTen2 suffer from a limitation called M-bottleneck [17] that arises from explicit materialization of intermediate data. S-HOT [17] avoids M-bottleneck by employing on-the-fly computation. Kaya and Uçar [19] discuss a shared and distributed memory parallelization of an ALS-based TF for sparse tensors. The above methods depend on SVD for updating factor matrices, while P-TUCKER utilizes a gradient-based update rule.
There are also various accuracy-focused TF methods including TUCKER-WOPT [18]. Yang et al. [43] propose another TF method that automatically finds a concise Tucker representation of a tensor via an iterative reweighted algorithm. Liu et al. [30] define the trace norm of a tensor, and present three convex optimization algorithms for low-rank tensor completion. Liu et al. [44] propose a core tensor Schatten 1-norm minimization method with a rank-increasing scheme for tensor factorization and completion. Note that these algorithms have limited scalability compared to P-TUCKER since they are not fully optimized with respect to time and memory.
Applications of Tucker Factorization. Tucker factorization (TF) has been used for various applications. Sun et al. [3] apply a 3-way TF to a tensor consisting of (users, queries, Web pages) to personalize Web search. Bro et al. [45] use TF for speeding up CPD by compressing a tensor. In [46], TF is used for separating conversations from chatroom communication. Sun et al. [47] propose a framework for content-based network analysis and visualization which employs a biased sampling-based TF method. TF is also used for classifying handwritten digits [48], and analyzing trends in the blogosphere [49].
VII. CONCLUSION

In this paper, we propose P-TUCKER, a scalable Tucker factorization method for sparse tensors. By using an ALS method with a gradient-based update rule, and with a careful distribution of work for parallelization, P-TUCKER successfully offers time- and memory-optimized algorithms with theoretical proofs and analysis. P-TUCKER runs 1.7-14.1× faster than the state-of-the-art with 1.4-4.8× less error, and exhibits near-linear scalability with respect to the number of observable entries and threads. We discover hidden concepts and relations on the latest MovieLens dataset with P-TUCKER, which cannot be identified by existing methods due to their limited scalability or low accuracy. Future work includes extending P-TUCKER to distributed platforms such as Hadoop or Spark, and applying sampling techniques on observable entries to accelerate decompositions while sacrificing little accuracy.
Energy potential of long-period oscillations (on the example of Kakhovka plain reservoir, Ukraine)
The energy potential of long-period oscillations is estimated by comparing it with watercourse power. The relaxation time of long-period waves, during which their amplitude decreases e (Euler's number) times from the initial one, is chosen as the estimation time interval. According to calculations, the amount of energy produced during this time by the watercourse is 9.35–18.71 million kWh, while the amount of energy of long-period oscillations is 3–6 times less (1.60–5.48 million kWh). The components of the economic factor of using long-period waves and currents for electricity production are the predictability of their magnitudes and the location of maxima, long-term availability, and concentration.
Introduction
Concerned about the expected shortage of fossil fuels, we continue to look for alternatives for electricity production. Compared with the history of creative humanity, not much time has passed since the opening of the first power plant converting the energy of moving water into electricity (1878, a small hydroelectric power plant on the Coquet River in England). However, the list of available surface-water energy resources has expanded significantly due to the increasing technological capacity to use them (see Fig. 1). The strong periodicity and considerable height of tidal waves determined the use of tidal energy: the first tidal hydroelectric power plant was built in 1966 at the mouth of the Rance river, France [2].
Wind waves and capillary waves, unlike tides, have the advantage that they occur in any water area. The first wave hydroelectric power plant was opened in 2008 near Aguçadoura, Portugal [2].
Analysis of the wave spectra of Lakes Michigan and Ontario (USA) revealed powerful long-period storm surges and standing waves (seiches) in their composition, which can also be used for electricity production [3,4].
The purpose of the paper is to evaluate the energy potential of long-period waves and currents of the Kakhovka plain reservoir (Ukraine), the largest of the Dnieper river cascade.
Method
The energy potential of long-period waves is estimated by comparing it with the watercourse power [5]:

P = ρgQH,

where ρ is the water density (10³ kg/m³); g is the gravitational acceleration (g = 9.8 m/s²); Q is the water discharge; and H is the fall of water in the selected area.
The specific energy flux per unit width of the wavefront and unit wavelength along its propagation direction over one oscillation period can be calculated by the formula of [3,4], where A = h/2, h, and T are the amplitude, height, and period of the wave, respectively.
The seiche period is a function of the morphometric characteristics of the reservoir. The period of longitudinal seiches in a closed rectangular reservoir with a horizontal bottom (see Fig. 2) is calculated by the Merian formula [6]:

T = 2L / (M √(gD)),

where M is the wave mode, and L and D are the length and depth of the reservoir, respectively.

[Fig. 2: Profiles of the first two modes of longitudinal seiches in a closed reservoir [6].]
The dynamics of the damping seiche is represented by the formula [7]:

A_t = A₀ e^(−δt) sin(ωt),

where A_t is the amplitude at time t; A₀ is the initial oscillation amplitude at time t = 0; ω is the angular frequency (ω = 2π/T); and δ is the damping coefficient. The damping coefficient is inversely proportional to the relaxation time τ (δ = 1/τ), during which the amplitude decreases e times from the initial one.
The wavefront power is calculated by the formula [8]:

P_w = |Ē| · l_w · M,

where the specific energy flux Ē is taken under the modulus sign, which accounts for the change in the direction of wave motion relative to the undisturbed surface. The wavefront power takes into account the wavefront lengths l_w, which for a rectangular basin equal the width of the basin W, and whose number is determined by the wave mode M (see Fig. 2).
Oscillations of seiche waves are accompanied by a reversible current (forward and backward movements of water), as shown in Fig. 3. The power density of a water flow moving at velocity V can be calculated by the formula [10]:

p = ρV³/2.

The velocity of the current caused by the seiche waves can be estimated by the formula [6]:

V = A √(g/D).

The current power is calculated by the formula [8]:

P_c = |p| · s_c · M,

where the flow power density is taken under the modulus sign. The current power takes into account the flow cross-sections s_c, which for a rectangular basin are numerically equal to the product of the width and depth of the basin (s_c = W·D), and whose number is determined by the wave mode M.
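To make these quantities concrete, here is a minimal Python sketch of the basic formulas above (the numeric example is a hypothetical illustration, not the paper's measured values):

```python
import math

RHO, G = 1.0e3, 9.8                 # water density [kg/m^3], gravity [m/s^2]

def watercourse_power(Q, H):
    """P = rho*g*Q*H: watercourse power for discharge Q [m^3/s] and fall H [m]."""
    return RHO * G * Q * H

def merian_period(L, D, M=1):
    """Merian formula: period [s] of the mode-M longitudinal seiche in a
    closed rectangular basin of length L [m] and depth D [m]."""
    return 2.0 * L / (M * math.sqrt(G * D))

def flow_power_density(V):
    """p = rho*V^3/2: kinetic power density [W/m^2] of a flow at velocity V."""
    return 0.5 * RHO * V ** 3

def damped_amplitude(A0, t, tau):
    """Amplitude envelope A_t = A0*exp(-t/tau): after the relaxation time
    tau the amplitude has decreased e times from the initial one."""
    return A0 * math.exp(-t / tau)

# Hypothetical example: a 100 km long, 8 m deep rectangular basin
print(merian_period(1.0e5, 8.0) / 3600.0)   # fundamental seiche period, hours
```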
The availability of waves and currents over a specified period of time t is calculated by the formula [11]:

K = t_wc / t,

where t_wc is the total duration of oscillation sessions over the specified period of time.

The excitation of seiches by a storm surge is shown in Fig. 4; the standing wave profiles, as suggested by M. Longuet-Higgins [13], are represented by the corresponding oscillation positions of a mathematical pendulum.

[Fig. 4: Standing wave oscillations: on the left, a mathematical pendulum whose positions correspond to the standing wave profiles in Fig. 3; on the right, water levels in the antinode (1) and the flow velocity in the node (2) at fixed times t = 0.25n, n = 0, 1, 2, ...]

The dynamics of the seiche power of the Kakhovka reservoir during the oscillation relaxation time is presented in tabular form; the wave and current phases correspond to the oscillation profiles of Fig. 4. The authors of [3,4], considering the surge energy on the Great Lakes (USA, Canada), examine extreme cases. Using their idea, let us consider the energy of long-period oscillations of the Kakhovka reservoir for a 106 cm surge (see Table 3).
Conclusions
1. Calculations showed the significant energy potential of the "storm surge - seiche wave and current" sequence. The amount of energy produced by the watercourse over 1.5-3 days is 9.35-18.71 million kWh, compared with 1.60-5.48 million kWh produced by long-period waves and currents.
2. The predictability of wave magnitudes, wave availability, and the location of maxima (see Table 4) are components of the economic case for using long-period waves and currents for electricity production.
3. The energies of waves and currents are distributed over the reservoir area, while the energy of falling water is concentrated along the relatively narrow line of the dam. To concentrate the energy of long-period waves, the authors of [3,4] suggest using tidal basins.
Preparation and Practice of the Necessary Documents in Hospital for the “Act on Decision of Life-Sustaining Treatment for Patients at the End-of-Life”
Purpose: Six forms relating to decisions on life-sustaining treatment (LST) for patients at the end-of-life (EOL) in hospital are required by the "Act on Decision of LST for Patients at the EOL." We investigated the preparation and creation status of these documents from the database of the National Agency for Management of LST.
Materials and Methods: We analyzed the contents and details of each document necessary for decisions on LST, and the creation status of the forms. We defined patients completing form 1 as "self-determined" for LST, and those whose family members had completed form 11/12 as "family decision" for LST. According to the determination subject, we compared the four items of LST on form 13 (the paper of implementation of LST) and the documentation time interval between forms.
Results: The six forms require information about the patient, doctor, specialized doctor, family members, institution, decision for LST, and intention to use hospice services. Of 44,381 patients who had completed at least one document, 36,693 had form 13. Among them, 11,531, 10,976, and 12,551 people completed forms 1, 11, and 12, respectively. The documentation time interval from forms 1, 11, or 12 to form 13 was 8.6±13.6 days, 1.0±9.5 days, and 1.5±9.7 days, respectively.
Conclusion: The self-determination rate of LST was 31% and the mean time interval from self-determination to implementation of LST was 8.6 days. The creation of these forms still takes place when patients are close to death.
Introduction
There are many obstacles and taboos in Korea and other Asian nations regarding discussions on death. Proxy decision-making for end-of-life (EOL) is overwhelmingly common, and the EOL discussion typically takes place approximately 2 to 8 days before death [1,2]. The use of advanced directives can promote patient participation in EOL discussions [3]. In Korea, the Act on Hospice and Palliative Care and Decisions on Life-Sustaining Treatment for Patients at the EOL was enacted in 2016 and implemented in 2018 to enhance patients' involvement in making decisions about EOL [4]. This law allows terminally ill patients with no chance of rehabilitation to withdraw or withhold life-sustaining treatment (LST) with their own consent or that of their family members. The patient's intention for LST under the Act is a decision on four items: cardiac resuscitation, mechanical ventilation, hemodialysis, and anti-cancer drugs. The law covers 43 pages, including the act, enforcement decree, enforcement rules, a table, and seven forms. The seven forms required by law include the following: form 1 (LST plan form), form 6 (advanced directive form), form 9 (determination of whether the patient is in the EOL process), form 10 (confirmation of the patient's intention by advanced directive), form 11 (confirmation by consistent statements of two or more of the patient's family members), form 12 (confirmation by unanimous consensus of the patient's family), and form 13 (implementation of LST). Form 6 is written in advance by a person aged ≥ 19 years as a direct submission of his or her decision on whether to use a hospice, and should be completed directly at a registration authority designated by the Minister of Health and Welfare; this form does not involve patient decisions regarding LST at the EOL. With the exception of form 6, the other six forms must be written by a doctor in a hospital, and occasionally also by a specialist. With the commencement of the Act, more than one of the six forms should be prepared in the hospital and used to confirm the patient's intention to withdraw or withhold LST, determine whether the patient is at the EOL stage, and implement the patient's decision regarding LST.
Here we evaluated the components of the six forms that should be written in the hospital and are required by law to make plans for EOL treatment. We also analyzed the preparation of the forms and the implementation of LST decisions from the database of the National Agency for Management of LST in the year following the enforcement of the Act.
Materials and Methods
The database of the National Agency for Management of LST includes seven forms: forms 1, 6, and 9-13. Form 6 can be prepared regardless of disease and is excluded from the documentation required by the hospital. We collected the terminal-status and EOL information required by each of the forms under the Act on Decisions on LST for Patients in Hospice and Palliative Care or at the EOL, as follows (S1 Fig. shows the Korean versions of forms 1 and 9-13): form 1, LST plan form; form 9, determination of whether the patient is in the EOL process; form 10, confirmation of the patient's intention by advanced directive; form 11, confirmation by consistent statements of two or more of the patient's family members; form 12, confirmation by unanimous consensus of the patient's family; and form 13, implementation of LST. A doctor whose patient is in terminal status or in the EOL process completes form 1 to record the patient's intention regarding LST. If the patient is unable to convey his/her intention to the doctor, form 11 or 12 is completed by a doctor or specialist together with the patient's family.
We analyzed the preparation of the forms from the database of the National Agency for Management of LST between February 4, 2018 and January 31, 2019. Form 10 was excluded from this analysis because it was indirectly prepared within form 13, and details of form 10 were not available from the dataset of the National Agency for Management of LST. We defined patients with form 1 as "self-determination," and those with forms 11 or 12 as "family decisions." We collected the following data on forms other than form 10 from the database of the National Agency for Management of LST: form 1, patient information (sex, age, status, address), institution (type and address), intention to use hospice services, and the four LST decisions, including the date on which the patient's intention was identified and the date of creation; form 9, patient information (sex, age, diagnosis), date of the doctor's decision, date of the specialist's decision, and date of creation; form 11, patient information (sex and age), information on family members (total number, number making a statement, relationships), and date of creation; form 12, patient information (sex and age), number of family members, and date of creation; and form 13, patient information (sex and age), institution (type and address), and the four LST decisions, including verification of the patient's intention, the date on which the patient's intention was identified, and the date of creation. We compared the four LST items on form 13, and the order of documentation and the time interval between forms 1, 11, or 12 and form 13, according to the decision subject. Formally, after a decision on LST is made in form 1, 11, or 12, the LST in form 13 is implemented; this is the general order in which LST is implemented after a decision has been made. The time interval from the creation date of forms 1, 11, or 12 to form 13 reflects the time from the decision to the implementation of LST. We also analyzed the several dates required within forms 1, 9, and 13.
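A minimal pandas sketch of the interval analysis described above (column names are hypothetical; the real database schema is not public in this form):

```python
import pandas as pd

def decision_to_implementation_days(df):
    """For each patient, days from the creation date of the decision form
    (form 1, 11, or 12) to the creation date of form 13 (implementation of
    LST), summarized by decision subject (self-determination vs. family)."""
    df = df.copy()
    df["interval_days"] = (
        pd.to_datetime(df["form13_created"]) - pd.to_datetime(df["decision_created"])
    ).dt.days
    return df.groupby("decision_subject")["interval_days"].agg(["mean", "std"])
```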
Description of each form
There are three forms relating to the decision on LST: forms 1, 11, and 12. Form 1 is the Physician Order for Life-Sustaining Treatment (POLST), which includes the four items relating to LST decisions, the plan for the use of hospice services, the description of advanced statements, and the allowance of access to advanced statements. Patients state their intentions by marking each of the four items. Forms 11 and 12 can be written by a family member on behalf of the patient when the patient is unable to express his/her intentions regarding LST. Form 11 can be prepared if the patient has previously expressed his/her thoughts regarding LST to more than one family member, and form 12 is a document by which the family unanimously determines the LST if the patient has not. When completing form 12, the doctor should check the family relationship certificate to ensure that everyone mentioned on the form belongs to the patient's family.
Form 10 is used to identify the patient's intention in combination with form 6, which is an advanced directive. Form 9 is the doctor's determination of whether the patient is in the EOL process. Form 13 is created when the patient or his/her family makes the decision to withdraw or withhold LST.
The person who completes the forms must be a doctor, and a specialist must additionally sign forms 11 and 12. Form 10 requires a specialist when the patient is unable to express his or her opinion regarding LST. Up to four forms are required: one to confirm a patient's LST plan (form 1, 11, or 12) or to verify the patient's intention against an advanced directive (form 10), one to determine whether the patient is in the EOL process (form 9), and one to implement his or her LST plan (form 13).
Contents and frequency of written forms
Information about doctors, specialists, institutions, and the patients who are withdrawing or withholding LST must be prepared. Form 11 includes information about two or more family members who have identified the patient's indirect intention, and form 12 includes the number of families and information on all members of patient's family. The patient's name is required on all forms, but the resident registration number, birthdate, diagnosis, expressive ability for LST, address, phone number, and patient's status may or may not be required. Information about doctors, which may include name, certification number, institution in duty, decision date, or signature, is required on all forms except form 10. Forms 9, 11, and 12 require specialist information in addition to information about the doctor, board number, and specialized field. Only form 13 requires information on the institution and the date and time of implementation separately. In addition to this information, form 13 also includes the following four items relating to LST decisions: Plan for the use of hospice services, the doctor's legal description, permission to view patients' advanced statements, and the verification of the patient's intentions by a doctor. All forms must be dated, and forms 1, 9, 10, and 13 require more than one date, such as the date on which the status was identified and created. The contents of each form are summarized in Table 1.
Four items of LST determination in form 13 and time interval from the LST decision to the implementation of LST according to decision subject
According to decision subjects, we compared the four LST decision items on form 13. For cardiac resuscitation, there was no significant difference in the implementation rate between self-determination and family decisions (p=0.943). For mechanical ventilation, hemodialysis, and anti-cancer drugs, the rates of self-determination were higher than those of family decisions (all p < 0.001) (Table 2). Table 3 shows the time intervals from the creation of forms 1, 11, or 12 to form 13 for patients with forms 1, 11, or 12 and form 13 (including patients with two of forms 1, 11, and 12). The mean time interval from form 1 to form 13 was 8.6±23.6 days. The mean time interval from forms 11 or 12 to form 13 was 1.0±9.5 days and 1.5±9.7 days, respectively. The mean time interval of patients who had completed form 1 was longer than that of patients with forms 11 or 12 (all p < 0.001). Among patients who had completed form 1, 56.5% completed it on the same day as form 13. Among patients who had completed forms 11 or 12, 81.7% and 76.6%, respectively, completed them on the same day as form 13. The proportion of patients who had completed forms 11 or 12 on the same day was higher than that of patients who had completed form 1 (p < 0.001). The other 43.5%, 18.3%, and 23.6% of patients with form 1, form 11, or form 12, respectively, completed them on a different day from form 13. Patients with a reversed order, in which forms 1, 11, or 12 were created after form 13, accounted for 1.0% (form 1), 2.3% (form 11), and 2.3% (form 12). For patients with the correct date order, in which forms 1, 11, or 12 were created before form 13, the mean time interval from forms 1, 11, or 12 to form 13 (written on different days) was 20.6±32.2 days, 8.4±17.3 days, and 8.1±17.6 days, respectively. Form 1 includes two dates: the date on which the patient's intention was identified and the date of creation. In 15,943 of the 16,408 patients (97%) who had completed form 1, the two dates were the same. Form 9 requires the following three dates: the date on which the decision was made by the doctor, the date on which the decision was made by a specialized doctor, and the date of creation. Of the 37,359 patients who had completed form 9, 35,712 (96%) had the same date of decision by a doctor and date of creation, and 36,131 (97%) had the same date of decision by a specialized doctor and date of creation. Form 13 includes the following two dates: the date on which the patient's intention was identified and the date of creation. Of the 35,104 patients who had completed form 13, 25,268 (72%) had the same dates.
Discussion
In the current study, we found that the self-determination rate of LST was 31% and the mean time interval from self-determination to implementation of LST was approximately 8.6 days, which is higher and longer than those from previous studies [1,2,5]. Indeed, the self-determination rates from recent single-center and national retrospective studies were 29% and 33.5%, respectively [6,7]. There was little difference in the self-determination rate as a result of differences in the hospital settings, enrolled subjects, or research period. The time intervals from the creation of forms 1, 11, or 12 to form 13 reflect the time from the decision to the implementation of LST. In the case of family decision, the mean time interval was about 1 day; that is, these forms are still being completed when the patient is close to dying.

[Table 3 footnotes: Values are presented as number (%) unless otherwise indicated. SD, standard deviation. a) Among patients who completed both form 1 and form 13 (including patients who also filled out forms 11 or 12); b) among patients who completed both form 11 and form 13 (including patients who also filled out forms 1 or 12); c) among patients who completed both form 12 and form 13 (including patients who also filled out forms 1 or 11); d) in the reversed order, in which the doctor wrote form 13 first, then forms 1, 11, or 12.]

Our results showed that approximately 56% of patients in the self-determination group and approximately 80% in the family decision group decided their LST and implemented it on the same day. In other words, these patients implemented LST on the day of the LST decision. Family decisions are much more likely to be made and implemented on the verge of death. Therefore, it is necessary to encourage patients to participate in the discussion of LST at an earlier stage of illness. Advance care planning (ACP), including EOL discussion, is important to help patients meet a peaceful and dignified death. Both the advanced directive and the POLST are ACP forms, although there are some differences between the two in terms of the population, who completes the form, and the time frame [8]. An advanced directive is a legal document that can be written by anyone regardless of his/her illness and includes a future medical care plan. The POLST form is a medical document that mainly covers EOL discussions. In 1991, physicians in Oregon developed the POLST program [9], which converts patients' wishes for treatment into medical orders. In Korea, the legislation of the POLST program is the Act on the decision of LST for patients at the EOL. This Act established an approach to EOL planning that is based on conversations between patients, family members, and doctors to determine and honor the wishes of seriously ill patients. The POLST forms need to be consistent and easy to write to allow patients' preferences regarding the use of LST to be honored. Incomplete and contradictory POLST forms may cause confusion among healthcare providers and may result in patients receiving treatment contrary to their wishes [10,11].
The implementation of the Act includes both withholding and withdrawing LST. Generally, there is no ethical or legal distinction between withdrawing and withholding LST. However, approximately 70% of Koreans think there should be ethical and legal differences between the two [12], withholding being acceptable but withdrawing socially unacceptable. Because a single law attempted to govern both the acceptable withholding and the harder-to-accept withdrawing of LST, complex forms and penalties were included on account of the latter. Under the Act, a person who violates this LST law may be sentenced to up to 3 years in prison or fined up to 30,000,000 won. However, legislating and penalizing issues without sufficient social consensus does not change the social awareness of death. To this end, complex, multifaceted, and longitudinal interventions, such as continuous social efforts, institutional publicity, and education, should accompany the law [13].
On the forms required by the Act in Korea, information about the patient and his/her doctor is written repeatedly. Information about hospitals is also repeated, in that it is included in the workplace information of doctors and specialists or required as administrative information. With the exception of forms 11 and 12, the other forms require more than one date to be entered, such as the date of creation, the date of identification, and the date of decision. In more than 95% of patients who completed forms 1 and 9, the several dates within the forms were the same. Conversely, it is relatively simple to complete the items relating to the LST decision. The LST decision items express the patient's intention by marking each item rather than by stating his/her wishes as "I will" or "I will not", and such marking could cause confusion regarding whether a patient wishes to receive or postpone LST.
The Oregon POLST registry is a one- or two-page format [14]. In Korea, there are six forms created in the hospital, among which at least two should be written. The six forms required by the Act involve various basic information on patients, doctors, specialist doctors, family members, and institutions. Information about patients and doctors as subjects is required in all six forms. Information about hospitals is repeated, in that it is included in the workplace information of doctors and specialists or required as administrative information. It is necessary to reduce both the number of forms and their items in order to reduce the repetition of information between forms and the effort required to prepare them.
As the four LST items are based on a social consensus that took place in 2009 [15,16], they need to be organized according to LST intensity or patient status at the time a patient needs LST, and then modified to reflect current perceptions and medical judgment. In the revised version of March 26, 2019, extracorporeal membrane oxygenation, blood transfusion, inotropics, and other LST were added to the LST that a terminally ill patient with no chance of rehabilitation may decide to withhold or withdraw. The Act relates not only to cancer patients but also to terminally ill patients with chronic diseases. Moreover, given that anti-cancer drugs with fewer toxicities, such as targeted agents, have been developed, the exclusion of anti-cancer drugs among LST items should be reconsidered. Considering that Oregon was using version 11 of the POLST in 2017 [13], constant renewal is needed to make it easier to verify the patient's intention regarding LST. Furthermore, education is needed for the people who help patients understand the medical situation and complete the POLST properly. According to a previous study, approximately 90% of patients who completed a POLST and wished to receive chemotherapy in the first step changed their intention to not receive chemotherapy in the second step, after the doctors had thoroughly explained the timing of EOL [17].
Another way to improve EOL care while reflecting the patient's values is to designate an agent in case the patient cannot make a decision. Establishing patients' rights to forgo LST and the authority of surrogate decision-makers were achievements in the first phase of improving EOL care [18]. There is no concept of a surrogate in the Act on LST; therefore, without a designated agent, decisions are made indirectly, by family unanimity or after receiving two or more statements from family members regarding the patient's wishes. However, the patient's family may not know the patient's values and wishes for EOL care and may sometimes take an aggressive attitude toward EOL care [19,20]. In addition, there might be conflicts among family members, or no family members available to make decisions. For these limited cases, it might be important to establish surrogate decision-makers for the EOL process on behalf of incompetent patients. Korea has a family-oriented Confucian culture, and many patients prefer to consider the best interests of their family members as opposed to their own. Indeed, some patients may suspend their LST to lessen the economic burden on the family [17]. Non-family surrogate decisions can cause further conflicts if the law grants them no authority. Therefore, earlier discussions on EOL care can not only give patients time for self-determination but also mediate differing perspectives between patients and family members regarding life prolongation. This requires continuous social effort.
Our study had several limitations. First, since we examined data from within a year of the law's enforcement, the results also included data related to trial and error in the settlement process. Approximately 17% of patients with at least one form had not completed form 13, 5% of patients with form 13 had not completed any other forms, and 0.5% of patients had completed all of forms 1, 11, and 12. There were also about 1-2% of patients whose decisions were made after LST was implemented. Second, we could not fully investigate all of the problems associated with form preparation, such as the completeness of the forms and differences in LST between forms, because of limited accessibility to the data. Third, we excluded several patients with form 10 (advanced directive) because its details were not available. This may explain the lower self-determination ratio compared with the previous national survey. Although these results involve small numbers of patients, they all represent self-determination and will increase in the future as more people complete form 6. Finally, there is an ambiguity in that the date of creation of form 13 was indirectly regarded as the date of death. The date of completion of form 13 is expected to be close to the date of death when the LST decision is implemented, but in practice there is uncertainty, as the patient may survive and the condition may improve.
In conclusion, we found that the self-determination rate of LST was 31%, and the mean time interval from self-determination to implementation of LST was approximately 8.6 days after the enforcement of the Act. However, the creation of these forms still takes place when the patient is near death. Moreover, in the early stages of implementation, there are many types of forms, and some information on patients and doctors, as well as the date of creation, needs to be written repeatedly. Therefore, continuous revisions and updates of the forms are needed. In addition, social efforts and communication are important to change the perception of death and to move the discussion of death to an earlier stage.
Electronic Supplementary Material
Supplementary materials are available at Cancer Research and Treatment website (https://www.e-crt.org).
Ethical Statement
This study was reviewed and approved by the Ethics Committee of the National Evidence-Based Healthcare Collaborating Agency (NA19-008) and the Kangdong Sacred Heart Hospital (2019-12-013). The informed consent by patients was waived because all the information was tabulated in anonymized and deidentified fashion.
Author Contributions
Conceived and designed the analysis: Baek SK, Kim
Conflicts of Interest
Conflict of interest relevant to this article was not reported.
Preliminary Feasibility Study of a Community-Based Wellness Coaching for Cancer Survivors Program
Purpose: In the United States, there are almost 17 million cancer survivors who often have poorer health outcomes and an increased risk for developing a second cancer and other chronic illnesses. Evidence suggests substantial cancer burden may be prevented through lifestyle modifications. The purpose of this study was to determine the feasibility of health coaching for the improvement of health, fitness, and overall well-being of cancer survivors in a community setting.
Methods: Participants were recruited from community-based cancer agency locations. Health coaching was provided to people diagnosed with cancer anywhere along the cancer survivorship continuum. Coaches provided six individual sessions to each participant. Surveys were sent pre- and post-intervention on topics including fitness, eating habits, perceived stress, anxiety, depression, and quality of life. Results were analyzed using repeated measures multilevel modeling.
Results: 48 participants completed an average of 85% of health coaching sessions. Coaching participants noted significant improvements in weekly physical activity, including moderate-vigorous physical activity. Small significant increases were found in healthy eating behavior. Participants reported moderate change in the quality of their sleep and smaller significant changes in sleep duration and sleep efficiency. Moderate significant reductions were found in perceived stress and anxiety, with small but significant decreases in depression. Importantly, participants reported improved quality of life, particularly in areas of physical and emotional well-being. Smaller increases were found in functional and total well-being.
Conclusion: Preliminary findings indicate real behavior change in the measured outcomes and suggest health coaching may be an important tool for cancer survivorship.
Background
The prevalence of cancer survivorship in the United States totals close to 17 million people and is estimated to increase to over 22 million by 2030 [1]. This is in part due to the overall aging population, but is also due to an increase in cancer screenings, more sensitive detection methods, and more targeted and advanced treatments [1]. This allows cancers to be detected earlier and treated more effectively, which is associated with increased survivorship. While cancer is still the second leading cause of death in the U.S., from 2001 to 2017 mortality rates declined while 5-year survival rates increased, meaning people are now living longer with the disease. Survivors are often at an increased risk of developing a second cancer or comorbidity, so finding a way to mitigate these burdens could have a positive effect on quality of life (QoL) and other long-term health outcomes [2,3].
A Healthy Lifestyle
The National Comprehensive Cancer Network (NCCN) has guidelines that recommend several healthy lifestyle habits for survivorship [9]. They recommend survivors engage in 150-300 minutes of moderate intensity or 75 minutes of vigorous intensity aerobic activity per week along with strength training that involves all muscle groups two to three times per week. The American College of Sports Medicine (ACSM) Roundtable Report on Physical Activity concluded there is consistent, compelling evidence that physical activity plays a role in preventing many types of cancer while also improving longevity and cancer-related side effects among cancer survivors. Even 30 minutes of moderate to vigorous activity three times per week was enough to help relieve the burden of cancer-related fatigue, anxiety, and depression, while helping to increase physical function and health-related QoL [10].
The NCCN recommends eating a diet high in vegetables, fruit, and whole grains with a reduction in excess sugars, fried foods, and red meat [9]. Diets higher in fruits, vegetables, and whole grains have been associated with increased survival after cancer diagnosis and treatment, especially when coupled with physical activity and weight maintenance [11,12]. Ensuring adequate caloric intake and counteracting any nutritional deficiencies experienced can help to reduce symptoms and improve QoL, but a healthier diet may also impact cancer progression, overall survival, and possibly risk of recurrence [11,12].
They also recommend cancer survivors get an adequate amount of sleep [9]. Negative changes to sleep patterns in cancer survivors have been associated with more severe fatigue, less energy, more pain, increased weight gain, and lower physical and emotional functioning scores, leading to impaired performance of daily tasks and an increased risk of anxiety and depression. Those who have undergone treatment for their sleep disorder have shown lower levels of depression and anxiety as well as increased QoL compared to those who have not [13][14][15].
The NCCN also has suggestions for standards of care regarding distress [16]. Feelings of anxiety, depression, and distress can interfere with a patient's ability to cope with their cancer, with depression being associated with a high risk of medical non-compliance [16,17]. Left unmanaged, stress is associated with higher all-cause and cancer related morbidity and mortality, as well as decreased QoL [18]. Exercise, education on stress management techniques, appropriate symptom management, and support groups may help to diminish the effects [16,19].
While all of these factors have been individually studied in cancer survivors, they often overlap and are interrelated. Psychological distress or sleep disturbances can cause symptoms of anxiety and depression, but those who already have mental health problems may experience more distress or sleep problems [14,15,18]. While optimal nutrition and physical activity are important components on their own, their benefits are often compounded when combined [9,11]. Cancer- and treatment-related symptoms can diminish QoL, which is associated with negative clinical outcomes, but QoL can be improved with changes to any of the lifestyle factors previously mentioned. When considering community- and population-level change, it may be important to consider multiple behavior changes to elicit the best outcomes for healthy cancer survivorship.
Health and wellness coaching (HWC) can be a viable tool for creating lasting, sustainable behavior change for cancer survivors via healthy lifestyle and lifestyle modifications. Coaches work with clients using a client-centered approach to address the health topics most important to the individual [20]. Sessions are tailored to the client and use techniques such as motivational interviewing, goal setting, and creating accountability within a nonjudgmental environment [21]. HWC has been found to be effective in the general adult population for changing a variety of health behaviors, such as increasing physical activity, improving nutrition intake, weight loss, and sleep improvements, which subsequently reduces risk factors for various chronic diseases [22,23]. Clients often experience a sense of empowerment and an increase in self-efficacy towards self-management techniques [20]. However, HWC has not been well studied with cancer survivors. The HWC interventions that do exist for this population tend to center on pain management [24,25]. Very few focus on multiple behavior changes, and those that do often focus on just one type of cancer [26,27].
Due to the variability of the symptoms experienced by cancer survivors, having an individualized, tailored program suited to their particular health priorities and abilities could help to facilitate adherence to behavioral changes, thereby improving overall health outcomes. By working within an established cancer community support setting, survivors can participate in a comfortable and familiar non-clinical environment with additional resources and programs readily available to them to help support their behavior change.
With limited literature around HWC in this population, there is little guidance as to how best to implement a program of this sort. To fill this need, we designed a HWC intervention with the aim of determining the feasibility of implementing a HWC program within a cancer community setting and, in addition, the real-world effectiveness of HWC for improving the health, fitness, and overall wellbeing of cancer survivors over a three-month period.
Project Overview
The "Wellness Coaching for Cancer Survivors" program was an intervention aimed at providing individualized HWC services, in a community-based setting, to cancer survivors anywhere along the cancer continuum throughout a mid-Atlantic state. The project was a collaboration between a community-based cancer agency and two mid-Atlantic universities, using certified health coaches [28]. Approval was obtained from both Institutional Review Boards.
Setting
Cancer survivors anywhere along the cancer continuum from early diagnosis through long-term survivorship were recruited for the study through the community-based cancer agency's locations. The community-based cancer agency is a statewide non-profit community organization that provides cancer survivors and their caregivers and/or family with counseling support groups, educational workshops, exercise and nutrition groups, and other programs free of charge to help cope with and manage the emotional aspect of cancer.
Study Design and Participants
A single group pretest-posttest design was utilized for this study. Participants were recruited through flyers, email, and an advertisement in the agency's weekly newsletter. Those who showed interest were contacted by the research coordinator to complete a phone screen to determine eligibility. Participants were considered eligible if they (1) were over the age of 18, (2) had been previously diagnosed with cancer at any time in the past, and (3) were able to read and complete an online questionnaire. There were no exclusion criteria independent of the inclusion criteria. If eligibility criteria were met, informed consent was obtained. Baseline and 3-month post-program follow-up assessments were completed online using a REDCap database.
Program
Six individual HWC sessions were provided over a three-month period to cancer survivors. Sessions were led by certified health coaches and followed the standard treatment model used by the host institution [28]. The first session was a 90-minute in-person session held at one of the community locations. The remaining five sessions were approximately 30 minutes in length and conducted either in-person, telephonically, or through a secure video conferencing platform, as designated by participant preference. Sessions were tailored to the individual, allowing them to talk about the most important aspects of their health and what behaviors they were most interested in changing during the next three months.
Data Collection
Following the phone screen, participants were sent an email with an individualized link to the surveys, collected in REDCap [29]. Follow-up surveys were sent immediately after the final HWC session and completed in the same manner approximately three months after baseline.
Instruments
Physical Activity Readiness Questionnaire (PAR-Q) [30]. The PAR-Q consists of seven questions assessing whether a person is physically ready to engage in physical activity, or whether they should consult a doctor before beginning an exercise program. The PAR-Q questions were verbally asked during the phone screen. In the event someone failed the PAR-Q, they were asked to contact their primary care provider and obtain medical clearance. Until permission was gained through a healthcare provider, participants were not allowed to be coached around exercise or physical activity.
Demographics and Health Coaching Questionnaire. Demographic information included gender, age, race, ethnicity, marital status, education, and income. Medical information included cancer type, stage, and date of diagnosis, as well as whether the participant had surgery, chemotherapy, or radiation to treat their cancer. The Health Coaching Questionnaire included general physical activity and sleep habits as well as additional information regarding smoking or intake of alcohol.
Perceived Stress Scale (PSS) [31]. The PSS is a 10-item measure used to determine participants' psychological perception of stress within the last month. Positive questions are reverse scored and scores are summed for a total perceived stress score. The higher the score, the more stress the participant perceives experiencing (baseline α = 0.87).
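As a rough illustration of the scoring rule just described, the sketch below sums the ten items after reverse-scoring the positively worded ones. It assumes the standard 0-4 response coding and that items 4, 5, 7, and 8 are the reverse-scored items; both assumptions should be checked against the scale documentation rather than taken from this paper.

```python
# Minimal PSS-10 scoring sketch; item numbering and response coding
# are assumptions, not taken from the study described above.

def score_pss10(responses, reverse_items=(4, 5, 7, 8)):
    """responses: list of ten integers in 0..4, ordered by item number."""
    if len(responses) != 10 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("expected ten responses coded 0-4")
    total = 0
    for item, r in enumerate(responses, start=1):
        # Reverse positively worded items so higher always means more stress.
        total += (4 - r) if item in reverse_items else r
    return total  # 0-40; higher = greater perceived stress

# Example with a hypothetical respondent:
print(score_pss10([2, 3, 1, 1, 2, 3, 0, 1, 2, 3]))
```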
Functional Assessment of Cancer Therapy: General, Version 4 (FACT-G) [32]. The FACT-G is a 27-item questionnaire measuring four facets of cancer-related QoL: physical well-being, social and family well-being, emotional well-being, and functional well-being. It provides scores for each individual subscale, as well as a total score. Higher scores indicate higher reported health-related QoL (baseline α: physical = 0.80; social = 0.86; emotional = 0.87; functional = 0.86; total = 0.91).
Hospital Anxiety and Depression Scale (HADS) [33]. This 14-item scale assesses anxiety and depression separately and categorizes symptoms as "normal", "borderline abnormal", or "abnormal". In this study, this scale was included as a screening tool to help determine if the participant needed to be referred to a mental health professional before beginning the program (baseline α: anxiety = 0.88; depression = 0.89).
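A minimal sketch of the screening categorization described above, assuming the commonly cited HADS subscale cutoffs (0-7 normal, 8-10 borderline abnormal, 11-21 abnormal); the cutoffs are an assumption here, not something the paper states.

```python
# HADS category sketch; the 0-7 / 8-10 / 11-21 cutoffs are assumed
# standard values and should be verified against the administered scale.

def hads_category(subscale_score: int) -> str:
    """Categorize a HADS anxiety or depression subscale score (0-21)."""
    if not 0 <= subscale_score <= 21:
        raise ValueError("HADS subscale scores range from 0 to 21")
    if subscale_score <= 7:
        return "normal"
    if subscale_score <= 10:
        return "borderline abnormal"
    return "abnormal"

print(hads_category(9))  # -> "borderline abnormal"
```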
Rapid Eating Assessment for Patients Short Form (REAP-S) [34]. This 16-item questionnaire assesses various eating habits. The higher the score, the healthier a person's overall eating habits (baseline α = 0.75).
International Physical Activity Questionnaire - Short Form (IPAQ) [35]. The IPAQ is a seven-question measure assessing the number of bouts of vigorous physical activity, moderate physical activity, and/or walking a person does on average in a seven-day period in their leisure time, as well as how many minutes they spend during each bout. The questionnaire also assesses how many minutes per day a person spends sitting. Bouts per week, minutes per week, and MET-minutes of moderate-vigorous and total physical activity were calculated and assessed.
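The MET-minute computation mentioned above can be sketched as follows. The MET weights (3.3 for walking, 4.0 for moderate, 8.0 for vigorous activity) follow the commonly used IPAQ short-form scoring protocol and are assumptions of this sketch, as are the example inputs.

```python
# IPAQ short-form MET-minute sketch; MET weights assumed from the
# standard IPAQ scoring protocol, not stated in the study itself.

MET_WEIGHTS = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def ipaq_met_minutes(days: dict, minutes: dict) -> dict:
    """days/minutes: days per week and minutes per bout, per domain."""
    per_domain = {d: MET_WEIGHTS[d] * minutes[d] * days[d] for d in MET_WEIGHTS}
    per_domain["moderate_vigorous"] = per_domain["moderate"] + per_domain["vigorous"]
    per_domain["total"] = sum(per_domain[d] for d in MET_WEIGHTS)
    return per_domain

# Example: 5 days of 15-minute walks, 3 days of 30-minute moderate
# activity, 2 days of 20-minute vigorous activity.
print(ipaq_met_minutes(
    days={"walking": 5, "moderate": 3, "vigorous": 2},
    minutes={"walking": 15, "moderate": 30, "vigorous": 20},
))
```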
Pittsburgh Sleep Quality Index (PSQI) [36]. The PSQI measures various aspects of sleep and sleep patterns in adults. Nine questions determine subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medications, and daytime dysfunction over the last month. Scores of 5 or above are indicative of poor sleep (baseline α = 0.71).
Statistical Analysis
Analyses were conducted on the baseline sample (N=51) using IBM SPSS version 26 [37]. Variable distributions were inspected, and a 5% winsorization technique was applied to preserve out-of-range rank order values in the distribution while limiting their influence [38]. Demographic information was analyzed using means and standard deviations for continuous variables and frequency or percentages for categorical variables. Our analyses examined the overall effects of the program on eliciting change in the various behaviors from baseline to program completion. To do so, estimated marginal means models were computed for each instrument and corresponding sub-scales. Model effects were further decomposed using pairwise comparisons. In addition, Cohen's d, a distribution-based effect size measure, was calculated for each outcome variable between baseline and program completion. Cohen's d effect sizes can be interpreted as 0.20 as a small effect, 0.50 as a medium effect, and 0.80 as a large effect [39].
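Two of the steps named above, the 5% winsorization and the Cohen's d effect size, can be illustrated with a short sketch. This mirrors the described analysis in spirit only: the study used SPSS, the estimated marginal means models are not reproduced here, and the data below are simulated stand-ins.

```python
# Winsorization + Cohen's d sketch; simulated data, not study data.
import numpy as np
from scipy.stats.mstats import winsorize

def cohens_d(pre, post):
    """Cohen's d using the pooled standard deviation of the two waves."""
    n1, n2 = len(pre), len(post)
    pooled_sd = np.sqrt(
        ((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1))
        / (n1 + n2 - 2)
    )
    return (post.mean() - pre.mean()) / pooled_sd

rng = np.random.default_rng(0)
# Hypothetical perceived-stress scores at baseline and follow-up (N = 48),
# with the extreme 5% clipped in each tail before analysis.
pre = np.asarray(winsorize(rng.normal(20, 5, 48), limits=[0.05, 0.05]))
post = np.asarray(winsorize(rng.normal(17, 5, 48), limits=[0.05, 0.05]))
print(f"Cohen's d = {cohens_d(pre, post):.2f}")
```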
Results
In this study, 58 people completed the phone screen, 51 completed baseline measures, and 48 initiated the coaching process (Figure 1). Of those who began coaching, the sample was primarily White, non-Hispanic, female, aged 45 or older, married, and well educated with a college degree or higher (Table 1). The most common type of cancer diagnosis was breast cancer (41%), and 10% of the population had been diagnosed with more than one type of cancer (Table 2). The average number of HWC sessions attended was 5.13 of 6 sessions, or 85.4% of sessions (Table 4).
Discussion
This study aimed to determine the effectiveness of HWC for improvements in health, fitness, and overall wellbeing of cancer survivors within a community setting. Since survivorship is on the rise, mitigating the burden of cancer and its treatment through the modification of lifestyle factors is imperative to improving health outcomes [8]. HWC has been shown to have positive effects on many of the outcomes considered in this study in other populations [23,40,41].
From baseline to 3-month follow-up, participants reported statistically significant and clinically meaningful improvements in overall physical activity frequency and moderate-vigorous physical activity frequency, adding an average of two bouts of physical activity per week, with one bout of moderate-vigorous physical activity each week. This points to participation in leisure activity more often and at a greater intensity. It has been previously noted that physical activity interventions including behavior change techniques such as goal setting, social support, and action planning are more likely to be successful at maintaining long-term results, all of which are inherent in HWC [42]. It was also noted that interventions involving older adults with physical limitations, those involving less contact with participants, and those without a supervised exercise component were more likely to be ineffective [42]. However, 60% of our participants failed the PAR-Q, suggesting underlying comorbidity or physical limitation, almost two thirds were aged 55 or older, and coaching duration was limited to three months, yet significant results were found, suggesting HWC may be a viable method to elicit change in a less time-intensive manner. While significance was not found for the number of minutes or MET-minutes for overall and moderate-vigorous physical activity, this might be explained by a ceiling effect, as our participants reported being fairly active at baseline.
A small, significant improvement was found in healthy eating behaviors. The improvement in overall healthy eating behaviors points towards participants increasing their intake of fruits, vegetables, lean proteins, and/or whole grains while reducing intake of sugar and saturated fats [34]. Prior studies assessing diet in cancer survivors have often used dietary counselling and goal setting as part of their modality, providing a foundation that HWC might elicit similar responses [43]. Coupled with the increases in physical activity, this is promising for cancer prevention, recurrence risk, and predicted cancer outcomes.
Significant improvements in sleep, notably in quality of sleep, duration of sleep, and sleep efficiency, were found, suggesting participants were sleeping longer, sleeping better, and falling asleep more quickly. Negative changes to sleep patterns in this population have been associated with more severe fatigue, less energy, and more pain, leading to impaired performance of daily tasks and an increased risk of anxiety and depression [14,15]. This in turn affects physical and emotional wellbeing, which may be mitigated when the sleep disturbance is modified. While it is well known that physical activity can reduce stress and anxiety, improving sleep has also been shown to reduce symptoms of anxiety and depression and increase QoL in cancer survivors [9,13].
Improvements in QoL, particularly in the areas of physical, functional, and emotional wellbeing, were also noted. QoL has been shown to be interrelated with, and often a byproduct of, other behavioral factors. As previously mentioned, being physically active, improving sleep metrics, and stress management have all been shown to independently improve wellbeing [10,13,19]. Evidence also suggests that the more healthy lifestyle behaviors a person engages in, the better their perceived QoL, so improvements in the QoL metrics could be due to independent behavior changes or a combination of them [45].
Strengths & Limitations
Compared to many other interventions, HWC is less time intensive yet may permit scaffolded behavior change to emerge through specific goal setting. Over the course of three months, participants met with coaches for less than five hours in total. HWC is highly individualized, allowing the cancer survivor to work on the issue most important to them while also addressing their own barriers and facilitators towards change. Having this type of flexibility within a program increases the potential for participant adherence and also the future sustainability and adoptability of the program within a cancer care setting.
There are also several limitations, most notably the single-group study design. Future work should consider the use of a control or comparison group receiving usual cancer care to increase the legitimacy of the results. Participants ranged across the cancer continuum in diagnosis, stage, and treatment interventions. The impact of coaching may differ for various impairments at different times, depending on whether a person is currently undergoing cancer treatment or is post-treatment.
Because participants were those who showed interest in health coaching and the data collected were self-reported, self-selection and self-report bias should also be considered. While this study provides valuable information about the feasibility of implementing a HWC intervention for cancer survivors within a community setting, the small sample size and predominantly homogenous sample may limit generalizability. Further research should examine intervention effects in various subgroups of cancer survivors, for example, for different diagnostic groups within various stages, alongside duration and sustainability of coaching. Identifying barriers at the levels of patient, provider, and health system is essential.
Clinical Implications And Conclusion
While long-term follow-up would be necessary to demonstrate potential survival benefits, based on the literature it stands to reason that by making improvements in the behaviors studied here, cancer survivors could decrease their risk of developing another cancer or chronic condition, or of worsening existing comorbidities, which in turn could reduce the risk of cancer-related death, improve QoL, and increase productivity. Furthermore, leveraging specific time points for regular and ongoing coaching assessments and modification of goals may provide surveillance through the trajectory of cancer survivorship. With less hands-on time needed and more flexibility available to tailor the program to the cancer survivor's needs, HWC could be a viable way to create lasting behavior change in this population.
Classical Guidance Service Tools to Increase the Creativity of Junior High School Students
The low creativity of students is due in part to insufficient guidance from teachers and counselors; to overcome this problem, guidance service tools are needed to maximize the development of children's creativity. This study aims to produce a classical guidance service toolkit to increase the creativity of junior high school students. This research belongs to development research and was developed using the 4-D development model, which comprises the define, design, development, and disseminate stages.
INTRODUCTION
Creativity means creating new alternatives to solve the problems experienced (Mardliyah et al., 2020; Marwiyati & Istiningsih, 2020). People are said to be creative if they can create new ideas or develop existing knowledge (Imamah & Muqowim, 2020; Murdana, 2019). Human creativity brings prosperity and success through the contribution of creative ideas, discoveries, and new technologies from creative individuals (Dwiana et al., 2021; Lestari & Halim, 2022; Prabowo, 2020). Creativity has characteristics divided into two aspects: affective and cognitive. The cognitive aspect includes original thinking, flexible thinking, evaluating, detailing, and fluent thinking, while the affective aspect consists of feeling challenged, imaginative, confident, willing to take risks, and having curiosity (Astuti & Aziz, 2019; Haerunisa et al., 2021; Nadziroh & Mutmainah, 2017). Student creativity can be increased by providing overall guidance and a basic understanding of how to increase creativity independently, from within the students themselves, so that they can adapt according to their capacity and to the topics being developed optimally (Farkhatun, 2022; Maarif & Prasetyo, 2020; Rahmawati & Tirtayani, 2021). Efforts that can be made to increase children's creativity include not dictating, criticizing, limiting activities, or scaring children, and more often giving them choices so they can think about the options given (Hairiyah & Mukhlis, 2019; Zakiah et al., 2020). A person's creativity determines success in achieving various human development needs such as character, intellect, and self-ability, and influences the nation's civilization (Astuti & Aziz, 2019; Hasanah, 2021). The reality, however, is that children's creativity is decreasing. This is shown by the results of observations made at SMP Negeri 2 Singaraja: some students habitually wait for friends to do the work and copy the results, for tasks such as summaries and essay questions, and many students chose to remain silent during discussion activities and had to be called on to participate more actively. One factor causing children's low creativity is the lack of guidance services provided by teachers. If left unaddressed, this lack of creativity will have an impact on learning outcomes and on children's ability to follow the learning process. One effort that can be made to overcome this problem is to develop a classical guidance service tool for children.
Classical guidance is a school counseling service arranged systematically by giving students direct practice and question and answer (Kamalia et al., 2020; Soleman, 2021). Classical guidance can serve students in large numbers because it is carried out in class (Anggraini et al., 2020; Khoiriyah et al., 2021; Fridaram et al., 2021; Jannah, 2021). Classical guidance service tools are the various pieces of equipment that can be used to provide basic services in a class in the form of discussions, questions and answers, and hands-on practice to develop students' potential. Classical guidance service tools include the Classical Guidance Service Implementation Plan, service materials, service media, Student Worksheets, and evaluation tools (Agustina, 2022; Selenda et al., 2022; Widnyani, 2022).
The plan for implementing classical guidance services is a structured document containing the methods, objectives, identities, and steps designed to carry out classical guidance services (Silviana et al., 2022; Wiantisa et al., 2022). Service material contains the material or information that will be conveyed to students during service activities (Agustina, 2022; Silviana et al., 2022). Counseling service media can make it easier to convey service information so that activities become effective (Setyawati et al., 2021). Student Worksheets are tools made systematically, containing questions about the discussed material (Putra & Agustiana, 2021). Evaluation tools are divided into two types, outcome evaluation and process evaluation, each of which aims to track the development of services (Putri, 2019). Several previous studies have revealed that classical guidance service tools are effective in improving the attitudes of junior high school students (Silviana et al., 2022). Other studies reveal that classical guidance service tools improve interpersonal communication skills (Selenda et al., 2022). Further research revealed that using classical guidance service tools effectively increased the hard work of junior high school students (Widnyani, 2022). Based on these research results, classical guidance service tools can significantly increase various positive character scores in students. No previous studies have specifically discussed the development of classical guidance service tools to increase the creativity of junior high school students, so this research focuses on producing such tools.
METHOD
This study is development research conducted with the 4-D development model, which consists of four stages: define, design, development, and disseminate. The 4-D model was chosen because it is recommended for developing devices and products. The study was designed to develop the tools used in implementing classical guidance services to increase student creativity. The subjects were three lecturers as experts and two guidance and counseling teachers as practitioners. Data were collected through observation, interviews, and questionnaires; the main instrument was a student creativity questionnaire. Product acceptance was assessed with Lawshe's formula, computing the Content Validity Ratio (CVR) and Content Validity Index (CVI) to determine the content validity of the product. The product is considered acceptable if at least half of the validators rate it appropriate/valid, as reflected in the CVI score. An effectiveness test was then carried out to determine how strongly the implementation of classical guidance services influenced student creativity, using a normality test, a homogeneity test, and a t-test. The normality test determines whether the research data are normally distributed and, accordingly, whether parametric or non-parametric statistics apply; it was run in SPSS 23 by comparing the significance (sig) column with α = 0.05: if sig > α the data are normally distributed, and if sig < α they are not. A homogeneity test was then conducted to determine whether the variances of the two groups were similar, a prerequisite for the independent-samples t-test when the data comprise two or more groups. This analysis also used SPSS 23, reading the significance column (sig, Based on Mean): if sig > 0.05 the variances are homogeneous, and if sig < 0.05 they are not.
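Lawshe's content validity computation referenced above can be sketched as follows, using the familiar formula CVR = (n_e − N/2)/(N/2) per item and CVI as the mean CVR across items; the five-rater panel matches the study, but the item ratings in the example are hypothetical.

```python
# Lawshe CVR/CVI sketch; example ratings are hypothetical stand-ins.

def cvr(essential_count: int, panel_size: int) -> float:
    """Content Validity Ratio for one item: (n_e - N/2) / (N/2)."""
    half = panel_size / 2
    return (essential_count - half) / half

def cvi(essential_counts, panel_size: int) -> float:
    """Content Validity Index: the mean CVR over all items."""
    return sum(cvr(n, panel_size) for n in essential_counts) / len(essential_counts)

# If all five raters judge every item appropriate, each CVR is 1.0 and
# CVI = 1.0, matching the score of 1 reported in the Results.
print(cvi([5, 5, 5, 5], panel_size=5))  # -> 1.0
```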
The final analysis was a t-test of effectiveness, with hypotheses tested on the creativity instrument data analyzed in SPSS 23. The data fall into two groups: the experimental class and the control class. The hypotheses are H0: classical guidance service tools are ineffective for increasing junior high school students' creativity, and Ha: classical guidance service tools are effective for increasing junior high school students' creativity. The decision rule is: if sig. (2-tailed) < 0.05, H0 is rejected and Ha is accepted; if sig. (2-tailed) > 0.05, H0 is accepted and Ha is rejected.
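The prerequisite tests and decision rule described above can be illustrated with a small sketch using scipy in place of SPSS 23. The significance level of 0.05 follows the text; the score arrays are hypothetical stand-ins for the creativity scores of the experimental and control classes.

```python
# Normality, homogeneity, and independent-samples t-test sketch;
# the data are simulated, not the study's creativity scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
experimental = rng.normal(80, 8, 30)  # hypothetical experimental-class scores
control = rng.normal(74, 8, 30)       # hypothetical control-class scores

alpha = 0.05
# Shapiro-Wilk normality test per group: sig > alpha -> normal.
normal = all(stats.shapiro(g).pvalue > alpha for g in (experimental, control))
# Levene's test "Based on Mean": sig > alpha -> homogeneous variances.
homogeneous = stats.levene(experimental, control, center="mean").pvalue > alpha

if normal:
    t, p = stats.ttest_ind(experimental, control, equal_var=homogeneous)
    # Decision rule from the text: reject H0 when sig. (2-tailed) < 0.05.
    print(f"t = {t:.3f}, sig. (2-tailed) = {p:.3f}, reject H0: {p < alpha}")
```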
Result
The first analysis in this study was an acceptance test involving three lecturers as experts and two guidance and counseling teachers as practitioners. The tool used to test the acceptability of the product was an acceptance questionnaire prepared during the preparation stage of the benchmark reference test. This test is used to determine the level of conformity with the acceptability indicator. The results of the acceptance test can be seen in Table 1. Based on the CVI calculation, a score of 1 was obtained, so the classical guidance service tools developed to increase creativity fall in the very appropriate category. The second analysis was the effectiveness test, conducted through product trials of the classical guidance service devices involving students. This activity aims to find out whether the developed tools are effective at increasing the creativity of junior high school students. First, however, the prerequisite tests, normality and homogeneity, had to be carried out. The results of the normality and homogeneity tests can be seen in Tables 2 and 3. The Shapiro-Wilk normality test in SPSS yielded sig scores for the pretest and posttest of 0.302 and 0.101 in the experimental class, and 0.099 and 0.230 in the control class. Based on the decision rule, since the sig scores obtained are all greater than 0.05, the normality requirement is fulfilled and the data distribution is declared normal. On the homogeneity test, the SPSS calculation in the sig column (Based on Mean) gave 0.074 > 0.05, so the variances are declared homogeneous. After obtaining the normality and homogeneity test results, the research continued with an independent-samples t-test to determine the effectiveness of the developed device. The results of the independent-samples t-test can be seen in Table 4. Based on the SPSS calculation, the t score is 2.710 and sig. (2-tailed) is 0.009 < 0.05, so H0 is rejected and Ha is accepted: classical guidance service tools effectively increase junior high school students' creativity.
Discussion
The results of the data analysis show that the classical guidance service tools effectively increase the creativity of junior high school students. These results show that students need classical guidance services to develop various positive characters within themselves. Classical guidance services are counseling services provided to students through direct interaction (Kamalia et al., 2020; Soleman, 2021). Classical guidance is implemented not only through the presentation of material but also by debriefing students to build various positive characters and skills that increase student independence (Anggraini et al., 2020; Khoiriyah et al., 2021). Specifically, classical guidance services are carried out to increase student potential and complete developmental tasks to achieve educational goals (Agustina, 2022; Selenda et al., 2022; Widnyani, 2022). Classical guidance is delivered orally and through direct discussion, enabling direct communication between students and counselors.
A guidance process carried out directly maximizes the conveying of information and of basic concepts regarding the material to be provided, and can increase students' socialization. This gives classical guidance a large and efficient influence within counseling guidance (Agustina, 2022; Silviana et al., 2022). Increasing student creativity through classical guidance can be done through the development of guidance service tools, which contain the various tools that can be used to provide basic services in class in the form of discussions, question and answer, and hands-on practice to develop students' potential (Silviana et al., 2022; Wiantisa et al., 2022). Classical guidance service tools include the Classical Guidance Service Implementation Plan, service materials, service media, Student Worksheets, and evaluation tools (Agustina, 2022). Good service tools are practical, systematic, and easy to understand (Putra & Agustiana, 2021; Putri, 2019). Practical, in this case, means that the service tools developed are easy for teachers and students to understand and can be used continuously. In addition to being practical, service devices must also be presented systematically, in the appropriate order, so that the services provided can run properly and optimally. The creativity-oriented classical guidance service devices were developed by maximizing the role of students in the learning process. On the cognitive aspect, creative children generally show the characteristics of original thinking, flexible thinking, evaluating, detailing, and fluent thinking, while the affective aspect consists of feeling challenged, imaginative, confident, daring to take risks, and curious (Haerunisa et al., 2021; Nadziroh & Mutmainah, 2017).
Increasing student creativity can be done by providing overall guidance and a basic understanding of how to increase creativity independently, from within the students themselves, so that they can adapt according to their capacity and to the topics being developed optimally (Huda & Munastiwi, 2020). Efforts that can be made to increase children's creativity include not dictating, criticizing, limiting activities, or scaring children, and more often giving them choices so they can think about the options given (Hairiyah & Mukhlis, 2019). The results obtained in this study are in line with previous research, which also revealed that classical guidance service tools are effective in improving attitudes in junior high school students (Silviana et al., 2022). Other studies reveal that classical guidance service tools improve interpersonal communication skills (Selenda et al., 2022). Further research revealed that using classical guidance service tools effectively increased the hard work of junior high school students (Widnyani, 2022). Based on these results, classical guidance service tools can significantly increase various positive character scores in students.
CONCLUSION
Based on the results of the data analysis and discussion, the classical guidance service tools can significantly increase the creativity of junior high school students. Through this service product, the provision of basic services can be maximized, especially to increase student creativity.
Large-scale modern climate change and reactions of steppe birds of Inner Asia
On the basis of many years of work by ornithologists covering the entire second half of the past and the beginning of the current century, the features of the dynamics of the bird fauna of Inner Asia as a result of climate warming are considered. The central position of Eastern Siberia caused its stronger warming, which makes it possible to consider in detail various aspects of this process. Severe droughts, followed by long dry periods, observed in the arid regions of Central Asia in the second half of the 20th century, caused massive migrations of birds to the north. Strong changes in habitats were found in birds using intrazonal wetlands ecosystems for nesting. Native steppe and desert birds inhabited areas within their natural zone. They are characterized by occasional flights to the northern boundaries of their ranges towards the end of the second half of the study period. As a result of mass evictions, the diversity of birds in Eastern Siberia increased by 22.6% (110 species), but their abundance remained almost at the initial level. At present, the number of coastal birds in the south of Eastern Siberia, as well as in Central Asia, has greatly decreased, as a result of a shift in the optimum range to the north.
Introduction
The materials of ornithological research in Eastern Siberia for the first period of rather intensive work (mid-19th to mid-20th centuries) were summarized by T.N. Gagina [1]. It should be noted that this review used materials from earlier studies, but they were not enough to characterize this huge region [2][3][4]. By the middle of the last century, Eastern Siberia had been surveyed quite fully, and special reports were published for its individual regions. This allows us to regard this large generalization as a sound basis for further research. It gives a fairly complete picture of the bird fauna of the past period and allows it to be used for comparisons in subsequent studies. At the same time, it should be pointed out that this was the initial period of climate warming following the Little Ice Age (early-mid 14th to mid-19th centuries). Therefore, the general bird fauna at that time was depleted, although the abundance of waterfowl in temperate latitudes was very high.
The modern bird fauna of Eastern Siberia (the second half of the 20th and the beginning of the 21st centuries) was formed under more favorable conditions. Since the end of the last century, a very strong warming has been observed, which has caused significant changes in the bird fauna of this region. By this time, the bird fauna of Eastern Siberia had been studied quite fully. This is especially true of the southern regions and the Baikal region. However, in the subsequent (perestroika) years the intensity of ornithological research decreased markedly. At the same time, the system of nature reserves and national parks created by this time provided the materials necessary for comparing the bird fauna of these long and distinctive periods. At present, several large reports on birds have been published [5-17], along with [18-21], which make it possible to assess the ongoing changes quite reliably. This allows a more complete and correct assessment of the changes taking place in the bird fauna of Eastern Siberia.
Climate warming has led to significant changes in the species composition of birds in Eastern Siberia [12-15, 17, 22-24]. However, the process of change in their fauna was quite long and included several characteristic stages, which required a special analysis of the collected materials. The dynamics of the faunistic composition of primordial steppe bird species is of particular interest. Droughts and dry periods covered Central Asia first of all, and birds moved out of the arid territories; therefore, the reactions of birds in these regions are of greatest interest. In this work, the processes that characterize the dynamics of the species composition of migrating birds and the formation of the modern avifauna of Eastern Siberia are specially considered.
Materials and Methods
Our own work covered the whole of Eastern Siberia, but the most intensive and detailed studies were carried out in the Baikal region and in the basin of Lake Baikal. The work began at the time of the first large evictions of birds from Central Asia and continues to this day (1963-2021). Eastern Siberia is mainly a mountainous country with a large number of plateaus. In the south of this region, vast areas of zonal steppes pinch out, entering from the territory of Mongolia and northern China. The central part of Eastern Siberia is occupied by the vast Sayano-Baikal Stanovoye Upland, which includes the most highly elevated and highly fragmented mountain ranges. Their height reaches 2500-3500 m above sea level, and the elevation of the bottoms of the basins is 455-1400 m. North and south of the Sayano-Baikal Stanovoy Upland, the heights of the plateaus are from 500 to 1800 m.
The climate of this territory, with the exception of the Baikal Basin, is moderately continental and sharply continental, but in the deep basins it is ultra-continental. Western transport of air masses predominates, with little moisture supply. Inland water bodies do not have a noticeable effect on the total reserves of atmospheric moisture. The river network of the region belongs to the basins of the three largest rivers of Siberia: the Yenisei, the Lena, and the Amur. With the exception of the Uldza-Torey Basin, the river network is very dense. However, the territory has few lakes, with a sharp predominance of small ones, mainly of thermokarst origin, as well as oxbow lakes derived from river channels. Special long-term observations were carried out at three stations located in different regions of Eastern Siberia. The longest special studies were carried out on the Barluksko-Sayan section of the middle reaches of the Oka river (over 20 years) and in the Selenga river delta (continuously for 10 years, followed by periodic observations for 20 years), as well as at the mouth of the Irkut river (five years). In addition, expeditionary research was widely used, employing various types of transport and methods of observation and collection of field material. The entire territory of Eastern Siberia was covered by such surveys. The work used generally accepted research methods, adapted to local conditions [5-8, 10-14, 23-24].
The analysis is based on the materials of faunistic works collected in previous periods of research. They were compared with modern materials and thus the differences in the species composition of birds in specific areas were revealed. To identify new species, photographs of amateur bird watchers were widely used, the number of which has increased significantly in the region. Additional analysis was carried out for each new species recorded. The conditions under which it was discovered and the possibility of its appearance here in this period were determined. The generalized data for fairly large areas with approximately the same physical and geographical conditions were compared with the materials of T.N. Gagina [1], obtained during the first period of research (for specific ornithological areas). On the basis of such comparisons, the current composition of birds in ornithological areas and its differences in different periods of research were clarified. On the basis of these data, the modern faunistic composition of birds in Eastern Siberia was determined and differences in the composition of the bird fauna for the first and second periods of the study of this region were revealed.
Features of the development of the climatic situation in the Northern Hemisphere of the Earth
Modern studies of climate dynamics confirm that its changes are associated with solar activity. Its increase at the beginning of the 20th century coincides with the general increase in North Atlantic Oscillation (NAO) values. This time is characterized by a change from latitudinal atmospheric transport to meridional transport, as well as warming and melting of ice at high latitudes [25][26]. The second phase of warming was accompanied by the same processes, more pronounced in the Pacific sector of the Arctic [25][26]. However, in the middle of the general period of climate warming, a cooling developed in the Arctic. The melting of sea ice led to very strong desalination of the ocean in the northern branch of the Gulf Stream and a noticeable change in water circulation in the North Atlantic. Vertical convection of oceanic water developed there, and the region of formation of warm deep waters shifted noticeably to the south [25][26]. This led to a significant change in climatic conditions in the coastal regions of the northern and temperate latitudes of Eurasia and America.
The development of a stronger meridional atmospheric transfer in the North Atlantic coincided with an increase in droughts covering very large areas in Africa and in Western and Central Asia (1958-1964). Subsequently, the same processes began to be noted in the Pacific sector [25][26], and the change from latitudinal to meridional air mass transport led to the development of very severe droughts in Mongolia and Eastern China (1968-1978) [27]. Undoubtedly, these processes are associated with a weakening of the zonal atmospheric circulation, due to which the temperatures of adjacent regions are leveled. As a result, there was a noticeable increase in the warming of the central regions of Asia [12][13], where the largest droughts of the 20th century developed and the frequency of their recurrence increased [27]. The area of contact between the air masses of temperate latitudes, arriving from the southwest direction characteristic of this region, and the East Asian monsoon shifted to the north, and in the area of their usual contact a very strong weakening of the general atmospheric circulation was observed [28]. The result of this change in the main directions of atmospheric circulation is the formation within Inner Asia of a vast, very strongly warmed region that almost completely includes Eastern Siberia.
The above processes were characterized by significant spatio-temporal and seasonal heterogeneity [16, 25-33], due to the high complexity of the underlying surface, especially pronounced in the mountainous regions of Asia. The zone where these air masses meet, the Siberian frontal zone, moved far to the north (southern Yakutia), and the frequency of contacts between Arctic air masses and the eastern monsoons increased. At the same time, the instability of the Siberian anticyclone greatly increased. It began very often to move from Yakutia to the border of Mongolia and China, capturing the large mountain systems in the south of Buryatia, Transbaikalia and the Khabarovsk Territory. As a result, cold air masses coming from the north began, flowing around the anticyclone, to enter not Siberia but Canada and Alaska, where frosts intensified to -30°C. Similar processes are observed in the North Atlantic, causing snowfall on the islands of the Mediterranean Sea and in Africa, as well as on the Atlantic coast of North America.
The observed changes in the circulation of air masses, leading to dramatic climate changes, strongly affect the bird fauna of the central regions of Asia. The development of large and prolonged droughts, as well as the increasing frequency of their recurrence in its southern regions, caused a significant outflow of birds to the northern regions, primarily to the territory of Eastern Siberia. Inner Asia is characterized by the formation of very long (several decades) dry periods, combined with shorter but very strong, extensive and rather long (5 to 10 years) droughts [16, 25-28, 30-32].
Dynamics of the climatic situation in Eastern Siberia
Eastern Siberia, in comparison with many other regions of the Northern Hemisphere, is characterized by stronger climate warming. Here it is more than twice the Northern Hemisphere average of 0.7°C/100 years, reaching 1.9°C/100 years (from 1.5°C to 2.2°C/100 years) [12-13, 27-28, 30-32]. The most noticeable increase in the temperature of the surface air layer was noted in the cold season. It was accompanied by an increase in the meridional and a strong weakening of the zonal circulation of the atmosphere [25][26]. The course of these processes is greatly influenced by regional conditions, most often due to the nature of the underlying surface, as has been repeatedly emphasized by many authors [12-13, 16, 22-23, 25-33]. In this regard, each sufficiently large region always has its own specifics in the dynamics of climatic conditions [25][26].
In Eastern Siberia, climate warming in the second half of the 20th century was very clearly expressed. The significant intensity of warming and its very long duration indicate the end of the current warm-dry period of a climatic cycle of at least century-scale, and most likely multi-century, rank, lasting about 2 thousand years [12-13, 22-23, 25-26, 31]. A significant change in the bird fauna of Eastern Siberia, including Yakutia, can be traced from south to north, from the deserts and steppes of Central Asia to the Arctic tundra, covering the Subarctic mountains and the islands of the Arctic Ocean [7, 12-14, 17, 24]. Changes in the degree of water availability in adjacent territories, with warming increasing in the western direction, were also revealed in the Middle and Upper Amur [16,32], which lie very close to North China and Mongolia.
The general level of warming in the Amur river basin increases from east to west. For 1891-2004, the warming in Khabarovsk was 1.1°C/100 years, in Chita 1.7°C/100 years, and on Lake Baikal 1.9°C/100 years [12,16,33]. Obviously, the dynamics of the surface air temperature in the eastern (coastal) regions of Russia is significantly influenced by the Pacific monsoons, which reduce the level of warming in the coastal regions. The climate also changes from south to north. The highest level of warming was recorded in the lowland regions of Mongolia and China adjacent to Russia: 2.2°C/59 years [16,28,32]. In mountainous regions, it changes from 1.9°C/100 years in the south [12,26,33] to 1.0-1.5°C/59 years in the north of the Baikal region [24, 32], i.e. the process is much less pronounced there.
In the vast territory of Eastern Siberia, climate changes in the modern period occur quite synchronously. Most likely, this is what determines the specificity of the migration of birds to Lake Baikal and further north. The position of the Baikal basin, in fact at the center of North Asia, makes it possible to track well the overall dynamics of the ranges of many bird species. The data collected here make it possible to obtain a complete and reliable picture of the reactions of birds of various taxonomic groups and natural zones to changes in the ecosystems of vast territories [1, 5-11, 14-15, 17-21, 24]. In some areas, different directions and strengths of climatic change can be observed. On average, the Lake Baikal basin is characterized by warming in winter and early spring [33]. At the same time, in the Barguzin Basin warming is much less pronounced, on average only 1.0°C, and on the northeastern coast of Lake Baikal it is expressed in the spring and summer months [13,24]. It should be borne in mind that the methodological approaches to the study of climatic changes differ somewhat among authors; in particular, the joint influence of temperature and humidity was studied on the northeastern coast of Lake Baikal. In general, the warming of the climate in Eastern Siberia was much more pronounced than in the adjacent territories. Therefore, it was here that the most complete and reliable picture of the dynamics of the bird population under the influence of strong climatic changes could be obtained.
However, the key factor that caused the development of the trend towards the eviction of birds turned out to be the strong, extensive and prolonged droughts that began in Mongolia and China in the late 1940s. A critical situation developed in the mid-1970s. At this time (1975-77), a severe drought was observed in all regions of Northeast China. Its development peaked in 1977, when it covered the western regions of Mongolia, the southern regions of the Baikal region and almost all of China, with the adjacent regions of Eastern Mongolia. The next year (1978), a very severe drought was observed in all western regions of Mongolia [31]. According to experts, the probability of recurrence of such droughts is once every 100-600 years [27].
The first stage of the eviction of birds from Central Asia
Departures of birds from Central Asia, from the territory of Mongolia and China, began to be observed in the late 1940s and early 1950s [11-14, 23]. They coincided in time with the beginning of the formation of large and extensive droughts there [27]. During this period, species appeared in the south of Eastern Siberia that had not previously been found here: the Asian Dowitcher Limnodromus semipalmatus, the Pied Avocet Recurvirostra avosetta, the Black-winged Stilt Himantopus himantopus, and the Spot-billed Duck Anas poecilorhyncha. The abundance of many other species of this group of birds also significantly increased: the Common Crane Grus grus, the Ruddy Shelduck Tadorna ferruginea, the Gadwall Anas strepera, the Shoveler A. clypeata, the Garganey A. querquedula, the Pochard Aythya ferina, and the Eurasian Curlew Numenius arquata.
The most common phenomenon was the eviction of birds using damp, wet and swampy meadows and shallow waters. In the first wave of mass migrations, the Lapwing Vanellus vanellus, the White-winged Black Tern Chlidonias leucopterus, the Coot Fulica atra, and the Asian Dowitcher were dominant. There were no pronounced mass migrations at this time, although in some areas a noticeable increase in the abundance of these birds was observed at resting stops along the migratory flows (the Torey lakes, the wetland complex of the Chivyrkuisky Bay of Lake Baikal, the Barguzin river valley, the Dzhida river valley, the Orongoi depression, Goose Lake, and the Selenga river delta) [8, 11-12, 14, 17, 23]. The first stage of the eviction of birds is associated with a sharp drying out of the territory and the disappearance of swampy meadows and shallow waters. At this time, the number of coastal birds in the south of Russia began to increase, and new and rare species were found, many of which would later be included in the Red Book of Russia.
The second stage of the eviction of birds from Central Asia
The next stage (the 1960s-70s) is associated with a decrease in the number of migrating birds and a rather long period of habitat formation under the new moisture regime in Central Asia. However, the establishment of a long dry period (1976-2011) in the upper part of the Selenga river basin and adjacent regions [28] caused a second wave of evictions (the 1980s), including the most widespread and common species of shorebirds and waterfowl. At this time, a significant movement of birds to the north was noted, up to the Central Yakut lowland, and for some species even to the tundra zone. The optima of the ranges moved from the southern regions of Eastern Siberia and the steppes of Central Asia to Central Yakutia (the Central Yakut lowland) [7-9, 11-14, 17, 21, 23]. This was especially noticeable in the common species of waders: the Common Snipe Gallinago gallinago, the Pin-tailed Snipe G. stenura, the Swinhoe's Snipe G. megala, the Marsh Sandpiper Tringa stagnatilis, the Green Sandpiper T. ochropus, the Wood Sandpiper T. glareola, etc.
During this period, the main number of new steppe species of waders was noted, which had not previously been found in this territory [11-14, 23]. However, all of them were found in the south of Eastern Siberia only on passage, with isolated cases of episodic nesting. Among them, the Greater Sand Plover Charadrius leschenaultii, the Lesser Sand Plover Charadrius mongolus, the Oriental Plover Charadrius veredus, the Kentish Plover Charadrius alexandrinus, the Oriental Pratincole Glareola maldivarum and the Gray-headed Lapwing Microsarcops cinereus should be noted. By the end of the 20th century, this process was largely over and the number of coastal birds in the south of Eastern Siberia decreased. On the territory of Mongolia, the abundance of birds remained high only in large lake basins with a predominance of deep-water lakes. China maintained the water level of reserves created to protect wetland ecosystems only through a special supply of water from artificially created reservoirs on large waterways [11-14, 16, 23]. However, all these measures could not contain the general decline in the number of birds in coastal ecosystems: their abundance here decreased sharply.
At the same time, the number of coastal birds from the southern taiga subzone to the tundra zone noticeably increased. Their very high abundance was noted in the Lena river delta. However, during the spring and autumn migrations in the southern regions of Eastern Siberia, the number of near-water birds and waterfowl remained high. Their noticeable concentration was observed on the Angara reservoirs, since the main part of the shallow lakes of the forest-steppe and southern taiga had dried up by this time. It was during this period, at the end of the 20th and the beginning of the 21st centuries, that the appearance of the Great Cormorant Phalacrocorax carbo was recorded in the Lake Baikal basin and adjacent territories, primarily the Bratsk reservoir. Its mass settlement is associated with a period of significant drying up of Central Asia and the establishment of a long dry period in the Selenga river basin [28]. The number of typical ichthyophages has also increased: the Caspian Tern Hydroprogne caspia, the Gull-billed Tern Gelochelidon nilotica, the Little Tern Sterna albifrons and, in part, the Common Tern S. hirundo. The sharp decrease in the area of shallow waters in the large lake systems of Inner Asia reduced their fish productivity owing to the loss of a significant area of spawning grounds.
The third stage of the eviction of birds from Central Asia
In the last years of the 20th century and at the beginning of the 21st century, the frequency of appearance of typical steppe and mountain birds not associated with wetland ecosystems has increased near the northern boundaries of their ranges. At the same time, an increase in their diversity was observed [11-14, 23]. Among them were both rare and common species that had either not been recorded in Eastern Siberia before or whose single flights were previously very rare. This indicates significant changes in the steppe ecosystems of Central Asia, which is also emphasized by the northward movement of a number of steppe plant species. This period is also characterized by the migrations of new species of coastal birds, against the background of a noticeably reduced abundance in the southern regions of the Baikal region. Among them, shorebirds from the more southern regions of Asia, previously extremely rare in the south of Eastern Siberia, were noted.
The fourth stage of the eviction of birds from Central Asia
This process gradually intensified, and in the last decade, flights, and possibly cases of single nesting, of a number of southern species of semi-aquatic birds, previously extremely rare in the south of Eastern Siberia, have become more frequent. Despite the sharply increased diversity of the bird fauna, the total abundance of these species was insignificant. These are small species with small ranges, adapted to very specific environmental conditions. Among them, semi-aquatic, mountain, steppe, and desert bird species prevail. In the last two centuries, most of them have not been found in Eastern Siberia, which indicates their southern distribution.
Discussion
A detailed list of birds allows us to highlight the most important aspects of their dispersal and to understand how this process developed. It undoubtedly gradually intensified, and in the second decade of this century, the basis of the new species came to be formed by birds with more southern ranges, which earlier, even in Mongolia and Northern China, were few in number as nesters or were found only on migration. It should be noted that the bulk of the new species were migratory birds, or they were occasionally observed nesting in separate pairs or small groups. In this regard, despite the sharply increased diversity of the bird fauna of Eastern Siberia, the total abundance of the new species was insignificant. The territory was colonized by birds moving in all directions, but in each large region (Cisbaikalia, the Lake Baikal basin, Transbaikalia), two or three leading streams could be distinguished. The main directions of movement varied between eviction periods and individual seasons, depending on the localization of the areas covered by long dry periods.
It is necessary to pay attention to the fact that droughts and long dry periods covered the desert and steppe natural zones of Inner Asia. Consequently, the birds of these zones should be the most numerous among the settling species. At the same time, it is well known that the diversity of birds in these zones is low, owing to the monotony of the relief and the severity of living conditions [5, 12-14, 23]. However, the overall bird diversity of this vast geographic region, due to its southern position and the high complexity of its relief, is higher than in temperate latitudes. This ensures a high species diversity of migrating birds. The primordial steppe and desert birds, despite the fact that a number of their species are nomadic, have demonstrated very high resistance to dry periods. Undoubtedly, this is due to the formation of special adaptations for existence in these natural zones. High dynamics of habitats and high species diversity are characteristic of birds using the intrazonal habitats of these natural zones, which, first of all, include wetland ecosystems.
The complete bird fauna of Eastern Siberia for the end of the 19th to the first half of the 20th century, taking into account the new systematic status of a number of species (some subspecies have been elevated to species), includes 376 bird species. The modern fauna (the second half of the 20th and the beginning of the 21st centuries) is formed by 486 bird species; that is, it has grown by 110 species over the past 70 years, with the new species making up 22.6% of the modern fauna. The process of formation of the bird fauna of Eastern Siberia continues, and each year of research brings new species.
Conclusion
Thus, mass migrations of birds to the northern boundaries of their ranges as a result of pronounced climate warming are characteristic only of coastal birds, which colonize intrazonal habitats found in all natural zones and mountain belts: wetland ecosystems. Departures of birds from the desert and steppe zones (arid territories) covered by severe droughts, with the exception of birds of wetland ecosystems, are limited to isolated cases of flights to the northern boundaries of their ranges, although the frequency of such flights is gradually increasing. The new species are mainly migratory southern birds, with isolated cases of episodic nesting by individual pairs and small groups. In this regard, despite the sharply increased species diversity of birds in Eastern Siberia, their total abundance changed insignificantly. Everywhere there is high instability of habitats and a shift of their boundaries to the north, as well as a noticeable exchange between the bird faunas of different regions, going in all directions, including from the north. Zoogeographic boundaries have also largely lost their significance: birds easily overcome them.
Psychometric properties of the Inventory of Life Quality in children and adolescents in Norwegian Sign Language
Background: Several studies have assessed the Quality of Life (QoL) in Deaf and hard-of-hearing (DHH) children and adolescents. The findings from these studies, however, vary from DHH children reporting lower QoL than their typically hearing (TH) peers to similar QoL and even higher QoL. These differences have been attributed to contextual and individual factors such as degree of access to communication and the participants' age, as well as measurement error. Using written instead of sign language measures has been shown to underestimate mental health symptoms in DHH children and adolescents. It is expected that translating generic QoL measures into sign language will help gain more accurate reports from DHH children and adolescents, thus eliminating one of the sources for the observed differences in research conclusions. Hence, the aim of the current study is to translate the Inventory of Life Quality in Children and Adolescents into Norwegian Sign Language (ILC-NSL) and to evaluate the psychometric properties of the self-report of the ILC-NSL and the written Norwegian version (ILC-NOR) for DHH children and adolescents. The parent report was included for comparison. Associations between child self-report and parent report are also provided.

Methods: Fifty-six DHH children completed the ILC-NSL and ILC-NOR in randomized order while their parents completed the parent report of the ILC-NOR and a questionnaire on hearing- and language-related information. Internal consistency was examined using Dillon-Goldstein's rho and Cronbach's alpha; the ILC-NSL and ILC-NOR were compared using intraclass correlation coefficients. Construct validity was examined by partial least squares structural equation modeling (PLS-SEM).

Results: Regarding reliability, the internal consistency was established as acceptable to good, whereas the comparison of the ILC-NSL with the ILC-NOR demonstrated closer correspondence for the adolescent version of the ILC than for the child version. The construct validity, as evaluated by PLS-SEM, resulted in an acceptable fit for the proposed one-factor model for both language versions for adolescents as well as the complete sample.

Conclusion: The reliability and validity of the ILC-NSL seem promising, especially for the adolescent version, even though the validation was based on a small sample of DHH children and adolescents.

Supplementary Information: The online version contains supplementary material available at 10.1186/s40359-021-00590-x.
Several studies have assessed QoL in DHH children and adolescents, including children with cochlear implants. However, as Hintermair [1] points out, several aspects make it difficult to compare these studies. Among these are differences in the definition of QoL, ranging from Health-Related QoL (HRQoL) to social well-being, different types of assessments (generic QoL measures, ad-hoc tools designed for specific studies, and parents' qualitative reports after their children's cochlear implantation), and different informants (parents and children), as well as differences in access to communication and peers. Researchers such as Warner-Czyz et al. [2] have demonstrated the importance of including both parents' and children's perceptions. They found that 4-7-year-old DHH children in their study reported better QoL than their parents. Chmiel et al. [3] support this necessity based on parents reporting better QoL for their 3-20-year-old DHH children and adolescents after cochlear implantation when compared with their children's self-report. Fellinger et al. [4] also report low agreement between parents and their 6-16-year-old DHH children and adolescents on the Inventory of Life Quality in Children and Adolescents (ILC). Parents report the same level of QoL for their DHH children as parents of a typically hearing (TH) normative sample. The DHH children themselves report being less satisfied with play/hobbies when alone, as well as physical health, compared with TH normative data. The same DHH children report better QoL related to school and family. Other researchers such as Pardo-Guijarro et al. [5], on the other hand, find moderate agreement between Spanish DHH children and adolescents and their parents when using a written and a Spanish sign language version of the KIDSCREEN-27, with correlations between 0.377 and 0.753. Discrepancies between child and parent report have also been reported for TH children and adolescents [6,7]. Therefore, the multi-informant approach has been emphasized for assessing QoL. Other factors that are likely to have contributed to differences in DHH children and adolescents' QoL are variations in participants' age, their preferred mode of communication, and degree of hearing loss. It has previously been found for both TH and DHH children that older adolescents report lower QoL [5, 8-10]. The development of reliable and valid QoL instruments in sign language will help gain more accurate reports from DHH children who use sign language as their preferred language, thus eliminating one of the sources for the observed differences in research conclusions. In the present study, the term "children" is used for those aged 11 and younger, whereas "adolescents" refers to those aged 12 and older.
In their systematic review Roland, Fischer, Tran, et al. [11] report that 11 of 16 studies based on DHH children and adolescents and validated QoL measures find significantly lower QoL when compared with normative scores or TH controls, whereas five studies do not identify such differences in QoL. Their meta-analysis reveals that DHH children and adolescents report decreased QoL in the social and school domains based on the Pediatric Quality of Life Inventory (PedsQL). Unfortunately, there are some issues with this systematic review [11]. One problem is the lack of information about the informants for the specific studies.
Another issue with Roland, Fischer, Tran, et al.'s [11] systematic review is that Hintermair's [1] and Fellinger, Holzinger, Sattel, et al.'s [4] results are cited incorrectly; that is, a maximum of 9 out of 16 studies (not 11 out of 16, as the authors state) find significantly lower QoL when compared with normative scores or TH controls. Hintermair [1] finds that mainstreamed DHH children and adolescents report better QoL on the ILC than a normative TH sample, based on the total QoL score as well as the domains of school, physical health, mental health, and global QoL. The effect sizes for the reported differences were small to moderate. Fellinger, Holzinger, Gerich, et al. [12] and Hintermair [1] report QoL as being unrelated to the type and degree of hearing loss in DHH adults, and in children and adolescents, respectively, whereas others such as Tsimpida, Kaitelidou, and Galanis [13] find that DHH adults with a higher degree of hearing loss report lower QoL. Kushalnagar, Topolski, Schick et al. [14] demonstrate that adolescents (11-18 years old) report higher QoL when they perceive that they understand most of their parents' expressive communication. This was not dependent on their preferred communication modality or degree of hearing loss. Adolescents with a preference for a combination of sign language and speech, however, reported experiencing less stigma than those with a strong preference for speech only [14].
Assessing QoL in DHH children and adolescents
Language and communication are essential for assessing QoL. Sign languages are natural languages that share many linguistic characteristics with spoken languages but also have specific features due to their manual-visual nature [15]. Studies have also shown that cultural context influences the understanding of seemingly identical wordings, especially when translating from written text to sign language [16,17]. The acknowledgment of sign languages as natural languages has helped lead to a shift from viewing DHH people in a medical and disability perspective to a socio-cultural one, appreciating deaf culture with its language, history, traditions, art, and values [18,19]. For several DHH children and adolescents, written language is considered their second language. Studies have reported reading difficulties for many DHH children and adolescents [20-22], which in turn are likely to affect their ability to complete written forms, compromising the validity of assessments based on such forms. When assessing symptoms of mental health problems in DHH children and adolescents, it has been confirmed that the use of written self-report measures can lead to underestimating symptoms [23,24]. Most measures are designed for assessing TH people. A common solution in clinical practice is the use of sign language interpreters, who provide on-the-spot translations, which are influenced by their training and experience and therefore vary across settings and children [25]. Pardo-Guijarro, Martínez-Andrés, Notario-Pacheco et al. [5] emphasize the need to translate valid and reliable generic QoL measures into sign language to assess QoL in DHH children and adolescents and compare it to their TH peers' QoL. Assessment tools for QoL exist in some sign languages so far: American [26], Austrian [27], and Spanish Sign Language [5]. To the best of our knowledge, there is a lack of such instruments and a lack of studies on QoL in Norwegian DHH children and adolescents.
The Inventory of Life Quality (ILC)
The ILC is a brief measure to assess QoL in children and adolescents. The measure is based on the concept of the individual's perception of their position in life, including their health, functioning, and participation in routines and activities as compared to their peers [6,7]. It consists of seven items: one item for global QoL and six items addressing the child's physical and mental health, school and family functioning, social contact with peers, and play/hobbies when alone. The ILC is a multi-informant assessment and can be completed by children, adolescents, and young adults aged 6-21 and their parents. For children aged 6-11, the self-report is administered as an interview. Achenbach, McConaughy, and Howell [28], among others, emphasize the importance of multi-informant assessments for capturing the unique perspectives held by each informant.
The original German validation found acceptable internal consistency (α = 0.63 self-report and α = 0.76 parent report) and test-retest reliability (r = 0.72 self-report and r = 0.80 parent report) for the QoL score (LQ0-28) in community samples. Convergent validity with the Kinder Lebensqualität Fragebogen (KINDL) was shown to be moderate. Construct validity based on principal component analysis was found to be acceptable for a one-component model in a community sample (self- and parent-report; N = 9292 and N = 1109) and a two-component model in a clinical sample (self- and parent-report; N = 605 and N = 568) [7]. For the two-component model, one component consisted of one item only (play/hobbies when alone) and the other component of the other six items. Based on the low number of items as well as the nature of the clinical sample and the relatively lower number of participants, the authors concluded that the one-component model fit the theoretical model best [7]. The importance of examining psychometric properties for measures of QoL in both community and clinical samples has been demonstrated by Jozefiak, Mattejat and Remschmidt [6], amongst others, when examining the relationship between depression and QoL. The validation of the Norwegian self- and parent report [6] found satisfactory internal consistency for adolescents aged 11 and older (self-report: Cronbach's α = 0.80-0.82; parent report: α = 0.78). For children aged ten and younger, internal consistency was somewhat lower (α = 0.64). The two-week test-retest reliability for the self-report was found to be high (r = 0.86). The one-factor model of the ILC based on confirmatory factor analysis demonstrated good fit in three community samples and acceptable fit in the fourth (clinical) sample. Moderate correlations between the KINDL and ILC self-report were found, supporting convergent validity [6]. A systematic Norwegian review based on five studies of the psychometric properties of the ILC confirmed these findings [29].
To the best of our knowledge, the ILC has only been used to study QoL in DHH samples in Germany, Austria, and Norway. Construct validity for DHH children and adolescents has only been studied in Germany [1]. In that sample, the DHH children and adolescents were all mainstreamed, indicated a preference for spoken language, and were assessed with the original written version. Hintermair [1] finds satisfactory internal consistency (α = 0.71) for the ILC in this German DHH sample with 212 participants; inter-item correlations showed the same pattern as for TH children and adolescents, with the items "Mental Health" and "Global QoL" demonstrating the highest correlations with the QoL score (LQ0-28). A principal component analysis with subsequent varimax rotation resulted in the best fit for the two-component solution, "Family" and "Alone (play/hobbies)" constituting one component, while the other five items constituted the other component. Hintermair [1] concludes that these results support the use of the ILC for mainstreamed DHH children and adolescents with a preference for spoken language.
Except for the pilot study by Aanondsen et al. [8], there are hardly any studies on Norwegian DHH children and adolescents' QoL, and no studies validating assessment tools in NSL for assessing QoL in DHH children and adolescents. Norway is unique in offering the parents of DHH children and adolescents 40 weeks (i.e., 2-4 weeks/year) of NSL classes over the course of 16 years, with all expenses covered. Therefore, one might expect a higher level of sign language skills among Norwegian DHH children and adolescents and their parents. This, in turn, may have a positive influence on their QoL. The inconsistencies in previous studies regarding DHH children and adolescents' QoL necessitate valid tools, both written and in sign language, to bridge the gap. The present study contributes to this by translating the ILC into NSL and by providing psychometric properties for the Norwegian version of the ILC self-report (ILC-NOR) and the NSL version (ILC-NSL). The ILC-NSL is the first instrument translated into NSL for assessing QoL in Norwegian DHH children and adolescents.
Aims
The main aims of the present study were to translate and validate the ILC self-report in NSL (ILC-NSL) and compare it with the ILC-NOR in Norwegian DHH children and adolescents. Both self-reports of the ILC were compared with the parent report. Finally, the usability of the ILC-NSL for signing DHH children and adolescents was assessed from the children and adolescents' perspective.
We addressed the following research questions.
1. What is the internal consistency of the ILC-NSL and ILC-NOR for DHH children and adolescents?
2. What are the correlations between the total scores and items of the self-report ILC-NSL and ILC-NOR?
3. What is the construct validity of the ILC-NSL and ILC-NOR for DHH children and adolescents?
4. What are the correlations between the QoL score (LQ0-28) and items of the self-reports (ILC-NSL and ILC-NOR) and the parent report?
5. What do DHH children and adolescents think about the usability of the ILC-NSL and ILC-NOR?
Participants
Caluraud, Marcolla-Bouchetemblé, de Barros et al. [30] report that hearing loss (HL) of > 40 dB affects 1.4 per 1000 infants (mild HL in 13%, moderate HL in 50%, severe HL in 17%, and profound HL in 20%). In central and northern Norway, this amounts to 266 children and adolescents with a HL of > 40 dB, that is, 45 with severe and 53 with profound HL based on a population of 189,737 children and adolescents aged 6-18. DHH children and adolescents aged 6-17 were recruited from the part-and full-time students at A.C. Møller school, a Deaf school for central and northern Norway during the school year of 2016/17. DHH adolescents aged 15-20 attending Tiller upper secondary school in central Norway with NSL as their first or second language were also invited. The overall response rate for the combined subsamples was 87% (60/69) (see Fig. 1).
Two children were excluded because of a lack of fluency in Norwegian Sign Language, and nine did not agree to participate (see Fig. 1). Hearing- and language-related information for the participants in the current study can be found in Tables 1 and 2.
Sociodemographic and hearing-related information
A questionnaire completed by the parents was used to assess the participants' age, sex, type and severity of HL, type of education, and parents' attendance of sign language classes. The same questionnaire was also used in a previous study by the same authors [31].
Language-related information
Spoken language skills
The Categories of Auditory Performance (CAP; Archbold, Lutman and Marshall [32]) and the Speech Intelligibility Rating (SIR; Allen, Nikolopoulos, Dyar et al. [33]) were used to assess participants' speech intelligibility and listening skills. The CAP is a single-item scale with a range of 0-7: Level 0 is "no awareness of environmental sounds" and Level 7 "uses a telephone with a known speaker." The SIR is also a single-item scale, with a range of 1-5: Level 1 is "connected speech is unintelligible" and Level 5 "connected speech is intelligible to all listeners." The interrater reliability of the Danish version is based on the reports of two teachers and was reported as good (CAP: kappa = 0.785; SIR: kappa = 0.848; Dammeyer [34]). The Norwegian versions of the CAP and SIR were recently used in a study by Aanondsen, Jozefiak, Heiling et al. [31] with a similar group of participants. The scores of the CAP and SIR were combined to form the Spoken Language Skills Score.
Sign language skills
The Norwegian versions of the Sign Language Production Scale (SPS) and the Sign Language Understanding Scale (SUS) were used to assess sign language skills [34]. The SPS and SUS were designed as a short screening of sign language skills for research purposes and have previously been used in Norway [31]. The SUS and SPS are based on the structure and range of the CAP and SIR. The SPS is a single-item scale with a range of 1-5: Level 1 is "the child does not produce real signs" and Level 5 "the child uses fluent and almost conventionally correct sign language." The SUS is a single-item scale with a range of 0-7: Level 0 is "does not react to or does not comprehend signs" and Level 7 "is able to participate in long and complex conversations in sign language." The sign language receptive skills test [36] was used to assess the validity of the SUS; the SUS and the sign language receptive skills test correlated significantly (Spearman rank correlation coefficient = 0.905, p < 0.001 [37]). The validity of the SPS could not be evaluated due to the lack of a comparable assessment. The scores of the SPS and SUS were combined to form the Sign Language Skills Score.
(Table 1: Hearing-related characteristics (parent report).)
Cognitive abilities
The Leiter International Performance Scale -Third Edition (Leiter-3) was used to assess nonverbal intelligence. It includes the following subtests: Figure Ground, Form Completion, Classification/Analogies, and Sequential Order. The sum of the scaled scores for these subtests constitutes the composite score of nonverbal IQ and is converted to the standard score [38].
Quality of life (QoL)
The Inventory of Life Quality in Children and Adolescents (ILC) [6,7] is a multi-informant assessment of QoL based on seven items. One item assesses overall QoL, and six items address the child's physical and mental health, school and family functioning, social contact with peers, and play/hobbies when alone. Items are rated on a 5-point Likert scale from 1 = "Very Good" to 5 = "Very Bad." The QoL score (LQ0-28) is calculated by subtracting the sum of the seven items (i.e., the mean multiplied by seven) from 35, thus obtaining values in the range of 0 to 28, with higher scores representing better QoL [6,7].
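To make the scoring rule concrete, here is a minimal sketch in Python; the function name and the example ratings are ours and are not part of the ILC materials.

# Hedged illustration of the LQ0-28 scoring rule described above.
def lq_score(items):
    # Seven ratings, each 1 = "Very Good" ... 5 = "Very Bad"; the sum
    # ranges from 7 to 35, and subtracting it from 35 yields a 0-28
    # scale on which higher values mean better quality of life.
    assert len(items) == 7 and all(1 <= i <= 5 for i in items)
    return 35 - sum(items)

print(lq_score([1, 1, 1, 1, 1, 1, 1]))  # all "Very Good" -> best score, 28
print(lq_score([2, 1, 3, 2, 2, 1, 2]))  # mixed ratings   -> 35 - 13 = 22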
In the current study, we administered the written parent report (ILC-NOR) and the self-report versions for children (aged 6-11) and adolescents (aged 12 and older) in both written and signed Norwegian (ILC-NOR and ILC-NSL), according to the manual [6]. Because of the differences reported [6] in internal consistency between the adolescent (Cronbach's α = 0.81) and the child version (Cronbach's α = 0.64), psychometric properties are reported separately for the child and the adolescent versions, as well as for the complete sample (CA).
The translation process
The translation of the ILC was conducted based on the guidelines for cross-cultural adaptation of written self-report measures by Beaton, Bombardier, Guillemin et al. [39], with adaptations suggested by Roberts, Wright, Moore et al. [25]. The suggestions were based on the differences in syntax, morphology, and prosody of sign languages and their visual nature. The same translation process was applied and described in the study by Aanondsen, Jozefiak, Heiling et al. [31]. The ILC-NOR went through two independent forward and backward translations from written Norwegian to NSL. Two bilingual deaf native NSL users with university degrees in teaching conducted and recorded these. The semantic, conceptual, lexical, and cultural differences were discussed by a panel. Members of the panel were the translators, a clinical psychologist, a colleague with a graduate degree in medicine specializing in child and adolescent psychiatry, and a consultant with a master's degree in language and communication and fluency in NSL. Based on these discussions, the panel developed a consensus-based forward translation that was filmed. Teachers from the local deaf school were used as a focus group. Best practice recommends including DHH children and adolescents in such focus groups; due to constraints related to time and access to children of the right ages, teachers, who meet DHH children and adolescents of varying ages and degrees of NSL fluency, were recruited instead. The teachers (Deaf, hearing, and CODA, that is, a TH person raised by deaf parents) were asked to evaluate whether DHH children and adolescents with a mixture of language experiences and levels of fluency would be able to understand the translation. Based on the feedback of the focus group, the consensus version was adjusted and filmed again. Two hearing sign language interpreters, one with a background as a CODA and a master's degree in language and communication, conducted the backward translations of the final consensus version. The author of the Norwegian version of the ILC, Thomas Jozefiak, approved the items and made suggestions for those not approved on behalf of the copyright holders (Hogrefe). These items went back through the translation cycle until final approval was achieved. After the final approval, the ILC-NSL was filmed professionally and prepared for interactive online administration using Select Survey.
Procedures
The enrolled children and adolescents and their parents received oral/signed and written information about participating in the study during their first attendance at the school after the survey had been initiated. Written informed consent was obtained from the adolescents and parents prior to inclusion, according to the study's survey procedures. The participating children and adolescents responded to the web-based ILC-NSL and ILC-NOR as well as a question about the usability of the two language versions and completed a nonverbal cognitive assessment. The nonverbal cognitive assessment was administered by a psychologist experienced in working with DHH children in mental health services and fluent in NSL. The administration of the ILC-NSL and ILC-NOR were conducted on two separate occasions with an interval of two to three days. The order of these two administrations was randomized. Parents also responded to a questionnaire on socioeconomic status, as well as questionnaires assessing their children's mental health, communication skills in spoken and signed Norwegian, and hearing status. DHH children and adolescents had access to their teacher and a psychologist, both of whom were fluent in NSL, during data collection. When the children and adolescents asked for help with the ILC-NSL, they received support in NSL, whereas the children and adolescents replying to the ILC-NOR were assisted in spoken Norwegian or sign-supported speech.
Statistical analyses
Missing values on five cases with ≤ 3 missing item values were substituted using expectation maximization (EM; [40]). Gender differences in item and scale mean scores were analyzed using independent samples t-tests. Mean differences were calculated. Bootstrapped confidence intervals were calculated using the bias corrected and accelerated method (BCa) and B = 1000 bootstrap samples. Differences between spoken and sign language skills were analyzed using paired sample t-tests for both age groups.
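For illustration, a BCa interval with B = 1000 resamples can be computed with scipy; this is a sketch on simulated stand-in scores, not study data, and scipy's generic bootstrap routine stands in for the exact procedure used here.

# BCa bootstrap confidence interval for a paired mean difference (sketch).
import numpy as np
from scipy.stats import bootstrap

rng = np.random.default_rng(0)
scores_nsl = rng.normal(20.0, 4.0, size=56)  # invented stand-in LQ0-28 scores (ILC-NSL)
scores_nor = rng.normal(20.5, 4.0, size=56)  # invented stand-in LQ0-28 scores (ILC-NOR)

diff = scores_nsl - scores_nor               # paired differences
res = bootstrap((diff,), np.mean, n_resamples=1000,
                confidence_level=0.95, method="BCa")
print(res.confidence_interval)               # (low, high) for the mean difference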
Dillon-Goldstein's rho (DG rho) was used to assess internal consistency because of the limitations of Cronbach's α, such as its assumptions of uncorrelated errors, tau-equivalence, and normality [41]. As most authors, however, report internal consistency based on Cronbach's α, we also calculated Cronbach's α, including bootstrapped confidence intervals, for comparison. DG rho and Cronbach's α values of 0.6-0.7 were interpreted as acceptable internal consistency, and values > 0.7 as good. Intraclass correlation coefficients (ICC) based on a two-way mixed effects model with absolute agreement were used to evaluate associations between the two self-reports (ILC-NSL and ILC-NOR); ICCs were calculated for each of the seven items and for the QoL score LQ0-28. We calculated Spearman's rank correlations to assess multi-informant associations between the QoL scores on the parent- and self-reported versions (NSL and NOR).
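The alpha computation is easy to make explicit; below is a sketch computing Cronbach's alpha directly from its definition on an invented response matrix (DG rho and the ICC would in practice come from dedicated routines, which we do not reproduce here).

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
import numpy as np

def cronbach_alpha(responses):
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                         # number of items (7 for the ILC)
    item_vars = responses.var(axis=0, ddof=1)      # per-item sample variances
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(56, 7))       # invented: 56 respondents, 7 items
print(round(cronbach_alpha(responses), 3))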
Partial least squares structural equation modeling (PLS-SEM) is a robust method when dealing with small sample sizes because it is nonparametric and makes fewer distributional assumptions. PLS-SEM, however, is mostly used for exploratory purposes because it lacks goodness-of-fit measures. Because of the small sample size, we primarily used PLS-SEM to establish factor loadings and convergent/discriminant validity (average variance extracted, AVE), as suggested by Hair, Hult, Ringle et al. [42]. Standardized factor loadings greater than 0.4 were considered acceptable [43]. Factors with AVE scores greater than 0.5 were regarded as satisfactory for convergent/discriminant validity. Fornell and Larcker [44], however, argue that AVE > 0.4 can be treated as acceptable if composite reliability is above 0.6.
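Given standardized loadings from a one-factor model, both the AVE and a Dillon-Goldstein-type composite reliability are simple functions of the loadings; the sketch below uses invented loadings purely to show the arithmetic.

# AVE and a composite-reliability coefficient from standardized loadings (sketch).
import numpy as np

loadings = np.array([0.62, 0.71, 0.55, 0.48, 0.66, 0.59, 0.44])  # invented values

ave = np.mean(loadings ** 2)                     # average variance extracted
rho = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))

print(round(ave, 3))  # > 0.5 satisfactory; > 0.4 acceptable if rho > 0.6
print(round(rho, 3))  # composite reliability (about 0.78 for these loadings)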
As a supplementary analysis of the confirmed ILC factor structure, we used confirmatory factor analysis (CFA) with the weighted least squares means and variances adjusted (WLSMV) estimation method for categorical variables. The chi-square test, the normed chi-square (χ2/df), the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the Tucker-Lewis index (TLI) were used to assess model fit. A non-significant chi-square test, CFI and TLI > 0.9, and RMSEA < 0.1 were considered indicators of acceptable goodness of fit according to Mehmetoglu and Jakobsen [43], whereas CFI and TLI > 0.95 and RMSEA < 0.05 were considered indicators of good model fit [45]. A normed chi-square of < 2.0 was considered good for this study, and ratios of < 5.0 acceptable [46]. Standardized factor loadings greater than 0.4 were considered acceptable [43]. Hair, Hult, Ringle et al. [42] point out that a small sample size can cause problems with underidentified models and nonconvergence in CFA. The WLSMV estimator has also been shown to overestimate interfactor correlations when the sample size is relatively small [47]. Due to these problems, the CFA was used as a supplementary analysis only and can be found in Additional file 1: Appendix C. All analyses were conducted separately for the child and the adolescent versions, as well as for the complete age sample, that is, both the child and adolescent versions combined (CA).
The CFA was conducted in MPlus version 8. All other analyses were conducted in Stata/SE 14.2 for Windows. PLS-SEM, including AVE, was conducted in Stata by applying the module for PLS-SEM [48]. For all analyses, two-sided p-values < 0.05 were considered statistically significant.
Ethics
Written informed consent was obtained from the parents and from adolescents older than 16 prior to inclusion, as well as oral/signed informed consent from the children and adolescents under the age of 16. Study approval was given by the Regional Committees for Medical and Health Research Ethics (reference number: 2015/1739/REK midt).

Results

Table 3 presents the means and standard deviations for the DHH participants on the self-report of the ILC (ILC-NSL and ILC-NOR). A table with mean differences for all items and bootstrapped confidence intervals can be found in Additional file 1: Appendix A. The full distribution of all items and the QoL score for both self-reports is reported in Additional file 1: Appendix B.
Internal consistency
As can be seen in Table 4, internal consistency based on DG rho and Cronbach's α was found to be good for all scales and age versions, except for the ILC-NSL child version, which demonstrated acceptable internal consistency based on Cronbach's α and good internal consistency based on DG rho.
Comparison of the ILC-NSL and ILC-NOR
To compare the ILC-NSL with the ILC-NOR self-report, intraclass correlation coefficients (ICC) were calculated for each of the seven items and for the QoL score (Table 5).
The ICCs between the LQ0-28 scores of the ILC-NSL and ILC-NOR were highly significant at p < 0.001 for the complete sample, as well as for the adolescent version, but not for the child version. The items on the adolescent versions were all significantly correlated, moderately to strongly (0.441-0.867), while none of the items on the child versions correlated significantly.
Construct validity
The standardized factor loadings and AVE of the one-factor model are displayed in Table 6 for the ILC-NSL and ILC-NOR. All factor loadings were above the recommended 0.4 for both adolescent versions and the complete sample. The factor loading for "Family" on the ILC-NOR child as well as those for "Alone" and "Physical Health" on the ILC-NSL child were lower than recommended. AVE was above the acceptable 0.5 for the ILC-NOR CA and ILC-NOR child. Fornell and Larcker [44], however, argue that AVE > 0.4 can be treated as acceptable if composite reliability, in this case, DG's rho, is above 0.6. This was the case for the complete sample as well as the child and adolescent versions of both the ILC-NSL and the ILC-NOR.
(Table 5: ICC between the ILC-NSL and ILC-NOR self-reports; ICC = intraclass correlation coefficients based on a two-way mixed effects model with absolute agreement; CA = complete sample of children and adolescents.)
Supplementary analyses based on CFA support these findings and can be found in Additional file 1: Appendix C.
Multi-informant correlations
Multi-informant correlations between the LQ0-28 scores of DHH children and adolescents and their parents on the self-report ILC-NSL and ILC-NOR are presented in Tables 7 and 8. Correlations between the self- and parent-reported QoL score (LQ0-28) were not significant for any of the versions, although there was a moderate (non-significant) correlation between the LQ0-28 of the adolescent ILC-NSL and the parent ILC. Analysis of the multi-informant correlations at the item level did not demonstrate significant correlations for any of the versions.
Usability
The DHH children and adolescents' preferences for the presentation of the ILC are presented in Table 9.
During administration of the ILC-NSL and ILC-NOR, some of the children and adolescents commented that they spent more time completing the ILC-NSL because it took longer to view the video clips of the signed items than to read the items.
Discussion
Internal consistency was established as good for both language versions and both age versions. A comparison of the two language versions showed that the adolescent version corresponded closely for both item and total scores, whereas the child version did not correspond well between the languages. Construct validity based on PLS-SEM was found to be acceptable for the proposed one-factor model for both language versions and all ages. This is in line with the previously confirmed one-factor model based on the original theoretical concept of QoL on which the ILC is built [6,7].
The ILC-NSL and ILC-NOR demonstrated similar psychometric properties to those reported for the ILC in other studies both for TH [6,7] and DHH children and adolescents [1]. The ILC-NSL demonstrated the same pattern as the original Norwegian validation (ILC-NOR) with lower internal consistency based on Cronbach's α for the child version than the adolescent version [6]. The relative cognitive immaturity in younger children or the significantly lower NSL skills may be a possible explanation for this. Associations between the two language versions of the self-report were high for both item and scale scores for the ILC adolescent version. They were higher than we expected based on other studies comparing written and sign language versions of mental health assessments [23,24]. This may indicate a close correspondence between the ILC-NSL and ILC-NOR because of equivalent phrasing in written Norwegian and NSL. Other reasons for the close correspondence may have been the high number of children and adolescents with a spoken language preference among this DHH sample or possibly good literacy, which was not assessed. The associations between the two language versions of the child self-report, however, were much weaker, indicating problems with the translation, literacy, or Norwegian sign language skills. As no DHH children or adolescents were included in the focus groups during the translation process, it is possible that the translation was not clear or not at an appropriate level for DHH children with varying NSL skills. Including them in the focus group, however, would have decreased the number of potential participants for this study. Literacy was not assessed in the current study; therefore, it is difficult to conclude on this matter. Other possible reasons for this finding might be that the child version is constructed for individual administration but was administered in groups in the current study. The individual administration is designed as a conversation with the child and contains longer sentences and explanations than the adolescent version. As the younger participants have attended deaf school less than the adolescents and their parents have received fewer sign language lessons, the children's sign language skills might not enable them to cope with the longer sentences. Therefore, they might have benefitted from the adolescent version with its shorter and simpler sentences. Consequently, we suggest that a validation study be carried out for younger DHH children using the adolescent version of the ILC-NSL after having included DHH children in focus groups on this NSL version and making adjustments if necessary.
There was a moderate, but not significant, correlation between the adolescent self-reports (ILC-NSL and ILC-NOR) and parent reports for the QoL score LQ0-28, whereas the two language versions of the child self-report showed no associations with the parent reports. This is somewhat in contrast to the significant but low informant agreement reported previously [6] for TH children and adolescents, whereas other research on DHH child and adolescent QoL reports similarly low agreement with parent reports [2,3,49], as seen in our study. Pardo-Guijarro, Martínez-Andrés, Notario-Pacheco et al. [5] reason that hearing parents experience the impact of their children's deafness on QoL to a larger degree than their children. Warner-Czyz, Loy, Roland et al. [2] argue that several aspects of QoL are less observable for parents, such as self-esteem, family, and friends. Others [4,50] have suggested that DHH children and adolescents not sharing the same mode of communication with their parents might lessen the parents' insight into their children's subjective world, including QoL. Aanondsen, Jozefiak, Heiling et al. [31] find parent-DHH child correlations for the Strengths and Difficulties Questionnaire (SDQ), which assesses mental health, close to those reported in another study [51] for TH children and adolescents. The difference in parent-child agreement between the SDQ and ILC might be related to the different nature of the items: QoL items describe subjective states, whereas mental health symptoms (SDQ) are more easily observed by others. This illustrates the definition of QoL as a subjective concept. The low agreement between parents and DHH, as well as TH, children and adolescents emphasizes the need to consider the self-report as the authentic QoL report, whereas the parent report should be used as supplemental information from a more remote informant [52]. This conclusion enhances the importance of developing sign language versions of generic QoL instruments for capturing DHH children and adolescents' own views. It does not, however, lessen the importance of assessing parents' perspectives on their children's QoL, as is also emphasized by the authors of the ILC [6,7].
Most of the DHH children and adolescents reported preferring the written instrument (ILC-NOR), and this preference was more pronounced for the adolescents than the children, possibly reflecting the lower NSL competence among the children and their parent-reported preference for spoken Norwegian. There may have been subsamples based on spoken or sign language proficiency that could have influenced these results; these were not examined, however, due to the small sample size. Spontaneous feedback during administration indicated that the preference for the written version (ILC-NOR) was related to its less time-consuming nature. Greater mastery of literacy in DHH adolescents could also explain their preference for the written version of the ILC. The preference for the written version, however, is somewhat surprising given that other studies report reading difficulties to be frequent in many DHH children and adolescents [20,22,53], along with a preference for sign language. As we only assessed spoken and sign language skills but not literacy, we could not test this.
Strengths and limitations
A major strength of the current study is the use of a generic assessment tool for QoL that was translated into NSL, and that also examined psychometric properties for both written and sign language for DHH children and adolescents. A further strength of the choice of the ILC is the multi-informant perspective. Both these factors have been found necessary to solve some of the current inconsistencies in findings on the QoL of DHH children and adolescents.
A major limitation of the present study is the small sample size due to the limited number of signing DHH children and adolescents in the population. The sample size here was smaller than the minimum number of cases recommended for multivariate analyses based on covariance, especially when analyzing the child and the adolescent versions separately. This, in turn, poses a problem for a thorough psychometric evaluation of the ILC-NSL and ILC-NOR for DHH children and adolescents. Alternatively, the hypotheses could have been framed more precisely and tailored to the expected small sample size, in turn choosing statistical procedures more in line with these. By reporting the confidence intervals for the results, we have attempted to partly compensate for this. To offset the effects of small sample size, we have also used the PLS-SEM, which is known to be robust for such situations [42]. The combination of analyses used here was chosen as the best practical solution for the small sample size but leaves room for uncertainty regarding the conclusions.
A further limitation is the short interval of two to three days between the administration of the two language versions. This may have led to participants remembering their former answers and creating a bias. The randomized order of administration of the two versions was conducted to counteract this.
A further limitation is the lack of inclusion of the target population in the focus group for the translation, as well as the use of single-item measures to assess spoken and sign language skills, which cannot be regarded as a complete assessment of the participants' communication skills. A minor limitation is the absence of a gold standard for establishing convergent validity for QoL in DHH children and adolescents. The use of a written instrument, such as the KIDSCREEN, as a gold standard would not have been reliable or valid because of the evidence showing that many DHH children and adolescents have reading difficulties [20,22,53], even though this did not seem prominent in our sample. Another translation cycle into NSL and validation of that translation would have been necessary and too time-consuming for the scope of the current study. A final limitation is the lack of test-retest reliability.
Conclusion
The evaluation of the psychometric properties of the self-report ILC-NSL is promising. The use of the self-report ILC-NSL for assessing QoL in DHH children and adolescents is essential given its subjective nature. For children younger than 11, the use of the ILC-NSL is more questionable, possibly because of their lower sign language skills. Until better alternatives are developed, we suggest that the psychometric properties of the written and NSL adolescent versions be studied for DHH children after focus groups that include representatives of the target population have been conducted. Alternatively, it should be investigated whether individual rather than group administration may result in better usability and validity of the child ILC-NSL and ILC-NOR. Based on the children and adolescents' feedback, we recommend presenting both the written and NSL versions in combination to evaluate QoL among DHH children and adolescents rather than using only one language. Further research on DHH children and adolescents is needed to resolve the current inconsistencies in the findings related to QoL. Because of the small number of signing DHH children and adolescents in the population, cross-cultural studies should be encouraged; this would increase the possibility of conducting research on larger samples, as well as allowing for an examination of cross-cultural similarities and differences.
Can Stanza be Used for Part-of-Speech Tagging Historical Polish?
The goal of this paper is to evaluate the performance of Stanza, a part-of-speech (POS) tagger developed for modern Polish, on historical text to assess its possible use for automating the annotation of other historical texts. While the issue of the reliability of utilizing POS taggers on historical data has been previously discussed, most of the research focuses on languages whose grammar differs from Polish, meaning that their results need not be fully applicable in this case. The evaluation of Stanza is conducted on two sets of 10286 and 3270 manually annotated tokens from a piece of historical Polish writing (1899), and the errors are analyzed qualitatively and quantitatively. The results show a good performance of the tagger, especially when it comes to Universal Part-of-Speech (UPOS) tags, which is promising for utilizing the tagger for automatic annotation in larger projects, and pinpoint some common features of misclassified tokens.
Introduction and Background
Annotated data for historical or otherwise nonstandard variants of language can be difficult or resource-consuming to obtain but is nevertheless necessary for certain linguistic inquiries. One of the possible methods of alleviating this issue is attempting to use tools developed for a contemporary standard language for automated annotation. However, the data in question differing from the standard may pose problems. Consider the example presented in Table 1, a sentence from a 19th-century Polish memoir given in the original, with modernized spelling, with modernized language, and in English (Szawerna, 2023). The English translation reads: "He drove away to Lviv; he was supposed to return the day after, and that he did, but in a coffin." The differences between the original and the modern version of the same sentence pertain not only to spelling but also to word order and vocabulary; the extent to which these seemingly large differences affect the performance of modern tools is, however, not clear. This paper aims to address this question and estimate what kinds of variation have the largest negative impact on tagging accuracy.
A considerable amount of research has already been conducted on the evaluation of various pretrained part-of-speech (POS) taggers on historical texts to establish their effectiveness at annotating such texts. POS taggers trained on contemporary data tend to struggle with historical texts for a variety of reasons, such as out-of-vocabulary items; variation in spelling, capitalization, and punctuation; differences in morphology and syntax; and semantic shifts. However, large performance improvements can be observed when relatively simple pre-processing methods such as spelling correction, spelling simplification, and punctuation removal or normalization are used (Rayson et al., 2007; Scheible et al., 2011; Adesam and Bouma, 2016; Hupkes and Bod, 2016). A summary of the performance of various POS taggers when tested on historical data in various studies can be seen in Table 2. While taggers based on neural networks (NNs) have been shown to outperform other methods, much of this research predates them and is based on older architectures (Yang and Eisenstein, 2016; Adesam and Berdicevskis, 2021).
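To give a flavor of such pre-processing, the sketch below shows rule-based punctuation stripping plus spelling-modernization substitutions in Python; the substitution rules are invented examples, not an attested normalization scheme for historical Polish.

# Toy rule-based normalization for historical text (illustrative rules only).
import re

SPELLING_RULES = [
    (r"\bjenerał", "generał"),  # hypothetical archaic -> modern spelling pair
    (r"ya\b", "ja"),            # hypothetical word-final respelling
]

def normalize(token: str) -> str:
    token = re.sub(r"[^\w\s-]", "", token)  # strip punctuation
    for pattern, replacement in SPELLING_RULES:
        token = re.sub(pattern, replacement, token)
    return token

print(normalize("jenerał,"))  # -> "generał"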
While most of the previously mentioned studies focus on languages from the Germanic family, this paper aims to evaluate a POS tagger for modern Polish on historical texts. Given the differences between Polish and those languages, their results need not be fully applicable here; moreover, the research summarized in Table 2 was conducted on texts from not only various languages but also various periods. Waszczuk et al. (2018) evaluated the performance of a tagger on historical Polish data and reported quite high performance on texts from the 17th-20th century, which is promising. However, the tool that they report on, Morfeusz2, is a CRF-based tagger, which could mean that an NN-based tool could potentially perform even better. While the research presented by Szawerna (2023) includes various performance measures for several tools, the focus of that research was on identifying variation rather than on utilizing the tools for automated annotation; importantly, though, Szawerna (2023) does present a comparison of the performance of various tools, with Stanza performing better on historical data than Morfeusz2, which utilizes a combination of rule-based morphological analysis and CRF (conditional random fields) for tagging; Morfeusz2 did, however, outperform Stanza on modern texts (Kieraś and Woliński, 2017). While a fine-tuned BERT model did outperform Stanza, the latter is more of an out-of-the-box tool and is therefore more likely to be used in a pipeline, warranting the analysis of its performance on nonstandard data. This paper builds upon the research presented in Szawerna (2023) and investigates the performance of a single tagger on a memoir from 1899 which also contains dialectal variation. Given the age of the data, the accuracy is expected to be around 90%, with Universal Part-of-Speech (UPOS) tagging performing better than tagging using language-specific (XPOS) tags. The tagger is expected to struggle with nonstandard spelling or capitalization, out-of-vocabulary items, and the other previously mentioned issues.
Materials and Methods
The tagger used in this project is the one provided by Stanza, a Natural Language Processing (NLP) toolkit featuring models for a large number of languages (Qi et al., 2020). The default model for Polish was trained and evaluated on the Polish Dependency Bank treebank (Wróblewska, 2018; Stanza, n.d.). It is also that corpus's test set that is used to exemplify the tool's performance on modern Polish in this paper, although it represents genres different from the historical texts. The main reasons for selecting this tagger are its ease of use and high reported accuracy on modern data.
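For reference, running the default Polish model takes only a few lines; the example sentence below is ours, and the full default pipeline (rather than a hand-picked processor list) is loaded to keep the sketch simple.

# Minimal sketch of POS tagging Polish text with Stanza's default model.
import stanza

stanza.download("pl")              # fetch the default Polish model (one-time)
nlp = stanza.Pipeline(lang="pl")   # default pipeline includes tokenization and POS tagging

doc = nlp("Pojechał do Lwowa i miał wrócić nazajutrz.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.xpos)  # universal and language-specific tags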
The data used for testing the tagger in this project comes from the memoir of Juliusz Czermiński, who lived in the 19th century in the area corresponding to present-day Eastern Poland and Western Ukraine. The original manuscript was composed in 1889, retyped on a typewriter, and recently digitized. No intentional alterations were made to, e.g., seemingly misspelled tokens. This data was first presented by Szawerna (2023), where its divergence from modern Polish was asserted, especially when it comes to features typical of the dialects of that region (Kurzowa, 1983). According to Polański (2004), there was no single universally accepted spelling convention around the time of the memoir's creation. Therefore, the text should not be considered representative of the historical Polish of its time, both due to its dialectal features and due to its spelling, which is not representative of the bulk of the contemporaneous writing.
In its entirety, the data consists of 37,405 tokens. Of those, the first 10,286 tokens were manually annotated using Universal Dependencies' universal POS tags (UPOS tags). A subset of 3,270 tokens was further annotated using XPOS tags. Both of these tagsets are utilized by Stanza. The only changes to the original text were the splitting of the "mobile inflection" as per the UD guidelines and the removal of any punctuation from inside numbers (Szawerna, 2023; Universal Dependencies, n.d.). This previously conducted manual annotation of the tokens has been reviewed, and a few corrections have been made.
Evaluation measures were calculated for both kinds of annotation. The results were also subjected to a qualitative analysis, the goal of which was to determine what kinds of errors are the most prevalent, which could give insights into what kind of pre-processing could eliminate those problems. The misclassified examples were saved and manually annotated for error type before being processed to obtain the relevant statistics.
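The quantitative part of such an evaluation is straightforward to reproduce; the following sketch (with toy tag sequences, not the actual annotations) shows one way to compute accuracy and per-class measures with scikit-learn, assuming gold and predicted tags have been aligned token by token.

```python
# Toy example: accuracy and per-class precision/recall for aligned tag lists.
from sklearn.metrics import accuracy_score, classification_report

gold = ["NOUN", "VERB", "ADP", "NOUN", "PUNCT"]
pred = ["NOUN", "VERB", "ADP", "PROPN", "PUNCT"]

print("accuracy:", accuracy_score(gold, pred))
# per-class breakdown, mirroring the per-class analysis reported below
print(classification_report(gold, pred, zero_division=0))
```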
Results
Stanza exhibits very good performance on modern Polish data and relatively good performance on historical data. Table 3 shows the accuracy achieved by the model on the respective datasets and tagsets. A more detailed evaluation was obtained for the UPOS tagset. Figure 1 and Figure 2 visualize the per-class performance of the model for each dataset, with the counts for each class normalized by the true count for that class (therefore, the values on the diagonal correspond to recall). It is worth pointing out that tags like INTJ and SYM were absent from the historical data altogether. What can be noted is that, with the exception of the SYM and INTJ classes, the tagger shows more consistent performance on modern data than on historical data. While for categories such as ADJ, ADV, AUX, DET, NUM, SCONJ, and X the results on historical data are visibly lower, the overall performance on historical data is still rather good. The XPOS tagset is much larger, on the order of hundreds of tags, making a similar visual comparison uninformative, and a more detailed analysis is beyond the scope of this paper.
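A recall-normalized confusion matrix of the kind shown in Figures 1 and 2 can be produced along these lines (again with toy tag lists standing in for the real annotations):

```python
# Row-normalized confusion matrix: rows sum to 1, so the diagonal is per-class recall.
import numpy as np
from sklearn.metrics import confusion_matrix

gold = ["NOUN", "VERB", "ADP", "PROPN", "PUNCT"]
pred = ["NOUN", "VERB", "ADP", "NOUN", "PUNCT"]
labels = sorted(set(gold) | set(pred))

cm = confusion_matrix(gold, pred, labels=labels, normalize="true")
np.set_printoptions(precision=2, suppress=True)
print(labels)
print(cm)
```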
Another method of inspecting the tagger's performance is investigating the erroneously labeled tokens. Table 4 and Table 5 illustrate the frequency of specific kinds of errors among the mistakes made by Stanza on the memoir, following the general annotation utilized by Szawerna (2023). While the exact proportions differ between the two tagsets, spelling, ambiguity, and unidentified-type errors are the most common for both. Noticeably, UPOS tagging fails on tokens with unusual spelling, including capitalization, which seems to be relevant for identifying PROPN, the replacement of the y (/ɨ/) vowel with e, and the spelling of the /j/ sound with y, the latter two of which distort various inflectional endings. XPOS tagging struggles more with ambiguity (e.g., when more than one grammatical case uses the same ending), although the spelling variation not related to capitalization still has a non-negligible effect. One relevant type of ambiguity error, present in both types of tagging, relates to the sometimes questionable status of verb-derived nouns and adjectives. For example, the word bombardowanie 'bombing' is considered an established noun, but the tagger classifies it as a gerund (WSJP Editorial Team, 2014; NKJP, n.d.), likely because of its form. Interestingly enough, among the annotated XPOS errors there are also several examples of the vocative case being ignored, or of the model defaulting to assigning the masculine grammatical gender to a pronoun despite the context implying that it should be feminine. There are also instances of verbs in the impersonal past form that are consistently misclassified.
Discussion
The results of the quantitative evaluation show good performance of the tagger, exceeding most of the previously reported results, including those reported for the same data and tagger by Szawerna (2023), possibly due to improvements that have been made to Stanza's model. On the other hand, Waszczuk et al. (2018) still achieve better performance on XPOS tags using a CRF-based model. However, they use a more diverse and larger dataset, which may contain more standard Polish than the data investigated in this paper. Nevertheless, Stanza's performance on this test data is only around 4 (UPOS) and 7 (XPOS) percentage points below the accuracy it has shown on its own test set. Interestingly enough, the performance on the PDB test set is slightly higher than that reported by Stanza (n.d.), possibly due to the corpus being pre-tokenized before being fed to the model. A qualitative error analysis has approximated what the tagger struggles with on the test data. Previous studies have shown that variation in spelling, capitalization, and punctuation, differences in morphology and syntax, and semantic shifts are some of the factors that make accurate tagging of historical texts using modern taggers difficult (Rayson et al., 2007; Scheible et al., 2011; Adesam and Bouma, 2016; Hupkes and Bod, 2016). In the case of Stanza, some of those issues, such as nonstandard capitalization, archaic vocabulary, and spelling, have negatively impacted the tagger's performance. This is particularly prominent as far as UPOS tagging is concerned. As far as XPOS tagging goes, issues pertaining to inflectional morphology have been highlighted, such as confusable word endings or problems with words whose class is ambiguous. Additionally, issues such as the possible underrepresentation of rarer classes in the training corpus could be noted, leading to biases concerning feminine pronouns and issues identifying the vocative case.
Conclusions and Future Work
Within this paper, a modern Polish POS tagger, Stanza, has been evaluated on historical and modern data, and some of the issues causing the drop in its performance on historical texts have been successfully identified. It has been shown that it can perform quite well on nonstandard, historical Polish data from the late 19th century, and that this can possibly be improved using some pre-processing methods, making it a promising candidate for at least assisting the annotation of historical texts, if not completely automating it. Many of the misclassified tokens were problematic due to issues previously identified in the literature; however, some problems seemed to stem from the inflectional nature of the language or to be inherent to the tagger itself. Potential biases stemming from the under-representation of certain classes in the training data for the tagger have also been shown.
In the future, it would be interesting to test the influence of various factors, such as punctuation or lowercasing, on the quality of tagging. Another possibility could be comparing the performance of multiple different taggers or tagging architectures on the same data, or testing the same tagger on data from different periods. Alternatively, one could juxtapose the results presented in this paper with those from tagging a very recent nonstandard text, e.g., one sourced from the web, to see to what extent the same issues cause tagging problems. Finally, developing methods for the pre-processing of texts from this period for subsequent tagging could also be quite useful. It would also be interesting to compare how the models for other languages included in Stanza perform on samples of historical texts from their respective languages.
As far as the data itself is concerned, it would be interesting to complete and review the annotation of the entire memoir, and see how the results of an analysis such as the one presented in this paper would change; this would also open up the opportunity for different kinds of research on the text.
Figure 1: Normalized confusion matrix for UPOS tagging of the modern data.
Figure 2: Normalized confusion matrix for UPOS tagging of the historical data.
Table 2: Test results on raw and preprocessed data in other experiments (some results are for more than one tagger or for data from various periods).
Table 3: Stanza's accuracy per text type and tagset.
Table 4: Frequency of errors by type for UPOS tagging.
Table 5: Frequency of errors by type for XPOS tagging.
Polymorphism analysis in identification of genetic variation and relationships among Stylosanthes species.
A total of 148 accessions representing six important species of the genus Stylosanthes, including S. guianensis, S. hamata, S. scabra, S. seabrana, S. macrocephala, and S. capitata, were used to evaluate genetic variation and relationships using sequence-related amplified polymorphism (SRAP) markers. The results showed that the 18 selected primer pairs generated 138 distinct fragments. The fragment sizes ranged from 150 to 2000 bp. Genetic similarity coefficients among the 148 accessions ranged from 0.51 to 0.99, with an average of 0.79. The effective allele number (ne) generated by the 18 primer pairs averaged 1.3552 and ranged from 1.2069 to 1.6080; Nei's gene diversity (He) ranged from 0.1304 to 0.3207, with an average of 0.2070; and Shannon's information index (I) averaged 0.3213 and ranged from 0.2233 to 0.4582. The unweighted pair-group method with arithmetic averages (UPGMA), at the 0.69 similarity level, separated the 148 accessions into two distinct groups: one comprising S. guianensis, and the other comprising the non-S. guianensis species. This study verified that Stylosanthes harbors rich genetic variation, which provides an excellent basis for breeding new Stylosanthes cultivars, and demonstrates that the SRAP technique is a reliable tool for differentiating Stylosanthes accessions and for discerning the genetic relationships among them.
Introduction
The genus Stylosanthes contains approximately 48 species and is naturally distributed in the tropical, subtropical, and temperate regions of the Americas, Africa, and Southeast Asia (Costa and Ferreira 1984). The genus has two foci of diversity, the more important of which is located in central Brazil. This includes 45% of all Stylosanthes species and exhibits the greatest degree of phenotypic variation and endemism. Mexico and the Caribbean Islands are also major centers of Stylosanthes diversity (Stace and Cameron 1984). The plant plays a significant role in providing nutritious forage for animals, improving soil fertility, and restoring degraded land. Four species of this genus, namely, S. scabra, S. hamata, S. guianensis, and S. humilis, have been widely used as tropical forage legumes. Each species has rich variations in morphology, physiology, and genetics. S. guianensis is the most widespread Stylosanthes species and exhibits remarkable phenotypic variability (Williams et al. 1984;Vieira et al. 1993). This species is one of the most important tropical forage legumes currently known and is native to South and Central America and Africa, where it is widely distributed. It is used for grazing cattle, for making leaf meal for livestock, for improving soil fertility in fruit-tree and rubber plantations and for cover crops in Australia, South America, and South China (Burt and Miller 1975).
Because of their adaptation to acidic and infertile soils in semiarid environments, Stylosanthes species have been introduced to many countries, including Australia, India, the Philippines, Thailand, and China, to improve animal production and restore depleted soil nitrogen. Introduction of Stylosanthes from Australia, Africa, and South America to China began in the late 1960s and has continued to the present. Stylosanthes is particularly well-adapted to the environmental conditions of the Guangdong and Hainan provinces of China, and 12 Stylosanthes cultivars have been developed through selective and mutation breeding by Chinese scientists. These include S. guianensis cv. Reyan No. 2 in 1991, S. hamata (L.) Taub. cv. Verano in 1991, S. guianensis Sw. cv. 907 in 1998, S. guianensis Sw. cv. Graham in 1998, S. guianensis cv. Reyan No. 5 in 1999, S. guianensis cv. Reyan No. 10 in 2000, S. guianensis cv. Reyan No. 7 in 2001, S. scabra Vog. cv. Seca in 2001, S. guianensis cv. Reyan No. 13 in 2003, S. guianensis (Aubl.) Sw. cv. Reyan No. 18 in 2007, S. guianensis cv. Reyan No. 20 in 2009, and S. guianensis cv. Reyan No. 21 in 2011 (Huang et al. 2014). Different DNA markers have been used to investigate the genetic diversity and relatedness of members of the Stylosanthes genus. These include random amplified polymorphic DNA (RAPD), which has been used to assess genetic variation in the five taxonomic groups of S. guianensis (Kazan et al. 1993) and between S. scabra and S. fruticosa (Glover et al. 1994). Restriction fragment length polymorphism (RFLP) analysis has been used to investigate the genetic relationships between six unclassified taxa and 24 known species of the genus Stylosanthes (Liu et al. 1999) and to identify putative diploid progenitors of allotetraploid S. hamata (Curtis et al. 1995). Sequence-tagged sites (STS) were used to identify progenitor species for S. scabra (Liu and Musial 1997). Investigation of ribosomal DNA internal transcribed spacers (rDNA ITS) has been used to detect variation in the S. guianensis species complex and in Stylosanthes species (Vander Stappen et al. 2003). Simple sequence repeats (SSR) are available for three species of Stylosanthes: S. guianensis (Santos et al. 2009a; Santos-Garcia et al. 2012), S. capitata (Santos et al. 2009b), and S. macrocephala (Santos et al. 2009c). Amplified fragment length polymorphism (AFLP) has been successfully employed in assessing genetic variation in Mexican and South American S. humilis (Vander Stappen et al. 2000) and in S. viscosa (Sawkins et al. 2001).
Of these methods, RAPD is one of the simplest but has poor reproducibility (Williams et al. 1990). Although the AFLP technique has good reproducibility and reveals high levels of polymorphism, its operation is very elaborate and the costs are relatively high (Vos et al. 1995). Methods based on the analysis of SSRs require prior knowledge of the genome sequence of the organism to design specific polymerase chain reaction (PCR) primers for amplification (Tautz 1989). In comparison, application of sequence-related amplified polymorphism (SRAP) markers overcomes most of these limitations (Li and Quiros 2001). This technique can generate more polymorphic fragments for the assessment of genetic diversity than can SSR, inter-simple sequence repeat (ISSR), or RAPD markers (Budak et al. 2004).
Although previous research has provided preliminary data regarding genetic diversity among the Stylosanthes genus, studies that investigated the levels of variation within Stylosanthes species are limited. Considering the advantages of SRAP markers, we used this method to describe the genetic variability within a group of accessions representing the genetic diversity available in Stylosanthes species germplasm.
Plant materials
A total of 148 Stylosanthes accessions comprising six species were used. Of these, 132 accessions belong to S. guianensis, seven to S. scabra, five to S. seabrana, two to S. hamata, one to S. capitata, and one to S. macrocephala. Sixteen accessions were from the Genetic Resource Unit of the Centro Internacional de Agricultura Tropical (CIAT), 16 from the Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA), six from the International Rice Research Institute (IRRI), 12 from the Institute of Guangxi Animal Science (IGAS) of China, and the remaining 98 accessions from the Chinese Academy of Tropical Agricultural Science (CATAS). A list of the accessions with their codes, accession numbers, places of origin, and sources is provided in Table 1.
DNA extraction
Total genomic DNA of each accession was isolated from one plant according to the modified hexadecyltrimethylammonium bromide (CTAB) DNA extraction procedure described by Huang et al. (2014). The quality and quantity of genomic DNA were estimated by measuring absorbance at 260 and 280 nm using a UV spectrophotometer (BioPhotometer D30, Eppendorf, Germany). The integrity of the DNA was verified by agarose gel electrophoresis (Dongre et al. 2011). DNA concentrations were adjusted to 50 ng/µL to facilitate uniformity of PCR amplification. DNA samples were stored at −20°C until use.
SRAP reactions
Ninety distinct primer pair combinations (nine forward and ten reverse primers) from Yingjun Inc. (Shanghai, China) were tested for PCR analysis (Li and Quiros 2001). The 90 SRAP primer pairs were screened using three selected accessions representing three Stylosanthes species. Each 10 µL PCR mixture contained 50 ng genomic DNA, 0.5 µM forward primer, 0.5 µM reverse primer, and 5 µL 2× Easy Taq PCR SuperMix (TransGen Biotech, Beijing, China). The mixture was overlaid with 20 µL mineral oil before thermal cycling commenced. Amplification was carried out on a thermal cycler (Bio-Rad S1000, USA) as follows: initial denaturation at 94°C for 5 min, followed by five cycles of 1 min denaturation at 94°C, annealing at 35°C for 1 min, and elongation at 72°C for 45 s. In the subsequent 30 cycles, the annealing temperature was 50°C for 1 min, with extension at 72°C for 30 s, terminating with a final elongation step of 7 min at 72°C. The amplified products were stored at 4°C before being loaded onto a gel (Huang et al. 2014). The amplification products were separated by electrophoresis on a 1.5% (w/v) agarose gel in 1.0× TBE buffer (0.09 mol/L Tris-H3BO3, 0.002 mol/L EDTA, pH 8.0) at a constant voltage of 100 V for approximately 1.5 h. GoldView (TransGen Biotech, Beijing, China) stain (0.5 µg/mL) was added to facilitate visualization under UV light. Molecular weights were estimated using a 50 bp DNA ladder (TaKaRa Biotechnology, Dalian, China).
Data analysis
SRAP bands across the gel profiles were scored visually for their presence (1) or absence (0) at least twice for each accession. Only reproducible and unambiguous SRAP fragments were used for scoring. The data were compiled in a binary data matrix using Microsoft Excel and analyzed using the Numerical Taxonomy and Multivariate Analysis System (NTSYS) program, version 2.1 (Exeter Software, Setauket, NY, USA). Simple matching coefficients were computed using the SIMQUAL module of the NTSYS program. Cluster analysis based on genetic similarity coefficients (GSC) using the Nei and Li distance was performed according to UPGMA in the SAHN module of the NTSYS program (Kang et al. 2008). Principal coordinate analysis (PCoA) was performed to estimate the genetic distances among the major groups using the DCENTER and EIGEN modules of the NTSYS program. The effective allele number (ne), Nei's gene diversity (He), and Shannon's information index (I) were computed, along with Nei's standard genetic distance coefficients, using the Popgene32 program (Nei and Li 1979).
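For readers without access to NTSYS, the same scoring-and-clustering logic can be reproduced with open tools; the sketch below uses SciPy's average-linkage (UPGMA) clustering on a toy 0/1 band matrix, exploiting the fact that for binary vectors the Hamming distance equals one minus the simple matching coefficient. The band matrix and accession labels are invented for illustration.

```python
# Toy UPGMA clustering of binary SRAP band scores (1 = band present, 0 = absent).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

bands = np.array([
    [1, 0, 1, 1, 0, 1],   # hypothetical accession S001
    [1, 0, 1, 0, 0, 1],   # hypothetical accession S002
    [0, 1, 0, 1, 1, 0],   # hypothetical accession S003
])
# Hamming distance on 0/1 vectors = 1 - simple matching coefficient
dist = pdist(bands, metric="hamming")
tree = linkage(dist, method="average")  # "average" linkage is UPGMA
dendrogram(tree, labels=["S001", "S002", "S003"])
```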
Primer pair screening
Ninety primer pairs were screened three times on three selected accessions representing three Stylosanthes species (S002, S003, and S114) to test their ability to amplify DNA fragments. The most useful primer combinations were considered to be those having the highest polymorphism rate that also generated a reasonable number of clearly detectable total fragments. Of the 90 SRAP primer pairs evaluated for their ability to amplify Stylosanthes DNA, 72 primer pairs were rejected because they yielded either no amplification or no polymorphic patterns. Eighteen primer pairs from the original 90 were selected for subsequent analysis based on the polymorphic and reproducible bands they generated. The characteristics of the 18 primer pairs are listed in Table 2.
SRAP analysis
SRAP analysis was performed employing the 18 most polymorphic selected primer pairs on the 148 accessions of Stylosanthes. The 18 primer pairs collectively amplified 138 reproducible fragments, ranging in size from 150 to 2000 bp and varying in number between 3 and 12 amplification bands per primer pair. All of these fragments were polymorphic, corresponding to a 100% level of polymorphism. The highest number of amplification products (12) was obtained with the primer pair F08-R01 and the lowest (3) with F02-R07. The average number of fragments among the 18 primer pairs was 7.66. The effective allele number (ne) ranged from 1.2069 to 1.6080 with an average of 1.3552, Nei's gene diversity (He) ranged between 0.1304 and 0.3207 with an average of 0.2070, and Shannon's information index (I) varied between 0.2233 and 0.4582 with an average of 0.3213 (Table 2). An example of the polymorphism detected among some accessions by primer pair F1-R2 is shown in Fig. 1.
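The three per-locus statistics reported above can be computed from allele frequencies with standard formulas; a minimal sketch follows, assuming p is the frequency of one of the two alleles at a biallelic locus (Popgene additionally estimates p from band-absence frequencies under Hardy–Weinberg assumptions for dominant markers, which is omitted here).

```python
# Per-locus diversity statistics for a biallelic locus with allele frequency p.
import numpy as np

def diversity_stats(p: float):
    q = 1.0 - p
    ne = 1.0 / (p**2 + q**2)           # effective allele number
    he = 1.0 - (p**2 + q**2)           # Nei's gene diversity
    freqs = np.array([f for f in (p, q) if f > 0])
    i = float(-np.sum(freqs * np.log(freqs)))  # Shannon's information index
    return ne, he, i

print(diversity_stats(0.8))  # e.g. ne ≈ 1.47, He ≈ 0.32, I ≈ 0.50
```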
Genetic diversity analysis
The GSC values of the 148 Stylosanthes accessions varied between 0.51 and 0.99, with an average of 0.79. Increased genetic distance indicates diminished relatedness between genotypes. The lowest GSC (0.51) was between accessions S020 and S063, which suggests that these were the least related accessions, whereas the highest GSC (0.99) was detected between accessions S069 and S070 and between accessions S073 and S074, indicating very close relationships. A dendrogram was constructed that clustered the 148 accessions into two major groups at the 0.69 similarity level (Fig. 2: UPGMA dendrogram of the 148 Stylosanthes accessions generated from SRAP data), with most accessions from the same species tending to have high genetic similarity and clustering into the same groups or subgroups. One group belonged to S. guianensis, and the other group comprised the non-S. guianensis species. Group 1 included 14 accessions covering S. hamata, S. seabrana, S. scabra, S. macrocephala, and S. capitata. Its GSC varied from 0.60 to 0.96, and it further divided into two subgroups: one accession (S063), derived from S. capitata, separated individually from the other accessions, while the other subgroup comprised 13 accessions, namely one S. hamata accession (S001), four S. seabrana accessions (S113, S114, S143, and S144), and seven S. scabra accessions (S002, S067, S110, S111, S112, S116, and S117). Group 2 consisted of 134 accessions of S. guianensis plus a single S. seabrana accession (S115). Its GSC ranged from 0.62 to 0.99, and it further divided into six subgroups. Subgroup 1 comprised a single accession, S056 (S. guianensis cv. Tardio) from Brazil, belonging to the disease-resistant cultivars. Subgroup 2 also contained only one accession, S104 (S. guianensis cv. Mineirao) from Australia, which exhibits high yields. Subgroup 3 contained ten accessions, including S095 (S. guianensis cv. Reyan No. 2), and presented the characteristics of early blossoming and disease resistance. Subgroup 4 included one accession, S142 (S. hippocampoides) from Brazil. Subgroup 5 consisted of 41 accessions, including the Chinese cultivars S105 (S. guianensis cv. Reyan No. 18), S125 (S. guianensis CIAT184), S118 (S. guianensis cv. 907), S147 (S. guianensis cv. Reyan No. 20), and S148 (S. guianensis cv. Reyan No. 21); the GSC ranged from 0.70 to 0.97. Most of these accessions share ancestry with S. guianensis CIAT184: the anthracnose-resistant cultivar S. guianensis cv. 907 was selected from the population of S. guianensis CIAT184 by mutation breeding; S. guianensis cv. Reyan No. 20 and S. guianensis cv. Reyan No. 21 were selected from the population of S. guianensis cv. Reyan No. 2 by space mutation breeding; and S. guianensis cv. Reyan No. 2 was itself selected from S. guianensis CIAT184. Subgroup 6 included the other 80 accessions, including the Chinese cultivars S015 (S. guianensis cv. Reyan No. 5), S028 (S. guianensis cv. Reyan No. 10), S051 (S. guianensis cv. Reyan No. 7), and S054 (S. guianensis cv. Reyan No. 13); the GSC values ranged from 0.72 to 0.99.
PCoA was conducted based on the genetic resemblance matrix to further understand the distribution of the different accessions. Figure 3 presents the distribution of the accessions along the three principal axes of variation from the PCoA. The percentages of variance explained by principal component 1 (PC1), principal component 2 (PC2), and principal component 3 (PC3) were 79.02, 4.12, and 1.85%, respectively, consistent with the results of the UPGMA cluster analysis.
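Classical PCoA is a small computation once a distance matrix is available; the sketch below implements the textbook double-centering recipe in NumPy (written from the standard definition, not from NTSYS's DCENTER/EIGEN internals).

```python
# Principal coordinate analysis (classical metric MDS) of a distance matrix d.
import numpy as np

def pcoa(d: np.ndarray):
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]           # sort eigenpairs, largest first
    vals, vecs = vals[order], vecs[:, order]
    pos = np.clip(vals, 0.0, None)           # drop negative eigenvalues
    coords = vecs * np.sqrt(pos)
    explained = pos / pos.sum()
    return coords, explained                 # coords[:, :3] give PC1-PC3
```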
Discussion
Our results demonstrate that SRAP analysis effectively and efficiently provided quantitative estimates of genetic relatedness among Stylosanthes accessions. We found a high level of polymorphism (100%) among the accessions, which confirms that the SRAP marker technique generates highly reproducible DNA profiles for Stylosanthes accessions. The extent of polymorphism from the SRAP analysis in the present study was higher than that from RAPD (25.6%) (Kazan et al. 1993), SSR (45.0%), and AFLP (95.5%) (Jiang et al. 2005). The GSC range (0.51-0.99) in this study was consistent with that found using RAPD (0.55-1.00) as reported by Kazan et al. (1993). However, these values are greater than the range of 0.30-0.90 observed with SSR (Vander Stappen and Volckaert 1999) and 0.30-0.95 for AFLP analysis by Jiang et al. (2005). The GSC values obtained in our study demonstrated that the level of genetic diversity is relatively high among Stylosanthes accessions. The locations of the clusters obtained from PCoA also demonstrated wide genetic variability among the clusters. These results suggest that a high level of polymorphism exists in Stylosanthes accessions.
In summary, the SRAP marker technique has advantages in terms of convenience, high reproducibility, and high polymorphism content and can be used as a superior method for germplasm identification and genetic diversity studies of Stylosanthes. Furthermore, the molecular relationships generated from this study should be useful in breeding programs for Stylosanthes.
Conclusions
This study demonstrates that the SRAP technique is a reliable tool for differentiating Stylosanthes accessions and determining the genetic relationships among them. A high level of polymorphism among the 148 accessions was found. Genetic similarity coefficients ranging from 0.51 to 0.99 indicated that the genetic base of the accessions is broad. The range of genetic similarity coefficients within S. guianensis (0.62-0.99) was wider than that within the other species (0.60-0.90). This information will be useful for determining optimal breeding strategies. Furthermore, genetic distance between parents should be considered in Stylosanthes breeding programs.
Consistent Long-Term Therapeutic Efficacy of Human Umbilical Cord Matrix-Derived Mesenchymal Stromal Cells After Myocardial Infarction Despite Individual Differences and Transient Engraftment
Human mesenchymal stem cells attract special interest as a universal and feasible add-on therapy for myocardial infarction (MI). In particular, human umbilical cord matrix-derived mesenchymal stromal cells (UCM-MSC) are advantageous since they can be easily obtained and display high expansion potential. Using isolation protocols compliant with cell therapy, we previously showed that UCM-MSC preserved cardiac function and attenuated remodeling 2 weeks after MI. In this study, UCM-MSC from two umbilical cords, UC-A and UC-B, were transplanted in a murine MI model to investigate the consistency and durability of the therapeutic benefits. Both cellular products improved cardiac function and limited adverse cardiac remodeling 12 weeks post-ischemic injury, supporting a sustained and long-term beneficial therapeutic effect. Donor-associated variability was found in the modulation of cardiac remodeling and in the activation of the Akt-mTOR-GSK3β survival pathway. In vitro, the two cell products displayed a similar ability to induce the formation of vessel-like structures and comparable transcriptomes in normoxia and hypoxia, apart from differences in UCM-MSC proliferation and in the expression of a small subset of genes associated with MHC Class I. These findings support UCM-MSC as strong candidates to assist the treatment of MI whilst calling for a discussion on methodologies to characterize and select the best-performing UCM-MSC before clinical application.
INTRODUCTION
Cardiovascular diseases are the leading cause of morbidity and mortality worldwide (Benjamin et al., 2019), with ischemic heart disease representing the largest single cause of death in countries of all income levels (Nowbar et al., 2019). Myocardial infarction (MI) occurs upon blockage of coronary arteries and impaired regional blood supply to the myocardium. As a result of nutrient and oxygen deprivation, extensive cardiomyocyte death triggers a sequence of key inflammatory and fibrotic mechanisms that, coupled with the limited regenerative capacity of the adult heart (Sampaio-Pinto et al., 2020), leads to the formation of a non-functional collagen-based scar that negatively impacts myocardial function and contributes to the development of heart failure. Despite tremendous improvement in the treatment and prognosis of MI, achieved with early reperfusion and optimized pharmacological therapy, some patients with severe and diffuse coronary disease still experience significant ventricular remodeling, myocardial dysfunction, and high morbidity and mortality.
Cardiac cell-based therapies aimed at regeneration and/or at instructing a more favorable repair have been explored in clinical settings, including skeletal myoblasts, embryonic stem cells (ESCs), bone marrow mononuclear cells (BMMNCs), cardiac stem cells (CSCs), hematopoietic stem cells (HSCs), mesenchymal stromal cells (MSCs), and, recently, induced pluripotent stem cell (iPSC)-derived cardiomyocytes in preclinical studies (Madigan and Atoui, 2018). The existence of CSCs in the adult myocardium has raised controversy, in particular concerning their capacity to generate cardiomyocytes (Valente et al., 2014; Maliken and Molkentin, 2018), as their therapeutic effect has been associated with immunomodulatory and paracrine mechanisms also observed in human MSCs (Wagner et al., 2020). Indeed, envisioning the development of a universal and feasible therapy, MSCs are of particular interest (Ballini et al., 2017). In contrast to other candidates for cell therapy, MSCs do not express MHC Class II and display low levels of MHC Class I proteins; as such, they are seen as immune-evasive (Ankrum et al., 2014) and thus suitable for MHC-mismatched allogeneic transplantation. These cells can be procured from a variety of adult sources, such as the bone marrow and adipose tissue, and neonatal sources, including the placenta, umbilical cord blood, and umbilical cord matrix (UCM; Wharton's jelly). Among these, UCM-MSC are particularly attractive since the source tissue can be obtained in a non-invasive and more efficient fashion; they have higher expansion potential and a broader differentiation range, and they were shown to be stronger immunomodulators, repressing T-cell activation and promoting Treg expansion more efficiently (Santos et al., 2013).
Our previous work showed that human MSCs isolated from the UCM [obtained using proprietary technology (UCX®)] preserved cardiac function and attenuated cardiac remodeling 2 weeks after MI through paracrine mechanisms. To date, one phase 1/2 clinical trial has been completed to evaluate the safety and efficacy of UCM-MSCs specifically in the treatment of acute MI with ST elevation, followed by two others directed to heart failure with reduced ejection fraction (Zhao et al., 2015; Bartolucci et al., 2017). While intravascular delivery of UCM-MSC appears safe and leads to improvement of heart function and other clinical indicators (Thompson et al., 2020), little is known regarding the extent of engraftment and whether donor-to-donor variability may influence the therapeutic potency of these cellular products or their derivatives, e.g., conditioned media and extracellular vesicles. Moreover, donor variability is a concern transversal to all MSC-based therapies, independently of tissue source, as it could potentially lead to confounding effects when only one donor is selected to represent an experimental group. In vivo functional studies with multiple donors have highlighted this issue, with reports of umbilical cord blood (UCB)-MSC donor variability in the response to hypoxic preconditioning and the amelioration of limb ischemia (Kang et al., 2018), as well as in a rat model of MI in which therapeutic efficacy was positively correlated with N-cadherin expression (Lee et al., 2012). In the latter work, cell-cell contact mediated by N-cadherin induced activation of ERK and upregulation of VEGF, as shown by overexpression and blocking approaches (Lee et al., 2012).
In this study, UCM-MSC from two umbilical cords were isolated and their therapeutic efficacy after MI was compared to evaluate its consistency and long-term effect. Intramyocardial delivery of both cellular products in a murine MI model attenuated cardiac dysfunction and minimized adverse cardiac remodeling 12 weeks post-ischemic injury, supporting a sustained and long-term beneficial therapeutic effect for this cell product. Despite this beneficial effect, donor-associated variability in the modulation of cardiac remodeling and in the activation of survival pathways was evident. In vitro, the two cell products showed an equal ability to boost the formation of vessel-like structures and a similar transcriptome in normoxia and hypoxia, apart from expression differences in a small subset of genes associated with MHC Class I. These findings support UCM-MSC as a strong candidate add-on therapy for MI whilst calling for a discussion on methodologies to characterize and select the best-performing UCM-MSC before application in cellular therapies, or on alternatives to overcome this limitation.
UCM-MSC Isolation
Human UCM-MSC were isolated according to Martins et al. (2014) and patented proprietary technology (PCT/IB2008/054067; WO 2009044379) developed by ECBio. The procedure includes three recovery phases to ensure a high cell yield and high isolation success rates. Furthermore, the UCM-MSC used in this study were obtained and processed under protocols compliant with a certifiable advanced therapy medicinal product (ATMP), i.e., compliant with clinical cell therapy. Modifications include steps for avoiding microbiological and endotoxin contamination of the final cell product, use of clinical-grade enzymes, human serum as a fetal bovine serum (FBS) substitute, and a short initial antibiotic/antimycotic decontamination in place of sustained treatment; all can be reviewed in Martins et al. (2014). Isolated UCM-MSC were cultured (up to P8) in Minimum Essential Medium α (α-MEM; Gibco; 2 mM L-glutamine, 1 g/L glucose, 2.2 g/L sodium bicarbonate), buffered with 10 mM HEPES (Gibco), hereafter designated UCM Basal Medium (BM), supplemented with 10% human serum (HS; Lonza; except where otherwise stated), in a humidified incubator at 37°C, 21% O2 and 5% CO2. UCM-MSC characterization procedures can be found in the Supplementary Data.
Myocardial Infarction and UCM-MSC Delivery
Eight- to twelve-week-old adult C57BL/6 mice (Charles River), independent of sex, were subjected to MI by permanent ligation of the left anterior descending (LAD) coronary artery, as previously described (Michael et al., 1995; Nascimento et al., 2014). UCM-MSC from two umbilical cord donors (UC-A and UC-B) were thawed in α-MEM containing 10% HS and resuspended in phosphate-buffered saline (PBS). UCM-MSC (2 × 10⁵ cells/heart) were delivered after LAD ligation by four intramyocardial injections of 5 µL each using a Hamilton syringe (30 gauge, PST 45°, 1701N, Hamilton Company). A group of control animals injected with vehicle (PBS, n = 6) was subjected to the same surgical procedure and post-operative care. UCM-MSC preparations were kept on ice throughout the surgical procedures, and preparations older than 3-4 h were discarded. Analgesia and fluid therapy were performed by intraperitoneal (IP) injection of buprenorphine (Butador; Richter Pharma AG) and subcutaneous injection of 5% w/v glucose intravenous infusion (B. Braun), respectively. This procedure was repeated every 12 h up to 72 h after surgery or until full recovery.
Transthoracic Echocardiography
Transthoracic echocardiography was performed 12 weeks after LAD coronary artery ligation and UCM-MSC delivery by using a Vevo 2100 microultrasound platform fitted with a high resolution 38 MHz microscan transducer (both from FujiFilm VisualSonics Inc.) and data analyzed by a blinded operator. Anesthesia was induced with 5% isoflurane, animals were placed on left lateral decubitus position and anesthesia maintained at 2% isoflurane throughout the procedure for data acquisition. Fractional shortening (FS) and ejection fraction (EF) were determined in parasternal long-axis (PSLAX) B-mode, using a modified Simpson's method as previously described (Sampaio-Pinto et al., 2018). Cardiac output was determined by computing stroke volume (SV) in the left ventricle outflow track (LVOT) determined using the Pulse wave (PW)-doppler mode in the subapical view, diameter of the aortic root (B-mode) and Heart Rate (HR). The Myocardial Performance Index (MPI), also known as the Tei index, was determined based on the isovolumetric contraction and relaxation times (IVCT and IVRT) and LV ejection time (LVET), all determined by PW-doppler at the mitral valve level.
Histologic Procedures and Morphometric Analysis
At 12 weeks after surgery, hearts were collected for representative histological sampling as previously described (Valente et al., 2015). Briefly, animals were deeply anesthetized by IP injection of pentobarbital (Eutasil; CEVA, 400 mg/kg). After injection of 4 M potassium chloride (Sigma-Aldrich) into the left ventricular chamber, diastole-arrested hearts were harvested, briefly washed in PBS, and fixed in 10% neutral buffered formalin (Prolabo; VWR International) for up to 16 h at room temperature before paraffin embedding. Representative sampling of the LV was obtained by transverse sectioning (3 µm thick) from the apex to the base of the paraffin-embedded hearts with an interval of 300 µm between sections. Infarct-size assessment was performed by staining paraffin sections with a modified Masson's Trichrome (MT) staining, according to the Trichrome (Masson) Stain kit (Sigma-Aldrich), with the following modifications: nuclei were pre-stained with Celestine Blue solution after staining with Gill's Hematoxylin, with incubation for 1 h in aqueous Bouin solution to promote uniform staining. Infarcted area, infarct midline, and LV dilation were calculated using the semi-automatic MIQuant software (Nascimento et al., 2011). LV infarcted wall thickness was determined manually using ImageJ as follows: for each section with a transmural infarction, the thickness of the wall from the epicardial to the endocardial border was measured at five equidistant points, and the average for each section was determined. The results shown per heart represent the average of all infarcted sections.
Immunofluorescence
After heat-induced epitope retrieval with Tris-EDTA buffer (95°C water bath, 35 min, pH 9.0, 1 mM Tris and 10 mM EDTA), tissue was permeabilized with 0.2% Triton X-100 (Sigma-Aldrich) for 5 min and blocked for 1 h in 4% FBS/1% BSA in PBS. For CD31 detection, sections were incubated overnight at room temperature (RT) with goat anti-mouse CD31 (sc-1506; Santa Cruz Biotechnology, Dallas, TX, USA), diluted 1:250 in the blocking solution. Thereafter, sections were incubated with AlexaFluor-568-conjugated donkey anti-goat IgG (A11057; Invitrogen) diluted 1:1000 in blocking solution for 1 h at RT and mounted using Fluoroshield containing DAPI (F6057; Sigma-Aldrich). For quantification of CD31+ cells, fluorescence images of stained sections were acquired with the IN Cell Analyzer 2000 (GE Healthcare) high-throughput microscope with a 40× dry objective (0.60 NA) and processed semi-automatically using the embedded system software. A study-blinded operator established thresholds and criteria for detection and performed the analysis.
Lentiviral Transduction of UCX®
A premade lentiviral vector encoding a cytomegalovirus (CMV) promoter-driven cassette containing the transgene for firefly luciferase (FCT005; Kerafast) was used to produce UCM-MSC lines from either donor with constitutive bioluminescence capacity, hereafter referred to as UC-A-FLuc and UC-B-FLuc. The vectors also carried a puromycin resistance gene (puro) and a woodchuck hepatitis virus post-transcriptional regulatory element (WPRE) downstream of the transgene. Transduction was performed as described in Lin et al. (2012). Briefly, UCM-MSC were cultured (from P3) in Minimum Essential Medium α with 2 mM L-glutamine (α-MEM; Gibco) containing 20% FBS (Gibco) and 1% P/S (100 U/mL penicillin and 100 µg/mL streptomycin). Cells were sub-cultured at P4 (10⁴ cells/cm²) in six-well plates, and transduction was initiated after 12 h with 0.5 mL/well of complete medium containing 100 µg/mL protamine sulfate (P4020; Sigma) at a multiplicity of infection of 5. After 8 h, 0.5 mL of complete medium containing protamine sulfate was added to compensate for evaporation. Twenty-four hours after initiating transduction, the medium was replaced; cells were allowed to recover for 48 h and then sub-cultured in complete medium containing 0.05 µg/mL puromycin up to P7, at which point the UCM-MSC-FLuc cells were cryopreserved in FBS containing 10% DMSO. Non-transduced cells were treated with puromycin in parallel to control selection efficiency.
Whole-Body Bioluminescence Imaging
UC-A-FLuc and UC-B-FLuc were delivered into the hearts of mice subjected to LAD coronary artery permanent ligation as described above. A group of animals subjected to sham surgery (n = 2, no ligation) was also prepared. Imaging was performed daily from day 1 to day 7, 15 min after subcutaneous injection of 3 mg D-luciferin (BT11, BioThema) in PBS (30 mg/mL). The IVIS Lumina III system, coupled with the XGI-8 Gas Anesthesia System (both PerkinElmer) to induce anesthesia, was used for imaging. Signal intensity analysis was performed in identical circular regions of interest centered on the thoracic cavity, expressed as radiance (photons/second/cm²/steradian) using the Living Image software (PerkinElmer), and normalized in each animal to the value read at day 1 (results presented as a percentage of day 1 for individual animals).
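The per-animal normalization described above amounts to dividing each day's radiance by the day-1 value; a minimal sketch with made-up radiance readings:

```python
# Normalize each animal's daily radiance to its day-1 value (values are invented).
import numpy as np

# rows = animals, columns = imaging days 1-7 (radiance, photons/s/cm²/sr)
radiance = np.array([
    [5.2e6, 3.9e6, 2.9e6, 2.2e6, 1.1e5, 8.0e4, 7.5e4],
    [4.8e6, 3.5e6, 2.7e6, 2.0e6, 9.8e4, 8.3e4, 7.9e4],
])
pct_of_day1 = 100.0 * radiance / radiance[:, [0]]   # broadcast over columns
print(pct_of_day1.round(1))
```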
Immunoblotting
Immunoblotting was performed as previously described. Briefly, samples were homogenized in modified RIPA buffer, proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then electroblotted onto nitrocellulose membranes (Bio-Rad). After blocking, blots were incubated with primary antibodies (Supplementary Table 1), which were subsequently detected with 700 or 800 nm infrared dye-conjugated secondary antibodies. Protein phosphorylation status was evaluated by incubating the membrane simultaneously with host-mismatched primary antibodies targeting the total and phosphorylated forms, which were identified with different fluorochrome-coupled secondary antibodies. Membranes were imaged by scanning at both 800 and 700 nm with the Odyssey Infrared Imaging System (LI-COR Biosciences). GAPDH was used as an internal control.
Hypoxia Induction
UCM-MSC seeded at 1 × 10⁴ cells/cm² (37°C, 21% O2, 5% CO2) were allowed to adapt to a low serum concentration [5% human serum (HS)] until they reached 90% confluency. At this point, cells were submitted to a hypoxic environment (1% O2; normoxia groups kept at 21% O2) for 24 h to mimic the oxygen deprivation found upon transplantation into infarcted tissue. After one serum-free wash, the medium was replenished with α-MEM without HS (25 mL per 175 cm² T-flask) and conditioning was carried out for a further 24 h. Finally, the medium was collected, concentrated with 3 kDa cut-off spin concentrators, and stored at −80°C until further use.
Targeted Transcriptome Sequencing
Total RNA from UCM-MSC submitted to normoxia or hypoxia for 24 h was isolated using the RNeasy Plus Mini Kit (QIAGEN). Ion Torrent sequencing libraries were prepared according to the AmpliSeq Library Prep Kit protocol, as published (Li and Zhang, 2015). RNA concentration and total RNA integrity number (RIN) were obtained using a Qubit 3.0 fluorometer and an Agilent 2100 Bioanalyzer, respectively. Briefly, 10 ng of total RNA with high RIN (average ± SD for n = 3: UC-A_N = 8.87 ± 0.58, UC-A_H = 9.10 ± 0.37, UC-B_N = 8.63 ± 0.12, UC-B_H = 9.30 ± 0.41) was reverse transcribed; the resulting cDNA was amplified for 12 cycles by adding PCR Master Mix and the AmpliSeq human transcriptome gene expression primer pool (targeting 18,574 protein-coding mRNAs and 2,228 non-coding ncRNAs, based on UCSC hg19). Amplicons were digested with the proprietary FuPa enzyme, and then barcoded adapters were ligated onto the target amplicons. The library amplicons were bound to magnetic beads, and residual reaction components were washed off. Libraries were amplified, purified, and individually quantified using Agilent TapeStation High Sensitivity tape. Individual libraries were diluted to a final concentration of 50 pM and pooled equally, with twelve individual samples per pool for further processing. Emulsion PCR, templating, and 550-chip loading were performed with an Ion Chef instrument (Thermo Fisher). Sequencing was performed on an Ion S5 XL sequencer (Thermo Fisher). Results from 3 independent conditioning experiments were analyzed in the Transcriptome Analysis Console, and only genes with a fold change > ±2 and an FDR p-value < 0.05 were considered. Heatmaps were generated using the average linkage clustering method with Spearman rank correlation as the distance measure. Gene ontology and KEGG pathways for up- and downregulated terms were analyzed using Enrichr.
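The differential-expression filter described above is easy to replicate downstream of any console export; the sketch below assumes a hypothetical CSV with signed fold-change and FDR columns (the file and column names are ours, not the console's).

```python
# Filter a differential expression table by |fold change| > 2 and FDR < 0.05.
import pandas as pd

df = pd.read_csv("deg_table.csv")           # hypothetical export, one gene per row
hits = df[(df["fold_change"].abs() > 2) & (df["fdr_p"] < 0.05)]
up = hits[hits["fold_change"] > 0]          # upregulated in the comparison group
down = hits[hits["fold_change"] < 0]        # downregulated
print(f"{len(hits)} DEGs: {len(up)} up, {len(down)} down")
```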
In vitro Tubulogenesis Assay
The tubulogenesis assay was performed as described in Arnaoutova and Kleinman (2010), with slight alterations. Primary human cardiac microvascular endothelial cells (HMVEC-C), a potential clinical target of angiogenic mechanisms after MI, were maintained in EGM-2 MV medium (both from Lonza) and used at passage 6. Growth factor-reduced Matrigel (10 µL, Corning) was used to coat a 15-well Angiogenesis µ-Slide (81506; Ibidi) and allowed to polymerize at 37°C for 30 min. HMVEC-C were suspended in complete medium (EGM-2 MV), conditioned media (CM), or concentrated negative control (α-MEM, no cells) diluted in basal medium (EBM), with a final CM concentration of 5×, and seeded at 6.5 × 10⁴ cells/cm² in a total of 50 µL per well. Conditioned media from 3 independent hypoxia inductions were run in parallel, along with technical triplicate wells for each condition. After 7.5 h of incubation at 37°C and 5% CO2, the center of each well was imaged using phase-contrast microscopy on an inverted Axiovert 200 microscope (Carl Zeiss) with a 10× objective. Image analysis was performed in ImageJ (NIH) using the Angiogenesis Analyzer plugin (Carpentier, 2012).
Statistical Analysis
GraphPad Prism 8 was used for statistical analysis. The Shapiro–Wilk test was used to assess the normality of the samples, and the F test or Brown–Forsythe test to probe equality of variances. Datasets following a Gaussian distribution and showing the same standard deviation were analyzed using the independent-samples Student's t-test for two groups or one-way ANOVA for three or more groups, the latter followed by Tukey's post hoc test for multiple comparisons. Statistical significance of non-parametric data was tested with the Kruskal–Wallis test, followed by the Benjamini–Hochberg FDR adjustment for multiple comparisons.
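An open-source equivalent of this workflow (with toy measurements standing in for the actual echocardiography values) could look as follows, using SciPy for the normality check and ANOVA and statsmodels for Tukey's test.

```python
# Illustrative sketch of the testing workflow described above (toy data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

vehicle = np.array([20.1, 24.3, 22.8, 19.5, 23.0, 24.0])
uc_a = np.array([38.2, 41.0, 44.3, 36.9, 42.1])
uc_b = np.array([44.0, 47.5, 39.8, 52.1, 46.3, 43.2, 42.0])

# normality check per group (Shapiro-Wilk)
for group in (vehicle, uc_a, uc_b):
    print("Shapiro p =", stats.shapiro(group).pvalue)

# one-way ANOVA followed by Tukey's post hoc test
print(stats.f_oneway(vehicle, uc_a, uc_b))
labels = ["vehicle"] * len(vehicle) + ["UC-A"] * len(uc_a) + ["UC-B"] * len(uc_b)
values = np.concatenate([vehicle, uc_a, uc_b])
print(pairwise_tukeyhsd(values, labels))
```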
RESULTS
We had previously shown that UCM-MSC attenuate remodeling after myocardial infarction upon intramyocardial delivery through proangiogenic, antiapoptotic, and endogenous cell-activation mechanisms, as observed 2 weeks after MI. Herein, using the same murine MI model, the efficacy of UC-A and UC-B was evaluated in a long-term scenario of 12 weeks. UC-A and UC-B were collected from different donors using proprietary technology and updated protocols compliant with cell therapy in a clinical setting (Martins et al., 2014).
Both cell lines meet the minimal criteria defined by the International Society for Cellular Therapy (Dominici et al., 2006), namely plastic adherence, expression of the CD73, CD105, CD90, and CD44 surface markers, and absence of CD45, CD34, CD31, CD19, and HLA-DR (Supplementary Figures 1A,B, Supplementary Tables 2, 3). Despite a similar MSC profile, UC-B displayed greater proliferation rates, as demonstrated by higher levels of histone H3 phosphorylation (pH3) and greater metabolic activity (Supplementary Figures 1B,C). This resulted in 9.6 million cells per cm of cord for UC-B and 2.3 million for UC-A at P2 (data not shown).
UCM-MSC Transplantation Consistently Improves Cardiac Function 12 Weeks After MI
Cardiac function was analyzed by high-resolution echocardiography (n = 6 in the vehicle group; n = 5 in UC-A; n = 7 in UC-B). Representative images of the PSLAX view of each experimental group demonstrate an attenuation of left ventricular (LV) dysfunction in the transplanted groups when compared to the vehicle control (Figure 1A). A consistent improvement of LV functional parameters in animals treated with UCM-MSC was observed. Ejection fraction improved from 22.3 ± 6.0% in the vehicle-treated group to 40.5 ± 7.5% in UC-A (p = 0.0012) and 45.0 ± 11.8% in UC-B (p = 0.0313) (Figure 1B). Fractional shortening was also significantly improved between the vehicle and UC-B groups (p = 0.0313; from 8.3 ± 2.1% in vehicle-treated animals to 14.5 ± 5.5%), while UC-A showed a similar degree of improvement (to 14.4 ± 2.7%; p = 0.0516) that did not reach statistical significance, potentially due to the small number of animals in that group (Figure 1C). An overall trend for improved cardiac output (Figure 1D) and myocardial performance index (Figure 1E) was also observed in UCM-MSC-treated groups compared to the vehicle control. Of note, UC-B consistently improved cardiac function compared to UC-A, which induced only minor functional benefits.
UC-B Outperforms UC-A in Reducing Adverse Cardiac Remodeling 12 Weeks After MI
In line with the echocardiographic functional evaluation, a beneficial effect of UCM-MSC was observed, suggesting an attenuation of MI-triggered cardiac remodeling in UCM-MSC-treated groups. Masson's Trichrome-stained heart sections were subjected to morphometric analysis to evaluate cardiac remodeling (LV dilation and wall thickness), and infarct size was calculated with the MIQuant software (Figure 2A). Infarct extension was reduced from 36.8 ± 4.8% in controls to 26.1 ± 2.7% in UC-A (p = 0.0277) and to 13.1 ± 8.3% in UC-B (p < 0.0001). Of note, the infarct size in UC-B was smaller compared to UC-A (p = 0.0062) (Figure 2B). Infarct midline showed the same trend for reduction, illustrating the therapeutic efficacy of UCM-MSC, although no significant differences were found between vehicle and UC-A (from 42.9 ± 9.3% to 34.7 ± 6.2%); once more, UC-B outperformed UC-A, with a midline infarct size of 11.8 ± 11.2% (p = 0.0024) (Figure 2C). For LV dilation, a trend for improvement but no statistically significant difference was observed between control and UC-A-treated groups (26.0 ± 10.8% vs. 17.4 ± 4.0%), while UC-B showed a less dilated LV at 6.9 ± 4.8% (p = 0.0007 vs. vehicle) (Figure 2D). Strikingly, for infarcted wall thickness, no differences were observed between vehicle and UC-A (0.44 ± 0.41 mm vs. 0.41 ± 0.022 mm), with UC-B performing significantly better, with an infarcted wall thickness of 0.74 ± 0.25 mm (p = 0.0248 vs. vehicle and p = 0.0261 vs. UC-A) (Figure 2E). Increased neovascularization induced by this cellular product was observed previously 2 weeks after MI in the infarct borderzone. Herein, we did not find any differences in the number of endothelial cells in the infarcted area or the borderzone (Figures 2F-H), indicating that the distinct efficacy of the cords does not relate to increased angiogenic capacity of the tissue in this chronic phase of MI.
UC-A and UC-B Following Myocardial Delivery Are Equally and Transiently Retained in the Myocardium
We hypothesized that a key factor determining the different efficacies of UCM-MSC could be their ability to reside and survive for substantially different times in the infarcted left ventricular wall undergoing nutrient and oxygen deprivation and inflammation. Moreover, the higher proliferation observed in vitro for UC-B could result in a higher number of UCM-MSCs after transplantation of similar cell numbers. To address this, UC-A and UC-B expressing firefly luciferase under the constitutive CMV promoter after lentiviral transduction were used in an in vivo longitudinal study to monitor their survival in our xenotransplant model in immunocompetent infarcted mice (Figures 3A,B). UC-A-FLuc and UC-B-FLuc were delivered immediately upon coronary ligation in groups of 6 animals each. Upon administration of D-luciferin every 24 h, only viable cells carrying luciferase can produce bioluminescence. After day one, the number of cells dropped sharply and, from this point onward, decreased by ∼25% every day, reaching negligible numbers by day 5 (Figure 3B). The clearance profile observed was the same for both cords. Of note, and albeit for a small number of animals, the decrease in viable cell numbers in sham controls was steeper from day 2 to day 3, with no signal detectable by day 4, suggesting the MI environment might extend the cells' survival in the host tissue.
Activation of Akt Signaling in UC-B Treated Hearts Supports Improved Survival, Metabolism and Proliferation 48 h Post MI
Having established that UC-A and UC-B have distinct performances in vivo, and considering that both persist for a short and similar period of time in the infarcted tissue, we assayed the short-term therapeutic potential of UC-A and UC-B after MI. For this purpose, the LV borderzone was isolated 48 h after MI and UC-A or UC-B delivery (n = 6 per group), and the main survival and inflammatory pathways were evaluated (Figure 3C). Increased Akt phosphorylation (Thr308) was observed in UC-B, suggesting higher Akt activity in UC-B-treated hearts. Concurrently, GSK3β, a known repressor of metabolism, proliferation, and survival, showed a significant increase in Akt-mediated inhibitory phosphorylation at Serine 9. Phosphorylation of mTOR, another downstream effector of Akt signaling, was also upregulated in the UC-B-treated group, suggesting increased Akt-mTOR-GSK3β signaling. STAT3 and ERK, pro-inflammatory and pro-remodeling pathways, showed similar activation levels with both cords. Furthermore, the expression of ICAM-1 and VCAM-1, regulated by inflammatory processes, and of Caspase-3 appeared unchanged.
UC-A and UC-B Display Comparable Transcriptomic Profiles and Adaptation to Hypoxia Stimulus
Aiming to identify gene expression differences correlating with the enhanced therapeutic potential of UCM-MSC in ischemic conditions, the transcriptomic profiles of the two cell lines were compared under normoxic and hypoxic conditions, the latter mimicking the environmental changes installed in MI (Figure 4A). Unsupervised hierarchical clustering of the UC-A and UC-B datasets did not show an evident association between paired experiments, nor any effect of the hypoxic treatment (Figure 4B), suggesting homogeneous datasets. Moreover, principal component analysis (PCA) (Figure 4C) showed that the datasets overlap on PC1, which explains 50.1% of the samples' variance. On the PC2 axis, which contributes 19.7% of the dataset variance, both cords shift together under low oxygen levels, indicating an effect of environmental oxygen levels and a similar response of both UCM-MSC lines under these conditions. Regarding differential gene expression analysis (Figures 4D-F), UC-A and UC-B subjected to normoxia showed a total of 155 differentially expressed genes (DEGs), of which 85 were up-regulated and 70 down-regulated in UC-B. Under hypoxia, a comparable total of 153 genes was found to be altered, with 90 upregulated and 63 downregulated in UC-B (fold change > ±2, FDR < 0.05). Since the pro-reparative potential of MSC in the heart has been associated with paracrine signaling, we focused our analysis on matrisome-associated proteins (including ECM-affiliated proteins, ECM regulators, and secreted factors). Of the reported ∼1000 matrisome-associated genes (Naba et al., 2016), only 15 were differentially expressed, of which 7 and 8 were up- and down-regulated in UC-B, respectively (Figure 4E). GO enrichment analysis of the differentially expressed genes under normoxia (Figure 4F) suggested increased activity associated with antigen presentation via MHC Class I, with HLA-C, HLA-A, and HLA-E upregulated in UC-A. Pathway enrichment analysis (KEGG) supported this evidence, indicating higher transcription of genes associated with allograft rejection and cell adhesion molecules involved in inflammation (HLA-C, HLA-A, HLA-E, and HLA-DPA1), as well as genes involved in complement and coagulation cascades (complement components, vitronectin, and MASP1). Thirty-seven genes showed altered expression in response to hypoxia in both UC-A and UC-B; 27 were upregulated and 9 downregulated in both cords (Figures 4B,C,G). The subset of upregulated genes showed an enrichment for processes related to the cellular response to hypoxia and glycolysis; enriched KEGG pathways further hinted at an adaptation of both cords to hypoxia, with enrichment of the HIF-1 signaling pathway as well as glycolysis and gluconeogenesis, most notably the upregulation of VEGF and glucose-6-phosphate. As the two cords changed alongside each other in response to hypoxia, GO and KEGG pathway enrichment analyses of the differentially expressed genes between UC-A and UC-B after hypoxia retrieved results similar to those found in normoxia (data not shown). Overall, and despite similar transcriptomic profiles in normoxia and hypoxia, expression differences between UC-B and UC-A were found in genes encoding MHC Class I molecules and complement activation-related proteins, which are important elements of the inflammatory response.
UC-A and UC-B Present Equivalent Potential to Induce Tubulogenesis in Human Cardiac Endothelial Cells in vitro
We and others have previously identified angiogenesis as one of the main mechanisms boosted by human UCM-MSC delivery upon MI (Zhang et al., 2013; Nascimento et al., 2014). As such, a classical in vitro tubulogenesis assay was performed to assess the angiogenic potency of the donor-cord pair in this study (Figures 5A-C). Human microvascular endothelial cells of cardiac origin were seeded onto a matrigel layer (growth factor reduced) in media conditioned by UC-A and UC-B and allowed to form tubes for 7.5 h. The results shown correspond to 3 independent conditioning experiments, with tubes quantified in triplicate wells for each condition/experiment. When compared to endothelial basal media (EBM), the conditioned media produced by the UCM-MSC, in either normoxia or hypoxia, increased tube number (by 98.2 ± 21.7% in UC-A-Normoxia (N), 94.0 ± 21.7% in UC-A-Hypoxia (H), 106.0 ± 19.6% in UC-B-N, and 106.5 ± 21.1% in UC-B-H), tube length (by 48.6 ± 9.38% in UC-A-N, 46.8 ± 8.15% in UC-A-H, 51.6 ± 5.58% in UC-B-N, and 51.4 ± 2.03% in UC-B-H) and branching points/junctions (by 117 ± 29.1% in UC-A-N, 114.4 ± 27.10% in UC-A-H, 125.1 ± 26.39% in UC-B-N, and 131.3 ± 28.21% in UC-B-H). Notably, no significant differences were found between the two cords, nor between UCM-MSC subjected to normoxia or hypoxia.
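The percent increases reported above are computed relative to the EBM baseline; a trivial sketch of the arithmetic, with hypothetical well counts for illustration only:

```python
def percent_increase(treated_mean: float, baseline_mean: float) -> float:
    """Percent change of a tubulogenesis metric relative to the EBM control."""
    return (treated_mean - baseline_mean) / baseline_mean * 100.0

# Hypothetical example: if EBM wells average 50 tubes and UC-B-N wells average 103,
print(percent_increase(103, 50))  # 106.0, on the scale of the values reported above
```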
N-Cadherin Transcriptional Profile Is Identical Between Cords
It was previously suggested that transcriptional and translational levels of N-Cadherin positively correlate with the in vivo therapeutic efficacy of human UCB-MSCs in MI, in a study evaluating MSCs derived from a set of 4 cords (Lee et al., 2012). Based on this evidence, we assessed N-Cad expression levels in UC-A and UC-B at the end of the 3 independent conditioning experiments, in an effort to predict the therapeutic efficacy of our donor pair (Figure 5D). While we observed a marginal increase in N-Cadherin levels in UC-B in normoxia, and a similar trend between cords in hypoxia, the differences were statistically non-significant, and therefore do not account for the discrepant effect observed between UC-A and UC-B.
DISCUSSION
Our previous research has shown that human UCM-MSC attenuate remodeling after MI in mice, although functional improvement was not attained, likely as a result of the limited timeframe (14 days) of the in vivo study. Whether this beneficial effect was sustained in the long term and/or was dependent on donor-to-donor variability was not addressed. In the present study, we challenged that effect in a long-term scenario of 12 weeks and with cells derived from two different umbilical cord donors, UC-A and UC-B. Echocardiographic analysis showed that human UCM-MSC delivery in the acute phase resulted in sustained LV function, smaller infarct size and attenuated cardiac remodeling. To the best of our knowledge, this is the first evidence that transplantation of human UCM-MSC in the ischemic rodent heart provides a durable therapeutic effect at the functional and histological levels. These results are consistent with the outcome reported in the preliminary work of López et al. (2013), in which functional improvement was found 32 weeks following UCM-MSC therapy. However, in that study, the administered cells were isolated from rats, thus limiting translational relevance, and no systematic assessment of cardiac remodeling was performed. Another report, by Hsiao et al., compared the therapeutic potential of human UCM-MSC, primed or not with TGF-β2, in the context of MI. While showing attenuation of functional decline in the cell-treated groups over the course of 16 weeks, no differences were shown at the endpoint vs. the control group to support a long-term beneficial therapeutic effect (Hsiao et al., 2016).
We further show that, in our model of xenotransplantation into immunocompetent mice, UCM-MSC transiently persisted in the infarcted myocardium for no longer than 5 days, despite eliciting a durable therapeutic effect. Myocardial long-term engraftment of human cells has been demonstrated following delivery to immunodeficient/immunosuppressed (Chang et al., 2008; Gaebel et al., 2011; Latifpour et al., 2011; Roura et al., 2012; Im Cho et al., 2017) and immunocompetent animals (Berry et al., 2006; Zhang et al., 2013; Monguió-Tortajada et al., 2017). Of note, all these studies assessed cellular integration in tissue sections at the experimental end-point by immunofluorescence or by using fluorescent cell-tracking dyes. This contrasts with longitudinal studies in which bioluminescence has been used to trace transplanted cells. In these studies, animal-derived cells transplanted into animal models showed clearance times similar to what we observed (Deuse et al., 2009; Wu et al., 2017) or, at most, up to 30 days (Yan et al., 2013; Tu et al., 2019). Overall, these differences seem to reflect mostly the type of methodology used to assess engraftment. While in vivo bioluminescence is less sensitive than immunodetection for identifying rare events of persistent cells, it allows reliable quantification of cell clearance/engraftment throughout the study. Herein, at the endpoint, no human cells were detected in histological sections using an anti-human antibody (data not shown), reinforcing complete cell clearance. Moreover, differences in UCM-MSC proliferation in vitro did not translate into a higher cell number or longer survival in vivo. While the time to transplant clearance could be different in a human-to-human transplantation scenario, these data demonstrate that a short period of contact is sufficient for therapeutic benefit. This small time-window of engraftment is not compatible with a scenario of MSC differentiation into cardiac cells, as has been proposed by others (Nagaya et al., 2004; Berry et al., 2006; Chang et al., 2008; Li et al., 2010). Instead, it argues for the paracrine effects described for these cells in multiple reports (Yao et al., 2015; Cai et al., 2016), in which immunomodulatory properties, ECM remodeling ability and the capacity to promote angiogenesis are the main mechanisms (Guo et al., 2020). Several bioengineering strategies are under development to improve retention, survival, and engraftment of transplanted cells in the myocardium (Jiang et al., 2020). Of interest, a recently developed hydrogel-based combination of UCM-MSC with endothelial cells showed that in vitro maturation prior to transplantation promotes vasculogenic potential and cell survival/retention after transplantation in mice (Torres et al., 2020). Although this approach may be a valuable delivery alternative for UCM-MSCs, whether longer retention will translate into better therapeutic efficacy still requires further investigation.
While both cell products were consistently beneficial and resided in the tissue for the same period of time, we show that the extent of LV function and morphology preservation at 12 weeks exerted by UC-B was superior to that of UC-A, even though the cells were isolated using a proprietary protocol envisaged to produce a homogeneous product (Martins et al., 2014). Moreover, UC-B delivery resulted in increased Akt-mTOR-GSK3β signaling in the infarcted myocardium 2 days post-MI. These observations are in line with abundant evidence demonstrating that the Akt-mTOR-GSK3β pathway is an important cardioprotection mechanism, promoting cardiomyocyte survival and metabolic homeostasis (Matsui et al., 2001; Shiraishi et al., 2004; Sussman et al., 2011; Lin et al., 2015). Also, the therapeutic effect of MSC delivery to the heart has been shown to encompass the secretion of a panoply of growth factors that activate mechanisms involving the PI3K/Akt/mTOR pathway (Arslan et al., 2013; Cai et al., 2016). In agreement with this perspective and our findings, exosomes released by MSC promote cardiac functional restoration and improve remodeling following delivery to the ischemic heart (Arslan et al., 2013; Kang et al., 2015). Altogether, these findings support the Akt-mTOR-GSK3β pathway as a key target for therapy in ischemic diseases (Matsui et al., 2001; Shiraishi et al., 2004; Sussman et al., 2011; Lin et al., 2015).
In retrospect, and in an effort to identify features that could justify the observed differences in therapeutic potency and capacity to activate the Akt-mTOR-GSK3β pathway, we compared the transcriptomes of these cells when cultured in vitro. The two cellular products were considered similar in normoxia, apart from higher expression of HLA-I genes in UC-A, suggesting altered antigen processing via MHC class I and allograft rejection. MHC class I molecules are present on all nucleated cells and mediate allogeneic rejection by presenting peptide antigens to CD8+ T cells (Braciale, 1992); thus, higher HLA-I expression on MSC following transplantation could increase the risk of rejection by the host. Yet, MSC are able to evade immune surveillance by downregulating HLA-I surface expression, even when primed with IFN-γ (Wang et al., 2019). In our study, given the high immunologic barrier to xenotransplantation, together with the hostile inflammatory milieu triggered by MI, UCM-MSC were cleared from the tissue within 5 days post-transplant. Moreover, despite having higher expression levels of MHC class I genes, UC-A persisted in the myocardium for as long as UC-B, indicating that these expression differences were not reflected in a faster clearance rate, nor could they justify the differential therapeutic efficacy of the two cords.
Contrasting with our previous results at 14 days after MI, hearts treated with UCM-MSC displayed a vascular network similar to the vehicle group 12 weeks after MI. It is possible that neovascularization played a role in containing adverse remodeling and the expansion of the scar by preventing cardiomyocyte death in the border zone of the acute ischemic region, and might have resolved to baseline levels at this stage. Our transcriptomic and in vitro angiogenesis functional analyses on human cardiac endothelial cells anticipate a similar angiogenic profile between cords; hence, differences in angiogenesis may not be the cause of the observed in vivo variation between donors. Donor variability regarding the therapeutic use of MSCs has been described and linked mostly to angiogenesis. Kang et al. (2018) described a variable response to hypoxia in a set of UCB-MSCs derived from 7 cords, based on a panel of 4 genes (ANGPTL4, ADM, CDON, and GLUT3); better responders were associated with higher angiogenic potency in vitro and, for 2 cords, showed better performance in vivo when challenged in a model of limb ischemia. Lee et al. (2012) showed differences in angiogenic potency that correlated with the therapeutic potential of four hUCB-MSC lines in a mouse model of MI, which could be linked to individual differences in the expression of N-cadherin, resulting in overactivated ERK that led to increased VEGF signaling. Herein, neither N-cadherin nor VEGF was differentially expressed between the two cords, supporting our in vitro functional data regarding equivalent angiogenic induction performance. Regarding MSCs derived specifically from the UCM, Kim et al. (2019) compared the angiogenic capacity of different donor-derived UCM-MSC based on the tube-forming assay and advanced four biomarkers (angiogenin, interleukin-8, monocyte chemoattractant protein-1, and VEGF) to predict the proangiogenic potential of MSC in vivo. In our setting, hypoxia-primed cords upregulated VEGFA, but their conditioned media in normoxia vs. hypoxia showed equal potential to induce the formation of vessel-like structures by cardiac microvascular endothelial cells, indicating that VEGFA on its own is not a key effector of the angiogenic capacity of UCM-MSC.
CONCLUSION
This work is, to the best of our knowledge, the first evidence that transplantation of human UCM-MSC in the ischemic rodent heart provides a durable therapeutic effect at both functional and histological levels as observed 12 weeks after MI, despite transient engraftment.
Additionally, as far as we know, this is the first report of UCM-MSC donor-related variability in the ischemic heart. However, both donors performed equally well in the tube-forming assay, and therefore none of these assays was able to predict their therapeutic potential in vivo. As such, and despite angiogenesis being a key mechanism for tissue repair after MI, other assays are needed to prospectively identify the best-performing MSC for clinical applications. In our setting, we show that therapeutic potency may not directly link with differential angiogenic potential nor with a variable response to hypoxia. Instead, we hypothesize that other mechanisms may be at play, such as differences in cardiac protection via Akt-mTOR-GSK3β signaling, as shown in our protein analysis 2 days after MI.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ebi.ac.uk/arrayexpress/, E-MTAB-9978.
ETHICS STATEMENT
The animal study was reviewed and approved by IBMC-INEB (Instituto de Biologia Molecular e Celular-Instituto de Engenharia Biomédica) Animal Ethics Committee and Direcção Geral de Alimentação e Veterinária (permit 022793).
AUTHOR CONTRIBUTIONS
TL: study design, data acquisition and analysis, writing original draft. FV-N, RG, and VS-P: data acquisition and analysis, review and editing. PC, HC, JS, and RB: conceptualization, funding, review and editing. PP-d-Ó: conceptualization, study design, funding, supervision, review and editing. DN: conceptualization, study design, funding, supervision, data acquisition and analysis, writing original draft, review and editing. All authors contributed to the article and approved the submitted version. The funding bodies other than ECBio had no role in the design of the study; in the collection, analysis, and interpretation of data; in the writing of the manuscript; or in the decision to submit the manuscript for publication.
Immunopathology of lung diseases: introduction for the special issue
INTRODUCTION
This issue of Seminars in Immunopathology focuses on lung innate immunity and the recently identified mechanisms contributing to this unique environment. It was in the 1980s that T-cell subsets were first described by the cytokines that induced them and the cytokines they then produced [1,2]. This segregation allowed more flexibility in the immune system, fine-tuning responses to eliminate very different antigens or pathogens. Sub-division to define optimal function, however, has now expanded to natural killer (NK) cells, innate lymphoid cells (ILCs), dendritic cells (DCs) and macrophages. Though polarising conditions in vitro provide clarity in our understanding of influencing forces, they do not tell us how this occurs in vivo, how it is regulated, or whether it contributes to common pathological conditions.
The discovery of tissue-specific regulation of innate and adaptive immunity has transformed our understanding of the immune system, provided clarity for otherwise unexplainable phenomena, and opened up a new area for scientific discovery [3]. Immunity adapts to tissue-specific cues that in turn maintain or restore homeostatic physiology. Forces driving the tissue adaptation in disease alter immune phenotype and reactivity, which can have deleterious consequences. The lung is a prime example of site-specific forces dictating the balance between health and disease. In this issue we examine lung innate immunity in this precise tissue-specific context, the impact of ILCs and neutrophils on health and disease, macrophage and DC adaptation and the consequences of disturbances in chronic lung disease and infection.
To some extent, tissue-specific influences on innate immunity should depend on what the tissue requires of that innate immune cell. Alveolar macrophages reside in a prime location for interaction with environmental and commensal microorganisms. Their role appears to be to sift the harmless from the dangerous and also perform domestic duties, keeping airspace clutter within tolerable margins [4]. These duties include removal of surfactant proteins, cellular and matrix debris, and apoptotic cells. Cell turnover by apoptosis is necessary, and removal of apoptotic bodies is essential to prevent inflammation-inducing secondary necrosis. In this issue, Grabiec and Hussell detail the mechanisms and impact of apoptotic cell uptake (efferocytosis) on airway macrophage function. Clearly, clearance of self-cells or proteins must be performed without activating the macrophage, otherwise peripheral tolerance will be overcome, leading to chronic inflammation or autoimmunity. The process of efferocytosis therefore triggers anti-inflammatory cascades. On the other hand, clearance of pulmonary pathogens by airway macrophages may require assistance from other cells, which necessitates macrophage production of chemokines that culminates in inflammatory cell recruitment. Eventually these recruited cells will themselves undergo apoptosis and require efferocytosis. Here we have a conundrum where two opposing functions are requested of airway macrophages: efferocytosis and inflammation. Writing in this issue, Robb and colleagues discuss the impact of neutrophil and eosinophil apoptosis and their clearance from the airways on the resolution of the inflammatory response.
Despite the considerable advances made in recent years in our understanding of airway macrophage origin, heterogeneity, longevity and turnover, there are still many unanswered questions [5]. Do original airway macrophages perform housekeeping, inflammatory and repair roles? Are these functions catered for by different macrophage populations, e.g. resident versus recruited? Does function actually drive macrophages down specific developmental paths? For example, efferocytosis of apoptotic cells or engulfment of matrix products may induce a wound-healing macrophage phenotype; function driving form. DCs have also been extensively subdivided based on their origin and on the mechanisms by which they are activated and which they may use to coordinate downstream inflammatory responses in the lung (reviewed in this issue by Cook and MacDonald). However, innate immune cell polarisation may also be driven by cellular connections within the local environment. Location is likely to be of considerable importance in determining cell sub-type dominance, as required functions will be very different at different sites. Consider macrophages or DCs in lymph nodes versus those in the tissue or mucosal lumen, for example [6]. In this issue, Bhattacharya and Westphalen discuss the influence of the bronchial epithelium on airway macrophage function in homeostasis and the importance of this interaction in orchestrating the macrophage-driven immune response.
Balanced interplay between innate and adaptive immune responses is required for timely removal of pathogenic microorganisms infecting the lung, resolution of the inflammatory process and repair of the damaged tissue. Respiratory viral infections are the most prominent example of how the balance between pathogen elimination and prevention of immune-mediated lung injury is achieved [7]. Understanding the immune mechanisms underlying an effective but self-limiting anti-viral response is particularly important in the context of the emergence of novel respiratory viruses, such as the SARS and MERS coronaviruses, and the constant threat posed by pandemic influenza. Recent advances in this field are reviewed in these pages by Newton and colleagues.
Allergic lung inflammation and asthma are generally associated with uncontrolled T helper 2-cell-mediated inflammation caused by the adaptive immune response to airway allergens. However, the recent discovery of ILCs, a novel innate cell population, has greatly improved our understanding of the importance of innate immunity in these conditions [8]. Van Rijt and colleagues discuss the involvement of type 2 ILCs in allergic asthma, in particular their role as an early innate source of type 2 cytokines, such as interleukin (IL)-13 and IL-5, which are the main drivers of allergic inflammation. Disruption of both innate and adaptive immune responses also contributes to chronic obstructive pulmonary disease (COPD), which is among the leading causes of morbidity and mortality worldwide [9]. Also in this issue, Caramori and colleagues detail the roles of innate and adaptive immune cell populations in COPD immunopathology, both in its stable phase and during exacerbations. The authors also highlight the contributions of altered cytokine networks to chronic inflammation and airway remodelling in COPD.
Uncontrolled production of inflammatory cytokines, predominantly by innate immune cells and resident cells of the respiratory tract, plays a key role in driving airway damage not only in COPD, but also in many other lung diseases [10]. While the contributions of IL-1β to several lung immune pathologies were identified more than a decade ago, experimental evidence has accumulated in recent years showing that other pro- and anti-inflammatory members of the IL-1 family, including IL-33, IL-18 and IL-37, also play pivotal roles in controlling these pathological processes. In this issue, Borthwick details the impact of this cytokine family on lung immunopathology, with a special focus on lung fibrosis.
In summary, this issue of Seminars in Immunopathology will provide the readers with an overview of key processes controlling the immune homeostasis of the lung and the consequences of dysregulation of these processes, which leads to chronic inflammation, lung tissue injury and/or fibrosis. Great advances have been made in recent years in our understanding of the molecular mechanisms underlying immune-mediated lung pathologies due to the introduction of mouse models mimicking certain aspects of human disease. However, future translational work on primary patient material is required to validate these findings and to tackle the challenge posed by the growing impact of chronic and acute lung diseases on public health.
Research on Routing Algorithm in Intelligent Meter Reading
The introduction of broadband micro-power wireless technology into the national grid intelligent meter reading system can increase bandwidth, improve meter reading performance, and make up for the narrow bandwidth and low speed of narrow-band micro-power wireless and PLC technology. In a smart meter reading network, the generation and maintenance of routes are key. This paper improves the AODVjr algorithm, replacing full-network broadcasting with semi-directional directed broadcasting and calculating channel quality with a weighted update, making it suitable for real-time route repair in broadband micro-power wireless intelligent meter reading. The improved algorithm minimizes route repair time and routing overhead while ensuring the communication success rate. The simulation results show that, as the number of nodes increases, semi-directional broadcasting performs much better than full-network broadcasting in terms of routing request packet transmission success rate, average hop count, communication success rate, and route repair delay.
Introduction
Over more than 100 years of power grid development, communication, network and control technologies have been continuously combined with the initial generation, transmission and distribution functions, making the grid gradually intelligent and digital. The smart grid is a basic application of NB-IoT, and smart meter reading technology is widely used in smart grids [1][2][3]. Power line carrier (PLC) technology was the earliest applied in smart meter reading because the power line transmission medium already exists, but it is greatly affected by random impulse noise, and two nodes may be physically close while the information transmission line between them is long. To address the problems of PLC, the industry developed narrow-band micro-power wireless technology; because of its low power consumption and flexible network deployment, it is widely used in dual-mode communication combined with PLC [4]. However, due to its narrow bandwidth, low data transmission rate and weak anti-interference ability, the State Grid is now studying broadband micro-power wireless communication technology [5]. Its wide bandwidth, with a chirp modulation bandwidth up to 3.6 MHz, provides strong anti-interference ability and large channel capacity, which largely overcomes the shortcomings of narrow-band micro-power wireless and PLC.
In the intelligent meter reading communication network, route planning and repair are a major problem. An efficient routing algorithm can greatly reduce routing overhead and improve the communication success rate. Literature [6] evaluated a series of routing protocols based on intelligent algorithms, such as reinforcement learning (RL), the ant colony algorithm (ACO) and the genetic algorithm (GA), and further pointed out their application scenarios. Literature [7] proposed a Q-routing model based on Q-learning, in which the Q-value is continuously updated according to state-action pairs and the next-hop node is selected according to the minimum Q-value. Literature [8] used clustering to divide the network into multiple sub-networks; an improved AODVjr algorithm is used between the sub-networks, and a tree routing algorithm is used within each sub-network to find the destination node. Literature [9] proposed an F-AODVjr algorithm that decides whether to forward RREQ packets based on the remaining energy of the nodes. Literature [10] selected the next-hop route in a tree routing algorithm by means of the node neighbor table, deleting invalid nodes from the neighbor table to reduce invalid forwarding. There are also many studies based on game theory, LoRa and other technologies to optimize routing, improve efficiency, increase transmission distance, and extend network lifetime [11][12][13][14][15].
This paper improves the AODVjr algorithm for real-time route repair in a broadband micro-power wireless intelligent meter reading network. In the tree network topology, the improved AODVjr algorithm uses oriented half-direction broadcasting, and the tree distance d(u, v) determines the forwarding radius of the packet. Each time the packet is forwarded, the forwarding radius is reduced by 1, and the packet is no longer forwarded once r = 0. At the same time, each relay node recalculates the tree distance: if r >= d(u, v), the node continues forwarding to find the destination node; otherwise the packet is discarded. During forwarding, the link quality is updated according to a weight. Simulations comparing the improved algorithm (AODVjr-Pro) with the algorithms in [8] and [9] show that the former has better performance.
The structure of this paper is as follows: the first part describes the intelligent meter reading network topology model; the second part describes the classical AODVjr algorithm and points out its shortcomings; the third part proposes an improved algorithm addressing those shortcomings and applies it to the broadband micro-power wireless intelligent meter reading network; the fourth part analyzes the proposed algorithm in the meter reading network scenario; the fifth part concludes the paper.
Intelligent meter reading network topology model
In the broadband micro-power wireless intelligent meter reading network, in order to realize dual-mode operation in combination with broadband PLC, the tree network topology specified in the broadband PLC protocol is adopted. As shown in Figure 1, the connections represent communication links.
The nodes in Figure 1 have three roles: the central coordinator (CCO) controls the data transmission and network status of the entire network; the proxy coordinator (PCO) is the relay node, which has simple data processing functions to forward data; station (STA) is a terminal smart meter.
AODVjr routing algorithm
The AODVjr algorithm is a lightweight, on-demand routing algorithm derived from the AODV algorithm. Three messages are involved: the route request message (RREQ), the route reply message (RREP), and the route error message (RERR). It removes the sequence numbers, hello messages and predecessor list of the AODV algorithm. Since there is no predecessor list, only the destination node can send the RREP packet, and the destination node directly processes the first RREQ packet it receives.
The AODVjr routing algorithm has two phases: the route discovery phase and the route repair phase. In the route discovery phase, assume the route from node S to node D in Figure 1 is broken: 1) the source node S generates an RREQ packet and broadcasts it over the entire network; 2) each node that receives it judges the packet by the process shown in Figure 2; 3) step 2) is repeated until the destination node D is found.

Figure 2. AODVjr algorithm route discovery process

In the route repair phase, the link state between two nodes is determined at the MAC layer, and the result is passed up to the network layer. Route repair is a full-network broadcast of the RREQ packet, with a large routing overhead, so it is necessary to judge whether the link is actually broken or only temporarily disconnected due to accidental interference. In ZigBee, each node keeps a counter of failed information transmissions; when the count reaches a threshold, the link is considered broken and the repair process is initiated.
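A minimal sketch of this counter-based link-break detection, assuming a per-neighbour counter and an illustrative threshold (the concrete threshold value is implementation-specific and not given in the paper):

```python
class LinkMonitor:
    """Count consecutive MAC-layer transmission failures per neighbour and
    declare the link broken once a threshold is reached."""

    def __init__(self, threshold: int = 3):  # threshold value is illustrative
        self.threshold = threshold
        self.failures: dict[int, int] = {}   # neighbour id -> consecutive failures

    def report(self, neighbour: int, success: bool) -> bool:
        """Record one transmission result; return True if repair should start."""
        if success:
            self.failures[neighbour] = 0
            return False
        self.failures[neighbour] = self.failures.get(neighbour, 0) + 1
        return self.failures[neighbour] >= self.threshold
```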
AODVjr route repair is a full-network broadcast of the RREQ packet, which can easily cause broadcast storms, increase routing overhead and even congest the network. In addition, the route repair is transparent to the source node sending information, which keeps sending data packets normally; if the node S initiating route repair has no or insufficient storage capacity, the packet loss rate increases.
Tree distance
In the broadband micro-power wireless intelligent meter reading network, we make improvements that address the shortcomings of the AODVjr algorithm and propose an AODVjr-Pro algorithm to improve the communication success rate, shorten route repair time and reduce routing overhead.
The broadband micro-power intelligent meter reading network adopts a tree network topology to form distributed routes. Based on the relationship between the tree distance d(u, v) and the routing request packet forwarding radius r, we improve the full-network broadcast into a half-direction forward. The tree distance d(u, v) represents the minimum number of hops between two nodes u and v, and is expressed as:

d(u, v) = depth(u) + depth(v) - 2 × depth(lca(u, v))    (1)

where depth(u) is the depth of the route repair initiating node u, depth(v) is the depth of the destination node v, and lca(u, v) is the common parent node of u and v (generally node u itself).
The route repair initiating node u calculates d(u, v) and sets r = d(u, v). Each time the route request message is forwarded, its radius r is decremented by 1, and when r = 0 it is not forwarded further. At the same time, a node that receives the route request message recalculates d(u, v); if r >= d(u, v), it continues to forward the route request message to find the destination node v, otherwise the message is discarded and not processed.
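A minimal Python sketch of the tree distance of formula (1) and the forwarding-radius rule, assuming each node knows its parent in the tree (the `parent` map and function names are illustrative):

```python
def depth(node, parent):
    """Depth of a node in the tree (root has depth 0)."""
    d = 0
    while parent[node] is not None:
        node = parent[node]
        d += 1
    return d

def tree_distance(u, v, parent):
    """Minimum hop count between u and v via their common ancestor, Eq. (1)."""
    ancestors = set()
    x = u
    while x is not None:
        ancestors.add(x)
        x = parent[x]
    lca = v
    while lca not in ancestors:   # climb from v until an ancestor of u is met
        lca = parent[lca]
    return depth(u, parent) + depth(v, parent) - 2 * depth(lca, parent)

def should_forward(r, current, dest, parent):
    """Decrement the radius; drop at zero; keep forwarding only while the
    remaining radius can still reach the destination."""
    r -= 1
    return r > 0 and r >= tree_distance(current, dest, parent)
```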
Signal to noise ratio
In order to increase the communication success rate, we use the signal-to-noise ratio (RSN) to measure link quality. The RSN threshold is -3.8 dB: if the link quality is greater than this threshold, the link is used for communication; otherwise the link is discarded.
The value of the SNR threshold is based on the bit error rate (BER). The SNR simulation conditions are as follows: channel environment, a band-limited AWGN channel. Here, a simulation rate of 600 Kbps is mainly used, and the BER-SNR relationships corresponding to different numbers of sampling points are compared. The simulation rate formula is:

simulation rate = sampling rate / sampling points    (2)

Figure 3 shows the relationship between bit error rate (BER) and signal-to-noise ratio (SNR): when the bit error rate reaches 10e-3, the signal-to-noise ratio threshold at 600 Kbps is -3.8 dB. When receiving the same route request packet from the same source node, the link quality is updated by weight to avoid accidental interference, according to:

RSN = i × old_RSN + j × new_RSN    (3)

where i and j are the weights, with i + j = 1; old_RSN is the stored RSN, and new_RSN is the RSN calculated from the received message.
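The threshold check and the weighted update of formula (3) can be sketched in a few lines; the 0.6/0.4 weight split follows the simulation parameter j = 0.4 reported later (variable names are illustrative):

```python
SNR_THRESHOLD_DB = -3.8  # link acceptance threshold at 600 Kbps (Figure 3)

def link_acceptable(rsn_db: float) -> bool:
    """A link is kept only if its measured RSN meets the threshold."""
    return rsn_db >= SNR_THRESHOLD_DB

def update_rsn(old_rsn: float, new_rsn: float, i: float = 0.6, j: float = 0.4) -> float:
    """Weighted link-quality update, formula (3), with i + j = 1."""
    assert abs(i + j - 1.0) < 1e-9
    return i * old_rsn + j * new_rsn
```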
AODVjr-Pro algorithm steps
The specific steps of the algorithm are as follows. First step: when a PCO forwards a service data packet, the route repair process is triggered if the periodic route evaluation finds the route invalid or no route exists. The route repair initiating node first calculates the tree distance d(u, v) according to formula (1) and assigns it to the message forwarding radius, r = d(u, v). It then generates a routing request packet, broadcasts it in a half direction, and searches for the destination node.
Second step: after receiving the route request message, an intermediate node determines whether forwarding is required according to the process of Figure 4, following the steps of Table 1 (relay node forwarding steps):
Step 1: After receiving the route request packet, first evaluate whether the RSN meets the threshold. If yes, go to Step 2; otherwise, discard the message.
Step 2: Judge whether the node is the destination node. If yes, generate and send a route reply message; otherwise, go to Step 3.
Step 3: Judge whether the message is a duplicate from the same source node. If yes, update the RSN according to formula (3); otherwise, go to Step 4.
Step 4: Reduce the forwarding radius r by 1, then judge whether r is zero. If yes, discard the message and do not forward it; otherwise, go to Step 5.
Step 5: Recalculate d(u, v) and compare d(u, v) with r. If r >= d(u, v), add a new route to the routing table, store the value of the RSN, and forward the route request packet; otherwise, discard the packet.

Figure 4. AODVjr-Pro route repair discovery phase

Third step: Repeat the second step until the destination node is found. As the route request message is forwarded step by step, a distributed route back to the route repair initiating node is formed.
Fourth step: After receiving the first route request packet, the destination node starts a timer and monitors for a period of time, selects the proxy node with the best link quality, and generates a route reply message that is sent to the route repair initiating node through the selected proxy node.
During this transmission, a distributed route to the destination node is formed.
Fifth step: After receiving the route reply packet, the route repair initiating node sends a route response packet to the destination node and sets the route status to available. If no route reply packet is received within the specified time, a routing error packet is sent to the data-sending source node. The complete relay-node decision is sketched below.
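Putting the pieces together, a compact sketch of one relay node's handling of a route request (Steps 1-5 above), reusing `link_acceptable`, `update_rsn` and `tree_distance` from the earlier sketches; the packet fields and return labels are illustrative, not from the paper's implementation:

```python
def handle_rreq(node, pkt, routing_table, parent):
    """pkt carries: source, destination, remaining radius r, measured rsn."""
    if not link_acceptable(pkt.rsn):                      # Step 1
        return "discard"
    if node == pkt.destination:                           # Step 2
        return "send RREP"
    key = (pkt.source, pkt.destination)
    if key in routing_table:                              # Step 3: duplicate
        routing_table[key] = update_rsn(routing_table[key], pkt.rsn)
        return "update only"
    pkt.r -= 1                                            # Step 4
    if pkt.r == 0:
        return "discard"
    if pkt.r >= tree_distance(node, pkt.destination, parent):  # Step 5
        routing_table[key] = pkt.rsn                      # store route + RSN
        return "forward"
    return "discard"
```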
Simulation results and analysis
There are a variety of network simulation platforms, such as NS2, OPNET, OMNeT++, NS3, etc. This paper chooses the NS3 network simulation platform for simulation. Based on the shared PLC NS3 Module of the University of British Columbia, some functional codes are modified to conform to the application scenarios of broadband micro-power wireless smart meter reading. The real-time route repair algorithm proposed in this paper is simulated on the built platform.
In this section, the improved AODVjr-Pro algorithm is simulated and compared with the NCLZHR algorithm of literature [8] and the F-AODVjr algorithm of literature [9]. As the number of nodes varies, we compare and analyze the routing request message transmission success rate, average hop count, communication success rate, and route repair delay; the results show that the proposed algorithm has a clear advantage. The number of nodes ranges from 10 to 100. Among the main simulation parameters, the new_RSN weight (j) is 0.4, and hence the old_RSN weight (i) is 0.6, since i + j = 1. Figure 5 shows the distribution of the routing request message transmission success rate of the three algorithms as the number of nodes increases.
The success rate of routing request packet transmission reflects the size of the routing overhead. It can be seen from the figure that the success rate of the routing request message decreases as the number of nodes increases. When there are fewer nodes, there are fewer forwardings and more routing request messages reach the destination node, so the success rate is higher. The more nodes, the more packets are forwarded, the larger the routing cost, and the lower the success rate of route request packets. The simulation results show that the routing request message transmission success rate of the proposed AODVjr-Pro algorithm is higher than that of the other two algorithms. That is, for routing failures with the same number of nodes, this algorithm lets the network complete the route repair process faster and resume normal communication sooner.

Figure 5. Routing request message transmission success rate

Figure 6 shows the distribution of the average hop count among the three algorithms as the number of nodes increases.
The hop count refers to the number of times a route request message is forwarded from the source node to the destination node, and the average hop count is the average number of forwardings of route request messages over the simulation time. The average hop count is a key indicator of network overhead. As the number of nodes increases, the number of network layers increases; because the hop count is closely related to the number of layers, the route request message is forwarded more often. As can be seen from the figure, the average hop count of the proposed AODVjr-Pro algorithm is smaller than that of the other two algorithms; especially when the number of nodes exceeds 60, the growth of the average hop count of this algorithm is relatively flat. In other words, this algorithm requires fewer forwardings and incurs less routing overhead. The communication success rate of data transmission reflects the stability and reliability of network routing and its self-repair ability after routing failure. As can be seen from the figure, the communication success rate shows a downward trend as the number of nodes increases. When the number of nodes is greater than 70, the AODVjr-Pro algorithm shows a flatter trend than the other two algorithms, and for the same number of nodes its communication success rate is slightly higher. The algorithm is therefore better suited to the real-time route repair process of broadband micro-power wireless intelligent meter reading. Finally, the route repair delay increases with the number of nodes, but as the number of nodes grows from 10 to 100 the change in delay is only a few milliseconds. The AODVjr-Pro algorithm has a smaller delay than the other two algorithms, showing that its half-direction broadcast of routing request messages greatly reduces routing overhead, shortens route repair time and makes route repair more efficient.
Conclusion
In this paper, when designing the real-time route repair process of the broadband micro-power wireless intelligent meter reading network, the AODVjr algorithm is improved: the request message is changed from full-network broadcast to directional half-direction broadcast, and the link quality is updated according to a weight. The paper first describes the AODVjr algorithm and points out the deficiencies of its full-network broadcast; the improved algorithm is then put forward and simulated in broadband micro-power wireless intelligent meter reading network scenarios. The simulation results show that, as the number of nodes increases, the AODVjr-Pro algorithm performs better than the other two algorithms, with smaller routing overhead, a higher communication success rate, and lower route repair delay.
A survey of thrombosis experts evaluating practices and opinions regarding venous thromboprophylaxis in patients post major abdominal surgery
Background Patients undergoing major abdominal surgery are at high risk for developing venous thromboembolism in the post-operative period. Current evidence-based guidelines recommend routine pharmacological venous thromboembolism prophylaxis in patient at moderate to high risk post major abdominal surgery. However, the type of agent, dose and duration of thromboprophylaxis remain unclear. We sought to survey current clinical practice and assess for potential clinical equipoise regarding pharmacological thromboprophylaxis post major abdominal surgery. Methods An electronic survey targeting thrombosis expert members of Thrombosis Canada was conducted. Results The total response rate was 52.3% (45/86). All thrombosis experts recommended pharmacological thromboprophylaxis for high risk patients post major abdominal surgery. Over 68% of the thrombosis experts recommended thromboprophylaxis during hospitalization only. The majority of the participants recommended using LMWH (85.9%) over UFH (10.1%). Approximately a third of the surveyed thrombosis experts estimated the incidence of overall VTE at 7 to 10 days post-operatively in patients who do not receive thromboprophylaxis post major abdominal surgery to be between 4 and 6%. A total of 55.3% of the thrombosis experts estimated the incidence of PE to be between 0.5 and 1.0% for the same patient population. The risk of major bleeding episode was estimated to be between 0.5 and 1% in patients receiving 7 to 10 days of pharmacological thromboprophylaxis in the post-operative period by a majority of the thrombosis experts (68.4%). However, approximately 80% of thrombosis experts believed that there is still some clinical equipoise around the use of thromboprophylaxis post discharge (up to 7 to 10 days) in high risk adult patients post major abdominal surgery. Conclusions Thrombosis experts recommend LMWH prophylaxis post major abdominal surgery. There is still, however, significant clinical equipoise regarding the duration of thromboprophylaxis (hospitalization only vs. total to 7–10 days). The result of the survey might not be generalizable to non-academic centers and to other countries. Electronic supplementary material The online version of this article (doi:10.1186/s12959-016-0126-9) contains supplementary material, which is available to authorized users.
Background
Venous thromboembolism (VTE) is a condition associated with increased morbidity and mortality among hospitalized medical and post-surgical patients. The most common presentations of venous thromboembolism are deep vein thrombosis (DVT) of the lower extremity and pulmonary embolism (PE) [1]. Patients undergoing major abdominal surgery (including any abdominal surgery, laparoscopic or open, performed under general anaesthesia and lasting at least 30 min) are at risk of developing a VTE complication in the post-operative period. Their VTE risk depends on both patient-specific and procedure-specific factors [2]. Old age, previous VTE, cancer, obesity and prolonged immobilization post-surgery are examples of high-risk patient-specific factors. Examples of high-risk procedures include open abdominal and pelvic surgeries, abdominal-pelvic cancer surgery and bariatric surgery. Based on those risk factors, the estimated baseline risk of VTE after major abdominal surgery in patients with high-risk factors for VTE is approximately 6% [2].
The American College of Chest Physicians (ACCP) evidence-based consensus guidelines published in 2012 [2] recommend that patients undergoing non-orthopedic surgery at moderate or high risk for VTE (general, abdominal-pelvic or thoracic surgeries) receive routine pharmacological thromboprophylaxis (low molecular weight heparin (LMWH), unfractionated heparin (UFH) or fondaparinux). Although the efficacy and safety of pharmacological thromboprophylaxis agents have been proven, which agent to use (e.g. UFH vs. LMWH vs. fondaparinux) and at which dose (e.g. UFH 5,000 IU every 8 or 12 h) remain debatable. Furthermore, the duration of pharmacological thromboprophylaxis (i.e. in-hospital only vs. 7 to 10 days including an outpatient prescription) is unclear. We sought to establish the current clinical practice of Canadian thrombosis experts, assess for potential clinical equipoise regarding pharmacological thromboprophylaxis in this patient population and evaluate potential participation in a future randomized clinical trial.
Population
The survey targeted thrombosis expert members of Thrombosis Canada. Thrombosis Canada is an established group of expert Canadian clinicians dedicated to advancing education and research in the prevention and treatment of thrombo-vascular disease [3]. Thrombosis experts of Thrombosis Canada are defined as Canadian clinicians who have made significant contributions to the body of knowledge in vascular medicine and disseminated that knowledge through peer-reviewed journal publications and books, as well as authoring national and international clinical practice guidelines. This expert clinician group has the necessary expertise and experience to provide meaningful opinions on the planning of a potential future randomized clinical trial (RCT).
Survey Monkey [4] online software was used to create and distribute the survey. Each survey participant received an email with a hyperlink to the survey. A reminder email with a link to the survey was sent weekly for 2 weeks. Our target response rate was 35%, based on previously published response rates of physician specialists to web-based surveys [5]. The survey included a short introduction describing its goals/objectives and the reason the participant was chosen, followed by a series of categorical questions (a total of 14) with 4-5 answer options, based on a short clinical vignette (please see Additional file 1: Appendix 1 online). The first few questions concerned the participant's current clinical practice. The following questions related to two different clinical scenarios. We surveyed participants on their opinions on the efficacy and safety of pharmacological thromboprophylaxis and assessed whether equipoise still exists around its use post major abdominal surgery. Finally, we asked the participants if they would consider including their patients in an RCT, and if yes, with what intervention, dose and duration. Participation in the survey was voluntary and all data were kept anonymous and confidential. Completing the online survey was taken as implied consent. All responses were saved in the Survey Monkey online program and later exported to Microsoft Excel as pooled data for analysis. Data were analyzed 2 months after the survey was sent.
Descriptive statistics (percentages) were used to analyze and summarize the results of the survey. Simple percentage comparisons were made between relevant demographic subgroups. Analyses were conducted using the Survey Monkey online program.
The majority of the participants were hematologists (40.5%) followed by internists (29.7%). Most participants were male (67.6%) and middle-aged adults (age 46-55, 35.1%), and the majority had been in clinical practice for more than 10 years (56.8%). Most responders were from the province of Ontario (59.5%) and the majority (83.8%) of the thrombosis experts practiced in an academic center.
All the thrombosis experts recommended the use of thromboprophylaxis post major abdominal surgery. Approximately 70% (68.9%) recommended thromboprophylaxis during hospitalization only. The others recommended extending thromboprophylaxis for 7-10 days or for a total of 28 days post major abdominal surgery (26.7 and 4.4%, respectively).
The majority of the thrombosis experts (85.9%) recommended LMWH over UFH. Dalteparin 5,000 units daily or enoxaparin 40 mg daily were the most frequently recommended regimens (33.3 and 24.2%, respectively).
Approximately a third of the thrombosis experts estimated the incidence of overall VTE (symptomatic and asymptomatic) at 7 to 10 days post-operatively in patients who do not receive thromboprophylaxis post major abdominal surgery to be between 4 and 6%, whereas 55.3% estimated the incidence of PE to be between 0.5 and 1.0% in this patient population. The risk of a major bleeding episode in patients receiving 7 to 10 days of pharmacological thromboprophylaxis in the post-operative period was estimated to be between 0.5 and 1% by a majority of the participants (68.4%). Finally, a majority of thrombosis experts (57.9%) believe that the benefits of using pharmacological thromboprophylaxis for 7 to 10 days in high-risk patients outweigh the risk of bleeding in adult patients post major abdominal surgery in most cases. However, approximately 80% of thrombosis experts believe that there is still some clinical equipoise, especially around the use of thromboprophylaxis post discharge (up to 7 to 10 days), in high-risk adult patients post major abdominal surgery. It is thus not surprising that they would consider allowing their patients to participate in an RCT assessing the use of thromboprophylaxis in adult patients post major abdominal surgery comparing different durations (e.g. during hospitalization only vs. 10 days) of thromboprophylaxis (89.5%).
Discussion
This survey of Canadian thrombosis experts shows that there is agreement on the use of pharmacological thromboprophylaxis post major abdominal surgery. It also shows that the majority of the experts would use thromboprophylaxis during hospitalization only, and it confirms that there is clinical equipoise and uncertainty around the use of thromboprophylaxis post discharge (up to 7 to 10 days) in high-risk adult patients post major abdominal surgery and that a clinical trial is desirable.
A majority of the clinicians selected LMWH as their preferred pharmacological thromboprophylactic agent. This is not surprising given that LMWH has a better safety profile compared to UFH. Unfractionated heparin requires subcutaneous self-injections two or three times daily, making it less convenient, especially for extended post-discharge thromboprophylaxis. In addition, UFH is associated with a 2.6% risk of heparin-induced thrombocytopenia (HIT), a rare but potentially serious adverse reaction causing low platelets with paradoxical thrombosis and tissue necrosis [2]. LMWH is less likely to cause HIT (0.2% compared to 2.6% with UFH) [2]. Although it is also given subcutaneously, it is given less frequently, usually once daily, making it more appealing than UFH for extended post-discharge thromboprophylaxis.
It was not surprising that there is agreement regarding clinical equipoise around the use of thromboprophylaxis post discharge (up to 7 to 10 days) in high-risk adult patients post major abdominal surgery. Although most clinical trials evaluated different pharmacological thromboprophylaxis regimens for a fixed duration of 7 to 10 days, surgical techniques, post-operative management and length of stay have changed significantly over recent years, and more contemporary data are desperately needed. Furthermore, there is a lack of clinical trials directly comparing two different durations of thromboprophylaxis (in-hospital only vs. 7 to 10 days). Thus, the majority of the experts would consider participating in a clinical trial comparing two different durations of thromboprophylaxis.
It is important to acknowledge the limitations of our cross-sectional study. The survey was limited to Canadian experts, mostly from academic centers, and therefore may not reflect the opinion of other international experts or of clinicians and surgeons in community hospitals. Similarly, the survey was not validated and tested in other populations. It would have been ideal to also capture the opinions of general surgeons. We piloted the survey in a subgroup of members of the Canadian Association of General Surgeons [7]. However, it was felt that the questions on VTE and major bleeding complication rates were beyond the scope of their practice and would be better addressed by specialists in Thrombosis Medicine. Therefore, we surveyed experts in the field to provide the most significant and applicable opinion on the topic. Similarly, the membership of Thrombosis Canada is relatively small, and this might have resulted in potential selection bias. Nonetheless, they remained the most important clinical experts to survey. In addition, although the overall response rate can still be considered low, it exceeded our targeted response rate.
Conclusion
There is agreement among thrombosis experts on using LMWH for thromboprophylaxis post major abdominal surgery. There is still equipoise around the use of pharmacological thromboprophylaxis for 7-10 days post-operatively, including post-discharge prescription. There seems to be an underestimation of major bleeding events post major surgery in patients receiving pharmacological thromboprophylaxis. There is a need for a RCT comparing the use of pharmacological thromboprophylaxis in hospital only with a duration of 7-10 days (including post-discharge prescription) post major abdominal surgery.
Intuitionistic fuzzy sets and their use in image classification
In this paper, the problem of classification of images is discussed. Our specific problem is that we need to classify tire images into selected classes which are characterized by some patterns. The theory of intuitionistic fuzzy sets is used for the classification of the images. In the first step, we show how this type of image can be represented as a vector. Then the membership and non-membership values of each coordinate are calculated, and finally the value of the similarity measure between the patterns and a specific image is computed. Classification is performed with respect to the value of the similarity measure.
Introduction
Intuitionistic fuzzy sets (shortly IFSs) were introduced by Krassimir Atanassov in 1983 [1]. Since then, many new properties and applications of this mathematical structure have been developed. In this paper, we use IFSs for the classification of images of tires. The motivation for this application is an ongoing cooperation between the Department of Computer Science of Matej Bel University and the local criminal police department, where the recognition and classification of tire prints is one of the basic problems being addressed. In the paper we show one of the first steps of this process, namely, obtaining the samples which could be used as elements of a database for comparison with tire prints found at a crime scene. The pictures of tires are obtained from the internet and then automatically processed by an application developed by us.
The paper is structured as follows: In Section 2 we give a brief introduction to the theory of intuitionistic fuzzy sets and define the properties used in this paper. In Section 3 we discuss how the data are prepared for classification. In Section 4 the obtained results are summarized, and finally in Section 5 the conclusions and some ideas for future work are presented.
2 Intuitionistic fuzzy sets

Definition 1. Let $X$ be a universe. An intuitionistic fuzzy set is a set $A = \{\langle x, \mu_A(x), \nu_A(x)\rangle : x \in X\}$, where $\mu_A : X \to [0,1]$ and $\nu_A : X \to [0,1]$ are functions such that $0 \le \mu_A(x) + \nu_A(x) \le 1$ for every $x \in X$. Function $\mu_A$ is called the membership function and function $\nu_A$ is called the non-membership function.
By $\mathcal{F}$ we will denote the family of all IFSs. There is one more function defined on $\mathcal{F}$, the function $\pi_A$, defined as $\pi_A(x) = 1 - \mu_A(x) - \nu_A(x)$. This function is called the hesitation margin. Let us have two intuitionistic fuzzy sets $A = (\mu_A, \nu_A)$, $B = (\mu_B, \nu_B)$. In this paper we consider the discrete universe $X = \{x_1, x_2, \ldots, x_h\}$, and for classification we use the cosine-based similarity measure
$$S_C(A,B) = \frac{1}{h} \sum_{i=1}^{h} \frac{\mu_A(x_i)\,\mu_B(x_i) + \nu_A(x_i)\,\nu_B(x_i)}{\sqrt{\mu_A(x_i)^2 + \nu_A(x_i)^2}\,\sqrt{\mu_B(x_i)^2 + \nu_B(x_i)^2}}.$$
This IFS similarity measure was designed by Ye [6].
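For a computational view, the following Python sketch evaluates this measure on two IFSs given as membership and non-membership vectors. It is a minimal illustration assuming the cosine formula reconstructed above; the small epsilon guarding against zero denominators is our own addition.

```python
import numpy as np

def cosine_ifs_similarity(mu_a, nu_a, mu_b, nu_b):
    """Ye's cosine-based similarity S_C between two IFSs on a discrete
    universe: the mean, over all elements, of the cosine of the angle
    between the (mu, nu) pairs of A and B."""
    mu_a, nu_a = np.asarray(mu_a, float), np.asarray(nu_a, float)
    mu_b, nu_b = np.asarray(mu_b, float), np.asarray(nu_b, float)
    num = mu_a * mu_b + nu_a * nu_b
    den = np.hypot(mu_a, nu_a) * np.hypot(mu_b, nu_b)
    den = np.maximum(den, 1e-12)  # guard against mu = nu = 0 coordinates
    return float(np.mean(num / den))

# Example: two similar IFSs yield a value close to 1.
print(cosine_ifs_similarity([0.8, 0.3], [0.1, 0.6], [0.7, 0.4], [0.2, 0.5]))
```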
As mentioned in the Introduction, we use images downloaded from the internet. The basic idea is to download each image (together with its model identifier) from the website of a given tire brand. This process is described in the paper [4]. We thus obtain a large set of different images; Figure 1 shows the most frequently occurring types. We focus on those images where the tire prints are best visible, and therefore the most interesting for us are those with whole wheels (Figures 1a, 1e) and those with a tire print (Figure 1g). The classification of the set of images into seven classes is presented in this paper. The next step is to find the ellipses which form the boundaries of the tire print (see [3,5]), take a rectangle with a sample of the print and, after some modification, add it to our database (see Figure 2). This database is created for comparing known tire prints with prints found at a crime scene.
Preprocessing of the images
From each image we want to extract as much information as possible; therefore, we do not resize the images to a common size. In the preprocessing, each image is binarized and divided into 4 × 4 = 16 parts, where the values x, y represent the original size of the image and $B_{k,l}$ represents the number of white pixels in the part with indices k and l. Each entry of the matrix M is the proportion of white pixels in the corresponding part. After this calculation we simply reshape the matrix M into the vector V.
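A sketch of this step in Python follows. The 4 × 4 grid matches the 16-coordinate vectors used below, while the binarization threshold and the use of white-pixel proportions are our illustrative assumptions.

```python
import numpy as np
from PIL import Image

def image_to_vector(path, grid=4, threshold=128):
    """Represent a tire image as a grid*grid-dimensional vector:
    binarize the grayscale image, split it into grid x grid parts,
    and record the proportion of white pixels in each part."""
    img = np.asarray(Image.open(path).convert("L"))  # grayscale pixels
    binary = img >= threshold                        # True = white pixel
    y, x = binary.shape                              # original image size
    m = np.zeros((grid, grid))
    for k in range(grid):
        for l in range(grid):
            part = binary[k * y // grid:(k + 1) * y // grid,
                          l * x // grid:(l + 1) * x // grid]
            m[k, l] = part.mean()                    # white-pixel proportion
    return m.ravel()                                 # the 16-coordinate vector V
```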
Preprocessing of the data
In this part we describe how the degree of membership and the degree of non-membership are assigned to each image vector coordinate. First, we work with a set of templates. As templates we take 3 images of each group of tires displayed in Figure 1, so we have 21 templates represented as vectors. For the preprocessing of the data we use the approach described in the paper [2]. Let image $i$ ($i = 1, 2, \ldots, 21$) be represented by the 16-coordinate vector $V_i = (V_{i,1}, \ldots, V_{i,16})$. We start with the normalization of each coordinate using the formula $z_{i,j} = (V_{i,j} - \bar{X}_j)/s_j$, where $j = 1, 2, \ldots, 16$, $\bar{X}_j$ is the mean and $s_j$ the standard deviation calculated from the $j$-th coordinate of all images in the template database. The membership degree of each template coordinate is then calculated by a weighted sigmoid function with a weight value $r_j$ computed from the template data, and the non-membership degree of each template coordinate is calculated analogously, following [2]. Since we divide the images into seven classes, we define the seven patterns $P_m$ ($m = 1, 2, \ldots, 7$) by the values $\bar{\mu}_{m,j}$ and $\bar{\nu}_{m,j}$, which represent the arithmetic means of the membership and non-membership values of those templates which belong to the given class.
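The sketch below illustrates this mapping. The exact weighted sigmoid and the weight formula come from [2] and are not reproduced here, so a plain logistic function with a mirrored, scaled complement for non-membership is used as a stand-in; the 0.9 scaling, which leaves a small hesitation margin, is our assumption.

```python
import numpy as np

def ifs_degrees(V, means, stds, r):
    """Turn a 16-coordinate image vector into membership (mu) and
    non-membership (nu) degrees per coordinate. The logistic form and
    the 0.9 scaling are illustrative stand-ins for the formulas of [2]."""
    z = (V - means) / stds             # z-score normalization per coordinate
    mu = 1.0 / (1.0 + np.exp(-r * z))  # sigmoid membership, weighted by r
    nu = 0.9 * (1.0 - mu)              # keeps mu + nu <= 1, so a hesitation
    return mu, nu                      # margin pi = 0.1 * (1 - mu) remains
```

The class patterns P_m are then obtained as the coordinate-wise means of these (mu, nu) values over the three templates of each class.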
Classification of the images
Now we are ready to classify any image. We use the following algorithm:
1. Take any image.
2. Use the preprocessing and characterize the image by a 16-coordinate vector (see Section 3.1).
3. For each coordinate, calculate its membership and non-membership values by the same formulas as were used for the templates (see Section 3.2).
4. Classify the image into the suitable class by using the similarity measure $S_C$.
To classify the image into the suitable class, we calculate the value of the similarity measure $S_C$ between each pattern $P_m$ ($m = 1, 2, \ldots, 7$) and the given image. The image is classified into the class for which the value of the similarity measure is the highest.
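As a sketch, and reusing cosine_ifs_similarity from above, the decision rule and the best-versus-second-best margin discussed in the next section can be written as:

```python
def classify(mu_img, nu_img, patterns):
    """patterns: list of seven (mu, nu) vector pairs, one per class.
    Returns the 1-based class label with the highest S_C value and the
    margin to the second-best class, used below as a confidence hint."""
    scores = [cosine_ifs_similarity(mu_img, nu_img, mu_p, nu_p)
              for mu_p, nu_p in patterns]
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return order[0] + 1, scores[order[0]] - scores[order[1]]
```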
Experimental results and discussion
For this experiment we took 326 images downloaded from the web under the names of different tire brands. We developed a software program that preprocesses the images using the methods mentioned above. As a result, the program creates seven folders and moves the images into the folders according to the described classification process. One template image was also placed into each new folder; this helps us to quickly identify incorrectly classified images. We noticed that some of the incorrectly classified images have quite similar best and second-best similarity values. On the other hand, the difference between the best and the second-best similarity value is larger if the image is classified into the right class. Therefore, we decided to also create a text file recording the difference between the best and the second-best similarity value. The result of the classification is displayed in Table 1. From these data we can calculate that 90.8% of the images were classified correctly. If we look at the incorrectly classified images, we can find some common properties. The difference between the best and second-best similarity value for correctly classified images is almost always greater than 0.01 (86.2%), whereas for incorrectly classified images this difference was below 0.01 in 36.7% of cases. If a correctly classified image has a difference between the best and second-best similarity value of less than 0.01, then it is usually an image which a human also finds hard to classify exactly into one class. For example, the image in Figure 4a was classified into Class 1, but the image in Figure 4b was classified into Class 4. The next example is the image in Figure 4c, which was classified into Class 7.
Another problem arises if there is too much captured light in the image. For example, the image in Figure 4d was classified into Class 1, but it actually belongs to Class 5.
The worst results were obtained for the classification of text (Class 6). Images with much captured light were again assigned to this class, for example the image in Figure 4e.
Conclusions
In this paper, we use intuitionistic fuzzy sets for the classification of images of tires. We describe how the image of a car tire can be represented by a vector. Then for each vector we determine the values of the membership and non-membership functions. We use the cosine similarity measure to calculate the similarity of the image with the predetermined patterns. We classify the set of images and discuss the problems of incorrect classification. The main advantage of this approach is that it has a high percentage of success and can be used in automated processing of images obtained from the web. In the future, we would like to try other similarity measures, for example, those which use not only the values of the membership and non-membership functions but also take into account the value of the hesitation margin. We will try to verify whether it is possible to improve our results by using this function.
The L3 Flexion Angle Predicts Failure of Non-Operative Management in Patients with Tandem Spondylolisthesis
Study Design Retrospective cohort study. Objective Determine the impact of standard and novel spinopelvic parameters on global sagittal imbalance, health-related quality of life (HRQoL) scores, and clinical outcomes in patients with multi-level, tandem degenerative spondylolisthesis (TDS). Methods Single institution analysis; 49 patients with TDS. Demographics, PROMIS and ODI scores collected. Radiographic measurements: sagittal vertical axis (SVA), pelvic incidence (PI), lumbar lordosis (LL), PI-LL mismatch, sagittal L3 flexion angle (L3FA) and L3 sagittal distance (L3SD). Stepwise linear multivariate regression was performed using full-length cassettes to identify demographic and radiographic factors predictive of aberrant SVA (≥5 cm). Receiver operating characteristic (ROC) analysis was used to identify cutoffs for lumbar radiographic values independently predictive of SVA ≥5 cm. Univariate comparisons of patient demographics, HRQoL scores and surgical indication were performed around this cutoff using two-tailed Student's t-tests and Fisher's exact test for continuous and categorical variables, respectively. Results Patients with increased L3FA had worse ODI (P = .006) and an increased rate of failing non-operative management (P = .02). L3FA (OR 1.4, 95% CI) independently predicted SVA ≥5 cm (sensitivity and specificity of 93% and 92%). Patients with SVA ≥5 cm had lower LL (48.7 ± 19.5 vs 63.3 ± 6.9 mm, P < .021), higher L3SD (49.3 ± 12.9 vs 28.8 ± 9.2, P < .001) and L3FA (11.6 ± 7.9 vs −3.2 ± 6.1, P < .001) compared to patients with SVA ≤5 cm. Conclusions Increased flexion of L3, which is easily measured by the novel lumbar parameter L3FA, predicts global sagittal imbalance in TDS patients. Increased L3FA is associated with worse performance on ODI and failure of non-operative management in patients with TDS.
Introduction
Degenerative spondylolisthesis (DS) occurs in 19.1 to 43.1% of elderly patients and is a common cause of spinal stenosis. 1,2 DS of the lumbar spine most commonly involves a single level; 3 however, multi-level degenerative spondylolisthesis, or tandem spondylolisthesis (TDS), is relatively uncommon, representing only 5-12% of all degenerative spondylolistheses. [4][5][6] Most commonly DS occurs at the L4-L5 level, and the literature focuses predominantly on single-level disease. 5 In addition to classically cited risk factors such as elevated BMI, advanced age and female gender, there is a growing body of literature implicating aberrant sagittal radiographic parameters in the pathogenesis of DS. 5 In particular, a high pelvic incidence (PI) has been associated with an increased risk of DS. [7][8][9] Additionally, whether or not a patient with DS presents with global sagittal anterior malalignment has been speculated to be related to various compensatory mechanisms including pelvic retroversion, thoracic flattening, and lower limb responses. 5,9 Despite this growing body of work, the relationship between focal degenerative changes associated with DS and more global sagittal deformity has not been well defined. A prior work that compared TDS to single-level degenerative spondylolisthesis found that patients with TDS had a significantly greater pelvic incidence, C7 tilt, pelvic tilt (PT), and PI-LL mismatch than those with single-level DS. 10 These findings suggest that TDS may be a distinct clinical entity from single-level DS and may represent a significant, and possibly underappreciated, source of severe global sagittal imbalance. However, there is a paucity of data that evaluates which radiographic parameters may impact the clinical outcomes of patients with TDS.
The purpose of this study was to correlate the novel lumbar radiographic parameters L3 lumbar flexion angle (L3FA) and L3 sagittal distance (L3SD) with global sagittal alignment parameters, patient-reported outcomes, and ultimately failure of non-operative treatment in patients with TDS. The hypothesis of this study was that L3FA and L3SD would correlate with SVA and that elevated L3FA and L3SD would correlate with poorer patient-reported outcome scores and consequently an increased likelihood of patients with TDS failing non-operative management and requiring surgical intervention.
Methods
This study was institutional review board (IRB) approved with IRB number STUDY20040115 and was exempt from obtaining informed consent. This study was a retrospective analysis of a prospectively collected database of patients with low back pain or extremity symptoms in the setting of TDS at a single institution from 2016 to 2020. Inclusion criteria were patients with TDS and adequate standing, anterior-posterior (AP) and lateral radiographs of the lumbar spine. Adequate standing lumbar spine radiographs for analysis of spinopelvic parameters have previously been defined as radiographs that include the upper end plate of the L1 vertebra, the sacral dome, and both femoral heads. 11 Exclusion criteria were patients with high-grade DS, a history of lumbar spine trauma, lumbar spine tumors, any symptoms concerning for cauda equina, conus medullaris, or other reasons to proceed with urgent surgery after the initial clinic visit, prior lumbar spine surgery (all patients with iatrogenic spondylolisthesis were excluded) or abdominal surgery, low-quality radiographic data, congenital malformations of the lumbar spine, or a history of spine infections. High-grade DS was defined according to the Meyerding classification as a ratio of overhang from the superior vertebral body to the anteroposterior length of the adjacent inferior vertebral body of greater than 50% (above Meyerding Grade 2). 12 For the purposes of this study, TDS was defined as anterolisthesis of at least 3 mm at two levels of the lumbar spine (L1-S1), which is a definition that has been used in prior work related to TDS. 1 Two non-contiguous, anterior spondylolistheses were still considered TDS (this pattern was only encountered in one patient in this study). 1 This study focused on TDS resulting in anterolisthesis, posterior TDS being exceptionally rare (no patients with posterior TDS were identified in this study). 13 SVA was measured on 36" standing full-length spine plain radiographs. LL, PI, L3SD and L3FA were measured on lumbar spine plain radiographs. L3 was chosen as the center of measurements because L3-5 is the most common presentation of TDS and because the apex of physiologic lumbar lordosis is typically near the inferior aspect of L3 (Figure 1). 14 All radiographic measurements were performed manually by two senior orthopaedic surgery residents and subsequently averaged together to obtain the final value. An intra-class correlation coefficient (ICC) was calculated using R statistics software in order to assess intra-rater reliability for all lumbar spinopelvic parameters, and for the novel parameters L3SD and L3FA. All ICC calculations were noted to be excellent (>.9) between the two observers. 15 Philips DICOM Viewer software (Koninklijke Philips N.V.) was used to view radiographs and perform measurements. An inter-rater correlation coefficient was then calculated to determine reliability.
The electronic medical record of included patients was retrospectively reviewed to determine if the patient had failed non-operative treatment and ultimately required surgery at any point during clinical follow-up. All patients initially presented to the same clinical practice for low back and/or extremity pain. All patients were treated with the same treatment algorithm, which includes standing lumbar radiographs at the first visit and a multi-tiered trial of non-operative treatment. First-line non-operative treatment included physical therapy, non-steroidal anti-inflammatory drugs, and the addition of a Medrol dosepak if the onset of pain was relatively more recent. Second-line treatment was initiated if patients re-presented with continued pain complaints and included magnetic resonance imaging to identify areas of stenosis for consideration of epidural steroid injection. If patients re-presented with failure of both first-line and second-line treatments, they were offered repeat steroid injections and discussed surgery with the attending surgeon. Failure of non-operative treatment was defined as patients continuing to have pain and/or neurological complaints after first-line and second-line treatments and deciding to proceed with surgery, rather than re-attempt a second-line treatment. Additional collected demographic and clinical data included age, sex, BMI, and co-morbidities as measured with the Age-Adjusted Charlson Comorbidity Index (ACCI), which has been used previously in the orthopaedic spine literature. 16 Health-related quality of life (HRQoL) scores collected at the initial clinical visit included the Oswestry Disability Index (ODI) and the PROMIS Global-10 physical function and mental health sub-scores.
Statistical Testing
Statistical analysis was performed using SPSS 28 (IBM, Armonk, NY, USA). Missing values were imputed 5 times to permit adequate pooled analysis. 17 A stepwise linear multivariate regression was performed using those patients with full-length cassettes to identify demographic and radiographic factors independently predictive of SVA ≥5 cm. 18 A receiver operating characteristic (ROC) analysis was used to identify an ideal cutoff for lumbar radiographic values independently predictive of SVA ≥5 cm. Univariate comparisons of patient demographics, HRQoL scores and surgical indication were then performed around this cutoff using two-tailed Student's t-tests for continuous variables and Fisher's exact test for categorical variables.
Significance was defined as P < .05 in all cases. A post-hoc power analysis was performed for both the global sagittal malalignment and clinical outcome cohorts. Power was found to be >95% and >80% for detecting statistically significant differences in the global sagittal malalignment cohort's average SVA and the clinical outcome cohort's average L3FA, respectively.
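To make the cutoff derivation concrete, the following Python sketch finds a threshold on synthetic L3FA values drawn from the group means and standard deviations reported below; the use of Youden's J as the optimality criterion is our assumption, since the paper does not state which rule was applied.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic, illustrative data only: L3FA values sampled around the
# reported group statistics (-3.2 +/- 6.1 for SVA < 5 cm, 11.6 +/- 7.9
# for SVA >= 5 cm) in a cohort of 26 full-length cassettes.
rng = np.random.default_rng(0)
l3fa = np.concatenate([rng.normal(-3.2, 6.1, 12),
                       rng.normal(11.6, 7.9, 14)])
high_sva = np.concatenate([np.zeros(12), np.ones(14)])

fpr, tpr, thresholds = roc_curve(high_sva, l3fa)
cutoff = thresholds[np.argmax(tpr - fpr)]  # maximize Youden's J = tpr - fpr
print(f"AUC = {roc_auc_score(high_sva, l3fa):.2f}, cutoff = {cutoff:.1f} deg")
```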
Results

In a stepwise multivariate logistic regression of those patients with full-length cassettes (n = 26), only L3FA (OR 1.4, 95% CI) was independently predictive of SVA ≥5 cm (area under the curve = .96). ROC analysis indicated that an L3FA cutoff of ≥2.5 was optimally predictive of SVA ≥5 cm (Figure 2). When patients with a pre-operative SVA above and below 5 cm were compared, standing PI, SS, and PT were equivalent between groups. The LL of the elevated SVA group was significantly lower than in the normal SVA group (48.7 ± 19.5 vs 63.3 ± 6.9 mm, P < .021). L3SD was significantly higher in the elevated SVA group than in the normal SVA group (49.3 ± 12.9 vs 28.8 ± 9.2, P < .001), as was L3FA (11.6 ± 7.9 vs −3.2 ± 6.1, P < .001). Sensitivity and specificity analyses demonstrated that an L3FA threshold greater than 2 degrees yielded a sensitivity and specificity for predicting an SVA >5 cm of 93% and 92%, respectively. When comparing the subgroup of patients with full-length cassettes to the entire clinical cohort by demographics, radiographic parameters, and level of spondylolisthesis, there were no significant differences between the two groups (Table 1). In a univariate analysis of the entire cohort (n = 49), among patient factors only younger age was associated with an increased L3FA (68.2 ± 7.3 years vs 72.7 ± 6.9 years in the low L3FA group, P = .03). Increased L3SD (47.4 ± 14.7 mm vs 21.7 ± 11.6 mm, P < .001), decreased LL (50.1 ± 18.2° vs 60.1 ± 9.6°, P = .03) and increased PI-LL mismatch (20.1 ± 15.4 vs 5.0 ± 12., P < .001) were also associated with increased L3FA. While PROMIS scores were equivalent between the high and low L3FA groups, an increased L3FA was associated with an elevated ODI (44.1 ± 13.0 vs 33.2 ± 13.1, P = .006, Table 1).
The influence of L3FA on failure of non-operative treatment was evaluated. A significantly larger number of patients with an elevated L3FA failed non-operative management (P = .02).
Discussion
TDS is an uncommon multi-level spondylolisthesis with unclear sagittal alignment and clinical severity implications.
In a retrospective analysis of 49 patients with TDS managed over a 4-year period, we found that increased L3FA, or downward flexion of the L3 vertebral body, was independently associated with elevated SVA. An L3FA cutoff of ≥2.5 was predictive of an SVA ≥5 cm. Patients in the present study with TDS had a mean SVA of 51.3 ± 38.8 mm (range: −12.8 to 135.8 mm), which is markedly greater than that of 22.0 ± 8.0 mm previously measured in patients with single-level DS. 19 Patients above the L3FA cutoff had an increased PI-LL mismatch, elevated ODI scores and were more likely to fail non-operative treatment. These findings suggest that the relative flexion of the L3 vertebra in the setting of TDS may be associated with global spinal balance and patient-reported outcomes.
Native spinopelvic morphology is thought to play a significant role in dictating mechanical stresses at the lumbosacral junction, thereby predisposing certain individuals to the development of DS. 20 It has been previously hypothesized that a higher PI requires increased LL to maintain a neutral sagittal alignment, thereby placing higher forces on the posterior articular joints and excess mechanical stresses on the posterior facets. 8,21,22 The resulting accelerated posterior arthritis, in conjunction with increased baseline inclination of the vertebral endplate of L5 due to increased PI, has been postulated to be a significant predisposing factor to vertebral slippage. 23,24 All of these parameters have been noted in prior research to be more severely aberrant in patients with TDS compared with single-level DS. 10 Thus, it is reasonable to speculate that TDS may occur due to more severely abnormal native spinopelvic morphology, more severe degenerative facet changes and vertebral slippage and, ultimately, compensatory flattening of LL and elevated SVA. Roussouly et al. 21 classified the normal spine into four morphotypes based on increasing PI and sacral slope (SS). It has been previously speculated that Roussouly morphotype 4, which includes a SS of >45 degrees with hyperlordosis and a high PI, may predispose to posterior arthritis and degenerative spondylolisthesis. 25 Interestingly, the apex of lumbar curvature of Roussouly Type 4 spines has been reported to be most typically centered at L3, which is more proximal than the average apex of lumbar lordosis in the general population. 21 Another recent work concurred with Roussouly regarding the proximal migration of the apex towards L3 in high-PI individuals. 26 However, the authors posited that this finding was due to a higher PI requiring the recruitment of more proximal lumbar segments to contribute a large proportion of the lumbar spine's total lordosis, which ultimately drove the apex proximally. 26 This finding is somewhat in contrast to Roussouly, because it emphasizes the importance of the proximal lordotic segments, rather than the lower lumbar arc, in determining the shape of the global lumbar lordosis. 21,26 These works indicate a connection between a high PI and a more proximal lumbar apex, namely at L3. As previously mentioned, a high PI has also been associated in prior work with TDS. It is difficult to draw mechanistic conclusions between increased L3FA and the worse clinical outcomes amongst TDS patients noted in the present work. However, it is possible that the loss of the natural L3 apex via increased L3FA (downward flexion) reflects deterioration of a crucial structural element of the lumbar spine, which both drives the apex further proximally and places further lordotic demand on the proximal lumbar spine until it is unable to compensate further. This specific pathologic cascade, detected via increased L3FA, may be linked to deterioration of both the harmonious balance of the lumbar spine and global sagittal balance in patients predisposed to TDS.
The tendency toward poor global spinal balance has been more commonly noted in patients with high-grade DS vs low-grade DS. 27,28 Mechanistic factors that have been proposed for the association between higher-grade DS and poor global spinal balance include a pathologic cascade that involves more severe anterior vertebral slippage, which leads to flattening of the lumbar spine via decreased LL. 8 Given that PI is an anatomic feature and thus fixed after birth, decreased LL leads to increased PI-LL mismatch and ultimately results in a flexion moment of the lumbar spine and the anterior displacement of SVA. 8,29 This emphasis on the important relationship between PI-LL and elevated SVA is consistent with the present work, which found an SVA >5 cm in 54% of TDS patients and increased PI-LL mismatch in patients above the L3FA cutoff. However, it should be noted that an increased SVA may also be representative of the severe degree of stenosis in patients with TDS. Shin et al. reported that patients with increased SVA and PI-LL mismatch in the setting of spinal stenosis often have improved alignment following decompressive surgery because they no longer need to lean forward to unbuckle their ligamentum flavum and decrease their stenosis symptoms. 30 This concept of spinal alignment improving by virtue of decompression alone is well established in the adult spinal deformity literature, in which patients with sagittal malalignment and a flexible spine may achieve improved alignment with a decompression alone rather than a multilevel fusion. 31,32 It is therefore possible that the SVA in TDS patients with an elevated L3FA is inflated by the patient's response to stenosis rather than a purely mechanical problem. This can be better understood by comparing pre- and post-operative imaging, which was not analyzed by the present work.
The preliminary clinical findings of this work suggest that a relatively flexed L3 in patients with TDS is correlated with both elevated SVA and worse ODI scores. This association is not surprising given that prior work has commented on the association between an SVA above 4.7 cm and the presence of severe disability as measured by an ODI above 40 in the setting of adult spinal deformity. 33 It may be speculated that ODI is a sensitive tool for assessing relative disability within the TDS population and that L3FA may be a primary driver behind poor ODI scores.
Associating TDS with more severe SVA elevation and predicting poor global sagittal balance via L3FA in the setting of TDS is important for a number of reasons. In comparison with single-level DS, TDS may be best seen as existing more commonly in the category of true adult spinal deformity, rather than as a focal, lumbar degenerative spinal pathology. Surgical treatment of TDS is often targeted at restoring LL and sagittal balance, and frequently requires much more extensive instrumentation and more frequent use of osteotomies compared with single-level DS. Surgical intervention for TDS thus incurs risks specific to adult spinal deformity. Elevated SVA is a significant risk factor for proximal junctional kyphosis and proximal junctional failure in adult spinal deformity patients after fusion. 34,35 The retrospective nature of this work creates a limitation in terms of assessing how L3FA and L3SD may change or improve after successful non-operative treatment, because routine follow-up imaging is not routinely obtained in these patients. Future work may prospectively assess how L3FA and L3SD change or improve over time in patients who are successfully treated non-operatively. This work has several limitations beyond those intrinsic to retrospective studies. A critical limitation of this work is its small sample size. Given the relative rarity of TDS in the general population, the number of patients (N = 49) available from a single institution was proportionately similar to that of a prior multicenter (13 institutions) study of TDS patients (n = 78). 10 Additionally, a post-hoc power analysis demonstrated that this study was adequately powered. Another limitation is the lack of full-length 36" cassettes for all patients. This is the result of the recent increased utilization of full-length imaging as routine standard of practice at our institution due to the availability of full-length imaging (EOS). More consistent imaging availability would be preferred in future work. Additional future work may include in vivo biomechanical studies to establish a causal link between L3 deformity and global sagittal malalignment. We also seek to understand the utility of L3FA in patients with single-level spondylolisthesis, as the present work sought to initially evaluate this metric in patients with TDS alone due to the relative severity of this group's pathology. Finally, it is important to note that we are only discussing patients with TDS who have symptomatic spinal stenosis. These were only TDS patients whose pathology was severe enough to warrant surgery. This highlights that we are not describing TDS as a singular pathology, but only TDS within the surgical stenosis population. Describing TDS more fully would likely require a large multi-institutional prospective study.
In conclusion, an L3FA ≥2.5 in patients with TDS can serve as a surrogate for SVA ≥5.0 cm and is predictive of poor patient-reported outcome scores and the failure of non-operative management. L3FA may be a rapid way to evaluate the clinical impact of TDS on these potentially vulnerable patients as well as a target for surgical correction in the future.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Figure 1.
Figure 1. A. SVA was measured as the horizontal distance between the posterior-superior corner of S1 and a vertical plumb line drawn from the center of the C7 vertebral body. B. L3FA was measured as the angle between the superior endplate of L3 and a horizontal reference line. C. L3SD was measured as the horizontal distance from a vertical reference line at the posterior-superior corner of the L3 vertebral body to the posterior-superior corner of S1.
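As an illustration of how such a measurement could be reproduced from digitized landmarks, the sketch below computes L3FA from the two endplate points. The landmark names, image-coordinate convention (y increasing downward) and sign convention (positive for downward flexion) are our assumptions, not the authors' measurement software.

```python
import math

def l3_flexion_angle(anterior, posterior):
    """L3FA sketch: signed angle in degrees between the superior
    endplate of L3, given by its anterior and posterior landmark
    points (x, y) in image coordinates (y grows downward), and the
    horizontal. Positive values indicate downward (forward) flexion."""
    dx = anterior[0] - posterior[0]
    dy = anterior[1] - posterior[1]
    return math.degrees(math.atan2(dy, dx))

# Example: anterior corner 3 px below the posterior corner, 40 px apart.
print(l3_flexion_angle((140.0, 203.0), (100.0, 200.0)))  # ~4.3 degrees
```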
Figure 2.
Figure 2. Receiver operating characteristic curve analysis demonstrating the ideal cutoff value of 2.5 degrees for lumbar radiographic values independently predictive of SVA ≥ 5 cm. Legend: SVA = sagittal vertical axis; L3FA = flexion angle of the L3 vertebral body.
Table 1.
Comparison of demographic, radiographic and HRQoL parameters of patients with lower vs elevated L3FA.
Intraparenchymal Mucosa-Associated Lymphoid Tissue Lymphoma: A Case Report
Marginal zone B-cell lymphoma (MZBCL) of mucosa-associated lymphoid tissue (MALT) type, which is primary to the central nervous system (CNS), is a rare lesion, with those originating within the parenchyma even more so. We present the case of a 64-year-old male with weakness in the left hand and focal motor seizures of his arm, who was found to have a right frontal intraparenchymal lesion. Following resection, histopathological and immunohistochemical evaluations were completed, leading to a diagnosis of a primary CNS MZBCL of MALT type in the context of a negative workup of systemic disease. Neuroimaging, histopathological, and immunohistochemical findings, as well as a comprehensive literature review of similar cases, are discussed.
Introduction
Primary central nervous system lymphoma (PCNSL) is an aggressive yet rare variant of extranodal non-Hodgkin lymphomas, accounting for roughly 4% of primary and malignant tumors of the CNS [1]. They mainly arise from the brain, leptomeninges, spinal cord, and vitreoretinal compartment of the eye [2]. Approximately 90% of cases reported as PCNSL are diffuse large B-cell lymphomas, leaving small B-cell lymphomas in the minority [3].
Under the category of small B-cell lymphomas, marginal zone lymphomas (MZL) represent the majority of neoplasms that are primary to the CNS [4]. Mucosa-associated lymphoid tissue (MALT) lymphoma as a subtype of MZL, originally described as low-grade lymphomas within the gastrointestinal tract, is primarily found within the stomach but is also commonly found within salivary glands, the thyroid, ocular adnexa, lungs, and breasts [5,6]. However, primary CNS MALT lymphomas are rare.
Tu et al. reported 15 primary CNS MZL cases, of which 93% were dural-based lesions mimicking meningiomas, arising from sites including the convexity of the brain, falx, tentorium, middle skull base, ventricles, and spinal dura mater [7]. Primary CNS MZL lesions that arise from the parenchyma, however, are exceptionally rare and can be misdiagnosed as gliomas in certain patients [8].
We present a case of a patient with primary CNS intraparenchymal MALT lymphoma with an immunohistochemical profile. Furthermore, a review of the literature on similar cases, including treatment options and outcomes, is discussed. To our knowledge, the present case represents the first reported instance of a patient with such a lesion to be managed solely with surgical resection.
Case Presentation
A 64-year-old male with a history of rheumatoid arthritis and anti-phospholipid syndrome presented with mild left-hand weakness and a focal motor seizure involving his arm. The patient was receiving hydroxychloroquine for his rheumatoid arthritis, and no biologic agents were used to our knowledge. He was found to have an extra-axial mass overlying the posterior right frontal lobe at the convexity, measuring approximately 2.3 x 2.7 x 2.3 cm (anteroposterior (AP) x transverse (TR) x craniocaudal (CC)), and an intraparenchymal lesion within the subcortical white matter of the right frontal lobe spanning approximately 0.8 x 1.1 cm with radiologic features consistent with a low-grade glial tumor (Figure 1). He was assessed by the hematology team and was cleared for surgery to resect these lesions. Neuropathologic assessment following surgical resection confirmed the diagnosis of meningioma for the convexity tumor. Histopathologic evaluation of the intraparenchymal lesion demonstrated a diffuse lymphoplasmacytic infiltrate composed of small mature lymphocytes, plasmacytoid lymphocytes, and mature plasma cells. Immunohistochemistry showed that most cells were CD20-positive (B cells) with kappa light chain restriction. Ki-67 demonstrated a very modest proliferative index. The tumor was uniformly positive for BCL2 and negative for CD5, CD21, CD10, and cyclin-D1 (Figure 2). Finally, immunohistochemistry for synaptophysin and GFAP showed that the cellular infiltrate was non-reactive.
This was diagnosed as an MZBCL of MALT type.
With the diagnosis of low-grade B-cell lymphoma, the patient subsequently had a bone marrow biopsy, CSF analysis, and lymph node biopsy, which did not show systemic disease. He was also found to be HIV and H. pylori-negative. To date, the patient has elected not to proceed with adjuvant therapies, and the nine-month follow-up MR and positron emission tomography imaging has confirmed no recurrent mass or adverse interval change within the parenchyma.
Discussion
Only a few cases of intracranial primary low-grade lymphomas of the MALT subtype have been reported in the literature, with the vast majority located in the dura mater. As the CNS does not contain any mucosal or MALT tissue, it has been hypothesized that the meningothelial cells in the brain are analogous to epithelial cells at other sites where MALT lymphoma typically arises [8]. However, it remains unclear how primary MZBCL can manifest in intraparenchymal tissue. Recently, an association has been found between autoimmune disease and MZBCL [9]. Our patient had a history of both rheumatoid arthritis and antiphospholipid syndrome, giving rise to the possibility of a causative antigen-stimulus process.
A review of the literature using the EMBASE and MEDLINE databases with the keywords "MALT lymphoma" or "mucosa-associated lymphoid tissue lymphoma" and "brain tumors" primarily yielded previous cases of dural MALT lymphomas. In total, 13 cases were found in the literature of patients diagnosed with primary MZBCL of MALT type involving the brain parenchyma (Table 1) [7,8,[10][11][12][13][14][15][16]. Five cases involved the frontal cortex, four the parietal cortex, two the basal ganglia, and one the midbrain. The remaining study reported a patient with multiple lesions involving the temporal and occipital cortex, as well as the spinal cord. With the exception of the patient diagnosed post-mortem via autopsy, all cases utilized either radiation or chemotherapy, with only one patient receiving both radiation and surgery. In addition to presenting a novel and interesting radiologic diagnosis, this case poses questions regarding the oncologic management of intraparenchymal MALT lymphomas. Currently, surgery, chemotherapy, radiation, or a combination of these modalities is used to treat CNS lymphomas. To our knowledge, our case represents the first reported instance of a patient with intraparenchymal MZBCL of MALT type to be managed solely with surgical resection. Our patient has remained disease-free at nine months post-operatively, suggesting that adjuvant treatment may not be required in the initial management of intracranial MALT lymphoma in certain cases. Radiation and chemotherapy may cause significant neurotoxicity. Hence, cases need to be carefully evaluated on an individual basis to minimize iatrogenic sequelae from exposure to these therapies. Furthermore, given the propensity of MALT lymphomas to recur, this patient and others similarly affected ought to be closely followed with serial MR imaging [17].
Conclusions
In conclusion, the results presented here indicate that primary MZBCL may not be isolated to the meninges and can develop in brain parenchyma. Thus, MZBCL should be considered in the differential diagnosis of intra-axial CNS masses. This case suggests that localized MZBCL may be managed with local excision without the need for early radiation or chemotherapy. However, more evidence is required to draw conclusions regarding the optimal management of this disease.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Weedy Life: Coloniality, Decoloniality, and Tropicality
Respect for any form of life entails nurturing all the potentialities proper to it, including those that might be unproductive from the human point of view. Are there lessons to be learnt about decolonisation of the tropics from a focus on ‘weeds’? The contributors to this photo-essay collectively consider here the lessons that can be learnt about the relationship between colonisation and decolonisation through a visual focus on life forms that have been defined as weeds and, consequently, subject to a contradictory politics of care, removal, and control – of germinating, blooming, and cutting. The essay demonstrates the continuing colonial tensions between aesthetic and practical evaluations of many plants and other lifeforms regarded as ‘invasive’ or ‘out of place’. It suggests a decolonial overcoming of oppositions. By celebrating alliances of endemics and ‘weeds’ regeneratively living together in patterns of complex diversity, we seek to transcend policies of differentiation, exclusion and even eradication rooted in colonial ontology.
Introduction: Weedy Beginnings
Interest in multispecies ethnography has been on the rise in anthropology in recent years, building on a long history in the discipline of exploring how humans construct their social worlds in terms of more-than-human things: plants, animals, spirits, and others. Yet, some scholars who were originally drawn to anthropology because of its very focus on the human being, anthropos, have been critical of research in the discipline that appears to be about everything and anything but the human, particularly research that effaces human political and economic relations, and issues of social inequality, justice, and injustice (Jackson, 2015; Ahuja, 2009; Lowrey, 2022). However, as Chao (2022), among others, has so well demonstrated, multispecies anthropology can no longer be accused of simply celebrating "the fact of human/nonhuman mingling" (Chao et al., 2022, p. 1). Instead, what a multispecies approach allows us to do is to reflect even more closely on social justice and injustice amidst the support we might give or deny to all species of being.
In attending to such issues, at the annual Australian Anthropological Society (AAS) conference in November 2022, participants in a session entitled "Weedy Life Support" were invited to reflect on vegetal and other things classified as 'weeds' in relation to concepts such as colonisation, decolonisation, cultivation, enslavement, eradication, and parasitism. The conference session considered questions such as: Are there lessons about coloniality and decoloniality to be learnt from a focus on weedy lifeforms? How might coloniality be confronted through textual and visual representations of various multispecies entanglements and assemblages? Or are such representations largely metaphorical eulogies for multispecies relations about to pass away or be diminished? If "so many of us are Anthropocene weeds", as Tsing (2017, p. 17) writes, then is it possible for our own weediness to allow for "landscapes of more-than-human livability"? This photo-essay is an outcome of the discussion that began at the AAS, in which participant stories of the multispecies relations in which they were themselves immersed revealed tensions in the complex antagonistic relations between coloniality and decoloniality. As this essay developed, it became clear that conceptualisations of 'weeds' from the colonial period and continuing into postcolonial eras share several of the valuations that also define tropicality (Lundberg et al., 2022, p. 3): as wild and fecund, threatening and alluring, and needing to be tamed by policies of control and eradication. Several of our contributors highlight the positive affordances of vibrant tropicality for diverse relations between endemic plants and 'weeds', humans, bush spirits and other more-than-human life forms. Contributors consider some lessons to be learnt about the relationship between colonisation and decolonisation through our visual focus on life forms, highlighting how these processes are subject to a contradictory politics of care, removal, and control, or what we term germinating, blooming, and cutting.
Applied to plants considered invasive or colonising, the concept of 'weed' carries with it many negative connotations. The contributions by Rosita Henry, Helen Ramoutsaki and Debbi Long in the first section of this essay, Germinating, all focus on the re-evaluation of weeds as good for germinating understandings about the entanglements of coloniality and decoloniality. Henry draws on three images to show how weedy multispecies alliances might provide a way to rethink decoloniality in a way that transcends categorical and relational oppositions. Ramoutsaki poetically re-evaluates a weed that is widely considered an alien in Australian tropical gardens. Some weeds are defined as plants in the wrong place: colonisers of places from which they need to be 'weeded out', perhaps due to their tendency to become invasive (Maron et al., 2013). Their very tropical fecundity is seen as a problem because it enables them to dominate, to take over and force out others, and so we humans often respond by trying, in turn, to 'weed them out'. Yet, in the process of 'weeding' we might also grow to value the tenacious beauty and joyful exuberance of such 'matter out of place' (Argüelles & March, 2021; Douglas, 1966). Such is the case with Ramoutsaki. Her contribution here in photo and text also serves as an exegesis of the performative piece MC Nannarchy's Cinderella Weed Rap, which she created especially for the AAS session (see https://vimeo.com/821116266) and to which she refers in this essay. Debbi Long rounds off this section with a photographic and textual account of the value of weeds in permaculture, conveying the principle that, rather than being seen as invaders needing to be removed, such plants are incorporated into the ecosystem in the service of regeneration.
The second section of the essay, Blooming, concerns different species that tend to over-bloom, in the sense of their capacity to thrive and spread, both in and out of place. Here the short reflections by Greg Acciaioli, Simon Foale and Celmara Pocock look to plants, fish, and landscapes as affects and effects of colonial tastes. Acciaioli focuses on contemporary issues concerning the water hyacinth in Indonesia, which was originally spread by colonial powers for its aesthetic value, while Foale critically reflects on the scientific under-recognition of fish commonly labelled 'weeds of the sea' (sardines, scads and small mackerels) that contribute to food security in the global tropics, in favour of the aesthetic value of colourful coral reef fish. The problem of the dominance of colonial aesthetic criteria also informs Pocock's contribution on the historical replacement of native casuarina trees by coconut palms as part of tourist developments on islands of the Great Barrier Reef, transforming not only the landscape but also the soundscape. Paying closer attention to the sensory dimensions of human relations with the more-than-human, Pocock argues, may offer a pathway to decolonise tropical living.
Tropical Alliances: Persons, Plants, and Place by Rosita Henry
Perhaps because I am a descendant of once frowned-upon mésalliances between racial categories, I treasure the exuberant and unruly assemblage of plants in my garden lawn (Figure 1). Vegetal life provides a rich conceptual vocabulary for human reflective engagement with the world, such reflection itself being an expression of multispecies relationality. Plant metaphors have proved to be particularly fertile in social theory. For example, plant life lies at the heart of the social theory (or theories) of peoples of the Highlands of PNG. Plants have also provided much food for thought among a global community of other scholars across the humanities and social sciences, as evidenced by Deleuze and Guattari's (1987) widely propagated philosophical concept of the rhizome (Strathern, M., 2017).
A rhizome has no beginning or end; it is always in the middle, between things, interbeing, intermezzo. The tree is filiation, but the rhizome is alliance, uniquely alliance. (Deleuze & Guattari, 1988, p. 25)

Placing Western philosophical ideas within the same frame as the onto-epistemologies of Papua New Guinean Highlanders raises the spectre of coloniality in our own writing. To be decolonial, must we totally eradicate colonial concepts and their exclusionary dualisms (rhizome-tree, alliance-filiation, western-non-western) from our texts and images? My tropical lawn is the subversion of a well-manicured lawn that requires a vigilant 'weeding out' of difference for the sake of routinely reproduced sameness. Surface uniformity is required to emerge and be maintained through colonial order and control. Instead, my tropical lawn project embraces the idea that colonial ordering is inseparably intertwined with the decolonial. The decolonial works towards tempering coloniality through the creation of rhizomic alliances across all kinds of differences and dualisms engendered by various onto-epistemologies. Alliances of difference across dualistic and oppositional identifications of people, plants, and place are created in many cultural contexts through the gift exchange of seeds, seedlings, and cuttings. The photo above (Figure 2) shows a display of seeds ready for planting in freshly prepared ground at Kunguma Village in the Western Highlands of Papua New Guinea. The Penambi Wia people who made the display of seeds often present themselves as planted cuttings, as do other Western Highlanders (Henry & Wood, 2022). Andrew Strathern (1977, pp. 504-506) notes that in Melpa tok ples (language) persons from the same segmentary group, or lineage, refer to themselves as mbo tenda, 'one shoot' or 'one stock'. Mbo refers to a plant 'shoot' and to something 'planted' by humans. Knowledge too is understood to be propagated by implanting it in people. Mbo rondont is the term for teaching (literally meaning to 'implant a cutting', mbo). The seeds shown in the photo were displayed as part of a lesson given by Penambi Wia gardeners on their propagating practices to a group of Australian students from James Cook University attending an ethnographic field school. The teachers explained that gardeners often gift seeds and cuttings to each other and that such a gift creates the obligation of a return, often part of the yield of the plants grown from that seed or cutting. In the Western Highlands, segmentary groups (cuttings) seek alliances with other such groups. For Penambi Wia gardeners, productive relationships are not created within sameness but are carefully cultivated across difference through the exchange, germination and propagation of plants.
Exchanges of seeds and cuttings have also been vital among my own immigrant family in our attempts to put down roots in the tropics of Australia. Among these is a plant that, like my Sri Lankan Burgher forebears, has a long colonial history: the chilli. The variety of chilli we favour growing is likely to have been introduced from Brazil to Sri Lanka during the time of Portuguese colonial power on the island (1505 to 1658) (Katz, 2019, p. 30). My eight siblings and I all grow the chilli in our gardens. We call it 'granny's chilli', but also sometimes by its Sinhala name, nai miris (cobra chilli). The plants we grow (Figure 3) come from generations of seeds produced from one that our maternal grandmother brought from Sri Lanka (secreted in her shoe, according to family legend) to satisfy her yearning for the taste of home in a foreign land.
My forebears were both colonialists and decolonialists. My mother and her parents were classified as 'white' enough during the era of the White Australia policy 1 to be accepted as immigrants. Yet, their non-white whiteness challenged colonial subject positions and racial classifications that made Australia white, and their attempts to put down roots subverted distinctions between endemic plants and potentially dangerous exotics prohibited from entering Australia. They worked with their bodies and seeds to subvert and weaken such distinctions by creating collaborations across dualistic and oppositional identifications of people, plants, and places.
Cinderella Weed at Home in the Wet Tropics by Helen Ramoutsaki
At the core of MC Nannarchy's Cinderella Weed Rap (Ramoutsaki, 2022; https://vimeo.com/821116266) are questions regarding which entities and practices fit and are valued in the context of the Wet Tropics of Northern Australia, with a focus on my backyard on Kuku Yalanji Kubirriwarra bubu (Indigenous land). Two imported plants are contrasted: the North European daffodil with flashy golden flowers and the tropical Cinderella Weed, Synedrella nodiflora (L.) Gaertn. In the Wet Tropics, the genus name 'Synedrella' is not only linked to the common name 'Cinderella Weed' as a quasi-homophone: there are similarities between the status of the plant and the protagonist of popular tales. Possibly originating in China or Egypt, there are now thousands of versions of the Cinderella rags-to-riches narrative. Cinderella tales concern a downtrodden young woman who "must prove that she is the rightful successor in a house in which she has been deprived of her rights"; yet, to her advantage, "she has also been driven by her own indomitable spirit and desire to claim her rightful place in the world" (Zipes, 2016, pp. 358-359). There comes a time when both Cinderella and Synedrella call to be acknowledged. Synedrella nodiflora's entrance into Australia's tropical ecosystem came without colonial poetic, aesthetic or commercial fanfare. The species is quietly logged in a botanical database as first identified on cleared land in Cairns in 1914 (Australian Virtual Herbarium, 2023). Not fitting into the settler coloniality of a Eurocentric aesthetic and with little recognition in Australia for human usefulness, Synedrella nodiflora's status is limited to weediness. In considering who fits or has a rightful place in my backyard, allowing that such a species has value is an act of decoloniality, delinking value from imposed hierarchies and rethinking whether value is an attribute that applies to one species in isolation.
In the broad view of my backyard, the who that fits are all in the more-than-human world. Creatively, the prolific biodiverse perspectives in the tropics invite a poetic profusion, a tumble of words with layers of storeys/stories and multiplications of meanings. In the lushness of tropical flora, valuing begins with noticing, picking out a plant from others that crowd around it. MC Nannarchy's rap came from my eventual awareness of Synedrella nodiflora, who has been introduced from the tropical Americas and is naturalised in areas of Australia, including the Wet Tropics (CSIRO, 2020). MC Nannarchy refers to the value Synedrella has to humans: as food, medicine and animal feed (CABI, 2022).
Figure 5: Synedrella nodiflora gives value to underappreciated weedy others
Others in the multispecies gardening collective include fattening grasshoppers, which can cast a shadow over shared foodplants, yet Synedrella leaves provide an additional food source while also hosting leaf-curling caterpillars in their silk-stitched retreats. Photo by Helen Ramoutsaki, 2023.

Yet, Synedrella's worth is not only as a servant of others. Saying a weed is 'the right plant in the wrong place' might allow the possibility that the plant has some value when kept in its place. However, this anthropocentric position does not consider that from Synedrella's perspective, my tropical backyard is absolutely her right place: she has found a fit in the system. This is significant to my style of Whatever Gardening, in which value comes from co-participation. Rather than having a central role as a gardener, I am a part of a multispecies gardening collective, and the conditions of the tropical habitat determine who thrives in relationship to others in the ecosystem. As a plant who has established a fit in the Wet Tropics, Synedrella is part of the co-relationships in my backyard.
In the rap, Cinderella Weed is contrasted with the daffodil, which is valued but does not fit in the Wet Tropics. Its high value in its native Northern Europe is exemplified in Wordsworth's poem I wandered lonely as a cloud (1815, p. 328). The gold of the daffodils represents an aesthetic wealth that endures in memory, bringing joy and comfort. Coloniality in Australia encompasses the value-legacies of such garden plants. Daffodils were offered for sale in the colony of Van Diemen's Land from at least 1836 (Hobart Town Courier, 1836). They are now normalised as garden plants throughout temperate Australia; however, in the Wet Tropics their cultural legacy seems to have been restricted to a nostalgic fancy dress costume at 'Cinderella balls' (Cairns Post, 1925).
In the tropics, daffodils will possibly bloom during the dry season but only after being kept in the refrigerator to simulate temperate winter conditions. They only grow as annuals: the bulbs rot in hot, humid, monsoon-saturated ground. To regrow them, the gardener has to import bulbs back into the ecosystem. There is no co-relationship through lifecycles with others in the gardening collective. Daffodils remain forever temporary visitors, giving aesthetic value but not contributing to the wider system.
In her tropical fit, Synedrella spreads through a series of adaptations, including the two types of florets and seeds produced by members of the sunflower family. Ray florets produce heavier seeds that fall close to the parent plant and are suited to their conditions. The seeds of the disc florets disperse more widely and are suited to a range of growing conditions (Usharani & Raju, 2018). This means that Synedrella can thrive where she is established and is also well adapted to colonising in the botanical sense of occupying a new habitat or ecological niche. She tends towards overgrowth where there is disturbance, so my low-intervention active undisturbing helps keep her presence in equilibrium when weeding would likely not.
Regenerative practices, as described by Wet Tropical organic farmer Andre Leu (2021, pp. 27-38), do not seek to eliminate plants designated as weedy but to bring them into balance with the ecosystem by cutting back and allowing them to mulch the ground, by trusting the shading-out process of larger plants as the system matures, and by allowing the plant's role as a living mulch. In relationship with the thrips and bees and butterflies that assist her pollination, and the Hypolimnas bolina caterpillars that eat her leaves, Synedrella's value is not rare, but it is shared.
Permaculture and the Reframing of Weeds in the Subtropics by Debbi Long
Industrialised monocultural food production clears land of diverse habitat, planting single crops over wide areas. Taking inspiration from tropical forest systems, permacultural land management values ecosystems where plants are 'stacked' in multiple, diverse layers. In permaculture philosophy, no plant is a weed in and of itself. It is always about context. Weeds, classic matter out of place, are plants in places people do not want them to be (Morrow, 2022, p. 340) and "usually appear when successful and stable ecosystems are altered so that new conditions favour them" (Morrow, 2022, p. 341). The photo above (Figure 6) illustrates a block of land on subtropical Yuin country on the south coast of New South Wales which is in the process of being regenerated through permaculture practices. The soil, in which there is minimal microbial activity, has been compacted from over a century of exposure to colonially introduced hoofed animals (cattle and sheep). Grasses were the only form of plant life on the block at the beginning of the rehabilitation project. The photograph shows four different ways in which plants labelled as weeds are being made useful and welcomed as members of a regenerating ecosystem.
The first example is how a lawn has been mowed around the newly built cabin. Invasive kikuyu, couch, clover and other lawn-type groundcovers, regarded as weeds elsewhere on the block, are allowed to flourish in this small patch of lawn. The functions of the lawn include leisure space, snake deterrence, pollinator attraction, and fire protection.
The second and third examples involve two habitat plantings: the mulched garden bed in the foreground and the area of grass in the rear of the picture, in which a diverse range of native species has been planted into the current grassland. Both plantings are aimed at performing multiple functions: visual screening, windbreaks and shade, as well as habitat for birds, insects and lizards. The bed in the foreground of the picture was made by covering the weeds with weed matting (cardboard) and mulch. Being deprived of sunlight, the grasses will rot and bring much-needed organic matter into the soil. The habitat planting in the large area of grass behind the water tank has been planted with over 100 native plants. The existing weeds here, left for the time being as long grasses, provide shelter for the infant bushes and trees and act as soil-stabilising groundcover. Over time, the grasses will be shaded out by the shrubs and native groundcovers.
The fourth example of how weeds are used can be seen to the left of the picture (Figure 6), in the black plastic tubs (see also Figure 7 below). These tubs are used to convert weeds into nutrient-rich fertiliser 'teas'. With their deep roots, thistles bring micronutrients stored deep in the soil up to the surface. Thistle tea makes these nutrients available to plants with shallower root systems, and the bacteria created in the fermenting process kickstart important microbial reactions, bringing life to sterile soil. Rather than being seen as invaders needing to be removed, plants regarded as weeds are incorporated into the ecosystem, supporting regeneration processes.
Aesthetic and Practical Allure of Water Hyacinth in Indonesia by Greg Acciaioli
Now classified by the Invasive Species Study Group (ISSG) as one of the world's 100 most invasive species and often labelled as one of the world's worst weeds, water hyacinth (Eichhornia crassipes) was originally spread by colonial powers from its native habitat in the Amazon basin of South America to regions of Africa, Australasia and Asia (Osmond & Petroeschevsky, 2013). Naturalists and botanists carried it to the colonies for its 'ornamental beauty', often first depositing it for display in botanical gardens established by colonial authorities (Kitunda, 2018, p. xiii).
Figure 8. Water hyacinth on the edge of Lake Tondano, North Sulawesi province
Photo by Greg Acciaioli, August 2022.
However, conspicuously gendered accounts of its spread beyond colonial botanical gardens in South and Southeast Asia, usually unverified, tend to depict female elites, 'a few Bengali ladies' (Iqbal, 2021) and a 'Thai princess' (Mancuso, 2020; Jernelöv, 2017), as 'overwhelmed by the beauty of its flower' (Iqbal, 2021) and collecting it for planting in their own ponds, whence it spread unchecked throughout deltaic Bengal and all of Thailand. Ryan's (2017, p. 181) literary-botany approach depicts how water hyacinth has figured in Saya Zawgyi's contemporary poetry in Myanmar as "a sentient and expressive plant persona capable of responding gracefully to the intensely variable aquatic conditions of the Irrawaddy River", thus capturing the continuing agency of water hyacinth in seductively colonising the country's waterways.
In the case of Indonesia, water hyacinth (eceng gondok in Bahasa Indonesia) was first brought to Java in 1884 so that its aesthetic attractions could be displayed in the botanical garden established by Dutch East Indies botanists in 1817 in Bogor, West Java (Mancuso, 2020). It is unknown how it spread from there, though the role of water reflux following a local flood of the Ciliwung River flowing through the Bogor Botanical Garden has been mentioned (Jernelöv, 2017, p. 119). It now forms dense, sometimes impenetrable floral carpets in rivers and lakes throughout the archipelago. Local authorities in Indonesia as well as many local inhabitants (although their terminology is very different) recognise various aspects of the long-term deleterious environmental impacts of this colonising flora: destruction of phytoplankton and aquatic plants beneath its light-blocking cover, sometimes leading to anaerobic or low-oxygen conditions; monopolisation of available nutrients such as nitrogen and phosphorus; evapotranspiration of water, contributing to lakes receding and becoming shallower; fouling of fish cages (karamba); and impeding of access to fishing sites, as evident in its spread along the edges of Lake Tondano in North Sulawesi shown in Figure 8 (Jernelöv, 2017, pp. 119-120). Such effects have led, in some lakes, to endeavours of total eradication, as is currently being undertaken by the Indonesian military at Lake Limboto in Gorontalo, the province just south of North Sulawesi (Figure 9).
Figure 10. Water hyacinth spreading in proximity to fish cages (karamba)
The centre of Lake Limboto, Gorontalo Province, Sulawesi. Photo by Greg Acciaioli, August 2022.
However, despite the universal condemnation of this weed by environmentalists, some fishers have resisted such eradication efforts, seeking to live together with these expanses of water hyacinth and exploit their more positive effects. They recognise that these expanses also serve as food sources and congregation sites for such plant-eating fish as tilapia, the major introduced species in many of Indonesia's inland lakes (Acciaioli, 2009), thus increasing fish numbers and hence yields for fishers. Such a perspective mirrors the experience of fishers in another tropical lake where water hyacinth proliferation has been a problem, Lake Victoria in Africa (Njiru et al., 2012). Figure 10's depiction of water hyacinth in close proximity to fish cages (karamba) in Lake Limboto illustrates such placement together, as if in a symbiotic relationship. Some government officials have even promoted water hyacinth's spread (a government demonstration first brought water hyacinth to Lake Tondano), as they have envisaged furniture and handicrafts such as baskets made from the dried plant as a tourism draw and export commodity. Such a strategy once again parallels what has also been attempted for water hyacinth products from Lake Victoria (Jernelöv, 2017, p. 126).
As in Africa (Kitunda, 2018), the human relationship to water hyacinth in Asia has oscillated through several phases of attraction and repulsion. First carried to and spread throughout Asia for its aesthetic qualities, its practical effects upon fisheries, agriculture, and transport soon led to attempts to eradicate it; the 1917 Water Hyacinth Act, for example, banned its possession and cultivation in colonial Burma (Jernelöv, 2017, p. 120). However, in the postcolonial context, some continuing efforts of eradication have been complemented by endeavours of accommodation and utilisation, as evident in Indonesia in the contrast between military uprooting in Lake Limboto (Figure 9) and government promotion in Lake Tondano. The complex entanglements of aesthetic inclinations (both colonial and postcolonial) and human livelihoods with what many regard as predominantly an 'aquatic pest' remain a testament to the ambiguous agency of a plant that has proliferated through its own artful and practical allure.
Weeds of the Sea in the Asia-Pacific by Simon Foale
Sardines, scads, and small mackerels (often referred to as 'small pelagic' fish and in certain contexts as 'fodder' fish or even 'trash' fish) are sometimes described as 'weeds of the sea' because of their short lifespans, fast growth rates, high fecundity, and the resultant capacity of their populations to bounce back quickly after heavy fishing pressure. Small pelagic fish are becoming increasingly important for food security around the world, particularly as larger, less resilient (but often more desirable) species become (and stay) depleted (Pauly et al., 1998; Roeger et al., 2016). It also turns out that small weedy pelagics happen to be nutritionally superior to most larger fish species: they are particularly rich in Vitamin A, Vitamin B12, Calcium, Iron, and Zinc (Farmery et al., 2020). Small pelagic fish can live in the open ocean, adjacent to coasts, and are also often found in estuaries. They form schools, which makes them easy to harvest. They tend to grow well in waters where nutrient levels are high (e.g. from upwellings or rivers), because their food (phytoplankton and the tiny crustaceans that eat phytoplankton) proliferates quickly in response to elevated nutrients (just as plants in a paddock or garden grow better with added fertiliser or compost).
Small pelagic fish are undervalued in many parts of the Asia-Pacific, including Australia, and particularly North Queensland, where people tend to prefer larger species, particularly reef fish. There is surprisingly little scientific attention focused on small pelagic fisheries, despite their immense importance for food security in poorer and more densely populated parts of the Asia-Pacific region such as Indonesia, Philippines, Cambodia, Vietnam, Burma, Bangladesh, and India.
Reef fish (Figure 12) are regarded as sexier by marine scientists and attract more research funding and, in turn, generate many more scientific publications, despite having vastly less importance as food on a regional scale (Clifton & Foale, 2017; Teh et al., 2013). This disturbing epistemological hegemony, in which coral reef-associated fish species are misleadingly touted to be critically important for food security in parts of the global economic periphery (AKA the 'developing' world) and attract scientific research attention disproportionate to their actual food security importance, could reasonably be critiqued through the lens of decoloniality: the science has become captive to a set of values that are profoundly rooted in a colonial ontology.
Figure 12. Regal Angelfish: one of the photogenic reef fish species
Regal Angelfish (Pygoplites diacanthus) has helped establish the 'iconic' status of coral reefs. Photo by Simon Foale, 1995.

Over the twentieth century, coral reefs transformed, in the Western imagination, from places of mystery and danger to objects of aesthetic consumption (Elias, 2019). With considerable assistance from emerging photographic technologies (Elias, 2019; Foale & Macintyre, 2005), the aesthetic value of coral reefs drove the development of a large, lucrative and politically powerful tourism industry, particularly in Queensland, Australia, where the Great Barrier Reef became an 'icon' and achieved World Heritage status. This impressive elevation in the aesthetic (and economic, via tourism) value of tropical coral reefs has penetrated and profoundly shaped the sub-discipline of 'coral reef science', which includes the science of reef-associated fisheries. The aesthetic dimension can be seen to have 'colonised' and measurably distorted the ostensibly 'objective' science around coral reef fisheries through earnest and well-meaning spin, designed to attract research and conservation funding (Clifton & Foale, 2017).
But the spin ignores, downplays, or denies the scientific truth about reef-associated fisheries, including their overstated importance for the food security of Asian and Pacific human populations (Teh et al., 2013), especially relative to the above-mentioned, largely ignored 'weed fish' species. These populations, as a result of their impoverishment by centuries of colonial exploitation, do not themselves have the leisure time or money to indulge in the aesthetic consumption of corals and myriad species of pretty but mostly nutritionally useless reef fish.
Decolonising reef fishery science will require a more historically and epistemologically reflexive understanding of the way the tourism industry and its values have influenced some of the reef science community's core paradigms and assumptions. This is especially important in Australia and other wealthy (and thus leisured) populations, where greater awareness is needed of the problems created when this particular social construction of science is projected across economic and cultural boundaries. Claude Lévi-Strauss famously stated that 'the scientific mind does not so much provide the right answers as ask the right questions'. As carbon emissions, mostly produced by rich people (Hickel et al., 2022), slowly kill reefs via coral bleaching, the 'weed fish' species will only increase in their importance for feeding the poor.
Casuarinas and Coconut Palms, Great Barrier Reef, Australia by Celmara Pocock
Palm trees have been described as the 'prince of plants' (Gray, 2018) and are strong colonial signifiers of the tropics. While diverse species of palm originate throughout the tropics, the cultivation, propagation, and transport of certain varieties are intimately linked with colonialism. Their global spread is driven by a triumvirate of commercialism, commodification, and symbolic connection to religion, nobility, and exotic pleasure (Gray, 2018). The quintessential coconut palm encapsulates all three: their profitable crops have been commercialised and commodified in a range of products, and the trees themselves are a commodified symbol of the tropics. Thus, coconut palms are a synecdoche for utopian tropical islands, which are imagined as places of endless natural abundance and social harmony (Pocock et al., 2022). The islands of the Great Barrier Reef, along the eastern seaboard of Australia, were promoted as tropical idylls from the early twentieth century, but holidaymakers were often disappointed by the absence of naturally occurring coconut palms (Pocock, 2005). The islands were not the tropics they imagined. To fulfil this colonial imaginary, coconut palms were planted at key tourist locations along the mainland coast and on offshore islands, and by mid-century the resorts were readily recognisable by their clusters of palms amidst native vegetation. Today, naturalised populations of coconut palm have proliferated, and untended groves littered with fallen fronds and abandoned fruit are regarded as weeds (Central QLD Coast LandCare Network, 2023). While conservationists respond by weeding out these unwanted plants, anthropologists Richard Martin and David Trigger (2015) highlight how this can create conflict with local Aboriginal communities who have deliberately planted and tended coconuts as symbolic of relaxation and luxury. Such entanglements of colonial symbolism with Indigenous worldviews highlight how coloniality becomes inherent in conservation management and tourism, and may even be enacted by Indigenous people.
While the Great Barrier Reef World Heritage Area is managed for its 'natural' attributes including the native vegetation of the islands, coconut palms dominate areas frequented by tourists. The proliferation of architecturally distinctive coconut palms meets the visual expectation of the tropics, but these radically altered landscapes trap tourists in a perpetual loop of colonial experience.
Figure 14. Tourist wearing a pith helmet, with coconut and palm tree among casuarinas
Tourist Chris Doyle wearing a pith helmet, itself a symbol of the colonial exotic (Rovine, 2022), stands at the base of a coconut palm, holding its fruit, while casuarinas dominate the background. Photo by R.M. Berryman, 1933. National Library of Australia.
The preoccupation with this visual tropical signifier further disrupts and displaces the multisensory and embodied knowledge and appreciation of reef islands as particular and distinctive places. Such embodied experiences were part of early tourism, when, living outdoors and sleeping under canvas, people were entangled in relationships with more-than-human island flora and fauna.
Before coconut palms dominated, endemic casuarinas (Casuarina equisetifolia) imprinted themselves on tourists' perceptions and experiences. The fine delicate branches of casuarinas or she-oaks offered useful shade, their fallen needles created a carpet underfoot, and stands of trees framed tourist photographs (Pocock, 2002). And most evocatively, the gentle sigh of she-oaks lulled visitors to sleep and brought them pleasure and connection beyond colonial imaginaries of the tropics. These past experiences suggest it is possible to appreciate the tropics without reference to colonial signifiers and that embodied entanglements with the more-than-human may offer a pathway to decolonial tropical living in the future.
Figure 15. Delicate branches of the casuarina, Great Barrier Reef
The delicate branches of the casuarina are used to frame this promotional photograph of the Great Barrier Reef. Photo by J Fitzpatrick, 1951. National Archives of Australia.
Bad Fences Make Good Neighbours: Weedy Protests by Kristin McBain-Rigg
Fences represent the borderlands of colonial rural Australia. The history of these fence lines is contentious as well as practical: from the mid-1800s some of the early fences used in rural Australian communities included ringlock wire or chain link fences, with different gauges of wire suited to keeping different kinds of animals in and others out, providing a strange foreign division of multispecies relations across the country (Pickard, 2010). Fences were used to keep colonial order, and stand as silent sentinels of the race to civilise a wild tropical land, dismissing the Indigenous boundaries and borders and the linking networks that had existed for thousands of years prior to colonial invasion. These fences were more economical than shepherds in the colonies, laying waste to both human-human and human-nonhuman relations on some pastoral properties (Pickard, 2010).
My familiar childhood memories are filled with these relics that over the course of time had become bad fences for suburban properties: low, sagging chain fences, the kind that allowed free passage between neighbours' yards, visibility across space, relationships across time. Fences that allowed us to see the activities of others, to share in a community space that was demarcated only for the purposes of property boundaries; bureaucratic borderlands. Neighbours could converse across fences, and if, as Helliwell (1992) asserts, "good walls make bad neighbours" in the Gerai Dayak Longhouse of tropical Borneo, then the tropical communities of my childhood were formed around bad fences that made good neighbours. When I moved into my own home, I was fortunate enough to find two such 'bad' fences on the property. A young family at the back loved our 'bad' shared boundary because it allowed them to enjoy the view of our 'rainforest'-type backyard, a rare treat in an urbanised location. On the side was Pearl, our elderly neighbour, who was the first to have lived on the block, in the home that she and her husband had built. Her yard was also a rare paradise in a sea of manicured lawns, a yard wild with ferns and a creeping bush we called 'maiden's blush'. These plants weighed heavily on the chain fence, which dipped and bowed along the boundary. We could see Pearl when she was in her yard, and she could see us in ours; we talked over the fence, felt safe with our young sons in the yard, and she felt safe that someone could look out for her, too.
She taught me much about the plant life we shared, about life and relationships across multiple generations.
When Pearl died, her plants lived on as a testament to the life she had built and fostered. When new neighbours purchased the house, it was a hopeful time, a time to relate anew and share the knowledge passed on by Pearl about how to cultivate the rich abundance they had acquired. But this was not part of their plan: a complete clearing of the space and a chance to create a cultured, civilised yard (a manicured lawn). They tore down the chain fence to erect a high wooden fence between us, despite my protests; so, I insisted on taking a cutting of the maiden's blush before its complete eradication. I cultivated the cutting, winding it through my own yard, to remember Pearl. The roots are now on my side of the new fence. The fence provides a solid climbing frame for it to grow on. The complete eradication of the plant is not possible… every time the neighbours try to cut it back, or kill it off, it comes back in a kind of weedy rhizomatic protest (Deleuze & Guattari, 1988). What might have been a good fence preventing further colonial expansion of weeds has quickly become a bad fence enabling such processes. The creeper continues its relentless life and maintains the memory of those who came before; it serves as a reminder of what may have become a wasteland lacking any trace of Pearl, our neighbour. Instead, our yards, linked by fences and plants, remodelled our social relations in ways that are more conducive to a continuous if somewhat agonistic sharing of Pearl's memory, in what Luce Irigaray and Michael Marder (2016, pp. 215-216) might call a dynamically ecological relation. As Kieran O'Mahony (2022) argues, "…ecologising memory and place is an important conceptual and ethical tool when considering the tensions of everyday human-nonhuman relations and their multiple uncertain futures", lending itself to a broader kind of relational decolonisation and rewilding of urban borderlands.
A Tree as a Disturbing Political Space, Papua New Guinea by Michael Wood
Tropicality, according to colonial imaginaries, may refer us to a discourse that constructs the tropical world and its rainforests as the West's environmental other and "the White man's grave" (Stepan, cited in Clayton & Bowd, 2006, p. 208). But among those living and working in Papua New Guinea's logging concessions, environmental otherness is only one quality of their relationship with the rainforests. Most residents are temporarily or permanently at home in such an environment as they attempt to transform it into a marketable commodity that can fulfil imagined promises of affluent modernity and development held by Iban, Malaysian Chinese, Filipinos, Mubami, Kamula, and others who live in these logging concessions.
Figure 18. Sketch of a dali patalo man
Sketch by Bape Ewala Wawade, 1997.

Figures 18, 19 and 20 show us some aspects of one of these other residents of logging concessions. These particular residents are called dali patalo in Kamula and are known in PNG tok pisin as masalai or 'bush spirits'. Figure 18 shows a dali patalo man emerging from a tree. Some Kamula say the tall canopy-piercing trees the dali patalo like to live in are manifestations of the dali patalo themselves, and the sketch outlines some elements of this possibility. The link between the manifest tree and the typically unseen spirit is also expressed in the name dali patalo, where dali is the Kamula word for tree. The tree can also be explained as a 'likeness' or 'copy' of the dali patalo's body. The Kamula word for these relationships of similarity can also be used to translate 'spirit'. Figure 19 shows us what was said to be the house of some dali patalo. The tree had not been cut down by the chain-saw operators even though it was located dangerously close to a set of roads that made up a sharp ninety-degree T-junction. As we were looking at the tree, a Kamula acquaintance suggested the tree was the likeness of the actual house of the dali patalo. Located in an unseen component of the world, this house was understood to be the same as a Kamula house. The tree was not logged out of respect for the dali patalo who lived there. The residents were thought by some Kamula to get angry at the destruction of their homes by the logging workers. They were said to respond to such attacks on their homes by making trees fall on the chain-saw operators, sometimes killing or injuring them. The dali patalo did this as they were doing their own logging in their unseen world. This was not a repudiation of the destructive power of logging, but a repositioning of that power as fully under the control of the victims of logging, who used their new power to retaliate for the loss of their homes and their dispossession and exile from the rainforest.
The home in the photograph can be understood as a permitted "intermediate disturbance" (Kirksey & Chao, 2022, p. 16) to logging. The preservation of the dali patalo's tree was endorsed by the logging company, but the tree was dangerously positioned on a T-junction and potentially threatened all drivers who used the junction. Moreover, the dali patalo and the tree were empowered by this protection, since both were able to manifest a new, dangerous, out-of-place destructive power. This repositioning and empowering of the dali patalo also involves the fusion of industrial logging with relations of production involved in hunting. The same people who now could log and kill ordinary humans working in the concession were also known to help the Kamula by hunting in their world in parallel with Kamula hunters. This conjunction of manifest and unseen hunter was often profoundly productive (see Figure 20), but in the case of logging, relations between the seen and unseen actors were more antagonistic and violent. Such a situation is significantly defined by a colonising industrial logging and a strongly, often causally, related vision of parallel logging in the unseen world, where dali patalo log in their own world without any Asians or other types of people present. A feature of industrial logging in PNG is that it is often managed by Malaysian Chinese and employs Asian workers. Some Kamula accounts of the dali patalo provide a rather different vision of how logging might be undertaken.
Fundamental to this vision is the assumption that certain trees and dali patalo are ontologically unstable and, therefore, transformable into each other (Vilaça, 2005). It is these assumptions that inform any adequate account of the tree surviving in the wrong place, making it powerful and dangerous to people (as in Figure 19). The tree is not an autonomous oasis of 'non-Western' decoloniality, but one entangled with contemporary capitalism, involving the co-occurrence of different types of conflicting and shifting powers that are shuttled between the manifest and unseen aspects of the world. Describing such entanglements involves outlining context-specific politics whereby various entities have gained new powers from their conflicts and transformations, so that a protected, but misplaced, tree can threaten workers and coerce them into slowing down at a T-junction. However, the logging concession contains other conjunctions of roads, trees, and dali patalo in ways that can generate quite different political relationships and possibilities.
Conclusion: Picturing Transformative Weeds
A visual exploration of weeds, as things 'out of place', can tell us much about coloniality and decoloniality in the tropical world. The images in this photo-essay have highlighted some of the tensions, contradictory views, and ambivalences that humans and other forms of life have about their weedy co-residents. We have also presented images of accommodations and alliances with such co-residents.
While weeds are often defined as 'out of place' colonisers, the way they come to matter as 'out of' and 'in' place is currently largely defined by radical environmental changes that emerge from climate change and often barely regulated natural resource extraction. In such contexts, weeds can sometimes lend support to different lifeforms within a multispecies political community. Their fertility makes them valuable sources of multispecies sustenance, including as food for human thought.
Yet, in the lush fecundity of the tropics, too much fertility sometimes becomes problematic, replacing relationality across difference with endlessly reproduced sameness. Infertility is generally considered bad and fertility good, but too much fertility is often feared, and so a war on weeds, defined as 'invasive species', ensues: an endless battle against the diverse fecundity of the tropics.
At the same time, as the contributors to this essay reveal, a concern with weedy lifeforms, especially in the tropics, can lead to a questioning of the strict opposition that is sometimes made between the colonial weeds and the decolonial endemic. Our images tend to highlight, and thereby encourage, complex diversity and its local politics as the way forward to resolving some of our current problems concerning the future of the tropics.
We began this essay by expressing reservations about a form of multispecies research that tends to 'weed out' the human by focusing exclusively on 'other-than-human' relations. In co-creating this essay, the contributors seek to offer an approach in which the human and more-than-human are always taken together, immersed in, and straining against, unequal power relations. Through this collection of short narratives about our own personal and research relationships with 'weedy' lifeforms, the co-creators of this photo-essay offer an alternative to approaches based on bounded categories and strict oppositions. Decolonisation is understood as always necessarily engaged with colonial pasts and presents, but a future in which there is neither coloniality nor decoloniality can be envisioned through a continuous process of weeding out the inequalities and injustices among humans, and between humans and more-than-humans in multispecies relationships.
Greg Acciaioli is currently a senior honorary research fellow at The University of Western Australia, where he lectured in Anthropology for 29 years. Born of European migrant stock in California, he became a migrant himself to Australia as a postgraduate student at the Australian National University. Although his engagement with Indonesia began with his PhD research in Central Sulawesi, Indonesia, marrying a woman from that research site, the plain surrounding Lake Lindu, has led to long stints of non-research-oriented residence there and thus to being declared a member of the local community. His concern with the dilemmas of inland fisheries in Indonesia has stemmed from this hybrid personal and academic experience of life at Lindu. He has worked with the Alliance of Archipelagic Indigenous peoples (Aliansi Masyarakat Adat Nusantara or AMAN) to promote co-management of national parks in Indonesia and has also worked on the issue of accommodating stateless Bajau Laut in marine parks in Sabah, Malaysia.
Simon Foale is an Associate Professor teaching anthropology in the College of Arts, Society and Education at James Cook University, Australia. Simon's research interests range between political ecology, the anthropology of development and the history and philosophy of science. His primary geographic focus is coastal Papua New Guinea, Solomon Islands and the Western Pacific. His disciplinary and geographic interests are in large part a consequence of having been born in pre-independence Solomon Islands (to white Australian parents). That first-hand experience and knowledge of the deeply exploitative character of the colonial state, combined with a post-graduate education in anthropology, makes the ongoing struggle against capitalist extractivism and race-based discrimination all the more serious for him. Simon is also an active unionist.
Rosita Henry is a Professor of anthropology in the College of Arts, Society and Education at James Cook University. She reflexively positions herself as a weedy product of colonialism, whose forebears were both colonisers and decolonisers. Rosita attempts to work against coloniality by researching colonial, anti-colonial and decolonial relations between people and places across tropical Australia and the Pacific as expressed in cultural festivals, the politics of belonging and emplacement, cultural heritage, material culture, land tenure frictions, and state bureaucratic effects.

Debbi Long is a CIS-gendered woman of acknowledged convict and immigrant Anglo-Celtic heritage, and unacknowledged First Nations heritage. She was born on Wiradjeri country, raised on Yuin Country, and after more than four decades of living in other parts of Australia and overseas, has returned to Yuin country to live in a four-generation family compound. Her research as a medical anthropologist, hospital ethnographer and health systems analyst has included clinical governance and health management, multidisciplinary clinical team communication, and development health and the SDGs (Sustainable Development Goals). Her current life/research interests, framed by the permaculture ethics of "Earth Care, People Care, Fair Share", incorporate health in areas such as permaculture and food security, sustainable housing techniques, and i/Indigenous knowledges, with an overarching focus on resource equity and decolonisation. She has a PhD in Anthropology.
Kristin McBain-Rigg is a non-Indigenous (white) female anthropologist who works with rural, remote and Aboriginal and Torres Strait Islander communities to improve health outcomes by seeking to decolonise biomedical systems and the ways we educate health practitioners throughout Northern Queensland. She is employed as a Senior Lecturer in the College of Public Health, Medical and Veterinary Sciences at James Cook University. She has a PhD in Anthropology.
Celmara Pocock is Director of the Centre for Heritage+Culture, and a Professor of anthropology teaching heritage studies at the University of Southern Queensland. Her decolonial and anti-colonial positionality is shaped by her experiences of migration and queerness, and a long-term commitment to working with and for Australian First Nations. This manifests in research that focuses on social and community approaches to understanding the environment, including aesthetics and senses of place; storytelling and emotion; and the intersections between heritage and tourism. Human relationships with the environment are core to her work, and her monograph Visitor Encounters with the Great Barrier Reef: Aesthetics, Heritage, and the Senses was published by Routledge in 2020.
Helen Ramoutsaki is a PhD-qualified page and performing poet-natural historian, practice-embedded researcher and educator of English-Welsh-Irish birth heritage. My experiences living, working and birthing in England and on Crete, and my more than twenty years as a settler on Kuku Yalanji Kubirriwarra bubu in the Wet Tropics of Far North Queensland, have revealed to me the complexities of relative privilege, the tensions of differential regard and the value of being responsive to others. I am grateful to those who, not in an academic context, have trusted me with glimpses into story/place/language/culture that are not mine to pass on but that have enriched my deepening immersion in places and communities. My fascination with natural history and my love of wordcraft motivate my practice as a poet, natural historian and photojournalistic photographer compiling transdisciplinary creative natural histories. I also collaborate with my grandmother alter-ego, MC Nannarchy, who writes and performs ludically-serious raps concerning attitudes and ethics within the more-than-human world, with a focus on ethical tropical sustainability and the work of the multispecies gardening collective in our backyard habitat patch.
Michael Wood is a white anthropologist and adjunct with the College of Arts, Society and Education at James Cook University. He grew up in British, Australian and American colonies and has since worked, for a long time, with Kamula speakers in Papua New Guinea, mainly on their complex engagement with industrial logging. More recently he has become interested in understanding how the Kamula have been subject to colonising violence from their neighbours and how this violence has generated new forms of power that now influence the lives of the Kamula. He has a PhD in Anthropology.
On the Fermi and Gamow-Teller strength distribution in medium-heavy mass nuclei

An isospin-selfconsistent approach based on the Continuum-Random-Phase-Approximation (CRPA) is applied to describe the Fermi and Gamow-Teller strength distributions within a wide excitation-energy interval. To take nucleon pairing in open-shell nuclei into account, we formulate an isospin-selfconsistent version of the proton-neutron-quasiparticle-CRPA (pn-QCRPA) approach by incorporating the BCS model into the CRPA method. The isospin and configurational splittings of the Gamow-Teller giant resonance are analyzed in single-open-shell nuclei. The calculation results obtained for $^{208}$Bi, $^{90}$Nb, and Sb isotopes are compared with available experimental data.
An isospin-selfconsistent approach based on the Continuum-Random-Phase-Approximation (CRPA) is applied to describe the Fermi and Gamow-Teller strength distributions within a wide excitation-energy interval. To take into account nucleon pairing in open-shell nuclei, we formulate an isospin-selfconsistent version of the proton-neutron-quasiparticle-CRPA (pn-QCRPA) approach by incorporating the BCS model into the CRPA method. The isospin and configurational splittings of the Gamow-Teller giant resonance are analyzed in single-open-shell nuclei. The calculation results obtained for $^{208}$Bi, $^{90}$Nb, and Sb isotopes are compared with available experimental data.
Introduction
The Fermi (F) and Gamow-Teller (GT) strength functions in medium-heavy mass nuclei have been studied for a long time. The subject of studies of the GT-strength distribution is closely related to the weak probes of nuclei (single and double β-decays, neutrino interaction, etc.) as well as to the direct charge-exchange reactions ((p,n), ($^{3}$He,t), etc.). The weak probes deal with the low-energy part of the GT strength distribution (see, e.g., [1,2]). The direct reactions allow one to study the distribution in a wide excitation-energy interval including the region of the GT giant resonance (GTR) (see, e.g., [3-7] and references therein) and also the high-energy region (see, e.g., [8,9]). The discovery of the isobaric analog resonances (IAR) and the subsequent study of their properties have allowed one to conclude on the high degree of the isospin conservation in medium-heavy mass nuclei. It means that the IAR exhausts almost the total Fermi strength, with the rest mainly exhausted by the isovector monopole giant resonance (IVMR) [1,10].
The Random-Phase-Approximation (RPA)-based microscopical studies of the GT strength distribution started from the consideration of a schematic three-level model taking the direct, core-polarization, and back-spin-flip transitions into account [11]. One or two weakly collectivized GT states along with the GTR were found in the model. Realistic Continuum-Random-Phase-Approximation (CRPA) calculations for the GT strength distribution were performed in [12] and [13]. Some of the states predicted in [11] were found in [13]. Attempts to describe the GT strength distribution in detail within the RPA+Hartree-Fock model have been undertaken recently in [14]. Unfortunately, the authors of [12,14] did not address the following questions: 1) the GT strength distribution in the high-energy region of the isovector spin-monopole giant resonance (IVSMR); 2) effects of the nucleon pairing; 3) the isospin splitting of the GTR.
Having taken the nucleon pairing into consideration in CRPA calculations, the authors of [13] predicted the configurational splitting of the GTR in some nuclei and initiated the respective experimental search for the effect [3]. In [13], however, the influence of the particle-particle interaction in the charge-exchange channel was not taken into account. Along with the pairing interaction in the neutral channel, the proton-neutron interaction is taken into consideration within the proton-neutron-quasiparticle-Random-Phase-Approximation (pn-QRPA) to describe the double β-decay rates (see, e.g., [2] and references therein). Unfortunately, most current versions of the pn-QRPA do not treat the single-particle continuum, which hinders the description of both the high-lying IVMR and IVSMR. In addition, it seems that the question whether the modern versions of the pn-QRPA comply with the isospin conservation has not been raised yet.
The present paper is stimulated partially by the experimental results of [3-7] and by the intention to overcome (to some extent) the shortcomings of previous approaches. As a base we use the isospin-selfconsistent CRPA approach of [17-19], where direct-decay properties of giant resonances have been mainly considered. We pursue the following goals: 1) application of the isospin-selfconsistent CRPA approach to describe the F and GT strength distributions in closed-shell nuclei within a wide excitation-energy interval including the region of the isovector monopole and spin-monopole giant resonances; 2) formulation of an approximate method to deal with the isospin splitting of the GTR; 3) taking into account the spin-quadrupole part of the particle-hole interaction for the description of GT excitations; 4) incorporation of the BCS model into the CRPA method to formulate an isospin-selfconsistent version of the proton-neutron-quasiparticle-Continuum-Random-Phase-Approximation (pn-QCRPA) approach; 5) application of the pn-QCRPA approach to describe the F and GT strength distributions in single-closed-shell nuclei within a wide excitation-energy interval; 6) examination of the GTR configurational splitting effect within the pn-QCRPA and its impact on the total GTR width.
We restrict ourselves to the analysis within the particle-hole subspace and, therefore, do not address the question of the influence of 2p-2h configurations (see, e.g., [15]) or the quenching effect (see, e.g., [16]). However, we simulate the coupling of the GT states with many-quasiparticle configurations using an appropriate smearing parameter. The above-listed goals could apparently also be achieved within the pn-QCRPA approach based on the density-functional method [20]. However, the authors of [20] focused their efforts mainly on the analysis of the low-energy part of the GT strength distribution relevant for astrophysical applications. The paper is organized as follows. In Section 2 the basic relationships of the isospin-selfconsistent CRPA approach are given. They include: description of the model Hamiltonian, its symmetries and the respective sum rules (subsections 2.1 and 2.2); the CRPA equations with the spin-quadrupole part of the particle-hole interaction taken into account for the description of the GT excitations (subsection 2.3); the approximate description of the isospin splitting of the GTR (subsection 2.4). In Section 3 we extend the CRPA approach to incorporate the BCS model to describe both the nucleon pairing in neutral channels and the respective interaction in the charge-exchange particle-particle channels. The generalized model Hamiltonian, its symmetries and the respective sum rules are given in subsections 3.1 and 3.2. The standard pn-QCRPA equations are reformulated in terms of radial parts of the transition density and the free two-quasiparticle propagator. That allows us to formulate the coordinate-space representation for the inhomogeneous system of the pn-CRPA equations (subsection 3.3). As applied to single-open-shell nuclei, a version of the pn-QCRPA approach is formulated on the basis of the above system (subsection 3.4). The choice of the model parameters and the calculation results concerning the F and GT strength distributions in $^{208}$Bi, $^{90}$Nb, and Sb isotopes are presented in Section 4. A summary of the approach and a discussion of the calculation results are given in Section 5.
Description of the Fermi and GT strength functions in closed-shell nuclei
Model Hamiltonian
We use a simple, and at the same time realistic, model Hamiltonian to analyze the Fermi, "Coulomb" (C), and GT strength functions in medium-heavy mass spherical nuclei. The Hamiltonian consists of the mean field $U(x)$, including the phenomenological isoscalar part $U_0(x)$ along with the isovector $U_1(x)$ and the Coulomb $U_C(x)$ parts calculated consistently in the Hartree approximation (Eq. (1)). Here, $U_0(r)$ and $U_{so}(x) = U_{so}(r)\,\boldsymbol{\sigma}\mathbf{l}$ are the central and spin-orbit parts of the isoscalar mean field, respectively; $v(r)$ is the symmetry potential. The potential $U(x)$ determines the single-particle levels with the energies $\varepsilon_\lambda$ ($\lambda = \pi$ for protons and $\lambda = \nu$ for neutrons, $\lambda$ is the set of the single-particle quantum numbers, $(\lambda) = \{lj\}$) and the radial wave functions $r^{-1}\chi_\lambda(r)$, along with the radial Green's functions $(rr')^{-1} g_{(\lambda)}(r, r'; \varepsilon)$. On the basis of Eq. (1), one can get expressions relating the matrix elements of the single-particle Fermi ($\tau^{(-)}$) and GT ($\sigma_\mu \tau^{(-)}$) operators ($\sigma_\mu$ are the spherical Pauli matrices), $(\pi)(\nu) = \frac{1}{\sqrt{3}}\,(\pi)\sigma(\nu)$. We choose the Landau-Migdal forces to describe the particle-hole interaction [21]. The explicit expression for the forces in the charge-exchange channel is given by Eq. (4), where the intensities of the non-spin-flip and spin-flip parts of this interaction, $F_0$ and $F_1$ respectively, are phenomenological parameters.
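The display equations were lost in extraction; as a point of reference, the charge-exchange Landau-Migdal interaction of Eq. (4) presumably has the standard zero-range form sketched below. The overall factor of 2 and the isospin-operator convention are assumptions here, not taken from the paper:

$$\hat F(1,2) = 2\left(F_0 + F_1\,\boldsymbol{\sigma}_1 \cdot \boldsymbol{\sigma}_2\right)\tau^{(-)}_1 \tau^{(+)}_2\,\delta(\mathbf{r}_1 - \mathbf{r}_2),$$

so that $F_0$ multiplies the non-spin-flip (Fermi-type) part and $F_1$ the spin-flip (GT-type) part of the particle-hole interaction.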
2.2. The symmetries of the model Hamiltonian and sum rules.
The model Hamiltonian $\hat H$ complies with the isospin symmetry provided that the conditions of Eqs. (5), (6) are fulfilled. Using the RPA in the coordinate representation for closed-shell nuclei, one can get, according to Eqs. (1), (4)-(6), the well-known selfconsistency condition of Eq. (7) [22,23], where $n^{(-)}(r)$ is the neutron-excess density and $n^{\beta}_{\lambda}$ are the occupation numbers ($\beta = p, n$). The selfconsistency condition relates the symmetry potential to the Landau-Migdal parameter $F_0$.
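For orientation, a minimal sketch of what Eqs. (5) and (7) presumably state, following the standard isospin-selfconsistency argument (the coefficients below depend on the convention for the isovector mean field and are assumptions): the commutator of the model Hamiltonian with the Fermi operator reduces to a purely Coulomb term,

$$[\hat H, \hat T^{(-)}] = \sum_a U_C(x_a)\,\tau^{(-)}_a,$$

provided that the symmetry potential is generated by the Landau-Migdal interaction acting on the neutron-excess density,

$$v(r) = 2 F_0\, n^{(-)}(r), \qquad n^{(-)}(r) = n_n(r) - n_p(r).$$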
The equation of motion for the GT operator $\hat Y^{(-)}_a$ can be derived within the RPA analogically to Eq. (5) [23] (Eq. (9)). Equations (5) and (9) allow one to derive some relationships useful for checking the calculated strength functions within the RPA. The strength function corresponding to the single-particle probing operator $\hat V^{(\mp)} = \sum_a V(x_a)\,\tau^{(\mp)}_a$ is defined by Eq. (11), with $\omega_s = E_s - E_0$ being the excitation energy of the corresponding isobaric nucleus measured from the ground state of the parent nucleus. Using Eqs. (5) and (11), one gets the relationship of Eq. (12). Here, the Fermi and "Coulomb" strength functions correspond to the probing operators $\hat T^{(\mp)}$ and $\hat U^{(\mp)}_C$, respectively. The model-independent non-energy-weighted sum rule (NEWSR) for F ($0^+$) and GT ($1^+$) excitations is well known [24] (Eq. (13)). The energy-weighted sum rules (EWSR) are rather model-dependent and, according to Eqs. (5), (9), and (10), are given by Eqs. (14), (15), with $\langle 0|\hat U_{so}|0\rangle = \sum_{\beta\lambda} n^{\beta}_{\lambda}\,(j^{\beta}_{\lambda} - l^{\beta}_{\lambda})\,l^{\beta}_{\lambda}(l^{\beta}_{\lambda} + 1)\,(U_{so})_{\lambda\lambda}$. According to Eqs. (5), (13), and (15), the exact (within the RPA) isospin SU(2) symmetry is realized for the model Hamiltonian in question in the limit in which the Coulomb mean field is replaced by a constant, $U_C(r) \to \Delta_C$. In this limit the energy $E_A$ and the wave function $|A\rangle$ of the "ideal" isobaric analog state (IAS) are given by Eq. (16). The "ideal" IAS exhausts 100% of $(\mathrm{NEWSR})_0$. In calculations with the use of a realistic Coulomb mean field, the IAS exhausts almost 100% of $(\mathrm{NEWSR})_0$ (with the rest exhausted mainly by the IVMR). Therefore, one can approximately use the isospin classification of the nuclear states. Redistribution of the Fermi strength is caused mainly by the Coulomb mixing of the IAS and the states having the "normal" isospin $T_0 - 1$ ($T_0 = (N - Z)/2$ is the isospin of both the parent-nucleus ground state and its analog state). In the present model the mixing is due to the difference $U_C(r) - \Delta_C$. The Wigner SU(4) symmetry is realized in the limit of a vanishing spin-orbit field. In this limit the energy $E_G$ and the wave function $|G\mu\rangle$ of the "ideal" Gamow-Teller state (GTS) are given by the analogs of Eq. (16). Redistribution of the GT strength is mainly due to the spin-orbit part of the mean field.
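The sum rule referred to as Eq. (13) is model-independent (the Fermi and GT versions of the Ikeda sum rule), and the "ideal" IAS can be written explicitly; the notation below is a hedged reconstruction of the lost display equations, with $\hat Y^{(\mp)}_\mu = \sum_a \sigma_\mu \tau^{(\mp)}_a$ assumed for the GT operator:

$$(\mathrm{NEWSR})_0 = \sum_s \left(|\langle s|\hat T^{(-)}|0\rangle|^2 - |\langle s|\hat T^{(+)}|0\rangle|^2\right) = N - Z,$$

$$(\mathrm{NEWSR})_1 = \sum_{s,\mu} \left(|\langle s|\hat Y^{(-)}_\mu|0\rangle|^2 - |\langle s|\hat Y^{(+)}_\mu|0\rangle|^2\right) = 3\,(N - Z),$$

$$|A\rangle = (N - Z)^{-1/2}\,\hat T^{(-)}|0\rangle,$$

with $E_A = \Delta_C$ in the limit in which the Coulomb mean field is replaced by its neutron-excess average $\Delta_C$.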
The strength functions within the continuum-RPA
The distribution of the particle-hole strength can be calculated within the continuum-RPA making use of the full basis of the single-particle states. Based on the above-described model Hamiltonian, the CRPA equations for calculations of the F (C) and GT strength functions can be derived using the methods of the finite Fermi-system theory [21]. Let $\hat V^{(\mp)}_{JLSM}$ be the multipole charge-exchange probing operator leading to excitations with the angular momentum $J$ and parity $\pi = (-1)^L$. Here, $T_{JLSM}$ is the irreducible spin-angular tensor operator with $\sigma_{00} = 1$, $\sigma_{1\mu} = \sigma_\mu$. In particular, we have $V_{000}(r) = 1$ ($U_C(r)$), $T_{0000} = 1$ and $V_{101}(r) = 1$, $T_{101\mu} = \sigma_\mu$ for the description of F (C) and GT excitations, respectively. After separation of the isospin and spin-angular variables, the strength functions corresponding to the probing operators $\hat V^{(\mp)}_{JLSM}$ are determined by Eqs. (17), (18). Here, $\tilde V^{(\mp)}_{JLS}(r, \omega)$ are the effective radial probing operators, $F_{S=0,1}$ are the interaction intensities of Eq. (4), and $(4\pi r r')^{-1} A^{(\mp)}_{JLS,JL'S'}(r, r'; \omega)$ are the radial free particle-hole propagators of Eq. (19). The expression for $A^{(+)}$ can be obtained from Eq. (19) by the substitution $\pi \leftrightarrow \nu$, with $\omega$ being the excitation energy of the daughter nucleus in the $\beta^+$-channel, measured from the ground state of the parent nucleus. For $J^\pi = 0^+$ excitations ($L = S = 0$) there is only one non-zero propagator, $A^{(\mp)}_{000,000}$, and, therefore, Eqs. (17), (18) have the simplest form. Such a form has been used explicitly in [18] for describing the IAR and IVMR. For $1^+$ excitations the propagators $A^{(\mp)}_{1L1,1L'1}$ are diagonal with respect to $S$ and, therefore, Eq. (18) is a system of equations for the effective operators $\tilde V^{(\mp)}_{101}$ and $\tilde V^{(\mp)}_{121}$. As a result, the spin-quadrupole part of the particle-hole interaction contributes to the formation of the GT strength function, as follows from Eqs. (17), (18). The use of only the diagonal (with respect to $L$) propagators $A^{(\mp)}_{JL1,JL1}$ corresponds to the so-called "symmetric" approximation. In particular, this approximation was used in [17,19] to describe the monopole and dipole spin-flip charge-exchange excitations. As a rule, the use of the "symmetric" approximation leads to just small errors in calculations of the strength functions $S^{(\mp)}_{JLS}$ in the vicinity of the respective giant resonance (see, e.g., [25]). Nevertheless, a detailed description of the low-energy part of the GT strength distribution calls for the "non-symmetric" approximation.
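Schematically, the CRPA system referred to as Eqs. (17)-(19) has the familiar structure of an integral equation for the effective probing operator together with a strength-function formula; the radial weights and signs below are assumptions made only to indicate that structure, not a reconstruction of the exact equations:

$$\tilde V^{(\mp)}_{JLS}(r,\omega) = V_{JLS}(r) + 2F_S \sum_{L'} \int A^{(\mp)}_{JLS,JL'S}(r,r';\omega)\, \tilde V^{(\mp)}_{JL'S}(r',\omega)\, dr',$$

$$S^{(\mp)}_{JLS}(\omega) = -\frac{1}{\pi}\,\mathrm{Im} \sum_{L'} \int V_{JLS}(r)\, A^{(\mp)}_{JLS,JL'S}(r,r';\omega)\, \tilde V^{(\mp)}_{JL'S}(r',\omega)\, dr\, dr'.$$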
The F and GT strength functions $S^{(\mp)}_J(\omega)$ (hereafter, the indexes $L = 0$, $S = J$ are omitted), calculated within the CRPA for not-too-high excitation energies (including the region of the IAR and GTR), reveal narrow resonances corresponding to the particle-hole-type doorway states. Therefore, the parametrization of Eq. (20) holds in the vicinity of each doorway state, where $(r_J)_s$, $\omega_s$, and $\Gamma_s$ are the strength, energy, and escape width of the doorway state, respectively.
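A minimal sketch of the doorway-state parametrization of Eq. (20), assuming the usual Breit-Wigner (Lorentzian) form:

$$S^{(\mp)}_J(\omega) \simeq \frac{(r_J)_s}{2\pi}\;\frac{\Gamma_s}{(\omega - \omega_s)^2 + \Gamma_s^2/4},$$

so that integrating over an isolated resonance recovers the strength $(r_J)_s$.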
The values of $\bigl(\int \omega\, S^{(\mp)}_J(\omega)\, d\omega\bigr)/(\mathrm{EWSR})_J$ are usefully compared with unity to check the quality of the calculation results. Due to the relations given in Eqs. (11) and (20), the ratio of the "Coulomb" to the Fermi strength has to be $\omega^2_s$ for each doorway state.
Isospin splitting of the GT strength distribution.
Due to the high degree of the isospin conservation in nuclei, the GT states are classified by the isospin values $T_0 - 1$, $T_0$, $T_0 + 1$. In particular, the $T_0$ components of the GT strength function can be considered as the isobaric analogs of the isovector M1 states in the respective parent nucleus. According to the definition of Eq. (11), the strength function of the $T_0$ components is proportional to the strength function of the isovector M1 giant resonance in the parent nucleus (Eq. (22)). Here, $S_{M1}(\omega')$ is the M1 strength function corresponding to the probing operator $\hat M^{(0)}_a$ and depending on the excitation energy in the daughter nucleus. Within the CRPA the M1 strength function is calculated using relations similar to those of Eqs. (17), (18), which can be found, e.g., in [27].
According to Eq. (22), a noticeable effect of the isospin splitting of the GT strength function takes place only for nuclei with a not-too-large neutron excess. The suppression of the T_0 and T_0 + 1 components is the reason why the GT states |s⟩ obtained within the RPA are usually assigned the isospin T_0 − 1, although the RPA GT states do not have a definite isospin. In fact, the states |s⟩ and |M, T_0⟩ are non-orthogonal (Eq. (23)), where Q^{(−)}_s is the RPA boson-type operator corresponding to the creation of a collective GT state |s⟩. Therefore, one has to project the |s⟩ states onto the space of GT states with isospin T_0 − 1 = T_< by subtracting the admixtures of T_0 states to enforce the relevant orthogonality condition (Eq. (24)). This equation is valid under the assumption that the integral relative strength x_> = Σ_M x_M of the T_0 = T_> component is small compared to unity; in this case, taking the T_0 + 1 component into account is even less important. We restrict the further analysis to the approximation that there is only one GT state in the respective RPA calculations, exhausting 100% of the NEWSR, so that this state can be considered the "ideal" GTS. Under these assumptions one obtains Eq. (25). Having averaged the exact nuclear Hamiltonian over the state |s, T_0⟩ in the form of Eq. (24), one gets Eq. (26). According to the approximate relations (25) and (26), the relative strength and the energy of the T_0 − 1 GTS diminish as compared with the respective RPA values. The decrease is determined by the zeroth and first moments of the strength function of the isovector M1 GR in the parent nucleus (Eq. (27)). These relations again lead to the conclusion that the value x_> ≃ 2A^{1/3}(N − Z)^{−2} is rather small even if (N − Z) is not large. Only for nuclei with a minimal neutron excess (a few units) could all three isospin components of the GT strength have comparable strengths.
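As a small numerical illustration of the approximate relations (25)-(27), here is a first-order sketch of our own (not from the paper); the RPA GTS is taken to exhaust 100% of the NEWSR, as assumed above, and the RPA GTS energy used in the usage line is a hypothetical placeholder.

    def project_gts(x_rpa, e_rpa, x_gt, e_gt):
        """First-order corrections from removing the T0 admixture:
        the strength drops by x_> and the energy shifts downward by
        roughly x_> * (E_> - E_<) / x_<."""
        x_less = x_rpa - x_gt
        e_less = e_rpa - x_gt * (e_gt - e_rpa) / x_less
        return x_less, e_less

    # 90Nb values quoted later in the text: x_> = 4.7%, E_> = 12.8 MeV
    print(project_gts(1.0, 9.0, 0.047, 12.8))  # e_rpa = 9.0 MeV is a placeholder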
The model Hamiltonian
The interaction Ĥ_{p-p} in the particle-particle channels has to be included in the model Hamiltonian for nuclei with open shells in order to take the effects of nucleon pairing into consideration. We choose the interaction in a separable form (as used in the BCS model) in both the neutral and charge-exchange particle-particle channels, with the total angular momentum and parity of the nucleon pair being J^π = 0⁺, 1⁺ (Eq. (28)). Here, G_{J=0,1} are the intensities of the particle-particle interaction and P_{Jµ} is the annihilation operator for the nucleon pair (Eq. (29)), where a_{λm} (a⁺_{λm}) is the annihilation (creation) operator of a nucleon in the state with quantum numbers λ, m (m is the projection of the particle angular momentum). The interaction (28) preserves the isospin symmetry of the model Hamiltonian.
We use the Bogolyubov transformation to describe the nucleon pairing in the neutral channels in terms of quasiparticle creation (annihilation) operators α⁺_{λm} (α_{λm}) (see, e.g., [28]). As a result, we get the model Hamiltonian (30) describing F(C) and GT excitations in the β⁻ channel within the quasiboson version of the pn-QRPA. Here, µ_β and ∆_β are the chemical potential and the energy gap, respectively, which are determined from the BCS-model equations (31). The total interaction Hamiltonian in both the particle-hole (4) and particle-particle (28) channels can be expressed in terms of the quasiparticle (pn)-pair creation and annihilation operators, which obey approximately bosonic commutation rules (32); the explicit expressions for the interactions are given by Eqs. (33), (34).
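For concreteness, a minimal numerical sketch (ours, not the authors' code) of solving BCS-model equations of this type is shown below. The pairing convention and the toy level scheme are illustrative assumptions; the 7 MeV truncation window around the Fermi level follows the parameter choice described later in the text.

    import numpy as np
    from scipy.optimize import fsolve

    def bcs_equations(params, eps, deg, G, N, window=7.0):
        """Gap and particle-number equations; eps are single-particle
        energies (MeV), deg the degeneracies 2j+1, G the pairing strength,
        N the nucleon number of the subsystem."""
        mu, gap = params
        E = np.sqrt((eps - mu) ** 2 + gap ** 2)   # quasiparticle energies
        v2 = 0.5 * (1.0 - (eps - mu) / E)         # BCS occupation factors
        mask = np.abs(eps - mu) < window          # truncation window
        n_eq = np.sum(deg * v2) - N
        gap_eq = gap - 0.25 * G * np.sum(deg[mask] * gap / E[mask])
        return [n_eq, gap_eq]

    # toy neutron levels -- purely illustrative numbers
    eps = np.array([-12.0, -9.5, -7.8, -4.2, -2.1, 0.5])
    deg = np.array([6.0, 8.0, 4.0, 10.0, 2.0, 12.0])
    mu, gap = fsolve(bcs_equations, x0=[-5.0, 1.0], args=(eps, deg, 0.3, 20.0))
    print(mu, gap)  # B_calc ~ -mu + gap can be compared with experiment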
The symmetries of the model Hamiltonian and sum rules.
Due to the isobaric invariance of the particle-particle interaction (28), Eq. (5) still holds and leads to exactly the same self-consistency condition of Eq. (7), with the only difference that the proton and neutron densities are determined with account for the particle redistribution caused by the nucleon pairing (Eq. (35)). The direct realization of Eq. (5) within the pn-QRPA, making use of Eqs. (30)-(34) and T^{(−)} = Σ_{πν} (χ_π χ_ν)(Q^{00}_{πν})⁺, also leads to the self-consistency condition of Eq. (7), provided that the full basis of single-particle states for the neutron and proton subsystems is used along with Eq. (2) for the radial overlap integrals of the proton and neutron wave functions. The consistent mean Coulomb field U_C(r) and (EWSR)_0 of Eq. (15) are determined by the proton and neutron-excess densities of Eq. (35), respectively.
Eq. (12), relating the Fermi and "Coulomb" strength functions, also holds within the pn-QRPA if all the above-mentioned conditions are fulfilled. In particular, the use of a truncated basis of single-particle states within the BCS model leads to an unphysical violation of the isospin symmetry. In such a case, the degree to which Eq. (12) is violated can be considered a measure of this violation.
The equation of motion for the GT operator Ŷ^{(−)}_µ, with the nucleon pairing taken into consideration, is somewhat modified as compared to Eqs. (9), (10). According to Eqs. (30)-(34), and making use of the full basis of single-particle states along with Eq. (2) for the radial overlap integrals, we get within the pn-QRPA the modified equation of motion (36). The operator (Û^{(−)}_µ)_{p-h} is defined by expression (10), in which both the neutron-excess and proton densities are used according to Eq. (35). The expression for (Û^{(−)}_µ)_{p-p} is given by Eq. (37). According to Eqs. (10), (36), and (37), the expression for (EWSR)_1 consists of two terms: the former, (EWSR)^{p-h}_1, coincides with Eq. (16) with the nucleon densities and occupation numbers appropriately modified by the nucleon pairing, while the latter, (EWSR)^{p-p}_1, is due to the nucleon pairing only (Eq. (38)). In the SU(4)-symmetry limit one has the equality (EWSR)_1 = (EWSR)_0, which also holds for double-closed-shell nuclei.
The pn-QRPA equations.
The system of homogeneous equations for the forward and backward amplitudes X^J_{πν}(s) = ⟨s, Jµ|(A^{Jµ}_{πν})⁺|0⟩ and Y^J_{πν}(s) = ⟨s, Jµ|Ã^{Jµ}_{πν}|0⟩, respectively, is usually solved to calculate the energies ω_s and the wave functions |s, Jµ⟩ of the isobaric nucleus within the quasiboson version of the pn-QRPA (see, e.g., [2]). In particular, the system of equations for the amplitudes follows from the equations of motion for the operators A⁺ and Ã, making use of the Hamiltonian (30), (33), (34). Instead, we rewrite the system in equivalent terms of the elements r⁻² ϱ^J_i(s, r) of the radial transition density. The elements are determined by the amplitudes X(s) and Y(s) according to Eq. (39), where the operators P and Q are defined after Eqs. (33), (34). According to the definition (39), the elements ϱ_1, ϱ_2, ϱ_3, ϱ_4 can be called, respectively, the particle-hole, hole-particle, hole-hole, and particle-particle components of the transition density, which can generally be considered a 4-dimensional vector {ϱ^J_i}. In particular, the particle-hole strength of the state |s, Jµ⟩ corresponding to a probing operator V^{(−)}_{Jµ} is determined by the element ϱ^J_1 (cf. the strength (r_J)_s of Eq. (20)). The pn-QRPA system of equations is given by Eqs. (44), (45); the elements of the free two-quasiparticle propagator A^{(+)} can be obtained from the corresponding elements of A^{(−)} by the substitution π ↔ ν, p ↔ n.
Choice of the model parameters
Parametrization of the isoscalar part of the mean field U_0(x), along with the values of the parameters used in the calculations, has been described in detail in [29]. The dimensionless intensities f_J of the Landau-Migdal forces of Eq. (4) are chosen as usual: F_J = f_J · 300 MeV fm³. The value f_0 = f′ = 1.0 of the parameter f′ determining the symmetry potential according to Eq. (7) is also taken from Ref. [29], where the experimental nucleon separation energies have been satisfactorily described for closed-shell subsystems in a number of nuclei. The value g′ = 0.8 of the dimensionless intensity f_1 = g′ of the spin-isovector part of the Landau-Migdal forces is chosen to reproduce, within the CRPA, the experimental energy of the GTR in the 208Pb parent nucleus [17].
The strength of the monopole particle-particle interaction G_0 is chosen to reproduce the experimental pairing energies (actually, we identify the pairing energy with ∆ obtained by solving the BCS-model equations (31)). The summation in the second of these equations is limited to the interval of 7 MeV above and below the Fermi level. The same truncation is used in expressions like (46)-(48) for the elements of the free two-quasiparticle propagator. Comparing the calculated value of the nucleon separation energy B^{calc}_β ≃ −µ_β + ∆_β with the corresponding experimental one can be considered a test of the used version of the BCS model. The parameters of the BCS model, along with B^{calc}_β and B^{exp}_β, are listed in Table 1. Note also that the strength of the spin-spin particle-particle interaction G_1 is chosen equal to G_0, which seems close to the realistic values used in the literature. Such a choice allows us to simplify the calculation control using (EWSR)_1, because the "particle-particle" part of this sum rule goes to zero in accordance with Eq. (38). Another reason for this choice is the "soft" spin-isospin SU(4) symmetry, which is roughly realized in nuclei (see, e.g., [26]).
F (C) strength functions
The F(C) strength functions have been calculated within the CRPA for the 208Pb parent nucleus according to Eqs. (17), (18) and within the pn-QCRPA for 90Zr and the tin isotopes according to Eqs. (44), (45) or their modifications. The "cut-off" parameter k = 3 was used in calculations of the radial propagators of Eqs. (46)-(48). All of the above-mentioned equations have been taken for J = 0. The following characteristics of the F strength distribution have been deduced from the calculated F strength functions: the IAR energy E_IAR; the mean energy Ē^{(−)}_{IVMR} of the IVMR^{(−)}; the relative Fermi strength x for three energy intervals (the vicinity of the IAR, the IVMR^{(−)}, and the IVMR^{(+)}); and the relative energy-weighted Fermi strength y for these three energy regions. As for the low-energy interval, for all nuclei in question the values of x are below 0.1% (except for 90Zr, where x ≃ 0.2%). The value of (EWSR)_0 entering the definition of y has been calculated according to Eq. (15) with the nucleon densities modified by the nucleon pairing. All the characteristics of the F strength distribution are listed in Table 2 along with the experimental IAR energies. To check the self-consistency of the model, the parameter z = (∫ ω [S^{(−)}_0(ω) − S^{(+)}_0(ω)] dω)/(EWSR)_0 − 1 has also been calculated.
GT strength functions
The GT strength functions have been calculated within the CRPA for the 208Pb parent nucleus according to Eqs. (17), (18) and within the pn-QCRPA for 90Zr and the tin isotopes according to Eqs. (44), (45) or their modifications taken for J = 1. From the calculated GT strength functions, the relative GT strength x and the relative energy-weighted GT strength y have been deduced for four energy intervals (low-energy, the vicinity of the GTR, high-energy, and the vicinity of the IVSMR^{(+)}). The value of (EWSR)^{p-h}_1 entering the definition of y has been calculated according to Eq. (16) with the nucleon densities and occupation numbers modified by the nucleon pairing. The value of (EWSR)^{p-p}_1, defined according to Eq. (38), equals zero because G_1 = G_0 in our calculations. The values of the parameters x and y are listed in Table 3 along with the calculated energies of the GTR. The values of the relative strengths x_s = r_s/(N − Z) of the GT doorway states, calculated according to Eq. (20) with and without taking the nucleon pairing into consideration, are shown in Figs. 2-4 (solid and dotted vertical lines, respectively).
The coupling of the GT doorway states with many-quasiparticle configurations is taken into consideration phenomenologically and is described, on average over the energy, in terms of the smearing parameter I(ω), which simulates the mean spreading width of the doorway states. Following [17], we choose I(E_x) to be a universal function revealing saturation at rather high excitation energies (Eq. (49)), where α and B are adjustable parameters and E_x is the excitation energy in the daughter nucleus. Equation (49) is close to the parametrization of the intensity of the optical-potential imaginary part obtained within a modern version of the optical model. The choice of the parameters α = 0.09 MeV⁻¹ and B = 7 MeV has allowed us to describe satisfactorily the total widths of a number of isovector giant resonances (including the GTR) in the 208Pb parent nucleus [19]. In this work, the energy-averaged GT strength function is calculated as S^{(−)}_1(ω + iI(E_x)/2), using the same parametrization of I, where S^{(−)}_1 is the GT strength function calculated according to the CRPA or pn-QCRPA equations. The calculation results are shown in Figs. 2-4 (thin lines). The calculated energy dependence in the vicinity of the GTR is approximated by the Breit-Wigner formula to get both the GTR energy E_GTR and width Γ_GTR. The values of E_GTR and Γ_GTR obtained thereby, along with the values calculated without taking the nucleon pairing into consideration (in brackets), are listed in Table 3 in comparison with the corresponding experimental data.
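To illustrate the smearing procedure, here is a sketch of our own. The saturating functional form of I(E_x) below is an assumption standing in for Eq. (49) (only the parameter values α and B are taken from the text), and treating the smearing as an addition to each doorway state's width in a sum of Lorentzians is an approximation we adopt for illustration.

    import numpy as np

    def smearing(e_x, alpha=0.09, b=7.0):
        # assumed saturating form; alpha (MeV^-1) and B (MeV) from the text
        return alpha * e_x**2 / (1.0 + (e_x / b) ** 2)

    def smeared_strength(omega, r_s, w_s, gamma_s):
        """Energy-averaged strength: a sum of doorway-state Lorentzians
        (cf. Eq. (20)) with total width Gamma_s + I(E_x)."""
        s = np.zeros_like(omega)
        for r, w, g in zip(r_s, w_s, gamma_s):
            tot = g + smearing(omega)
            s += r / (2.0 * np.pi) * tot / ((omega - w) ** 2 + 0.25 * tot**2)
        return s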
We analyze the effect of the isospin splitting according to Eqs. (24), (26) without taking the nucleon pairing into consideration, because this effect turns out to be weak even for such a relatively light parent nucleus as 90Zr. In this case there is only one T_0 component of the GT strength function in 90Nb (due to the g_{9/2} → g_{7/2} M1 transition in the proton subsystem). Its relative strength and energy are x_> = 4.7% and E_> = 12.8 MeV, respectively. This leads to a decrease in the relative strength and energy of the GTR equal to x_> and x_>(E_> − E_<)/x_< = 0.3 MeV, respectively, as compared with the values found within the CRPA.
Summary of the approach
We have extended the CRPA approach, used previously in [17-19] for describing the GTR and the IAR in closed-shell nuclei, and have formulated a version of the pn-QCRPA approach for describing the GT and F strength distributions in open-shell nuclei. The common ingredients of both approaches are: the phenomenological isoscalar part of the nuclear mean field; the isovector part of the Landau-Migdal particle-hole interaction; the isospin self-consistency condition used to calculate the symmetry potential; the mean Coulomb field calculated in the Hartree approximation; the use of the full basis of single-particle states in the particle-hole channel and, as a result, the formulation of continuum versions of the RPA; and a phenomenological description of the coupling of the particle-hole doorway states with many-quasiparticle configurations in terms of the mean doorway-state spreading width (the smearing parameter).
The specific feature of the developed pn-QCRPA approach is the use of an isospin-invariant particle-particle interaction to describe both the nucleon pairing phenomenon in the neutral channels and the particle-particle interaction in the charge-exchange channels. A method to check the isospin self-consistency, which could be violated by the use of a truncated basis of single-particle states in the particle-particle channel, is also employed.
F (C) strength distribution
The present results (Table 2) show at the microscopic level that isospin is a good quantum number for medium-heavy nuclei. In particular, the IAR exhausts almost all of the F strength; the rest is mainly exhausted by the IVMR. The heavier the nucleus, the larger the relative F strength of the IVMR becomes. Similar results have also been obtained for the energy-weighted Fermi-strength distribution.
The appropriate choice of the particle-particle interaction, along with the use of the same truncated basis of single-particle states in the neutral and charge-exchange channels, ensures the isospin self-consistency of the approach (the calculated values of z are found to be less than 0.1%). For the same reason, the effect of the nucleon pairing on the F strength distribution is small.
The systematic lowering of the IAR energy compared with the experimental values is a shortcoming of the approach, possibly caused by the absence of full self-consistency.
GT strength distribution
For all nuclei in question, the calculated GT strength function splits into three main regions (see Table 3 and Figs. 2-4). The main part of the GT strength (70-80%) is exhausted by the GTR. In the case of 208Bi, the calculated GTR relative strength is found to be in reasonable agreement with the respective experimental value x^{exp}_{GTR} = (60 ± 15)% [4]. The spin-quadrupole part of the particle-hole interaction is taken into account only for 208Bi; as a result, the calculated GTR relative strength decreases from 68% to 66% and the low-energy GT strength is noticeably redistributed. In the case of 90Nb, the difference between the experimental and calculated results is somewhat larger: x^{exp}_{GTR} = (39 ± 4)% [6], x^{exp}_{GTR} = (61 ± 8)% (x^{exp} = (66^{+20}_{−10})% up to 20 MeV in excitation energy) [5]. This could be partly due to the well-known quenching effect, which has been discussed over the last two decades. This effect has not yet been firmly established experimentally, and its discussion is beyond the scope of this work.
The calculated high-energy part of the GT strength distribution is mainly exhausted by the IVSMR and its satellites. This part is relatively large for 208Bi (about 19%) and decreases to about 9% for 90Nb (4.6% due to the IVSMR and 4.5% due to the T_> component of the GTR). The low-energy part contains the weakly-collectivized GT states, which can be related to those found in [11,13,14]. The nucleon pairing leads to a noticeable enrichment of the calculated low-energy part of the GT strength distribution in open-shell nuclei. This fact is in qualitative agreement with the respective experimental data (shown in Figs. 2-4). The calculated values x_low/x_GTR ≃ 19.2% and 25.8% are found to agree reasonably with the corresponding experimental values 28.2% and 30.4% [7] for 90Nb and 208Bi, respectively. Bearing in mind also the above-mentioned calculated and experimental values of x_GTR, we can expect that the quenching effect is not as noticeable as is usually assumed.
The configurational splitting of the main GT state is found for some single-open-shell nuclei (Figs. 2, 3). In the case of the 90Zr parent nucleus, the splitting is caused by the direct spin-flip transition 1f^n_{7/2} → 1f^p_{5/2}, which becomes possible due to the proton pairing and whose energy is close to the GTR energy calculated without taking this transition into account. The value of the splitting (about 0.5 MeV) is found to be rather small; for this reason, the total GTR width is only slightly increased. In the case of the Sn isotopes, the strongest effect, caused by the 1h^n_{11/2} → 1h^p_{9/2} transition, is found for the 120Sn parent nucleus. The splitting energy is found to be rather large and leads to a noticeable increase of the total GTR width. The calculated isospin splitting energy for 90Nb (4.0 MeV) is found to be in good agreement with the respective experimental value (4.4 MeV [6]).
In conclusion, we have incorporated the BCS model into the CRPA method to formulate a version of a partially self-consistent pn-QCRPA approach for describing multipole particle-hole strength distributions in open-shell nuclei. The approach is applied to describe the Fermi and Gamow-Teller strength functions in a wide excitation-energy interval for a number of single- and double-closed-shell nuclei. A reasonable description of the available experimental data is obtained.
The authors are grateful to Prof. M. Fujiwara and Prof. A. Faessler for thoroughly reading the manuscript and for many valuable remarks. V.A.R. would like to thank the Graduiertenkolleg "Hadronen in Vakuum, in Kernen und Sternen" (GRK683) for supporting his stay in Tübingen and Prof. A. Faessler for hospitality.
Figures 2, 3: The relative GT strengths x in 90Nb and 112-116Sb, calculated within the pn-QCRPA (solid vertical lines) and CRPA (dotted vertical lines), in comparison with the respective experimental data [3,7] (dashed vertical lines). The smeared pn-QCRPA GT-strength distribution S is also shown (thin solid lines).
Figure 4: The relative GT strengths x in 208Bi calculated within the CRPA (solid vertical lines) along with the respective smeared distribution S (thin solid line). The experimental data are taken from [7] (dashed vertical lines).
Table 1: Calculated values of the pairing gap ∆_β along with the calculated and experimental proton and neutron separation energies B^{calc,exp}_{p,n}. The monopole pairing strength G_0 is also given. Columns: Nucleus; G_0 · A (MeV); ∆_β (MeV); B^{calc}_n (MeV); B^{calc}_p (MeV); B^{exp}_n (MeV); B^{exp}_p (MeV).
Taming the TuRMoiL: The Temperature Dependence of Turbulence in Cloud-Wind Interactions
Turbulent radiative mixing layers (TRMLs) play an important role in many astrophysical contexts where cool ($\lesssim 10^4$ K) clouds interact with hot flows (e.g., galactic winds, high velocity clouds, infalling satellites in halos and clusters). The fate of these clouds (as well as many of their observable properties) is dictated by the competition between turbulence and radiative cooling; however, turbulence in these multiphase flows remains poorly understood. We have investigated the emergent turbulence arising in the interaction between clouds and supersonic winds in hydrodynamic ENZO-E simulations. In order to obtain robust results, we employed multiple metrics to characterize the turbulent velocity, $v_{\rm turb}$. We find four primary results, when cooling is sufficient for cloud survival. First, $v_{\rm turb}$ manifests clear temperature dependence. Initially, $v_{\rm turb}$ roughly matches the scaling of sound speed on temperature. In gas hotter than the temperature where cooling peaks, this dependence weakens with time until $v_{\rm turb}$ is constant. Second, the relative velocity between the cloud and wind initially drives rapid growth of $v_{\rm turb}$. As it drops (from entrainment), $v_{\rm turb}$ starts to decay before it stabilizes at roughly half its maximum. At late times cooling flows appear to support turbulence. Third, the magnitude of $v_{\rm turb}$ scales with the ratio between the hot phase sound crossing time and the minimum cooling time. Finally, we find tentative evidence for a length-scale associated with resolving turbulence. Under-resolving this scale may cause violent shattering and affect the cloud's large-scale morphological properties.
INTRODUCTION
While scales and relevant physics may vary, interactions between regions of cooler gas and coherent flows of hotter gas are prominent in many contexts. These interactions are prevalent in the circumgalactic medium (CGM), such as high velocity clouds (e.g., Wakker & van Woerden 1997; Putman et al. 2012), ram-pressure stripping of infalling satellites (e.g., Emerick et al. 2016; Simons et al. 2020) and the resulting streams (e.g., Bland-Hawthorn et al. 2007; Bustard & Gronke 2022), or cooling flows from cosmic accretion (e.g., Mandelker et al. 2020). There are also instances of these interactions within the interstellar medium (ISM), like the stellar-wind-driven bubbles within star-forming clouds (e.g., Lancaster et al. 2021). They are also relevant to the ram-pressure stripping of cluster galaxies and star formation in the tails of jellyfish galaxies (e.g., Tonnesen & Bryan 2021). We take a particular interest in their role within galactic winds (e.g., Fielding & Bryan 2022).
Galactic winds are ubiquitous throughout cosmic time and play a pivotal role in galaxy evolution; they regulate star formation and transport metals out of the interstellar medium (ISM) (Somerville & Davé 2015). Observations indicate that stellar-feedback-driven winds are inherently multiphase; they are composed of comoving gas phases that vary in temperature by orders of magnitude (see Veilleux et al. 2005 and Rupke 2018 for reviews of observational evidence).
Observations favor a model in which supernovae drive hot ≳10⁶ K winds that accelerate and entrain clouds of cool ∼10⁴ K gas from the ISM (e.g., Chevalier & Clegg 1985). This model is complicated by hydrodynamical instabilities that drive mixing of gas between the cloud and wind. Because the timescale for mixing to destroy the cloud (by homogenizing the gas phases) is shorter than the ram-pressure acceleration timescale, it's remarkably difficult to accelerate clouds before they're destroyed (Zhang et al. 2017).
Various ideas have been proposed to address this difficulty. Some, like magnetic shielding (e.g., McCourt et al. 2015; Grønnow et al. 2018; Cottle et al. 2020), may extend the cold-phase lifetime by reducing mixing (see also Forbes & Lin 2019 for other mechanisms). Others are alternative acceleration mechanisms like radiation pressure (e.g., Zhang et al. 2018) or cosmic rays (e.g., Wiener et al. 2019; Brüggen & Scannapieco 2020). Another idea suggests the remnants of destroyed clouds seed the in situ formation of clouds in cooling outflows (e.g., Thompson et al. 2015; Schneider et al. 2018; Lochhaas et al. 2021).
Radiative cooling is also known to extend the cold-phase lifetime (e.g., Mellema et al. 2002; Fragile et al. 2004; Melioli et al. 2005; Cooper et al. 2009). This work focuses on the regime in which rapid cooling acts as a mechanism that facilitates cloud survival (e.g., Marinacci et al. 2010; Armillotta et al. 2016). In this regime, cooling in a thin layer of gas at the interface between the phases is able to overcome the destructive effects of mixing (Gronke & Oh 2018). As turbulent mixing feeds hot-phase material into this layer, isobaric cooling removes the temperature differential in the new material (Fielding et al. 2020). This process facilitates the transfer of mass and momentum to the cold phase, providing a powerful additional acceleration source and allowing cloud growth. Hereafter, we refer to this mechanism as turbulent radiative mixing layer (TRML) entrainment.
The literature largely agrees that the occurrence and efficacy of TRML entrainment is controlled by three principal dimensionless numbers: (i) the density contrast χ = ρ_cl/ρ_w between the cloud and the wind, (ii) the Mach number of the wind M_w = v_w/c_s,hot, and (iii) the cooling efficiency ξ = τ_mix/τ_cool. Here, τ_mix and τ_cool specify the characteristic timescales for mixing and for cooling of the mixing layer. As in Fielding et al. (2020), we primarily consider ξ_sh = t_shear/t_cool,min, where t_shear = R_cl/v_w is the shear time and t_cool,min is the minimum cooling time. In practice, our choice for τ_cool is similar to the popular option of using t_cool,mix, the cooling time of gas within the mixing layer at T_mix ∼ √(T_cl T_w) and n_mix ∼ √(n_cl n_w). It has been suggested that the relevant cooling timescale is instead set by cooling in the hot, volume-filling wind phase (Li et al. 2020; Sparre et al. 2020). We reconcile differences between these cooling timescales in follow-up work (Abruzzo et al., in prep.). Despite the obvious central importance of the turbulence mechanisms underlying the operation of TRMLs, we do not yet have a clear understanding of how the turbulent velocity v_turb changes as the three principal dimensionless numbers (χ, M_w, and ξ) are varied. This is closely related to two fundamental unanswered questions.
(i) What is the role of cooling in driving turbulence? Shear-layer studies find no or very weak cooling-time dependence of v_turb (Fielding et al. 2020; Tan et al. 2021). In contrast, some cloud-crushing simulations find that cooling-induced pulsations may be the dominant driver of turbulence (Gronke & Oh 2020a,b). Reconciling these pictures requires a careful investigation of how v_turb scales with ξ.
(ii) What is the timescale for turbulent mixing? Shear-layer studies associate τ_mix with the eddy turnover time at the outer scale, or τ_mix ∼ L_outer/v_turb(T_cl, ℓ = L_outer), where v_turb(T_cl, ℓ = L_outer) is a fixed fraction of the relative velocity for χ ≳ 100 (Fielding et al. 2020; Tan et al. 2021). This scales similarly to t_shear. Wind-tunnel studies instead link τ_mix with the cloud-crushing time, t_cc = √χ R_cl/v_w. The Kelvin-Helmholtz and Rayleigh-Taylor instabilities have growth times of order t_cc and destroy clouds over a few t_cc in the absence of cooling (Klein et al. 1994). Gronke & Oh (2018) predicts cloud survival when ξ_GO = t_cc/t_cool,mix exceeds unity. While both choices give ξ a v_w R_cl⁻¹ scaling, the latter introduces an extra dependence on χ^{−1/2} (see the sketch below). This discrepancy could have profound impacts on cloud survival criteria and requires a careful understanding of how v_turb scales with χ.
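To make these definitions concrete, here is a small helper of our own (not from any released code) collecting the timescales and dimensionless numbers defined above; note that it substitutes t_cool,min for t_cool,mix in the Gronke & Oh ratio, which is an illustrative assumption only.

    def cloud_wind_numbers(r_cl, v_w, c_s_hot, chi, t_cool_min):
        """Collect the timescales and dimensionless numbers defined above."""
        t_shear = r_cl / v_w                # shear time
        t_cc = chi**0.5 * r_cl / v_w        # cloud-crushing time
        return {
            "M_w": v_w / c_s_hot,           # wind Mach number
            "xi_sh": t_shear / t_cool_min,  # cooling efficiency used here
            "xi_GO": t_cc / t_cool_min,     # Gronke & Oh-style ratio, with
                                            # t_cool,min replacing t_cool,mix
        }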
To address these questions, we investigate the turbulent properties that emerge in wind-tunnel simulations of cloud-wind interactions. While turbulence in TRMLs has traditionally been treated as homogeneous (e.g., Begelman & Fabian 1990; Gronke & Oh 2018; Fielding et al. 2020), we will show that it depends not just on scale but also on phase. This has important implications for mixing and hence cloud survival. Although most previous work on TRML entrainment has focused on cloud-wind density contrasts of χ = 100-300 (see, however, Sparre et al. 2020; Gronke & Oh 2018, 2020a), galactic winds are expected to have χ ≳ 10⁴ (Fielding & Bryan 2022). Furthermore, we have preliminary evidence for important changes to the dynamics and clumping structure for χ ≫ 10² (Gronke & Oh 2020b). In this work we, therefore, place particular emphasis on higher-χ results.
In § 2, we describe the suite of simulations used in this investigation. Videos of these simulations can be found at http://matthewabruzzo.com/visualizations/. In § 3, we describe and compare three approaches for characterizing multiphase turbulence, followed by a description of the results from applying these methods to our simulation suite in § 4. Subsequently, we describe the implications of our results and detail our conclusions in § 5 and § 6.
SIMULATIONS
We ran a suite of 3D uniform-grid hydrodynamical simulations using the enzo-e code, which is a rewrite of enzo (Bryan et al. 2014) built on the adaptive mesh refinement framework cello (Bordner & Norman 2012, 2018). Our simulations employed the van Leer integrator (without constrained transport) (Stone & Gardiner 2009) with second-order reconstruction and the HLLC Riemann solver.
Our simulations begin with a motionless spherical cloud embedded within a hot, uniform, laminar wind in the x direction. We imposed an inflow condition on the upstream boundary (positive x) and outflow conditions on the other boundaries. The cloud and wind material are initialized with p/k_B = 10³ K cm⁻³. The cloud density ρ_cl in all of our simulations is chosen such that T_cl = 5010 K; this roughly corresponds to the temperature where heating starts to dominate over cooling (without self-shielding). The wind density is then determined by the desired value of χ.
To model radiative cooling, we use the grackle library (Smith et al. 2017), assuming solar metallicity and no self-shielding. Specifically, we use the tabulated heating and cooling rates for optically thin gas in ionization equilibrium with the z = 0 Haardt & Madau (2012) UV background. We turn off cooling in gas with T > 0.6 T_w in our simulations with χ ≤ 10³. This helps avoid complications from cooling in the hot wind in our χ = 100 simulations, in which the ratio of the cooling time of the hot wind to that of the mixing layer is t_cool,w/t_cool,mix ∼ 40. In higher-χ simulations, cooling of the wind fluid is so slow that this ceiling has no discernible impact. For simplicity, we also turned off heating/cooling below T_cl.
To break initial symmetries, we initialized the density of each cell within the cloud to the average of ρ(x), where ρ(x)/ρ_cl = 1 + 0.099 δ(x) and δ(x) is a superposition of random modes. For each mode i, we drew a random unit vector ê_i and values for λ_i and ϕ_i from [R_cl/8, R_cl] and [0, π). Cells on the cloud edges were initialized with subsampling; each subcell had a width of R_cl/128.
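The explicit form of the modes is not fully specified above, so the sketch below (ours) is a guess at its spirit: it assumes simple sinusoids with the quoted random directions ê_i, wavelengths λ_i, and phases ϕ_i.

    import numpy as np

    rng = np.random.default_rng(0)

    def perturbed_density(x, rho_cl, r_cl, n_modes=32):
        """x: (..., 3) array of positions inside the cloud."""
        delta = np.zeros(x.shape[:-1])
        for _ in range(n_modes):
            e_i = rng.normal(size=3)
            e_i /= np.linalg.norm(e_i)             # random unit vector
            lam = rng.uniform(r_cl / 8.0, r_cl)    # wavelength
            phi = rng.uniform(0.0, np.pi)          # phase
            delta += np.sin(2.0 * np.pi * (x @ e_i) / lam + phi)
        return rho_cl * (1.0 + 0.099 * delta / n_modes)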
Our simulations have resolutions of R_cl/∆x = 4, 8, 16, 32, 64. Unless stated otherwise, results are presented for R_cl/∆x = 16. By default, the wind-aligned and transverse dimensions of most of our simulations' domains had sizes of 120 R_cl and 20 R_cl, respectively, corresponding to a 1920 × 320² grid at our fiducial R_cl/∆x = 16 resolution. The sizes were somewhat larger (360 R_cl and 30 R_cl) for our χ = 10⁴ simulations in order to minimize the impact of bow-shock reflections, prevent dense material from leaking out of the transverse boundaries in cases of shattering, and give room for tail formation. While the default dimensions are adequate for determining whether our clouds survive in runs with M_w ≥ 3 or χ = 10³, we find that boundary effects can impact later-time measurements. Thus for such cases, with radiative cooling and R_cl/∆x = 4, 8, 16, we present results from runs with a wind-aligned length of 240 R_cl. In all cases, the cloud was initialized at the center of the domain and we employed a frame-tracking scheme that updated the reference frame every t_cc/16 such that the mass-weighted velocity for cells with ρ ≥ √(ρ_cl ρ_w) was zero.
Table 1 presents a list of our simulations. As we will discuss in § 3, our measurements of v_turb involve averages over velocity properties. Thus, leakage of material from the domain could plausibly bias our measurements. However, the generality of the scaling relations derived in this work, which apply to runs that do and do not leak material, suggests that overall effects on our v_turb measurements are probably minimal. (We only explicitly checked the effects of a cooling wind in the χ = 100, M_w = 1.5 runs and the χ = 10³, ξ_sh = 27.8 run. We expect only minimal late-time complications in our χ = 1000, ξ_sh = 83.4 run since it has t_cool,w/t_cc ∼ 51 (Abruzzo et al. 2022). While our χ = 300, ξ_sh = 3.2 run has the same t_cool,w/t_cc, complications may be significant since that run is close to the survival threshold. Complications are likely significant in larger χ = 300 runs.)
CHARACTERIZING TURBULENCE
The primary goal of this paper is to characterize the turbulent properties of the turbulent radiative mixing layer that mediates mixing and cooling between the hot wind and cold cloud. Although much effort has been devoted to understanding turbulence in single-phase media, there has been considerably less work for multiphase systems (e.g., Mohapatra et al. 2022; Gronke et al. 2022; Gronke & Oh 2022). The potential dependence of turbulent properties on both scale and the gas' local thermodynamic state complicates the interpretation of conventional methods for characterizing v_turb. There are a number of possible ways to extend existing turbulence measures; however, their novel nature means that they can be difficult to interpret and their robustness is unclear. In order to get around this difficulty, in this paper we consider three distinct methods for characterizing our multiphase turbulent simulations. These are built around three different ideas based on (i) a filter-based technique, (ii) a geometric approach, and (iii) classic structure function ideas.

Note (Table 1): Unless otherwise noted, all runs were initialized with T_cl = 5010 K, where t_cool,cl/t_cool,min ∼ 105. All simulations were initialized with an initial thermal pressure of p/k_B = 10³ cm⁻³ K. In each run, t_cool is minimized at T = 1.83 × 10⁴ K with a value of 75.5 kyr; the sound speed at this temperature is 18.6 km/s. The cooling length, ℓ_cool = c_s t_cool, is minimized at T = 1.70 × 10⁴ K with a value of 1.43 pc.
a: Denotes whether clouds survive (i.e., whether the cold-phase mass ever drops to 0). "Borderline" indicates cases where the line between survival and destruction with rapid subsequent precipitation is fuzzy.
b: The cold-phase mass (ρ > √(ρ_w ρ_cl)) dropped to ∼0.01% and 0.07% of its initial value in the R_cl/∆x = 8 and 16 runs before growth; at these times, there is no mass denser than ρ_cl/3.
c: The mass of gas denser than √(ρ_w ρ_cl) (ρ_cl/3) drops to 16% (6%) of its original value and begins monotonic growth after 21.5 t_cc (24 t_cc). In an alternate version of the same run, where the domain dimensions are 120 × 10² (in units of R_cl), the mass instead drops to 32% (8%) of its initial value and starts growing after 12.5 t_cc (11 t_cc).
We describe these approaches below, and to supplement our description of the methods, we apply each to the R_cl = 64∆x run of our χ = 1000, ξ_sh = 27.8, M_w = 1.5 simulation.

Figure 2: Phase dependence of v_turb, measured via filtering, for the R_cl = 64∆x run of our χ = 1000, ξ_sh = 27.8, M_w = 1.5 simulation at 2.5 t_cc. The top panel includes contributions from all three velocity components; the bottom panel includes only the components transverse to v_wind, which is consistent with how v_turb from filtering is measured throughout the remainder of this work. The solid orange line denotes the median while the dashed orange lines bound values between the 15th and 85th percentiles. The dotted-dashed line shows v_turb magnitudes equal to the sound speed. The steep drop-off in v_turb near T_w is an artifact of the fact that the wind is initially laminar.
Filtering
In our first approach, we attempt to explicitly remove the bulk flows by filtering out the large-scale bulk velocities. Specifically, we estimate v_turb by applying a high-pass Gaussian filter, with density weighting, to each component of the velocity. The size of the filter is chosen to correspond to scales on which the bulk flow varies, that is, approximately the cloud radius. We use density weighting (which corresponds to smoothing the momentum) in order not to be dominated by the volume-filling hot gas component.
More precisely, in this approach, the i-th component of the turbulent velocity is given by

    v_{turb,i}(r) = v_i(r) − [f_σ ∗ (ρ v_i)](r) / [f_σ ∗ ρ](r),    (2)

where f_σ(x) is a normalized, separable, three-dimensional Gaussian and ∗ denotes convolution. In short, the fraction term estimates the laminar part of v_i, and subtracting it from v_i gives the turbulent part.
Throughout this work, we use a Gaussian filter with a standard deviation of R_cl/4; this was chosen after extensive experimentation to visually pick out turbulent regions with minimal "bleed" into the laminar regions. Our results do not depend qualitatively on the exact choice of the filtering scale as long as it is of order the cloud size.
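As an illustration, a minimal sketch of this density-weighted high-pass filter (our own, using scipy; the array names are placeholders) is:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def v_turb_filtered(v, rho, r_cl_cells):
        """v: one velocity component on the grid; rho: gas density."""
        sigma = 0.25 * r_cl_cells   # standard deviation of R_cl/4, in cells
        bulk = gaussian_filter(rho * v, sigma) / gaussian_filter(rho, sigma)
        return v - bulk             # high-pass (turbulent) part

    # v_trans_hi from the two transverse components vy and vz:
    # v_trans_hi = np.hypot(v_turb_filtered(vy, rho, 16),
    #                       v_turb_filtered(vz, rho, 16))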
The rightmost two panels in Fig. 1 illustrate the high-pass filtered transverse velocity components for the aforementioned simulation at 4.5 t_cc, and the center panel shows the combined magnitude of these components, v_trans,hi. The left two panels show the density and specific thermal energy slices. Note that here, as elsewhere in this paper, we use e ≡ (p/ρ)/(γ − 1) to denote the specific thermal energy of the gas. This quantity is closely related to temperature, but is easier to compare among runs with different χ values since e_wind = χ e_cl (due to variations in mean molecular mass, T_w < χ T_cl). The inset panels make it readily apparent that the turbulent velocity has a clear phase dependence.
In Fig. 2 we show the phase dependence explicitly (at 2.5 t_cc), plotting the 2D distribution of mass as a function of temperature and (top) the high-pass filtered velocity including all components, v_tot,hi, and (bottom) just the transverse components, v_trans,hi, for this snapshot. Because of the spatial gradients that persist in the downstream velocity component (see Appendix A), we use v_trans,hi to estimate v_turb in the remainder of this work.
This approach is sensitive to turbulence on scales below the high-pass filtering limit; since this is approximately the driving scale (of order the cloud radius), we expect this to be a good measure of the turbulent properties, although it may also remove some of the contribution to the turbulence on scales just below the driving scale. One possible downside of this approach is the contribution of the bulk flow on scales at and below the filtering scale; we have explored alternate weighting schemes and find only minor differences. Although we do not have detailed scale information (except the removal of large scales), this approach does permit a very fine examination of the turbulent properties with phase (i.e., specific energy), as seen in Fig. 2.
Indeed, this figure clearly shows a different dependence on specific energy below and above log₁₀(e/e_cl) ≈ 0.7, which corresponds to the peak of the cooling curve we adopt. We return to this point in § 4. Fig. 1 qualitatively shows that spatial variations in turbulence are largely explained by the spatial variations in gas phase. The main exception is the hottest phase, which is "contaminated" by unmixed, laminar gas (this is reflected in Fig. 2).
Because measuring phase information isn't as seamless for our other approaches, we define a set of nominal coarse phase bins to be used with them. We define the bin edges in terms of log(e/e_cl)/log χ to ease comparisons between runs with different χ values. The bin edges are −∞, 1/12, 3/12, 5/12, 7/12, 9/12, 11/12, which are illustrated by the vertical dotted lines in Fig. 2.
Geometric
Our second approach uses the geometry of isosurfaces in the flow to characterize the turbulence. To motivate this, consider a toy model in which a cloud's geometry is a sphere or a cylinder. The cloud is oriented such that the azimuthal angle, ϕ, measures the angle in the plane transverse to v_wind. While cloud acceleration and accretion (e.g., by a TRML; Fielding et al. 2020) can drive steady coherent flows along the wind and radial directions, turbulence is the only source of motion along ϕ. In other words, we can characterize v_turb with v_ϕ. Tan et al. (2021) drew a similar conclusion in shearing-box simulations about the utility of the dispersion of the velocity component perpendicular to the shear and inflow directions.
Despite their more complex morphology, we can apply the same logic to real clouds. For a given snapshot, we employ the Lewiner et al. (2003) marching cubes algorithm to construct five topologically correct meshes of triangle facets that trace specific internal energy isosurfaces, using values that coincide with the centers of the closed bins mentioned in § 3.1. We supplement these with additional isosurfaces at values near the peak of the cooling curve (we vary the precise locations based on the χ value of the simulation). Fig. 3a shows a cutaway visualization of several of these isosurfaces at 2.5 t_cc for our χ = 1000, ξ_sh = 27.8 simulation.
For each facet, we define v_ϕ-like ≡ v · (v̂_wind × n̂), where v is the linearly interpolated velocity and n̂ is the outward normal vector. Finally, we estimate v_turb for an isosurface with the area-weighted standard deviation of v_ϕ-like (excluding facets with v̂_wind × n̂ = 0).
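For concreteness, the following sketch of our own shows one way to implement this measurement with scikit-image's (Lewiner) marching cubes; it approximates the linear velocity interpolation of the text with nearest-cell sampling for brevity.

    import numpy as np
    from skimage.measure import marching_cubes

    def v_turb_geometric(e, vx, vy, vz, level,
                         w_hat=np.array([1.0, 0.0, 0.0])):
        verts, faces, normals, _ = marching_cubes(e, level)
        tri = verts[faces]                           # (n_faces, 3, 3)
        # facet areas from the triangle edge cross product
        cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
        area = 0.5 * np.linalg.norm(cross, axis=1)
        # facet normals from the averaged vertex normals
        n_hat = normals[faces].mean(axis=1)
        n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True) + 1e-30
        # nearest-cell velocity at facet centroids (index coordinates)
        idx = np.clip(np.rint(tri.mean(axis=1)).astype(int), 0,
                      np.array(e.shape) - 1)
        v = np.stack([vx[tuple(idx.T)], vy[tuple(idx.T)],
                      vz[tuple(idx.T)]], axis=-1)
        c = np.cross(w_hat, n_hat)                   # w_hat x n_hat
        v_phi = np.einsum("ij,ij->i", v, c)
        keep = (area > 0) & (np.linalg.norm(c, axis=1) > 0)
        mean = np.average(v_phi[keep], weights=area[keep])
        return np.sqrt(np.average((v_phi[keep] - mean) ** 2,
                                  weights=area[keep]))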
Fig. 3b-c shows area-weighted distributions of v_r-like ≡ v · n̂ and v_ϕ-like for the previously mentioned simulation. The distribution of v_r-like shows a negative mean for each isosurface, which is consistent with net inflow of gas. In contrast, the distributions of v_ϕ-like are centered on 0, which is exactly what we expect.
While this approach does not provide any scale information about the turbulence, it can be used to provide detailed phase information. For example, Fig. 3d illustrates qualitatively similar phase dependence to the filtering measurements. However, in contrast to the filtering technique, this approach requires the generation of a separate surface for each phase to be probed and so is much more computationally intensive. As is discussed in Appendix A, the main advantage of this approach is that it provides the most accurate early-time measurements.
Velocity Structure Function
Our final turbulence measure is the velocity structure function, which has the advantage of explicitly exploring the dependence on length scale ℓ, but comes with some uncertainty due to the potential influence of gradients in the large-scale bulk flows.
To compute this measure, we consider a velocity vector field that is sampled at a collection of points. Let |δv| denote the magnitude of the velocity difference between a pair of points. We define the first- and second-order velocity structure functions, ⟨|δv|⟩(ℓ) and ⟨(δv)²⟩(ℓ), as the average values of |δv| and (δv)² for all pairs of points separated by a distance ℓ. Except where otherwise noted, the velocity differences only include components orthogonal to the wind direction (see Appendix A for further explanation).
Given the obvious phase dependence in our other turbulence metrics, we compute the structure function for individual phases of the gas in our simulations, using the same bins defined in § 3.1. All structure function calculations in this work are computed using all pairs of points from individual phase bins. We note that both points in each pair always come from the same phase bin, and we leave consideration of cross-phase terms to future work. We omit the hottest phase bin from our analysis because a large fraction of it is laminar (contaminating the signal) and it is computationally expensive to compute.
We also use discrete bins of ℓ, which depend on the cell width, ∆x, in our simulations. The i-th ℓ bin is centered on ℓ = i∆x and has a width of ∆x. However, for i = 0 and i = 1, we have adjusted the bins such that they only contain values for pairs of cells that share a face and an edge, respectively. In other words, the i = 0 bin (i = 1 bin) only contains values for cells exactly separated by ℓ = ∆x (ℓ = √2 ∆x). Throughout this work, we largely focus on ⟨(δv)²⟩(ℓ) because it has a similar magnitude to our other v_turb metrics. The top panel of Fig. 4 shows ⟨(δv)²⟩(ℓ) measured for each phase bin of our χ = 1000, ξ_sh = 27.8 simulation at 2.5 t_cc. The peak in ⟨(δv)²⟩(ℓ) at ℓ ∼ R_cl, present in all phases (in some cases it manifests as a change in slope), is expected since the outer scale should be of order the cloud size R_cl, although the complicated cloud structure at late times is unlikely to correspond to a narrow range for the injection of turbulence. We leave investigation of the behavior above R_cl to future work.

Figure 3: Iso-temperature surfaces and derived v_turb measurements for the R_cl/∆x = 64 run of our χ = 1000, ξ_sh = 27.8, M_w = 1.5 simulation at 2.5 t_cc. Panel a shows a cutaway of five nested isosurfaces measured at log_χ e/e_cl = 1/6, 1/3, 1/2, 2/3, 5/6 (for this system, T = 1.3 × 10⁴ K, 3.3 × 10⁴ K, 10⁵ K, 3.3 × 10⁵ K, 10⁶ K). The arrow illustrates the ϕ direction measured in the plane transverse to v_w. Panels b and c respectively show the normalized area-weighted distributions of the v_normal and v_ϕ-like velocity components measured on the isosurfaces pictured in a. Panel d shows the standard deviation of the distributions from panel c (colored diamonds), as well as data derived from other isosurfaces (gray circles), plotted as a function of log_χ e/e_cl.
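A brute-force sketch (ours) of how ⟨(δv)²⟩(ℓ) can be computed for one phase bin is shown below; for brevity it omits the special handling of the i = 0, 1 bins described above and subsamples the cells to keep the pair count manageable.

    import numpy as np
    from scipy.spatial.distance import pdist

    def structure_function(pos, v_perp, dx, n_bins=64, n_sample=2000, seed=0):
        """pos: (N, 3) cell positions; v_perp: (N, 2) transverse velocities.
        Returns bin centers ell = i*dx and <(dv)^2>(ell)."""
        rng = np.random.default_rng(seed)
        sel = rng.choice(len(pos), size=min(n_sample, len(pos)),
                         replace=False)
        ell = pdist(pos[sel])                      # pair separations
        dv2 = pdist(v_perp[sel], "sqeuclidean")    # |delta v_perp|^2 per pair
        edges = (np.arange(n_bins + 1) - 0.5) * dx # bin i centered on i*dx
        idx = np.digitize(ell, edges) - 1
        out = np.full(n_bins, np.nan)
        for i in range(n_bins):
            m = idx == i
            if m.any():
                out[i] = dv2[m].mean()
        return dx * np.arange(n_bins), out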
We generally observe a weaker dependence on ℓ than the Kolmogorov expectation for idealized turbulence (δv ∝ ℓ^{1/3}, i.e., ⟨(δv)²⟩ ∝ ℓ^{2/3}), although this depends somewhat on phase. We caution that the precise ℓ scaling at intermediate (inertial) scales may not be a robust measurement due to the bottleneck effect, which arises for an under-resolved turbulent cascade (e.g., Rennehan 2021; Mohapatra et al. 2022).
In agreement with the other measures, we also see a general decrease in the turbulent velocity with temperature.
Comparison
We have shown that much more information about the phase, scale, and spatial dependence of turbulence can be gleaned from these simulations when using metrics beyond the standard root-mean-square approach.We now compare these more refined turbulent metrics to each other.
The top row of Fig. 5 shows the v_turb phase dependence, measured with each approach, in the R_cl/∆x = 16 run of the previously mentioned simulation at t = 1.0, 5.5, 9.5, 17.5 t_cc (see § 4.3 for a discussion of how resolution affects our measurements). The differing approaches achieve remarkable qualitative agreement about the magnitude, phase dependence, and temporal dependence of v_turb. In particular, all measures show that v_turb increases rapidly with temperature at early times, before transitioning to a flatter slope at later times. In addition, all approaches show very similar amplitudes. However, the approaches are clearly not interchangeable. Indeed, this plot demonstrates the utility of computing all three turbulence measures, allowing us to ascertain the robust results without over-interpreting features that are not seen in at least two of the techniques.
When considering the volume-averaged properties of the entire system, our geometric approach offers the most robust measurements because it is most resilient to biases that may arise from the gradients in the laminar component of the flow at early times (see Appendix A for more details).
In the context of phase-dependence, the filtering approach clearly is the most convenient metric because it naturally provides turbulence as a function of phase.
However, unlike the other approaches, the filtering approach does not examine the turbulence of one phase in isolation from the others, which may introduce "artifacts" in this type of comparison. We will show in § 4.3 that the negative slope at large T at late times may be a resolution effect.
The ⟨(δv)²⟩(ℓ) approach captures much of the same phase dependence while also opening a window into the scale dependence of the turbulence. With that said, it is the most computationally expensive.
RESULTS
Having established the robustness and relative merits of our turbulence metrics, we now examine what they tell us about the cloud-wind interaction. We start (in § 4.1) by describing the phase dependence of v_turb, its scaling with the dimensionless parameters, and its time dependence. Then, in § 4.2, we briefly discuss the driving scale before turning to an evaluation of numerical convergence in § 4.3.
For the purpose of this discussion and the subsequent sections, we define the cold phase as all gas with densities of at least √(ρ_cl ρ_w) (i.e., the density of the mixing layer). We also define the relative velocity, v_rel, as the difference between v_w (at the inflow boundary) and the mass-weighted velocity of the cold phase (initially v_rel is v_w, but it declines as the gas is entrained).
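In code, these two diagnostics reduce to a density cut and a mass-weighted average; a minimal sketch (ours) is:

    import numpy as np

    def bulk_diagnostics(rho, vx, cell_vol, rho_cl, rho_w, v_wind):
        """Cold-phase mass and relative velocity, per the definitions above."""
        cold = rho >= np.sqrt(rho_cl * rho_w)   # mixing-layer density cut
        m_cold = np.sum(rho[cold]) * cell_vol
        v_cold = np.average(vx[cold], weights=rho[cold])  # mass-weighted
        return m_cold, v_wind - v_cold          # (mass, v_rel)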
Turbulent Properties
Throughout this group of subsections, we will compare simulations with different parameters and ∆x = R_cl/16. We first consider the phase dependence of v_turb, then show how v_turb scales between simulations, and finally describe the time evolution of v_turb.
Phase dependence
We begin by presenting the phase dependence in two limiting cases of the χ = 1000 and M_w = 1.5 cloud-wind interaction. These two cases are: (i) a run without cooling (ξ_sh = 0) and (ii) a run where cooling is sufficient for entrainment (ξ_sh = 27.8).
The bottom row of Fig. 5 shows the non-radiative run. In this case, the scaling of v_turb with e (or T) is consistent with a power law where M_turb is constant (i.e., v_turb ∝ c_s ∝ √e) throughout the cold phase's lifetime. The amplitude of the turbulence decreases as v_rel drops, but there is no indication that the scaling with phase changes (note that at late times the cold gas is entirely absent, due to mixing with the hot phase, and so we cannot measure its turbulent properties). As expected, these trends are unaffected by our choice of turbulent metric.
Figure 5: Each row shows the phase dependence of v_turb measured with different metrics (large panels) and the bulk-property evolution (smaller panels) for a χ = 1000, M_w = 1.5, ∆x = R_cl/16 simulation. The top row shows our ξ_sh = 27.8 simulation (the cloud is entrained) and the bottom row ("No Cooling") shows the same simulation without cooling (the cloud is destroyed). The v_turb panels respectively show data measured with high-pass filtering, from isosurfaces, and using ⟨(δv)²⟩(ℓ). The data are colored by the time at which they are measured (the vertical lines in the small panels denote the times), and the dashed black line indicates values of c_s as a function of phase (assuming constant pressure). Note that the lowest-temperature points on the blue and green curves of the isosurface panel in the lower row are likely outliers: the relevant isosurfaces probably bound a small amount of mass. The bulk-evolution panels respectively show the mass in the cold phase (i.e., gas denser than √(ρ_cl ρ_w)) and the average velocity difference between the cold phase and v_w as functions of time.
The top row of Fig. 5 shows the run with cooling. Compared to the constant-slope power-law phase dependence of turbulence in our non-radiative run, the case with cooling clearly has more complex behavior. We parameterize the phase dependence of systems with sufficient cooling for entrainment, at a given time, with a broken power law (Eq. 3): v_turb(e) ∝ e^{1/2} below a break energy e_break and v_turb(e) ∝ e^α above it, with the two branches matched at e_break.

Figure 6: The top row compares the dependence of v_turb on gas phase at four representative times in the clouds' evolution (full-size panels on the left) and the bulk-property evolution (small panels on the right) for a separate collection of simulations. The top row compares 9 simulations with χ = 100, M_w = 1.5, and the same initial cloud temperature, but varying R_cl (and so varying ξ_sh). The v_turb panels show filtering measurements after 1.5 t_cc (leftmost panel), when the relative velocity between the cold and hot phases is various fractions of its initial value (middle two panels), and after 20 t_cc (right panel). Data is only shown for a given simulation for 0 < log_χ e/e_cl < 0.9. The black dashed line shows v_turb = c_s ∝ √e, and the vertical grey line extends between the temperatures where the cooling length is minimized and where t_cool is minimized. The bulk-property panels show the evolution of (upper) the cold phase's mass and (lower) the relative velocity in each simulation. The curves in these panels are annotated with dots to specify the snapshots displayed in the v_turb panels. It's a little ambiguous whether the cloud survives in the χ = 100, t_shear ∼ 0.57 run, or if its destruction seeds the prompt precipitation of cold-phase material (see Table 1 for more details). The bottom row shows the same information, but for a set of 4 simulations that all have χ = 1000; the other difference is that the rightmost v_turb panel in the bottom row compares v_turb measurements at v_rel/v_w ∼ 0.2. We note that c_s,hot t_cool,min is 6.56 pc (20.7 pc) for simulations in the top (bottom) row.
Below e_break, the scaling of v_turb with e is constant in time. Above e_break, the slope of the power-law dependence, α, has clear time dependence. At very early times (≲ t_cc/8), geometric measurements provide some evidence (not shown) that α = 1/2; in this case Eq. 3 is equivalent to the scaling of our non-radiative run. As the system evolves, α decreases (i.e., the slope flattens above e_break). When the cloud is mostly entrained, α stabilizes at ∼0. (While we don't show it, we note that similar behavior occurs in our χ = 1000, ξ_sh = 2.78 run, but the cloud is destroyed long before α drops to 0.) This demonstrates an essential feature of the turbulence in systems with sufficient cooling for cloud survival and entrainment: the cold phase has a larger turbulent Mach number and turbulent kinetic energy than the hot phase.

Figure 7: Like the top row of Fig. 6, except that the pictured simulations primarily vary the cloud temperature. Each simulation has χ = 100 and M_w = 1.5. We expect that at higher resolution the power-law slope below e_break in the purple curve will be closer to 0.5 (i.e., the slope of the dashed black line).

Figure 8: Like the top row of Fig. 6, except that the pictured simulations primarily vary χ. We have made two compromises in presenting this data. First, we fix β to 0.25 for all panels; this is done as a simplification because β changes on a timescale related to χ. Second, the rightmost v_turb panel compares simulations at a fixed value of v_rel/v_w rather than at a fixed time. The last panel typically compares the simulations at a point in the evolution when v_turb stabilizes (see § 4.1.3); however, that time seems to come much later in our χ = 10⁴ simulation, after the simulation terminates. While we include the χ = 10⁴ run for completeness, strong resolution dependence (see Table 1) and the atypical shape of the cool-phase mass evolution may indicate that it is not well converged. As noted in § 2, some material that started in the cloud leaks out of the domain at 6.5 t_cc, which coincides with the large drop-off in cool-phase mass.
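For reference, the broken power-law form described above (our reconstruction of Eq. 3, with the below-break slope fixed to 1/2) can be written as:

    import numpy as np

    def v_turb_model(e, e_break, v_break, alpha):
        """Broken power law: slope 1/2 below e_break, alpha above."""
        return v_break * np.where(e < e_break,
                                  (e / e_break) ** 0.5,
                                  (e / e_break) ** alpha)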
4.1.2. Scaling with Cloud Parameters (χ, M_w, and ξ_sh)

Now that we've established the behavior in these limiting cases, we discuss how the principal dimensionless numbers (χ, M_w, and ξ_sh) affect the magnitude of v_turb in simulations with rapid enough cooling to ensure cloud survival. At a given stage of a cloud's evolution (i.e., for a given value of v_rel/v_w or a fixed time), we find that v_turb satisfies the scaling v_turb ∝ (M_w ξ_sh)^β = [R_cl/(c_s,hot t_cool,min)]^β (Eq. 4), where ξ_sh and M_w both refer to the values used to initialize the problem. This is equivalent to saying that v_turb scales with the ratio between the hot-phase sound-crossing time (R_cl/c_s,hot) and t_cool,min. The best-fit values for β are 0.25 and ∼0.5 at early and late times, respectively. This change in β seems to coincide with a transition between temporal evolutionary stages, which we will discuss further in the next subsection and link to a change in the primary source of turbulent kinetic energy.

Figure 9: Like the top row of Fig. 6, except that the pictured simulations primarily vary M_w. The solid (dotted) lines show data from simulations with R_cl/M_w = 37.6 pc (376 pc) and ξ_sh = 5.73 (57.3). We note that c_s,hot t_cool,min is 6.56 pc for all simulations in this plot. As we will show in panels e-h of Fig. 10, v_turb evolves more slowly in higher-M_w runs. Consequently, the "late times" panel shows data from the M_w = 0.75, 1.5 runs at 20 t_cc and data from the M_w = 3, 6 runs at 30 t_cc (we did not run the M_w = 6 simulation to late enough times or with a long enough domain for an optimal late-time comparison).
Figures 6-9 compare v_turb measurements, adjusted to remove the differences captured by this scaling, for different sets of simulations. In other words, the agreement between the curves in a given panel of these figures indicates the accuracy of the adopted scaling. Because the principal dimensionless numbers clearly affect the slope of v_turb above e_break, the reader should primarily consider agreement at e_break (denoted by a vertical line) and in colder gas. Note that, unlike previous plots, the black dashed line shows v_turb ∝ c_s(e) rather than v_turb = c_s(e).

First, we consider the scaling for runs with M_w = 1.5. Fig. 6 shows the scaling with R_cl; the top (bottom) row shows runs with χ = 100 (χ = 1000). The impressive overlap of the curves in each panel demonstrates that the adopted scaling works remarkably well; there are occasional differences at high e, but the turbulence in the gas closest to the wind phase is the most challenging to measure accurately. The figure also clearly shows that the shape of the turbulence dependence on e changes over time, a point we will return to later.

Figures 7 and 8 provide evidence that v_turb depends not just on R_cl, but on the ratio R_cl/c_s,hot, by comparing runs with different R_cl and c_s,hot values. The variation in c_s,hot comes from adopting different values for T_cl and χ. Fig. 8 provides further confirmation that c_s,hot is the correct sound speed to include in this scaling because c_s,hot has a different χ-dependence from the sound speed in the (cold) cloud phase.

Finally, Fig. 9 demonstrates the v_turb scaling for runs that vary in M_w. It largely confirms the lack of M_w dependence.
We provide a rough normalization for Eq. 4 when v_rel/v_w ∼ 0.75. In this case, we find that v_turb(e_break) ∼ 0.4 c_s,break (R_cl/(t_cool,min c_s,hot))^{1/4}. The precise normalization will change when using other techniques to measure v_turb.
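As a worked example, the sketch below evaluates this normalization for round numbers loosely inspired by the parameter space explored here; every input value is an assumption chosen for illustration, not a quantity taken from a specific run.

```python
import numpy as np

KM_PER_PC = 3.086e13    # km per parsec
SEC_PER_MYR = 3.156e13  # seconds per Myr

# Illustrative inputs (assumed, not from any particular simulation)
R_cl = 100.0 * KM_PER_PC          # cloud radius [km]
c_s_hot = 150.0                   # hot-phase sound speed [km/s]
c_s_break = 15.0                  # sound speed at e_break [km/s]
t_cool_min = 1.0 * SEC_PER_MYR    # minimum cooling time [s]

# Ratio of hot-phase sound-crossing time to minimum cooling time (Eq. 4)
ratio = (R_cl / c_s_hot) / t_cool_min
v_turb_break = 0.4 * c_s_break * ratio**0.25  # normalization at v_rel/v_w ~ 0.75
print(f"(R_cl/c_s,hot)/t_cool,min = {ratio:.2f} -> "
      f"v_turb(e_break) ~ {v_turb_break:.1f} km/s")
```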
All of these results are computed with the filtering metric for turbulence. We note that these scalings are somewhat less clear for geometric and ⟨(δv)²⟩(ℓ) measurements of the χ = 100 simulations (the scaling between χ = 1000 runs is clear for all metrics). For example, the geometric measurements show slightly different trends among the runs that initially lose mass, and suggest that β never changes from 0.25 in the χ = 100 runs. Although the latter quirk is difficult to explain, we are encouraged by the fact that the geometric approach does show the change in β for the χ = 10³ simulations, and by the fact that the ⟨(δv)²⟩(ℓ) relation definitely supports β = 0.5 at late times in our χ = 100 simulations. With that said, ⟨(δv)²⟩(ℓ = R_cl) measurements do show more scatter than is present in the top row of Fig. 6. While this could be physical, the coarser phase bins may also contribute to this scatter. We defer further investigation to future work.
Temporal evolution
So far, we have focused on how the phase dependence of the turbulence changes (at a set of different times) with cloud properties. We now turn our attention to how v_turb changes with time in a single simulation. Given that the slope of the v_turb broken power-law phase dependence is largely time independent for the cold phase up to the break, we focus on v_turb at e = e_min,cool ∼ e_break. Fig. 10 shows, for a broad range of simulations, the time evolution of v_turb, measured geometrically (we use this metric to isolate a narrow phase bin), the average inflow velocity, and the surface area on the same isosurface. The figure also shows the time evolution of v_rel. We do not show other types of v_turb measures because they are less accurate at early times (see Appendix A), and do not distinguish between turbulence and gradients in inflowing gas as well as the geometric measurements do. We start by considering the turbulent evolution in a characteristic case with cloud entrainment: Fig. 10c shows a χ = 100 run with M_w = 1.5 and ξ_sh = 5.73. Here, v_turb has two primary evolutionary stages. Initially, in the 'pre-entrained stage', v_turb rapidly grows until it reaches a peak value; v_turb is sustained near this peak for a short time, and then it starts to drop off as the cloud becomes partially entrained. During the subsequent 'partially entrained' stage, v_turb stabilizes at a smaller value (within a factor of ∼2 of the peak) that is sustained for the remainder of the run.

Figure 10. Each panel shows the temporal evolution of v_turb (blue solid curve), the average radial inflow (olive curve), the surface area (violet dashed-dotted curve), and v_rel (dashed red curve). The v_turb, average inflow, and surface areas were all computed from an isosurface constructed at the temperature at which t_cool is minimized (this is e/e_cl = 4.8). More specifically, the average inflow is computed from the area-weighted average of the normal velocity component on each facet of the isosurface. In contrast, v_rel measures the bulk relative velocity of all gas with ρ > √(ρ_cl ρ_w) and is normalized such that it starts at unity and approaches zero as a cloud is entrained. To denote that a cloud becomes entrained (is destroyed) we include a "✓" ("×") in the panel label. The top (bottom) row shows runs that have χ = 100 (χ = 1000) and M_w = 1.5 while varying ξ_sh. The middle row shows runs that have χ = 100 and ξ_sh = 5.73 while varying M_w. Panels c and f show the same run. As noted in Table 1, the cloud's fate in panel b is somewhat debatable, given that the cold-phase mass drops to 0.07% of its initial value before growing. For this case, we have elected not to show data after the cloud starts growing.
The primary source of turbulent energy during the pre-entrained stage appears to be the relative velocity. This would explain why v_turb peaks within a few t_cc: we expect large v_rel to drive the Kelvin-Helmholtz and Rayleigh-Taylor instabilities, which have growth timescales proportional to t_cc (Klein et al. 1994). This also explains the similarly rapid turbulent growth during the initial stage of the non-radiative and slow-cooling simulations in panels a and b of Fig. 10. Furthermore, it explains why the drop in v_turb, which indicates the transition between stages (and is most prominent in the radiative runs), follows the drop in v_rel; presumably, v_rel no longer provides enough turbulent energy to sustain the peak v_turb.

The two stages of v_turb evolution roughly coincide with the stages of areal growth identified in Gronke & Oh (2020a). The 'pre-entrained' stage coincides with the rapid surface-area growth dominated by the formation of the cloud's tail. Likewise, the 'partially entrained' stage roughly corresponds to the slower isotropic areal growth that occurs once the cloud is entrained. It's also noteworthy that the average inflow velocity plateaus before the slower isotropic areal growth, which is consistent with findings from Gronke & Oh (2020a).

At face value it might seem surprising that there is net inflow in Fig. 10b even though we know that this run is losing mass during the first ∼10 t_cc (see the mass evolution of the χ = 100, R_cl = 5.64 pc run in Fig. 6). However, this just illustrates that net inflow doesn't necessarily equate to mass growth; the inflowing gas will raise the temperature of the gas enclosed by an isosurface in the absence of sufficient cooling.
We now consider how the principal dimensionless numbers (ξ_sh, χ, M_w) affect the v_turb evolution with time. In general, we find that these parameters only minimally affect the overall trend, so we focus on the relatively small differences that do emerge.

First, we examine variation in ξ_sh. Compared to panel c of Fig. 10, panel d illustrates that more efficient cooling can increase the maximum v_turb as well as the value of v_turb at late times. This is consistent with the scalings from the last subsection. In this panel, v_turb approaches the magnitude of v_inflow at late times. It's plausible that all entrained runs in the figure would show the same behavior if we had measurements for late enough times; it may just be most prominent in panel d because v_inflow is elevated and the cloud is accelerated more quickly. This feature may suggest that v_turb is dominated by the radial flow at late times. We also find that higher-ξ_sh simulations have somewhat smaller surface areas.

The bottom row of Fig. 10 shows data for a set of runs with χ = 10³ and varying values of ξ_sh. In simulations in which the cloud survives, the transition between evolutionary stages of v_turb happens at larger v_rel/v_w when χ is larger. This transition appears to roughly coincide with the time at which the value of β, from Eq. 4, increases from 0.25. Differences in the magnitude of v_turb are qualitatively consistent with the scaling given in that equation. Finally, the middle row of Fig. 10 compares runs with varying M_w. Increasing M_w appears to increase v_turb's initial growth rate, v_turb's magnitude, and the duration over which v_turb's maximum magnitude is sustained. There is also some indication that higher-M_w simulations may have larger inflow rates and larger surface areas, even at late times.
Independent of ξ_sh, χ, and M_w, Fig. 10 illustrates that the acceleration timescale is tightly correlated with the stages of areal growth (the surface area and v_rel curves feature abrupt slope changes at similar times). In contrast, the transition between v_turb stages appears less tightly coupled with the acceleration timescale as the principal dimensionless numbers are changed. We attribute this mostly to the fact that the cold phase is not a rigid body with a single bulk velocity, but instead has different velocities at different spatial locations.

This differential acceleration is responsible for the cloud's head-tail morphology: downstream material moves faster than upstream material. Regions with larger v_rel (compared to v_w) should generally have larger v_turb, albeit with some scatter related to the local history of turbulent driving. This is illustrated for a high-resolution version (to improve sampling) of our χ = 1000, ξ_sh = 27.8 run in Fig. 11. Here, we explore the relation between our measured turbulence metric (at the cooling peak) and the relative velocity of the gas as a function of both time (colors) and location along the length of the cloud (different points with the same color). This demonstrates that there is a correlation between these quantities not just at different times for the whole cloud (as shown in Fig. 10), but also along a cloud at a given time, strengthening the case for a causative relation.

How does this relate back to the loose coupling, seen when we vary the principal dimensionless numbers, between the v_turb evolutionary stages and the acceleration timescale? Because entrained clouds in our various simulations have different wind-aligned lengths, we know that changes in these numbers alter the cloud's differential acceleration. Consider the temporal evolution of the volume-averaged v_turb measurements, in a narrow phase bin, for a very coherently accelerated cloud and for a less coherently accelerated cloud. One would naturally expect the v_turb measurement to spend more time near its maximum value in one of these cases. It's not much of a stretch to assume that v_rel might be fairly different when v_turb starts to decrease (i.e., begins transitioning between stages). Thus, we would find different coupling between the evolution of v_rel and that of v_turb in these cases.
Evolution of the driving scale
We now briefly revisit the velocity structure function in order to investigate how the turbulent driving scale varies with time. The bottom panel of Fig. 4 shows ⟨(δv)²⟩(ℓ) for the R_cl/∆x = 64 run of our χ = 1000, M_w = 1.5, ξ_sh = 27.8 simulation when the cloud is mostly entrained in the wind (v_rel/v_w ∼ 0.27). Comparison with the top panel (v_rel/v_w ∼ 0.94) reveals that the outer scale of turbulent driving, which coincides with the peak of ⟨(δv)²⟩(ℓ), does not change substantially from early to late times. Although we don't show it, we confirmed similar behavior in the R_cl/∆x = 32 run of our χ = 100, M_w = 1.5, ξ_sh = 5.73 simulation for similar values of v_rel and at times when the cloud is more entrained.
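For reference, here is a minimal sketch of how such a second-order structure function can be estimated from randomly drawn cell pairs within a single phase bin; the function signature, array layout, and binning choices are our assumptions rather than the exact procedure used in this work.

```python
import numpy as np

def structure_function(pos, vel, n_pairs=200_000, n_bins=20, seed=0):
    """Estimate <(dv)^2>(l) from random cell pairs.

    pos: (N, 3) positions and vel: (N, 3) velocities of the cells in
    one phase bin (selection by internal energy is done beforehand).
    Returns bin centers l and the mean squared velocity difference."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(pos), n_pairs)
    j = rng.integers(0, len(pos), n_pairs)
    ell = np.linalg.norm(pos[i] - pos[j], axis=1)
    dv2 = np.sum((vel[i] - vel[j]) ** 2, axis=1)
    keep = ell > 0                      # drop self-pairs
    ell, dv2 = ell[keep], dv2[keep]
    bins = np.logspace(np.log10(ell.min()), np.log10(ell.max()), n_bins + 1)
    idx = np.digitize(ell, bins)
    sf = np.array([dv2[idx == k].mean() if np.any(idx == k) else np.nan
                   for k in range(1, n_bins + 1)])
    return np.sqrt(bins[:-1] * bins[1:]), sf
```

Averaging |δv| instead of (δv)² yields the first-order function ⟨|δv|⟩(ℓ) used later to define the turbulent sonic length.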
We note that it's unclear why the 9/12 ≤ log_χ e/e_cl < 11/12 phase bin's ⟨(δv)²⟩(ℓ ∼ 0.3 R_cl) measurement in the lower panel is smaller than the comparable measurements for other phase bins. This feature also appears in the R_cl/∆x = 32 version of this simulation. In contrast, the feature is absent from the aforementioned χ = 100 run; in that case ⟨(δv)²⟩(ℓ) is always larger, for a given ℓ, in hotter gas.
Convergence
In this section, we discuss how numerical resolution impacts our various measurements. We primarily compare measurements among different-resolution runs of our χ = 1000, ξ_sh = 27.8 simulation, varying R_cl/∆x from 4 to 64.
Turbulence Metrics
The large panels in the top row of Fig. 12 compare the phase dependence of v_turb using filtering measurements at various points in the cloud's lifetime. The figure shows that resolution appears to slightly affect the magnitude and the slope of the v_turb phase dependence above e_break. Importantly, the figure also suggests that the occurrence of a negative slope in the phase dependence is likely a resolution effect. The full phase dependence of v_turb is well converged for R_cl/∆x ≥ 32. These same conclusions apply to our other turbulence metrics (shown in the other rows).

Fig. 13 shows how resolution affects the temporal evolution of various quantities. The top panels show convergence in the total cold-phase mass and v_rel; the only noteworthy feature is that rapid growth begins slightly sooner at higher resolutions. However, the surface-area measurements are not converged at all; the surface area increases more rapidly in higher-resolution runs. These results are consistent with the findings of Gronke & Oh (2020a) for a χ = 10 simulation.

There are some differences in the v_turb evolution. While low-resolution runs have a strong, sharp peak followed by a flat region, higher-resolution runs have a moderate peak with a gradual descent. With that said, there seems to be convergence for R_cl/∆x ≳ 16, and all of the runs qualitatively agree with our picture that there are two stages of v_turb evolution. The average inflow velocity measurements are similar overall but do show some significant differences; it is somewhat unclear what the relevant trends are. We defer further investigation of inflow-velocity convergence to future work.
Phase structure
Resolution strongly affects the 2D thermodynamic e−p phase-space distribution. Previous work (e.g., Fielding et al. 2020; Abruzzo et al. 2022) established that gas in χ ≤ 100 simulations is roughly distributed along the isobar that is bounded by the properties of the cloud and wind. However, a pressure decrement emerges in the phase distribution at points along this isobar where cooling is not resolved.

Figure 12. The top row compares the v_turb phase dependence of different-resolution runs of a simulation at 4 selected stages of evolution (full-size panels) and the bulk-property evolution (small panels) of each simulation. The illustrated simulations all have χ = 10³, ξ_sh = 27.8, M_w = 1.5. The v_turb panels show filtering measurements after 1.5 t_cc (left panel) and when the relative velocity between the cold and hot phases is at various fractions of its initial value (other panels). Data are only shown for a given simulation for 0 < log_χ e/e_cl < 0.9. The black dashed line shows v_turb = c_s ∝ √e and the vertical grey line extends between the temperatures where the cooling length is minimized and where t_cool is minimized. The bulk-property panels respectively show the evolution of (top) the cold phase's mass and (bottom) the relative velocity in each simulation. The curves in these panels are annotated with dots to specify the values at the snapshots displayed in the v_turb panels. The second and third rows of v_turb panels are the same as the top row, except that they respectively display geometric and ⟨(δv)²⟩(ℓ = R_cl) measurements. The R_cl/∆x = 32, 64 runs make use of a smaller simulation box than the other displayed runs. While visual inspection of our simulations leads us to expect boundary effects to bias measurements in those cases when v_rel/v_w ∼ 0.25, this doesn't seem to be an issue for this exercise.
Each point in the internal energy-pressure (e−p) phase space has an associated cooling length scale, c_s t_cool. Fig. 14 shows that the size of the pressure decrement scales inversely with how well c_s t_cool is resolved. The figure also suggests that resolving the minimum cooling length scale (i.e., ∆x ≲ ℓ_cool ∼ min(c_s t_cool)), which is equivalent to the "shattering" length scale (McCourt et al. 2018), is adequate to largely remove the pressure decrement for χ ∼ 100, consistent with results from prior works (e.g., Abruzzo et al. 2022).
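To illustrate how such a criterion can be evaluated, the sketch below computes min(c_s t_cool) over a temperature grid at fixed pressure. The piecewise power-law cooling curve is a crude stand-in that we assume purely for illustration; it is not the cooling function used in these simulations.

```python
import numpy as np

K_B = 1.381e-16        # Boltzmann constant [erg/K]
GAMMA = 5.0 / 3.0
MU_MH = 1.0e-24        # assumed mean mass per particle [g]

def lambda_cool(T):
    """Toy power-law cooling curve [erg cm^3 s^-1]; stand-in only."""
    return 1e-22 * (T / 1e5) ** -0.7

def shattering_length(p_over_kB, T):
    """ell_cool = min(c_s * t_cool) over the temperature grid T [K],
    at fixed pressure p/k_B [K cm^-3]."""
    n = p_over_kB / T                            # number density [cm^-3]
    e_th = p_over_kB * K_B / (GAMMA - 1.0)       # thermal energy density
    t_cool = e_th / (n * n * lambda_cool(T))     # cooling time [s]
    c_s = np.sqrt(GAMMA * K_B * T / MU_MH)       # sound speed [cm/s]
    return np.min(c_s * t_cool)                  # [cm]

T = np.logspace(4, 6, 256)
ell = shattering_length(1e3, T)                  # p/k_B = 10^3 K cm^-3
print(f"ell_cool ~ {ell / 3.086e18:.3g} pc; resolving it needs dx below this")
```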
Under-resolved cooling is not the sole reason for the gas distribution's deviations from the pressure isobar. Ji et al. (2019) previously argued that it is actually the sum of the turbulent pressure, ρv²_turb, and the thermal pressure that should match the external pressure. Fig. 14 illustrates the median turbulent pressure as a function of e with dashed lines. In both χ cases, the turbulent pressure shows clear convergence in our higher-resolution runs. The turbulent pressure's lack of dependence on e below e_break, and its inverse correlation with e above e_break (at the pictured time), are consistent with the scaling described in Eq. 3. The factor of ∼3 difference between the two χ cases in the maximum values of the turbulent pressure (i.e., at e ∼ e_break) helps explain why the χ = 1000 case shows larger deviations of the thermal pressure from the external pressure. This difference is consistent with the scaling from Eq. 4. For context, we expect v²_turb at e_break to be a factor of ∼4.85^{2β} larger in this χ = 1000 case, although the value of β is ambiguous; Fig. 6 suggests that these particular χ = 1000 and χ = 100 runs should have β = 0.5 and β = 0.25 at 11.5 t_cc.
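A minimal sketch of this kind of consistency check, assuming 1D cell arrays for the phase variable, density, thermal pressure, and a filtered turbulent velocity (all names are ours): compute median thermal, turbulent, and total pressures per phase bin and compare the total against the wind pressure.

```python
import numpy as np

def pressure_profiles(e, p_th, rho, v_turb, n_bins=24):
    """Median thermal, turbulent (rho * v_turb^2), and total pressure
    as a function of specific internal energy e (1D cell arrays)."""
    bins = np.logspace(np.log10(e.min()), np.log10(e.max()), n_bins + 1)
    idx = np.digitize(e, bins)
    rows = []
    for k in range(1, n_bins + 1):
        sel = idx == k
        if not np.any(sel):
            rows.append((np.nan, np.nan, np.nan))
            continue
        pt = np.median(p_th[sel])
        pturb = np.median(rho[sel] * v_turb[sel] ** 2)
        rows.append((pt, pturb, pt + pturb))
    centers = np.sqrt(bins[:-1] * bins[1:])
    return centers, np.array(rows)  # columns: thermal, turbulent, total
```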
Figure 14. Solid colored lines show the median thermal pressure as a function of temperature for multiple resolutions of our M_w = 1.5 simulations with χ = 100, ξ_sh = 5.73 (left) and χ = 10³, ξ_sh = 27.8 (right) at 11.5 t_cc. The background shaded region shows the size of the cooling length associated with a point in phase space, measured relative to the simulation's R_cl. There is no associated length scale below e_cl or above ∼e_w because we have disabled cooling and heating at these temperatures. At low pressures just above e_cl there is no associated length scale because heating dominates. For the sake of comparison, the dashed lines show the median turbulent pressure, ρv²_turb (derived from the filtering approach).
At the finite resolutions of our simulations, there is a decrement in the total pressure. However, at infinite resolution it is plausible that the total pressure of the gas would be constant. In short, the minimum c_s t_cool along the segment of the pressure isobar connecting the cloud- and wind-phase properties specifies the grid-scale requirement for fully resolving phase structure. Remarkably, the degree to which we resolve ℓ_cool appears to have minimal impact on the 1D e phase distribution. This is shown in Fig. 15b (we will discuss the rest of the figure in the next section).
Turbulent Structure and Cloud Morphology
Our results hint that under-resolving turbulence might influence various properties of these interactions. To illustrate this, we turn to Fig. 15, which shows measurements taken from different-resolution runs of our χ = 10³, ξ_sh = 27.8 simulation at 7.5 t_cc. Each panel in the top row displays data from every resolution. Subsequent rows show measurements taken from a single resolution.

Fig. 15a shows the first-order velocity structure function, ⟨|δv|⟩(ℓ), measured for gas in the 1/12 ≤ log_χ e/e_cl < 3/12 (8.1 × 10³ K ≤ T ≤ 2.0 × 10⁴ K) phase bin at various resolutions. Values are divided by the bin's maximum sound speed, and the gray region denotes the width of the bin. ⟨|δv|⟩(ℓ) specifies the average magnitude of the velocity differences for pairs of points separated by a length scale ℓ.

As the separation ℓ decreases, so does the velocity difference. On scales comparable to the cloud radius, ℓ ∼ R_cl, the slope and normalization of ⟨|δv|⟩(ℓ) are remarkably well converged. As the separation approaches the grid scale, the velocity differences are damped by numerical dissipation. Where this numerical dissipation kicks in, relative to the sound speed, appears to have a major impact on the morphology of the system. In reality the true physical viscosity of these systems is uncertain, but it is likely to be much smaller than the effective numerical viscosity even in our highest-resolution simulation.
On large scales the velocity differences are greater than the sound speed, but at small enough separations the velocity differences become subsonic. We define the turbulent sonic length, ℓ_turb,sonic, as the scale at which ⟨|δv|⟩(ℓ) passes through ⟨|δv|⟩(ℓ_turb,sonic) = c_s. By extrapolating the slope from large separations we can estimate ℓ_turb,sonic in the limit of infinite resolution (and very small viscosity), which, in this case, falls around 0.07 R_cl. This ℓ_turb,sonic is not resolved by the simulations with R_cl/∆x = 4 or 8, is marginally resolved by the R_cl/∆x = 16 simulation, and is fairly well resolved by the R_cl/∆x = 32 and 64 simulations. When ∆x > ℓ_turb,sonic, the average velocity difference in a given phase bin can be supersonic at the viscous scale (i.e., between adjacent cells).
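A minimal sketch of this extrapolation, assuming arrays of separations and first-order structure-function values in consistent units (the fit window and names are our choices): fit the large-separation power law in log space and solve for the scale where it crosses c_s.

```python
import numpy as np

def turbulent_sonic_length(ell, dv1, c_s, fit_window=(0.3, 1.0)):
    """Extrapolate the large-scale power law of <|dv|>(ell) down to the
    scale where it equals the sound speed c_s.

    ell, dv1: 1D arrays of separations and <|dv|> values (dv1 in the
    same units as c_s). fit_window selects separations, as fractions
    of max(ell), used for the fit. Returns (ell_sonic, zeta)."""
    mask = (ell >= fit_window[0] * ell.max()) & (ell <= fit_window[1] * ell.max())
    zeta, intercept = np.polyfit(np.log10(ell[mask]), np.log10(dv1[mask]), 1)
    # Solve zeta * log10(l) + intercept = log10(c_s) for l:
    ell_sonic = 10 ** ((np.log10(c_s) - intercept) / zeta)
    return ell_sonic, zeta
```

Applied to the highest-resolution curve in Fig. 15a, this kind of extrapolation is what yields the ∼0.07 R_cl estimate quoted above.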
Panels d, f, h, j, and l show the distributions of velocity-difference magnitudes, in the previously mentioned phase bin, measured at ℓ = ∆x; the average values of these distributions give the leftmost points of the curves in panel a. These panels illustrate that as ℓ_turb,sonic/∆x decreases, fewer pairs of cells have supersonic velocity differences. They also show that some grid-scale supersonic velocity differences persist when ℓ_turb,sonic is barely resolved.

We now investigate the question: what are the consequences of under-resolving ℓ_turb,sonic? Panels e, g, i, k, and m of Fig. 15 show the projected density of these simulations. The dramatic differences in these maps suggest that the degree to which ℓ_turb,sonic is resolved may be linked to morphological differences between simulations. We find that the cold phase in higher-resolution simulations is composed of more large-scale structures and has a narrower transverse extent, whereas in lower-resolution simulations the cold phase is clumpier and more dispersed. The cold phase in the low-resolution simulations has effectively shattered, while in the higher-resolution simulations that have ℓ_turb,sonic/∆x > 1 the cold phase remains more intact (McCourt et al. 2018). These effects support a picture in which under-resolving turbulence intensifies shattering by enabling the presence of supersonic velocity differences on the grid scale. This also naturally explains the wider dispersal of cold gas in low-resolution simulations, since the most intense shattering causes explosive breakup of clouds (Gronke & Oh 2020b). Physically, supersonic grid-scale velocity differences will lead to large pressure imbalances that will in turn promote the dispersal, as opposed to coagulation, of cold cloudlets (Gronke & Oh 2022).
Using Eq. 4, which captures how v_turb scales with v_rel/v_w and t_cool,min, we can write a rough scaling relation for ℓ_turb,sonic. Assuming that ⟨|δv|⟩(ℓ) ∝ ℓ^ζ with ⟨|δv|⟩(R_cl) ∼ v_turb, we find for cold gas with e_cl ≤ e ≤ e_break that

$$\frac{\ell_{\rm turb,sonic}}{R_{\rm cl}} \sim \left(\frac{c_s}{v_{\rm turb}}\right)^{1/\zeta} \propto \left(\frac{c_{s,\rm hot}\,t_{\rm cool,min}}{R_{\rm cl}}\right)^{\beta/\zeta}. \quad (5)$$

For the sake of convenience we take ζ = 1/3 (the scaling for Kolmogorov turbulence), which is close to what is found in the simulations (see Fig. 15a). At early times in the 'pre-entrained' stage (e.g., when v_rel/v_w ∼ 0.75), β = 1/4, which yields a precise prediction for the turbulent sonic length,

$$\ell_{\rm turb,sonic} \propto R_{\rm cl}^{1/4}\,\left(c_{s,\rm hot}\,t_{\rm cool,min}\right)^{3/4}. \quad (6)$$

The normalization is measured empirically in our χ = 1000, ξ_sh = 27.8 simulation. Note that we focus on the ⟨|δv|⟩(ℓ) measurements from the same phase bin that includes e_break since, as we have shown above, this is the region of phase space where these scalings are robust, but the general trends will be the same for other bins below e_break. Because we have only measured the length where ⟨|δv|⟩(ℓ) is equal to the maximum sound speed of the phase bin, this relation should be considered an upper limit on ℓ_turb,sonic.
It's more intuitive to compare this relation against other known length scales, like the minimum radius for cloud survival, R_cl,crit, or the minimum cooling length. For fixed cloud properties, we find that ℓ_turb,sonic/R_cl,crit decreases with increasing χ and M_w. This demonstrates that the turbulent sonic length tends to be more difficult to resolve in runs with larger χ and higher M_w. (Here we assume that R_cl,crit has the scaling from the Li et al. 2020 / Sparre et al. 2020 survival criterion, since this does an accurate job of predicting cloud survival (see § 5.5). Survival criteria have the generic form τ_cool/t_cc < q, and thus R_cl,crit ∝ τ_cool M_w/q. In this case, τ_cool ∼ t_cool,w and q ∝ R_cl^{0.3} M_w^{-2.5} n_w^{0.3} v_w^{0.6}, or equivalently q ∝ M_w^{-1.9} R_cl^{0.3} p^{0.3} µ_w^{0.3}. For T ≳ 10⁵ K, t_cool roughly scales as e^{2.7} p^{-1} and the mean molecular weight, µ, is constant. Putting this together yields R_cl,crit^{1.3} ∝ χ^{2.7} e_cl^{2.7} M_w^{2.9} p^{-1.3} c_s,cold when T_w ≳ 10⁵ K.)

If we assume that ℓ_cool = min(c_s t_cool) ∼ c_s,break t_cool,min and c_s,break ∼ c_s,cold, then combining with Eq. 6 gives ℓ_turb,sonic/ℓ_cool ∼ (R_cl/(c_s,hot t_cool,min))^{1/4} (c_s,hot/c_s,cold); given the parameters of our simulations, it should be clear that ℓ_turb,sonic exceeds ℓ_cool in all of our entrained runs. We find that this relation reproduces the value of ℓ_turb,sonic measured from the R_cl/∆x ∼ 32 run of our χ ∼ 100, M_w = 1.5, ξ_sh = 5.73 simulation to within ∼50%. The lower-resolution runs of that simulation all resolve ℓ_turb,sonic, and we are encouraged that none of them shows signs of shattering (the transverse extent is fairly consistent among runs). In the R_cl/∆x ∼ 16 run of our χ ∼ 10⁴, ξ_sh = 172.8 simulation, we find that ℓ_turb,sonic is smaller than the grid scale when v_rel/v_w ∼ 0.75, which is consistent with the relation's prediction. We note that both resolution runs of this simulation clearly shatter. We performed a few spot checks with a handful of our other runs and the relation seems accurate to within a factor of a few, but more careful modeling is required since ℓ_turb,sonic is close to R_cl/8 and R_cl/16 in many of our runs.
While we primarily presented this analysis for the phase bin containing e_break, we also find evidence (not shown) suggesting that the general results can be extrapolated to lower temperature bins. This is intuitive, given our earlier finding that v_turb(e)/c_s(e) is roughly constant for e_cl ≤ e ≤ e_break.
Although our association of these large-scale morphological changes with ℓ_turb,sonic requires further investigation, it presents an attractive way to understand several outstanding related questions, namely: when do clouds shatter (Gronke & Oh 2020b), and why do higher-M_w simulations require so much higher resolution to achieve convergence (Gronke & Oh 2020a; Bustard & Gronke 2022; we elaborate further in § 5.7)? Although this need not be true in the general case (e.g., if there are external drivers of turbulence), ℓ_turb,sonic > ℓ_cool in all of our runs. Thus, the resolution effects on large-scale morphology may be more closely related to under-resolved cooling.
We clarify that resolving small-scale structure (e.g., surface area and number of clumps) is subject to other conditions unrelated to resolving ℓ_turb,sonic. Sparre et al. (2019) and Gronke & Oh (2020a) each show that convergence of such properties is very weak even in high-resolution simulations that resolve ℓ_cool (in both studies, ℓ_turb,sonic > ℓ_cool).
Phase Dependence of Turbulence
We have demonstrated for the first time that the turbulent velocity, v_turb, in a mixing layer follows a broken power-law dependence on temperature or internal energy. A major implication of this finding is that the turbulent kinetic energy density is not constant across gas phases. Consider the ratio of the turbulent kinetic energy densities in the hot and cold phases, ϵ = ρ_w v_turb(T_w)²/(ρ_cl v_turb(T_cl)²). Per Eq. 3, this evaluates to ϵ = (e_break/e_w)^{1−2α}. We remind the reader that α, the power-law slope above e_break, starts out near 1/2 at the earliest times and decreases to ∼0 at a rate that depends on the principal dimensionless numbers. Thus, during the bulk of the cloud-wind interaction, the cold phase has the larger turbulent kinetic energy density (i.e., ϵ < 1). This contradicts (explicit and implicit) assumptions that ϵ = 1 in multiple works on TRMLs.
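For completeness, here is a short derivation of that expression, assuming pressure balance (so ρ ∝ 1/e at fixed p) and the broken power law of Eq. 3:

$$\epsilon \equiv \frac{\rho_w\, v_{\rm turb}^2(T_w)}{\rho_{\rm cl}\, v_{\rm turb}^2(T_{\rm cl})} = \frac{e_{\rm cl}}{e_w}\,\frac{(e_w/e_{\rm break})^{2\alpha}}{(e_{\rm cl}/e_{\rm break})} = \left(\frac{e_{\rm break}}{e_w}\right)^{1-2\alpha}.$$

Setting α = 1/2 gives ϵ = 1 (the non-radiative limit), while α → 0 gives ϵ → e_break/e_w ≪ 1.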
For example, consider the arguments that lead to the expression for the temperature of the mixing layer, T_mix ∼ √(T_cl T_w) (Begelman & Fabian 1990; Gronke & Oh 2018). This relation derives from the average of the cold- and hot-phase temperatures, weighted by the mass flux from each phase into the mixing layer. The derivation assumes that each phase's mass flux scales with the respective v_turb value. Because the derivation involves arguments equivalent to assuming ϵ = 1, it overestimates the hot phase's v_turb, and consequently its mass flux, relative to the values for the cold phase. Thus, √(T_cl T_w) overestimates T_mix, and the size of the discrepancy is inversely correlated with ϵ. Because t_cool is commonly monotonic between t_cool,min and t_cool(√(T_cl T_w)) (e.g., see figure 14 of Abruzzo et al. 2022), typical calculations overestimate t_cool,mix by an amount also negatively correlated with ϵ.
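To make the role of ϵ explicit, here is a sketch of the standard flux-weighted estimate, assuming mass fluxes j_i ∝ ρ_i v_turb,i and pressure balance (ρ ∝ 1/T); taking v_turb ∝ c_s ∝ √T, i.e., ϵ = 1, recovers the canonical result:

$$T_{\rm mix} = \frac{j_{\rm cl} T_{\rm cl} + j_w T_w}{j_{\rm cl} + j_w}, \qquad \frac{j_{\rm cl}}{j_w} = \frac{\rho_{\rm cl}\, v_{\rm turb,cl}}{\rho_w\, v_{\rm turb,w}} = \frac{T_w}{T_{\rm cl}} \sqrt{\frac{T_{\rm cl}}{T_w}} = \sqrt{\frac{T_w}{T_{\rm cl}}} \;\Longrightarrow\; T_{\rm mix} = \sqrt{T_{\rm cl} T_w}.$$

With the measured ϵ < 1, the true hot-phase v_turb,w (and hence j_w) is smaller by a factor of √ϵ, which pulls T_mix below √(T_cl T_w), consistent with the discussion above.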
In another case, Fielding et al. (2020) explicitly assume that ϵ = 1. The only practical implication is that their quoted measurement of f_turb = v_turb/v_rel is too large by a factor of √ϵ. Thus, f_turb might have a weak dependence on the shape of the cooling curve. In their analysis of clouds in a turbulent medium, Gronke et al. (2022) also assume ϵ = 1, but this may be valid since they consider externally driven turbulence.
Observable Predictions
It may be possible to observe the broken power-law phase dependence of v_turb in real-world systems. For example, previous studies have already placed constraints on temperature and nonthermal motion in the circumgalactic medium of other galaxies by measuring the widths of absorption lines for elements with different atomic masses (e.g., Rudie et al. 2019; Qu et al. 2022). Similar measurements may also be possible for high-velocity clouds, for which there is an abundance of absorption (e.g., Fox et al. 2004) and emission-line data (e.g., Tufte et al. 1998; Hill et al. 2009). One could also imagine using 21 cm emission or Mg ii absorption to extend such an analysis to probe the turbulent properties down to lower temperatures, where gas is atomic (e.g., Marchal et al. 2021).

It may also be possible to perform a similar exercise for gas in multiphase galactic outflows (e.g., Strickland & Heckman 2009; Reichardt Chu et al. 2022).

Additionally, one can perform more straightforward comparisons against observational measurements of turbulence in ∼10⁴ K gas. However, given the simplifying assumptions in this work (described further in § 5.7) and the fact that the drivers of turbulence may vary between different systems, such comparisons must be interpreted with great caution. Nevertheless, we find it encouraging that there is evidence that the Perseus molecular cloud has a transonic turbulent Mach number (Burkhart et al. 2015), just as we see in a fair number of our simulations. We also find it encouraging that studies of CGM clouds (e.g., Rudie et al. 2019; Qu et al. 2022) recover nonthermal broadening measurements within a factor of a few of 10 km s⁻¹, which nicely matches the turbulent velocities in our simulations. We leave further comparisons to future work.
What drives mixing?
We now return to one of the motivating questions: the origin of turbulence in the flow. From the results in this paper, the short answer appears to be that both shear and cooling drive the turbulence responsible for mixing. As we conclude in § 4.1.1, shear is the primary driver of turbulence at early times. After the cloud becomes partially entrained, v_turb falls off before stabilizing at a lower value. The long-term support of a non-zero v_turb value, as v_rel goes to zero, suggests that some form of "cooling-induced mixing" mechanism takes over. To put this another way, the primary source of turbulent kinetic energy changes with time. At early times, turbulent kinetic energy primarily comes from the large relative shear velocities between fluid elements. At late times, it instead comes from the radial kinetic energy of inflowing material.

Possible origins for the late-time turbulence include rapid cooling-driven pulsations in the cloud (Gronke & Oh 2020a), or simply the net radial inflow driven by the initial shear-driven turbulence. This latter explanation is supported by the correlation of v_turb's late-time magnitude with v_inflow, which itself correlates with a run's cooling efficiency. We plan to provide a detailed analysis of the temporal evolution of v_turb and its dependence on v_rel in a follow-up work.

A few other features are consistent with this conclusion. First, the rapid growth of surface area when shear primarily drives mixing, and its subsequent stabilization at a roughly constant value when mixing is primarily driven by pulsations or radial inflow, are consistent. Second, the minimal variation in the driving scale as the cloud is elongated is also consistent: at early times the driving scale is linked with the length of the wind-aligned axis of the cloud, of order R_cl, and because the cloud's transverse extent doesn't change much with time, the typical radial separation between opposite inflow 'fronts' of the cloud should still be of order R_cl at late times. Finally, the saturation of the inflow velocity after cooling-driven mixing has fully developed fits into this picture, since the shear-driven contribution will have become subdominant.
Gronke & Oh (2020a) noted that the anti-correlation between the cold-cloud mass growth rate and v_rel might suggest that shear-driven turbulence from the KH instability does not fuel mass growth, and is instead a competing destructive process. However, our most efficiently cooling M_w = 1.5 runs with χ = 100, 300, 1000 have significant v_rel when they start monotonically growing. In other words, mass growth at early times in these runs should primarily arise from shear-driven turbulence. With that said, mass growth is still negatively correlated with v_rel, since the surface area is still increasing.
The evolution of the v_turb phase dependence is also consistent with this picture. When shear primarily drives turbulence at early times, the turbulent kinetic energy is roughly constant with phase (as in non-radiative simulations, where shear is the only turbulent driver). In contrast, when cooling drives turbulence, it does so primarily in regions with short cooling times, which explains why the turbulence in the hot phase drops off.
What is the mixing timescale?
The canonical estimates for the characteristic mixing timescale are t_cc and t_shear. We find that the turbulent velocity scales as R_cl^β c_s,hot^{−β} t_cool,min^{−β}, where β is 0.25 at early times and 0.5 at late times. Notably, it has no dependence on M_w for most of the cloud's evolution. Therefore, the characteristic mixing time has no v_rel dependence.
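Written out explicitly, the implied characteristic mixing time for cold gas is, as a sketch (taking t_mix ∼ R_cl/v_turb with v_turb from Eq. 4, evaluated with the cold-phase sound speed),

$$t_{\rm mix} \sim \frac{R_{\rm cl}}{v_{\rm turb}} \propto \frac{R_{\rm cl}^{\,1-\beta}\,\left(c_{s,\rm hot}\, t_{\rm cool,min}\right)^{\beta}}{c_{s,\rm cold}}, \qquad \beta \simeq 0.25 \;(\text{early}),\;\; 0.5 \;(\text{late}),$$

which, unlike t_cc and t_shear, carries no explicit M_w (and hence v_rel) dependence.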
With that said, the initial value of M_w does affect the temporal evolution of v_turb. Fig. 10 also provides some indication that the magnitude of v_turb may have some dependence on M_w at very early times. Comparing panel g to h (as well as f to g) reveals that the peak value of v_turb, when v_rel/v_w > 0.8, is larger in the higher-M_w run by more than the factor of √2 expected from Eq. 4 due to differences in R_cl.
Survival Criterion
There has been great interest in the literature in the minimum radius for cloud survival (e.g., Gronke & Oh 2018; Li et al. 2020; Sparre et al. 2020; Kanjilal et al. 2021; Abruzzo et al. 2022; Farber & Gronke 2022). We will provide firmer conclusions about this topic in an upcoming work (Abruzzo et al., in prep.). However, we do note that our results are most consistent with the predictions of Li et al. (2020), with the corrections described by Sparre et al. (2020) for supersonic winds.
Convergence
What does it mean to resolve the cloud-wind interaction? The obvious ideal is to achieve point-wise convergence, but this is generally prohibitively computationally expensive except in rare cases (e.g., Lecoanet et al. 2016). Short of this ultimate goal, there are lesser gradations of convergence that depend on the question at hand. The easiest quantity in which to achieve convergence is the net mass growth of the cold phase. We show in Fig. 15c that the mass growth is fairly well converged for resolutions of R_cl/∆x ≳ 8. This likely corresponds to some minimum threshold for resolving any turbulent mixing, and is consistent with previous findings (e.g., Gronke & Oh 2020a). The hardest quantity in which to achieve convergence is the 2D p−e phase distribution, which requires resolving the minimum cooling length (ℓ_cool; also known as the shattering length). Therefore, if one is interested in simply capturing the total amount of mass in the cold phase, then the resolution requirements are much less onerous than if one is interested in capturing the detailed phase structure (or cloud morphology). The details of the phase structure can be extremely important for comparisons to observations, since the pressure decrement that develops in under-resolved simulations occurs in precisely the region traced by commonly observed ions, such as Mg ii (e.g., Nelson et al. 2021; Burchett et al. 2021).
Here we propose an intermediate convergence criterion for the large-scale morphology of cold structures, which requires resolving the turbulent sonic length ℓ_turb,sonic by several cells. This is in general less stringent than the requirement to resolve the minimum cooling length. At face value, the difficulty of resolving ℓ_turb,sonic in galaxy-scale simulations suggests that the detailed morphological properties of cool (∼10⁴ K) gas involved in TRML entrainment, within galactic outflows and the circumgalactic medium, are unlikely to be correct. However, the implications of accurately capturing the morphology may be more complex in more realistic systems because of the way cloud shape and size couple to other physical processes absent from our simulations. For example, in systems in which the hot phase is itself turbulent, such as in galactic wind simulations (e.g., Schneider et al. 2020), under-resolving ℓ_turb,sonic may lead to artificially shattered clouds, which will in turn be more likely to be destroyed than if they were able to remain coherent. Therefore, having ∆x < ℓ_turb,sonic may prove to be essential for determining the overall phase structure and evolution of the turbulent multiphase flows that are ubiquitous in and around galaxies.
This discussion about the large-scale morphological convergence of cool gas in larger-scale models deserves elaboration on two finer points. First, it assumes the applicability of our results about the emergent turbulent properties of cloud-wind interactions; we discuss how the equilibrium T_cl and the shape of t_cool(T) affect this in the next subsection (§ 5.7). Second, we are extrapolating from simulations of isolated clouds, whereas larger-scale models often include multiple clouds in an outflow (e.g., Cooper et al. 2008; Kim & Ostriker 2018; Schneider et al. 2020). This is not an issue when the inter-cloud spacing is large enough for clouds to be treated individually, albeit with a hot phase that is already turbulent from upstream interactions. However, more work is required to make predictions when the inter-cloud separation is small (such work might use a multi-cloud setup akin to Alūzas et al. 2012; Banda-Barragán et al. 2020).
Comparison to prior work
At early times, when the KH instability is the primary driver of mixing, one might expect similarities between our runs and the TRML simulations of Fielding et al. (2020) and Tan et al. (2021). Unfortunately, it's difficult to draw direct comparisons since those works highlight properties after reaching a quasi-steady state. In contrast, our runs never reach such a state, since v_rel evolves with time. More meaningful comparisons could be made if the cloud were in a potential tuned to maintain v_rel at late times. Additionally, Tan et al. (2021) point out that we would likely expect the v_turb scaling to depend on geometry. Nevertheless, we find the presence of inflowing gas at early times to be encouraging (especially when juxtaposed with our adiabatic runs, which don't have net inflow). The fact that v_turb and the inflow velocity show signs of scaling with cooling efficiency is also encouraging.
Likewise, we expect similarities with Gronke & Oh (2020a) at late times, when turbulence is driven by "cooling-induced mixing". Although we broadly see similar qualitative evolution in the surface area, detailed comparisons of other properties are challenging. While both works measured v_inflow, we expect differences in our methodologies to complicate comparisons of these quantities at late times. Gronke & Oh (2020a) used v_inflow ∼ ṁ_cold/(A ρ_w), while we directly measure the velocity component normal to the e_mix isosurface (the scaling doesn't change much if we use the e_break isosurface). In other words, their measurements are weighted by mass flux and ours are weighted by surface area. We expect that this difference in methodology explains why inflow starts much earlier in our runs; early-time inflow that doesn't correspond to mass growth won't be picked up by their measurements. Because our work focused on measuring v_turb, rather than v_inflow, we defer a detailed scaling of v_inflow to follow-up work.

Gronke & Oh (2020a) found the convergence of the cold-phase mass evolution in a M_w = 6 simulation run at R_cl/∆x = 8, 32 to be quite poor. In contrast, we found the cold-phase mass evolution in the R_cl/∆x = 8, 16 runs of our M_w = 6 simulation to be fairly well converged. While it's possible that we could see differences at higher resolution, it's plausible this difference arises from differences in the cloud temperature. The clouds in Gronke & Oh (2020a) had a temperature of T_cl = 4 × 10⁴ K. This translates to values of t_cool,min and e_cl that are factors of ∼5 and ∼11.7 larger. Consequently, we expect ℓ_turb,sonic/R_cl,crit to be 7.3 times smaller in their simulations, which means they could be under-resolving ℓ_turb,sonic according to our new resolution criterion.
More generally, one might ask: how does the choice of T_cl affect our results, given that the equilibrium T_cl varies greatly among cloud-crushing and galactic outflow studies? For context, this work focuses on runs with T_cl ∼ 5 × 10³ K, while other works commonly include simulations with T_cl ∼ 10⁴ K (e.g., Li et al. 2020; Kanjilal et al. 2021; Abruzzo et al. 2022; Schneider et al. 2020) or T_cl ∼ 4 × 10⁴ K (e.g., Gronke & Oh 2018, 2020a; Abruzzo et al. 2022). We expect the applicability of our results to be more strongly tied to the shape of t_cool(T) over T_cl ≲ T ≲ T_w than to the precise value of T_cl. Fig. 7 suggests our results are minimally affected when t_cool(T_cl) exceeds the minimum value of t_cool computed over the temperature range. However, the applicability is less clear when t_cool(T) is minimized at T_cl (i.e., if T_cl ≳ 2 × 10⁴ K for p/k_B = 10³ K cm⁻³, Z_⊙, z = 0) or at a value of T exceeding √(T_cl T_w). Finally, we note that some works also consider conditions with T_cl < 500 K (e.g., Banda-Barragán et al. 2021; Farber & Gronke 2022). Further investigation is required to understand the applicability of our results in this context, but our discussion above about the shape of t_cool(T) is relevant.
We next draw comparisons with works that studied multiphase gas in turbulent-box simulations. For example, Gronke et al. (2022) initialized a pressure-confined cool (T_cl = 4 × 10⁴ K) cloud in a hot ambient background and studied how the system evolved while driving turbulence in the hot phase. Mohapatra et al. (2022) studied the turbulent properties of multiphase gas (comparable to ICM conditions) that emerged from driven turbulence and radiative cooling in a box of initially hot (T = 4 × 10⁶ K) gas. These studies respectively observed that the first- and second-order velocity structure functions (⟨|δv|⟩(ℓ) and ⟨(δv)²⟩(ℓ)) have lower amplitudes in the cold-phase gas than in the other phases, which is in good qualitative agreement with our results. We note that the sub-Kolmogorov scaling of our ⟨(δv)²⟩(ℓ) measurements is more consistent with the hydrodynamic volume-weighted heating run from Mohapatra et al. (2022) than with the mass-weighted run. However, as mentioned in § 3.3, the driving scale is not sufficiently resolved to remove the bottleneck effect's influence on the slope of ⟨(δv)²⟩(ℓ). To be concrete, Mohapatra et al. (2022) illustrated that the driving scale must be resolved by more than 192 cells, in a non-radiative turbulence simulation, to remove the bottleneck effect's influence on the slope. For that reason, we refrain from making detailed comparisons.
Caveats
This work made a number of simplifying assumptions and omitted a variety of potentially relevant physical effects that could modify our results. Future work should consider the following.

Other sources of turbulence: We only analyzed the turbulence that emerged from two phases that initially had coherent velocities without turbulence. In reality, external processes, like supernovae, can drive turbulence in the wind; this likely alters the interaction's evolution and makes survival more difficult (e.g., Schneider et al. 2020). Additionally, differences in the initial cloud structure, due to turbulent driving before encountering a wind, can affect the rate at which mixing destroys clouds (e.g., Schneider & Robertson 2017; Banda-Barragán et al. 2019).

Thermal Conduction: The omission of thermal conduction from our simulations will certainly affect the morphology of the cold phase (e.g., Brüggen & Scannapieco 2016; Li et al. 2020). However, we take solace in the fact that mass transfer through the TRML will be minimally affected in simulations where cooling is fast relative to the mixing time (Tan et al. 2021).

Magnetic fields: It is well known that magnetic fields can extend the lifetimes of clouds (e.g., Dursi & Pfrommer 2008; McCourt et al. 2015). Banda-Barragán et al. (2018) showed that magnetic fields have a stabilizing effect on initially turbulent clouds embedded in a laminar wind. While realistic magnetic field strengths don't seem to strongly affect the criteria for survival through rapid cooling, they do have a number of other effects that will almost certainly affect the system's turbulent properties (Gronke & Oh 2020a). Among others, such effects include non-thermal support, which could alter cooling properties, suppression of the KH instability, and alteration of cloud morphology, leading to higher surface areas (Gronke & Oh 2020a).

Cosmic Rays: Cosmic rays were also omitted from our simulations. They are a known source of non-thermal pressure support, which may alter cooling properties (Butsky et al. 2020). They can also provide another mechanism for accelerating clouds (Wiener et al. 2019; Huang et al. 2022).

Gravity: Our simulations neglected the effects of gravity because we generally expect our χ ≤ 10³ simulations to be Jeans stable. However, one could imagine that external gravitational fields could sustain an elevated shear velocity (Tan et al. 2023) and consequently influence the system's turbulent properties.

More realistic cooling: All of our simulations assume simplified equilibrium cooling and neglect self-shielding. However, given that all of our simulations in which the cloud survives have N_HI > 10^{17.2} cm⁻², self-shielding may be relevant. Including more realistic cooling could modify our results (Farber & Gronke 2022), but we leave that for future work.

Viscosity: Our simulations do not include explicit viscosity (Li et al. 2020; Jennings & Li 2021). This may affect turbulent properties near the scale of turbulent dissipation.
CONCLUSION
We have investigated the multiphase turbulent properties that emerge from interactions between cool clouds and hot supersonic flows (or winds). The relative efficiency of turbulent mixing and radiative cooling in mixing layers governs the outcome of such interactions. To address the difficulties associated with characterizing multiphase turbulence, our analysis employed three distinct methods to measure v_turb. We found the following primary results for simulations in which cooling is sufficient for the cloud to survive the interaction and become entrained:

• Radiative cooling dramatically changes the v_turb temperature scaling. In non-radiative simulations, v_turb has a scaling consistent with the sound speed's temperature scaling: v_turb ∝ c_s ∝ √T. In runs with sufficient cooling for entrainment, this scaling only applies to gas colder than T_break, the temperature where t_cool is minimized. Above T_break, the power-law slope starts near 0.5 and flattens to ∼0. Consequently, cold gas generally has a larger turbulent Mach number and turbulent kinetic energy than hot gas.
• v_turb has two stages of temporal evolution. The shear velocity initially drives rapid growth of v_turb at early times in the "pre-entrained" stage. As the cloud becomes partially entrained, v_turb drops off before stabilizing at a lower value, one that is of comparable magnitude to the average inflow velocity.
• The driving scale is of order the cloud radius throughout the cloud's entire evolution.
• The grid scale must be smaller than the minimum cooling length, ℓ_cool ∼ min(c_s t_cool), to resolve the 2D phase structure. The 1D temperature phase structure is remarkably well converged at lower resolutions.
• Our simulations suggest the existence of a minimum length scale for resolving turbulence, ℓ_turb,sonic, for clouds with an equilibrium temperature of 5 × 10³ ≲ (T_cl/K) ≲ 2 × 10⁴. Under-resolving this scale seems to artificially amplify the violence of shattering. When this scale is resolved, the entrained cool phase is composed of larger clouds.
We thank M. Gronke for useful discussions and for sharing some sample code to compute the velocity structure function. We are grateful to James Bordner, Mike Norman, and the other enzo-e developers. GLB acknowledges support from the NSF (AST-2108470, XSEDE), a NASA TCAN award, and the Simons Foundation. DBF is supported by the Simons Foundation through the Flatiron Institute.

Software: numpy (Harris et al. 2020), matplotlib (Hunter 2007), yt (Turk et al. 2011), scipy (Virtanen et al. 2020)

APPENDIX A

Our approaches for characterizing v_turb all build on the idea that a velocity field can be decomposed into a laminar part and a turbulent part. Consider an ideal turbulent flow in which the laminar part of the velocity field is uniform. In this scenario, the magnitude of the laminar part sets the average of the velocity field and the turbulent part sets the dispersion in the velocity values. For this reason, our methods for measuring a spatially averaged v_turb (in a given gas phase) all measure this dispersion in one way or another.
Unfortunately, the flows considered in this work are more complex: the laminar portion of the flow has spatial gradients. Figure 16a illustrates these gradients for several velocity components measured on the log_χ e/e_cl = 1/6 isosurface of our χ = 1000, ξ_sh = 27.8 simulation at 0.5 t_cc. In more detail, the panel shows the conditional distributions of multiple velocity components as a function of cos θ_spherical, where θ_spherical is the polar angle measured from the center of the inflow boundary.

Unless they are removed, such gradients can dominate or inflate the dispersion of the global velocity distribution, which can bias our measurements of v_turb. Fig. 16b suggests that this is less of an issue after early times (once v_turb has had time to grow) because the dispersion from turbulence is larger relative to the laminar variations. However, it's clear that these gradients remain problematic in the wind-aligned velocity component. Fig. 11b shows that large variations in the wind-aligned velocity persist to later times, even as the cloud is accelerated.
We expect our v_turb measurements from the geometric approach to be unaffected by this issue because it estimates v_turb from the dispersion in v_ϕ−like, which maintains a mean of zero at all times. However, the laminar variations will bias the measurements using our other approaches at early times. While one might expect our filtering measurements to be resilient to this effect, because the method uses a local estimate of the laminar flow, at least some bias will remain given that these early-time gradients are most naturally described in spherical components. Throughout this work, we elect to focus on turbulence in the velocity components orthogonal to the wind direction, in our filtering and ⟨(δv)²⟩(ℓ) measurements, in order to avoid biases from the wind-aligned velocity component.
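As a minimal sketch of such a filtering measurement on a uniform grid (the kernel width, function name, and array layout are our assumptions, not the exact filter used here): subtract a locally smoothed, low-pass estimate of the laminar flow from each transverse velocity component and take the rms of the residual in a phase bin.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filtered_v_turb(vy, vz, e, e_lo, e_hi, sigma_cells=8.0):
    """High-pass filter the transverse velocity components (vy, vz:
    3D arrays) and return the rms turbulent velocity of cells whose
    specific internal energy satisfies e_lo <= e < e_hi."""
    dvy = vy - gaussian_filter(vy, sigma_cells)  # remove laminar part
    dvz = vz - gaussian_filter(vz, sigma_cells)
    mask = (e >= e_lo) & (e < e_hi)
    return np.sqrt(np.mean(dvy[mask] ** 2 + dvz[mask] ** 2))
```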
As an aside, the resilience of our geometric approach to these biases is related to the definition of the velocity components. Consider û_r−like, which we define as the unit vector parallel to the specific internal energy gradient (i.e., û_r−like = ∇e/||∇e||). Because this vector is always normal to the specific internal energy isosurfaces, we can define
Figure 1. Slice of a χ = 1000, ξ_sh = 27.8, M_w = 1.5 simulation at 4.5 t_cc. This simulation has a resolution of 64 cells per cloud radius and the cloud is eventually entrained in the wind. The left two panels show the density and specific internal energy, which is T/µ scaled by physical constants. The right two panels show the high-pass filtered components of the velocity field transverse to v_wind, and the center panel shows the combined magnitude of these values. The insets highlight how the turbulent velocity has a clear temperature dependence.
Figure 2. Shows the phase dependence of v_turb, measured via filtering, for the R_cl = 64∆x run of our χ = 1000, ξ_sh = 27.8, M_w = 1.5 simulation at 2.5 t_cc. The top panel includes contributions from all three velocity components. The bottom panel only includes contributions from the components transverse to v_wind; this is consistent with how v_turb from filtering is measured throughout the remainder of this work. The solid orange line denotes the median while the dashed orange lines bound values between the 15th and 85th percentiles. The dotted-dashed line shows v_turb magnitudes that are equal to the sound speed. The steep drop-off in v_turb near T_w is an artifact of the fact that the wind is initially laminar.
Figure 3. Illustrates iso-temperature surfaces and derived v_turb measurements for the R_cl/∆x = 64 run of our χ = 1000, ξ_sh = 27.8, M_w = 1.5 simulation at 2.5 t_cc. Panel a shows a cut-away of five nested isosurfaces measured at log_χ e/e_cl = 1/6, 1/3, 1/2, 2/3, 5/6 (for this system, T = 1.3 × 10⁴ K, 3.3 × 10⁴ K, 10⁵ K, 3.3 × 10⁵ K, 10⁶ K). The arrow illustrates the φ direction measured in the plane transverse to v_w. Panels b and c respectively show the normalized area-weighted distributions of the v_normal and v_ϕ−like velocity components measured on the isosurfaces pictured in a. Panel d shows the standard deviation of the distributions from panel c (colored diamonds), as well as data derived from other isosurfaces (gray circles), plotted as a function of log_χ e/e_cl.
Figure 7. Like the top row of Fig. 6 except that the pictured simulations primarily vary the cloud temperature. Each simulation has χ = 100 and M_w = 1.5. We expect that at higher resolution the power-law slope below e_break in the purple curve will be closer to 0.5 (i.e., the slope of the dashed black line).
Figure 8. Like the top row of Fig. 6 except that the pictured simulations primarily vary in χ. We have made two compromises in presenting this data. First, we fix β to 0.25 for all panels. This is done as a simplification because β changes on a timescale related to χ. Second, the rightmost v_turb panel compares simulations at a fixed value of v_rel/v_w rather than at a fixed time. The last panel typically compares the simulations at a point in evolution when v_turb stabilizes (see § 4.1.3). However, that time seems to come much later in our χ = 10⁴ simulation, after the simulation terminates. While we include the χ = 10⁴ run for completeness, strong resolution dependence (see Table 1) and the atypical shape of the cool-phase mass evolution may indicate that it is not well converged. As noted in § 2, some material that started in the cloud leaks out of the domain at 6.5 t_cc, which coincides with the large drop-off in cool-phase mass.
Figure 9. Like the top row of Fig. 6 except that the pictured simulations primarily vary M_w. The solid (dotted) lines show data from simulations with R_cl/M_w = 37.6 pc (376 pc) and ξ_sh = 5.73 (57.3). We note that c_s,hot t_cool,min is 6.56 pc for all simulations in this plot. As we will show in panels e-h of Fig. 10, v_turb evolves more slowly in higher-M_w runs. Consequently, the "late times" panel shows data from the M_w = 0.75, 1.5 runs at 20 t_cc, and data from the M_w = 3, 6 runs at 30 t_cc (we did not run the M_w = 6 simulation to late enough times or with a long enough domain for an optimal late-time comparison).
Figure 11. Points of a given color show v_turb and (v_w − v_isosurface)/v_w measurements for different sections of the e_break isosurface from the R_cl/∆x = 64 run of our χ = 10^3, ξ_sh = 27.8 simulation. The points' colors indicate the simulation time that each measurement is associated with. The isosurfaces are split into bins based on each facet's position along v_wind. Each bin has a width of R_cl; there are more bins when the cloud is more elongated. The averages and standard deviations are all weighted by the area of each facet. While it is not shown, we have evidence indicating that the data's slope may change when the principal dimensionless numbers are varied.
established that gas in χ ≤ 100 simulations is roughly distributed along the
Figure 13. Illustrates how the evolution of various quantities in our χ = 1000, ξ_sh = 27.8 simulation is affected by resolution. The top two rows show the evolution of the cold-phase mass and of the relative velocity between the cold and hot phases. Subsequent rows show the evolution of quantities computed from the e_break isosurface, including surface area, average inflow velocity, and the turbulent velocity.
Figure 15. Illustrates resolution effects on the χ = 1000, ξ_sh = 27.8 simulation at 7.5 t_cc. The top row shows ⟨|δv|⟩(ℓ) measurements for gas with 1/12 ≤ log_χ(e/e_cl) < 3/12 (panel a), the projected 1D phase distribution (panel b), and the bulk mass evolution for cold gas with ρ > ρ_mix (panel c). The dotted black line in panel a shows ⟨|δv|⟩(ℓ) ∝ ℓ^(1/3), the scaling expected for Kolmogorov turbulence. The brown shaded region in panel b denotes the gas phases considered in ⟨|δv|⟩(ℓ), while the vertical dotted line indicates the location of t_cool,min. While the top row shows measurements from all resolutions, subsequent rows only show data for individual simulations. Panels d, f, h, j, and l show the distribution of velocity-difference magnitudes at the grid scale for gas with 1/12 ≤ log_χ(e/e_cl) < 3/12 (the average of this distribution is ⟨|δv|⟩(ℓ = ∆x)). The region enclosed by the grey dashed lines in these panels and panel a denotes the range of c_s values for the selected phase bin. Panels e, g, i, k, and m show the density projection for each run.
Figure 16. The probability density functions of several velocity components (in the cloud's rest frame), as measured on the log_χ(e/e_cl) = 1/6 isosurface for our χ = 1000, ξ_sh = 27.8, R_cl/∆x = 64 simulation at multiple times. The contours bound the region containing the most frequently occurring 68.4% of values at a given cos(θ_spherical). The fluctuations in a distribution's mode arise from the mostly spherical laminar flow at early times. The dotted lines show the mean values of v_r-like and v_φ-like as functions of cos(θ_spherical). The vertical extent of a contour arises from turbulence (and is somewhat inflated by asymmetries in the flow). At early times, estimating v_turb from the variance in any velocity component other than v_φ-like, without explicitly accounting for these laminar variations, will yield over-estimates.
Table 1. Table of simulations. Columns: χ, M_w, R_cl (pc), t_cc/t_cool,mix, t_shear/t_cool,min.

e_break = e_min,cool, which coincides with the minimum of t_cool. For now, we're just interested in α; the following subsections will discuss v_turb,break.
Thread Lifting of the Jawline: A Pilot Study for Quantitative Evaluation
Introduction: The facial aging process produces changes that are characteristic of the superficial and deep fat framework and skin layers. Subdermal suspension with threads enables the sagging tissues to be lifted by means of a minimally invasive, closed procedure without surgical dissection. This observational study has been carried out on the basis of standardized tridimensional photographic analysis and measurement, aimed at providing an objective, repeatable, and reliable evaluation of the soft tissue suspension technique. Materials and Methods: Eight participants presenting with mild to moderate ptosis of the jawline tissues were enrolled in this pilot study. Patient photographs were taken before (t0), immediately after thread implantation (t1), and at the following visit (t2). Each image captured before thread insertion was registered by the software, and surface linear lengths between the mentioned points were calculated. Results: The results showed an overall average improvement in the “tragus-to-marionette distance” (C-A) and the “tragus-to-jowl distance” after a mean follow-up time of 8.16 months (t0-t2). All analyzed parameters improved significantly (P < 0.05) at t1 and at t2 with respect to t0. Conclusions: This pilot study suggests that facial tissue suspension by means of poly-lactic/poly-caprolactone threads is safe and effective in treating skin flaws related to mild-to-moderate ptosis of the jawline for up to 8 months.
Introduction
The facial aging process produces changes that are characteristic of the superficial and deep fat framework and skin layers.
The breakdown of collagen and elastic fibers takes place, causing a noticeable weakening in prominent facial regions such as the cheeks, mandibular line, and neck; the dermatochalasis of facial and neck soft tissues accounts for the distinctive signs of facial aging. [1] The introduction of a subdermal suspension with threads enables the sagging tissues to be lifted by means of a minimally invasive, closed procedure. [2] Its effectiveness is related to the focal nature of soft tissue ptosis, a procedure by which facial layers are mobilized without surgical dissection.
The limited morbidity and short downtime of these nonsurgical procedures resulted in practitioners and patients seeking less invasive modalities to achieve face tissue suspension, which guarantees acceptable longevity.
To the best of the authors' knowledge, published reports about the efficacy and longevity of thread lifting are merely based on nonstandardized photographic assessment and self-reported questionnaires addressed to patients, even during long follow-up periods of large patient groups and after a statistical evaluation of the results. [3][4][5] The resulting treatment indications and patient selection recommendations are, therefore, quite broad. [6,7] This observational study has been carried out on the basis of the routine assumption that cosmetic procedures are very subjective; objective measurement, beyond the mere judgment of patients and clinicians, is thus strongly recommended to bridge this gap in facial aging treatment.
The authors of this article aim at determining, by standardized tridimensional photographic analysis and measurement, the outcome of thread lifting of the jawline, as this is the first study providing an objective, repeatable, and reliable evaluation of the soft tissue suspension technique.
Patient population
Between January 2017 and July 2018, eight participants were enrolled in this study: six women and two men. Men were aged between 49 and 58 years (mean age 53.5 years), whereas women were aged between 48 and 68 years (mean age 57.5 years).
Participants were asked to maintain the same skin care regimen throughout the study and four weeks before baseline, as well as to adhere to study procedures and attend all sessions within the timeline of the study.
The study protocol followed the ethical guidelines of the Declaration of Helsinki, and the informed consent form (ICF) for the treatment was acquired from all patients.
Exclusion criteria included any of the following treatments in the one year before baseline:
• facial soft tissue filler,
• ultrasound technology and/or radiofrequency on the face or neck, and
• botulinum toxin A injections in the lower face or neck, also excluded for the year following thread insertion.
After proper disinfection with iodopovidone 10% in water, local anesthesia of the skin of the mandibular line was administered by subcutaneously injecting a solution of lidocaine 2% with epinephrine 1:100,000 through an 80-mm blunt-tip 23G cannula, from an insertion point opened by an 18G, 40-mm needle, running from the pretragal area down to the jowls. After waiting 15 minutes to obtain proper vasoconstriction of the superficial vessels of the jawline, a thread fixed to a pair of 10-cm needles was inserted in the pre-tragus at the Articularis (Ar) point, named point C in the current study; the first needle was then pushed horizontally into the subcutaneous fat of the parotid-masseteric region toward the origin of the marionette line.
The exit point of this needle could be outlined below the Cheilon (Ch), named point A in the current study; it is not recommended to insert the thread precisely at this landmark, as the underlying modiolus is a notably fixed structure.
The second needle was, therefore, pushed downward along the inferior border of the mandible in the same subcutaneous layer, toward the area of the mandibular ligament, topographically identified as the marionette line.
The ideal exit point lies at the lowest border of the jowl, named point B in the current study. In the current literature, this point does not correspond to a fixed anatomical landmark describing the jowl, as jowl formation is the result of an aging process and facial tissue ptosis and is characterized by considerable interobserver variability.
The following reference points were, therefore, taken as described in the figure. Once the thread was inserted in this V-shaped pattern, the tissues were gently spread along its length and fixed by the monodirectional barbs in the planned position, to reshape the jawline, properly contour the jowls, and restore fullness at the gonial angle.
Image capture and analysis
Patient photographs were taken by using Vectra H1® (Canfield Scientific, Inc, Parsippany, New Jersey) before (t0), immediately after threads implantation (t1), and at the following visit (t2). All patients consented to the reproduction of recognizable photographs.
The system consisted of six cameras positioned in a triangulated configuration with respect to the subject, and each image was composed of high-resolution tridimensional surface geometry.
Each image captured before thread insertion was registered by the software, aligning the vertical vector in the midline of the face and the horizontal one with respect to the Frankfurt line.
The postoperative image was registered to the preoperative image by anthropometric surface landmarks that were least likely to move during thread insertion: the bilateral lateral canthus (point Ex, Exocanthion), the bilateral nostril (point Al, Alare), and the bilateral tragal notch (point Ar, Articularis).
These landmarks initialized the orientation of the superimposed images and registered them to each other. The surface linear lengths between the mentioned points C-A and C-B were calculated by the "Vectra Analysis Module" dedicated software [see Table 1] on the patient images at t0 (pretreatment), t1 (immediate post-treatment), and t2 at the next examination, and the changes were recorded, respectively, over the periods ∂t0-t1 and ∂t0-t2.
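For orientation only, the simplest stand-in for such a measurement is the straight-line (chord) distance between two registered landmarks; a true surface linear length follows the mesh and is therefore longer. The coordinates below are hypothetical, not patient data.

```python
import numpy as np

def chord_distance_mm(p, q):
    """Straight-line distance between two 3D landmarks (coordinates in mm).
    This is only a lower bound on the surface linear length reported by
    mesh-based tools such as the Vectra Analysis Module."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

# Hypothetical tragus (C) and marionette-line (A) coordinates, in mm
c = (82.1, -3.5, 60.2)
a = (31.4, -38.0, 72.9)
print(f"C-A chord length: {chord_distance_mm(c, a):.2f} mm")
```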
Statistical analysis
The outcome to be evaluated was linear change of the suspended tissue along the jawline in the postoperative image relative to the preoperative image.
A comparison was made between the immediate (t0-t1) and the late (t0-t2) postoperative period.
Statistical analysis was performed using GraphPad Prism© software. The D'Agostino-Pearson normality test was performed to verify the data distribution. The Friedman test for repeated measures was performed to compare results at different time points. The level of significance for statistical analysis was set at P < 0.05.
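The same two tests are available in open-source tools; a minimal SciPy sketch is shown below (scipy.stats.normaltest implements the D'Agostino-Pearson test, scipy.stats.friedmanchisquare the Friedman test). The length values are hypothetical placeholders, not the study's measurements, and normaltest will warn at such small sample sizes.

```python
from scipy import stats

# Hypothetical C-A surface lengths (mm) for 6 patients at t0, t1, t2
t0 = [58.2, 61.0, 55.4, 63.1, 59.8, 57.5]
t1 = [54.0, 56.6, 51.3, 58.7, 55.5, 53.4]
t2 = [55.1, 57.9, 52.5, 60.0, 56.4, 54.3]

# D'Agostino-Pearson normality check at each time point
for label, sample in (("t0", t0), ("t1", t1), ("t2", t2)):
    _, p = stats.normaltest(sample)   # warns for small n, still computes
    print(f"{label}: normality p = {p:.3f}")

# Friedman test for repeated measures across the three time points
_, p = stats.friedmanchisquare(t0, t1, t2)
print(f"Friedman p = {p:.4f} (significant if < 0.05)")
```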
Results
In total, six of the eight patients completed the study, with two dropping out during follow-up. Excluding the two who dropped out, the mean age of the examined population was 58.8 years.
Descriptive statistics are shown in [Tables 2-5]. All analyzed parameters improved significantly (P < 0.05) at t1 and at t2 with respect to t0.
All the mentioned surface linear lengths were measurements retrieved by the Vectra H1 (Canfield Scientific, Inc, Parsippany, New Jersey) 3D software on the patient images.
The results showed an overall average improvement of 4.24 mm in the "tragus-to-marionette distance" (C-A) immediately (t0-t1) after thread implantation, settling to a mean value of 3.03 mm at the end of the 8.16-month mean follow-up time (t0-t2).
Regarding the "tragus-to-jowl distance" (C-B), the mean immediate (t0-t1) improvement recorded was 5.54 mm, which was maintained at 4.18 mm after a mean follow-up time of 8.16 months (t0-t2), ranging from 3 to 12 months. No adverse event was reported either during thread insertion or later, except for a slightly painful sensation at the pre-tragus point, where the threads were inserted, which showed spontaneous resolution in all cases.
Discussion
Since the earliest reports of facial surgical rejuvenation by Miller and Kolle, more durable and less invasive means of rejuvenating the face have been sought. [8,9] Between the 1980s and the 1990s, approaches to skin laxity and facial tissue ptosis through direct excision reached the peak of invasiveness. After Mitz and Peyronie defined the superficial musculoaponeurotic system (SMAS), rejuvenation methods evolved from skin-only rhytidectomy to a range of soft tissue repositioning and SMAS lift adaptations. [10,11] The first author who mentioned the concept of barbed sutures in relation to aesthetic applications of tissue suspension was the Georgian author Marlen Sulamanidze in the late 1980s; since the late 1990s, thread lifting has developed as a minimally invasive technique relying on anti-ptosis threads and sutures. [1,12,13] The face has a layered structure made of skin, subcutaneous and deep fat, the SMAS, and voluntary muscles, which overlie a bony framework, and all of these evolve with age.
The effectiveness of thread lifting is related to the known focal nature of soft tissues ptosis, as it is widely accepted that with aging some areas of the face sag more than others. This outlines the differences against a youthful face, where the smooth transition between different facial regions provides for the harmony of areas of concavity and convexity within the subject's frame. [14,15] Neglect of the fundamental principles of the topographic anatomy of facial aging and lack of the knowledge of the vectors that must be applied to achieve optimum tissue elevation prevent an improvement in treatment outcomes. Instead, all procedures should consider that this is not just a matter of "pulling the sheet over an unmodified mattress." [16,17] In the aging face, the jawline loses projection and definition due to gravitational ptosis and also because of the action of the Platysma muscle, which acts as a major depressor in this region; these come along with jowl fat hypertrophy and loosening of the masseteric cutaneous ligaments. The definition between the face and the neck becomes obscured given that these changes lead to an irregular jawline contour, which is almost sinusoidal in its appearance. Gravity, therefore, plays a key role in lower face aging, and thread lifting techniques have been conceived to deal with it. These must be addressed to treat moderate cutaneous ptosis requiring a relatively modest degree of suspension. [18][19][20][21][22] Superficial tissue suspension for the aging face allows one to take advantage of the residual fullness of the subcutaneous fat to achieve jawline skin repositioning along a vector that is postero-superiorly oriented. The sliding movement of the skin and subcutaneous fat due to gravity is, therefore, inverted and the tissues spread along the jawline to gently rejuvenate it.
Indeed, in cases of severe tissue aging, traditional surgical lifting may be more suitable, as it allows one to actually reposition the full thickness of facial skin and deep volumes. [11,23] Further, the way the thread works histologically is not to be forgotten when dealing with longevity of results: a microscopic examination of implanted threads showed that they are covered with a solid fibrous membrane, which is especially pronounced around the barbs and confirms the stability and persistence of good clinical results. [18] In this prospective study, the authors quantitate the long-term linear changes (up to 8 months) along the mandibular border produced by the thread lifting technique described in this article. The reported outcomes have been further validated by statistical analysis.
Patients seem to prefer minimally invasive procedures and have been willing to accept a more modest degree of aesthetic improvement in return for decreased morbidity and more rapid healing; however, disregarding this limitation will certainly lead to an early relapse of ptosis and to poor outcomes. [14] Regardless of the learning curve point at which the practitioner currently stands, it is mandatory to state that from a technical point of view "thread face-lifting" should be seen as a temporary rejuvenating procedure until patient aging requires further approaches.
Although the morphologic changes that occur with aging have been extensively and objectively characterized with conventional imaging techniques, [24,25] few or no parameters exist nowadays that suggest how much to define the jawline with respect to desirable outcomes, and the assessment of facial aesthetic outcomes largely remains a subjective evaluation without an objective means of measurement.
The introduction of tridimensional stereophotogrammetry, as a method to make reliable measurements on photographs by using the coordinates captured simultaneously by two or more configured cameras from different angles, allows one to objectively compare the outcomes and their longevity.
The data calculated from a collection of points obtained along a three-axis coordinate system allow one to obtain, once elaborated by the dedicated software, reliable measurements along with tridimensional images of outcomes and their comparison with the facial aesthetics before treatment. [10,11] Cosmetic procedures are very subjective and, beyond the mere judgment of patients and clinicians, it is thus strongly recommended to bridge this gap when dealing with facial aging treatment. [26] The aim of this pilot study is to quantitatively investigate and validate the effectiveness of lifting sagging tissue for the correction of mild-to-moderate ptosis of the jawline in a small group of eight patients, by taking 3D images before and after thread insertion and by assessing and comparing, through a dedicated tool, linear changes over time.
Neither side effects nor complications were recorded: The presented operative protocol can, therefore, be considered as safe and reliable.
Our results showed that it is possible to achieve tissue repositioning, which may last up to 8 months, as per the recorded follow-up period.
This study, however, has at least some limitations that require consideration. The mean age of the study population is quite advanced (58.8 years, without considering the two dropouts); in daily practice, however, this rejuvenating technique is usually addressed to younger patients.
The outcomes have been assessed on the jawline in terms of a range of linear lengths; nevertheless, bidimensional measurements are a limited expression of the clinical outcomes. In this context, this has been chosen merely to help the reader correlate it with the jawline rejuvenation through its superficial reshaping via the sliding movement of the skin and subcutaneous fat. Further investigations on facial tissue volume assessment are, therefore, desirable.
As this is a pilot study, the limited number of treated subjects urges the authors to consider the presented outcomes as preliminary, although statistically validated, since it was not possible to carry out an age-related analysis.
Further, the follow-up duration is somewhat limited with respect to PLLA-PCA thread longevity, which is claimed to be between 12 and 18 months; the authors agree with the latter claim and consider that the mentioned follow-up variability (3 to 12 months) is coherent with a pilot study and as reliable as the presented statistical evaluation.
Indeed, investigations with a longer follow-up are strongly advised.
To the best of our knowledge, this is the first objective, standardized photographic analysis of facial suspension through barbed, resorbable threads.
This clinical study and the statistical evaluation have been carried out to overcome the subjectivity of cosmetic procedure assessment in evaluating thread lifting, thus providing detailed and reliable results up to 8 months. The authors strongly believe that this kind of evidence, from pilot studies such as the one presented here up to high-quality studies, has to be incorporated into clinical practice: evidence-based medicine is essential to our mission of providing better answers for our patients.
Conclusions
The results of this pilot study suggest that facial tissue suspension by means of poly-lactic/poly-caprolactone threads is safe and effective in treating skin flaws related to mild-to-moderate ptosis of the jawline for up to 8 months.
Their action in suspending facial tissues is immediate, effective, and reliably long-lived, with regard to the follow-up duration presented in our patient series.
Longer follow-up, larger patient groups, and studies on different facial areas are, indeed, needed to objectively assess the effectiveness of treating facial ptosis and to establish the role of thread-lifting techniques among major facial rejuvenating procedures, given the preliminary results obtained in this study.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In their form/forms, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in this journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Dr. Diaspro is a consultant for Aptos LLC, and Dr. Luni is a consultant for Aptos LLC. No grants, equipment, or drugs were received for this article.
Conflicts of interest
There are no conflicts of interest.
A purified, fermented, extract of Triticum aestivum has lymphomacidal activity mediated via natural killer cell activation
Non-Hodgkin lymphoma (NHL) affects over 400,000 people in the United States; its incidence increases with age. Treatment options are numerous and expanding, yet efficacy is often limited by toxicity, particularly in the elderly. Nearly 70% of patients eventually die of the disease. Many patients explore less toxic alternative therapeutics proposed to boost anti-tumor immunity, despite a paucity of rigorous scientific data. Here we evaluate the lymphomacidal and immunomodulatory activities of a protein fraction isolated from fermented wheat germ. Fermented wheat germ extract was produced by fermenting wheat germ with Saccharomyces cerevisiae. A protein fraction was tested for lymphomacidal activity in vitro using NHL cell lines and in vivo using mouse xenografts. Mechanisms of action were explored in vitro by evaluating apoptosis and cell cycle and in vivo by immunophenotyping and measurement of NK cell activity. Potent lymphomacidal activity was observed in a panel of NHL cell lines and in mice bearing NHL xenografts. This activity was not dependent on wheat germ agglutinin or benzoquinones. Fermented wheat germ proteins induced apoptosis in NHL cells, and augmented immune effector mechanisms, as measured by NK cell killing activity, degranulation and production of IFNγ. Fermented wheat germ extract can be easily produced and is efficacious in a human lymphoma xenograft model. The protein fraction is quantifiable and more potent, shows direct pro-apoptotic properties, and enhances immune-mediated tumor eradication. The results presented herein support the novel concept that proteins in fermented wheat germ have direct pro-apoptotic activity on lymphoma cells and augment host immune effector mechanisms.
Introduction
Current therapeutic approaches for patients with non-Hodgkin lymphoma (NHL) include chemotherapy, signal transduction inhibitors, radiation and immunotherapy; bone marrow transplantation has become more frequent for patients who fail initial therapies. Although these treatments are often initially successful, most patients eventually become refractory and die of their disease.

The post-fermentation supernatant was collected after centrifugation (9,500 x g, 4˚C, 35 minutes) and either freeze-dried and labeled FWGE or subjected to fractionation as follows. To produce FWGP, the post-fermentation supernatant was precipitated with ethanol (70% final concentration) overnight at -20˚C and centrifuged (9,500 x g, 4˚C, 35 minutes); the pellet was frozen at -80˚C and lyophilized for 2-3 days until dry. Typically, 2 g of lyophilized powder were resuspended in 40 ml PBS and allowed to completely solubilize by stirring at 4˚C for up to 24 hours. Any insoluble material was discarded; the preparation was sterilized by filtration through 0.2 μm PES membranes (Millipore) and applied to a Sephadex G50 column. The eluate was assessed for lymphomacidal activity and the most potent fractions were combined, vacuum-dried, re-dissolved in PBS and applied to a Superdex S200 column. Elution fractions were collected and assessed for lymphomacidal activity, and the most potent fractions were combined, vacuum-dried, and designated FWGP. Protein content was quantified by BCA assays (Thermo Fisher). Aliquots were stored at -80˚C until ready for use.
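As an aside on the precipitation step, bringing an aqueous supernatant to 70% final ethanol requires roughly 2.3 volumes of absolute ethanol per volume of sample; the helper below sketches the arithmetic under the simplifying assumption that mixing volumes are additive.

```python
def ethanol_to_add_ml(v_sample_ml, final_fraction=0.70):
    """Volume of absolute ethanol needed so ethanol makes up `final_fraction`
    of the total volume, assuming additive (non-contracting) mixing."""
    return v_sample_ml * final_fraction / (1.0 - final_fraction)

print(f"{ethanol_to_add_ml(100):.0f} ml ethanol per 100 ml supernatant")  # ~233 ml
```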
Cell lines and primary specimens
Lymphoma (Ramos, Raji, DOHH-2, Granta-519, Sudhl4, Chevalier, WSU-WM, BM35, DG75), T-cell leukemia (Jurkat), lung (H1650), breast (MCF-7) and hepatic (HepG2) cancer cell lines were purchased from ATCC (Rockville, MD) and grown in RPMI-1640 or DMEM supplemented with 10% heat-inactivated fetal bovine serum (HI-FBS), 100 units/ml penicillin G, and 100 μg/ml streptomycin sulfate at 37˚C in 5% CO2 and 90% humidity according to ATCC recommendations. Fresh vials of cells were periodically thawed and used for in vitro experiments to ensure that changes to the cells had not occurred over time/passages in culture. For xenograft studies, a fresh vial of Raji cells was thawed 7-10 days before tumor cell implantation. YAC-1 cells were grown in RPMI medium supplemented with 10% HI-FBS. K562 cells were grown in Iscove's modified Dulbecco's medium supplemented with 10% HI-FBS.
Human peripheral blood mononuclear cells (PBMCs) from healthy donors were isolated from whole blood collected in citrated vacuum tubes using standard protocols. Blood was diluted 1:1 with PBS, layered over Ficoll-Paque Plus (GE Healthcare) and centrifuged for 30 minutes at 400 x g, 25˚C. The buffy coat was collected and washed twice with PBS, and the cells were resuspended in RPMI-1640 supplemented with 10% HI-FBS, 300 mg/L glutamine and penicillin/streptomycin. Untouched natural killer (NK) cells were isolated from fresh PBMCs using a magnetic purification system (Miltenyi Biotech). Briefly, 10^8 PBMCs in 400 μl buffer were incubated with 100 μl biotin-antibody cocktail (5 minutes, 4˚C) and 200 μl NK cell microbead cocktail (10 minutes, 4˚C); the cell suspension was loaded onto an LS column attached to a magnet and the NK-enriched unlabeled cells were collected as the flow-through.
Direct cytotoxicity
Direct cytotoxic activity of FWGE was assayed by incubating 5 x 10^4 cells/well (96-well plates) in 100 μl culture medium with the indicated concentrations of FWGE for up to 72 hours at 37˚C, 5% CO2. Cell viability was assessed using an MTS-based assay (Promega) according to the manufacturer's instructions and compared to untreated controls. IC50 values were calculated by fitting the dose-response data to a dose-inhibition curve using GraphPad Prism software. Cytotoxicity of heat-inactivated FWGE (80˚C, 90 minutes), proteinase K-treated FWGE (100 μg/ml, 37˚C, 1 hour) and the protein fraction FWGP was assayed in the same way. Three replicate wells per condition were used in 3 independent experiments.
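A Prism-style IC50 fit can be reproduced with SciPy by fitting a standard four-parameter logistic dose-inhibition curve; the sketch below uses hypothetical viability data, and the 4PL parameterization is our assumption about what the Prism routine does.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic: viability falls from `top` to `bottom`
    with half-maximal inhibition at `ic50`."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical viability (% of untreated) versus FWGE dose (ug/ml)
dose = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
viability = np.array([98.0, 90.0, 62.0, 28.0, 9.0])

(bottom, top, ic50, hill), _ = curve_fit(
    four_pl, dose, viability, p0=[0.0, 100.0, 150.0, 1.0]
)
print(f"IC50 ~ {ic50:.0f} ug/ml (Hill slope {hill:.2f})")
```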
Apoptosis and cell cycle
Raji or Ramos cells (1 x 10^6/ml) were incubated with 200 μg/ml FWGP for 1, 3, 6, 12, 24 and 48 hours, washed with PBS and resuspended in 100 μl Annexin-V binding buffer (10 mM HEPES, 140 mM NaCl, 2.5 mM CaCl2, pH 7.4) with 5 μl Annexin-V-Cy5 (BD Pharmingen) and 1 μg/ml Sytox Green (Thermo Fisher) according to the manufacturer's instructions. After staining for 15 minutes, cells were analyzed by flow cytometry using a FACSCanto instrument (BD); 30,000 events per sample were acquired. Untreated cells were stained as above and used as controls. Untreated, unstained or single-stained controls were used for compensation. To assess caspase activity, cells were incubated with FWGP or PBS control as above and stained with a Vybrant FAM Poly Caspases Assay Kit (Molecular Probes) according to the manufacturer's instructions. Briefly, 300 μl of cell suspension (1 x 10^6 cells/ml) were incubated with VAD-FMK FLICA reagent and Hoechst 33342 for the detection of activated caspases 1, 2, 4, 5, 6, 8 and 9, washed, and analyzed by flow cytometry as above. Data were analyzed using FlowJo software. For cell cycle analysis, cells were fixed in ethanol, washed, and stained with 20 μg/ml propidium iodide (PI) as previously described [33]; data (50,000 events/sample) were acquired as noted above.
qPCR arrays
Quantitative real-time PCR (qPCR) was performed using the Apoptosis and Survival Tier 1-4 H384 panel (Bio-Rad PrimePCR) to examine over 350 genes associated with cell survival and apoptosis. Total RNA was extracted from control and treated (200 ng/μl) Raji cells at the indicated time points using an RNeasy kit (Qiagen) and reverse-transcribed with the iScript™ Advanced cDNA Synthesis Kit (Bio-Rad) according to the manufacturer's instructions. Reactions were run in a 7900HT instrument (Applied Biosystems) using SsoAdvanced™ Universal SYBR® Green Supermix (Bio-Rad). Data were normalized and analyzed with the PrimePCR analysis software (Bio-Rad). Selected genes were validated by immunoblotting.
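The PrimePCR analysis software is proprietary, but relative expression from such panels is conventionally computed with the 2^-ΔΔCt (Livak) method; the sketch below uses hypothetical Ct values and a hypothetical reference gene, as the study does not spell out its normalization.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for a pro-apoptotic target vs a reference gene
print(f"fold change: {fold_change_ddct(24.1, 18.0, 26.3, 18.1):.2f}x")  # ~4.3x up
```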
Immunoblotting
Five million Raji cells were incubated with 200 μg/ml FWGP or PBS control in 5 ml culture medium at 37˚C, 5% CO2. At 2, 6, 12, 24 and 48 hours, 1-ml aliquots were collected and centrifuged, and the cells were washed with PBS. Cell pellets were lysed in 100 μl of RIPA buffer (150 mM NaCl, 1% sodium deoxycholate, 0.1% SDS, 1% Triton X-100, 50 mM Tris-HCl, pH 7.2) supplemented with protease inhibitors on ice for 30 minutes with occasional vortexing. Immunoblotting was done as previously described [34,35]. Briefly, cell lysates (50 μg protein/lane in reducing Laemmli buffer) were run on a 10% SDS-PAGE gel and transferred to nitrocellulose. Membranes were blocked with 5% BSA or 5% non-fat dry milk in PBS and incubated with primary antibodies (4˚C, overnight) diluted as indicated in 5% BSA in PBS with 0.01% Tween-20 (PBS-T). Membranes were washed with PBS-T and incubated for 1 hour at room temperature with HRP-labeled secondary antibodies, washed and developed with Luminata Crescendo (Millipore) detection reagent. Signal intensity was quantified using ImageJ software and normalized to loading controls (GAPDH).
Killing assays
Killing assays were performed by incubating effector and target cells at the specified ratios for 4 or 24 hours, followed by flow cytometric quantification of double-labeled target cells. For mouse samples, 0.5 μl CFSE (stock = 10 mM in DMSO, eBioscience) was added to 5 x 10^5 target YAC-1 cells in 1 ml 5% HI-FBS/PBS in a 15-ml conical tube, mixed immediately and incubated for 5 minutes at room temperature. Labeling was stopped by adding 2-3 ml HI-FBS and culture medium to fill the tube. Cells were centrifuged (5 minutes, 300 x g, 24˚C), resuspended at 1 x 10^6 cells/ml in culture medium and allowed to recover overnight. Mouse splenocytes were T-cell depleted by incubating with 1.5 μg/10^6 cells anti-Thy1.2 (BioLegend, clone 30-H12) for 30 minutes at 4˚C, washing, and incubating with rabbit serum complement (Cedarlane, Burlington, NC) at the lot-specific recommended dilution for 45 minutes at 37˚C. T-cell depleted (TCD) splenocytes were then washed twice and resuspended in culture medium. Twenty thousand CFSE-YAC-1 cells were incubated with TCD splenocytes in 96-well round-bottom plates, in a final volume of 200 μl containing recombinant human IL-2 (rhIL-2, 1,000 IU/ml, Biological Resources Branch, NCI, Frederick, MD). After 4 h, cells were centrifuged, washed with PBS and resuspended in 100 μl FVD eFluor 455UV (1:1000 dilution in PBS, Thermo Fisher) for 30 minutes at 4˚C. Cells were washed with 2% FBS/PBS and resuspended in the same buffer for acquisition on a Fortessa (BD) flow cytometer. Dead target cells were defined as the CFSE+ FVD+ population.
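Percent specific killing from such data is conventionally computed by correcting the CFSE+FVD+ (dead target) fraction for spontaneous target death in a targets-only control; the helper below sketches that convention with hypothetical fractions, since the exact normalization is not spelled out here.

```python
def percent_specific_killing(dead_frac_with_effectors, dead_frac_targets_only):
    """Spontaneous-death-corrected lysis:
    100 * (test - spontaneous) / (1 - spontaneous)."""
    return 100.0 * (dead_frac_with_effectors - dead_frac_targets_only) / (
        1.0 - dead_frac_targets_only
    )

# Hypothetical: 45% CFSE+FVD+ with effectors vs 8% in targets alone
print(f"{percent_specific_killing(0.45, 0.08):.1f}% specific killing")  # ~40.2%
```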
Killing activity of human PBMCs was assayed by incubating PBMCs with the indicated concentrations of FWGP for 20 h at 37˚C, 5% CO2. Cells were then washed twice with culture medium and counted; 2.5 x 10^4 viable PBMCs/100 μl/well were incubated with an equal number of Ramos (target) cells for 24 h. Cytotoxicity was assessed using the DELFIA EuTDA-based assay (Perkin Elmer) and normalized to controls (target cells incubated with untreated PBMCs).
Animals
For xenograft experiments, female 6-8-week-old nu/nu mice (Harlan, Indianapolis, IN) were maintained in micro-isolation cages under pathogen-free conditions at the UC Davis animal facility. All procedures were conducted under an approved protocol according to national and institutional guidelines. Three days after whole body irradiation (400 rads), Raji human lymphoma cells (1 x 10^6 in 100 μl PBS) were implanted subcutaneously on the left flank. Either on the day of tumor implantation (preemptive), or once tumors of approximately 300 mm^3 had been established (~20 days), mice were randomly divided into treatment groups (n = 8-10). Treatment (FWGE, FWGP or PBS) was administered by gavage once daily, 5 days per week, for the duration of the study. Tumors were measured twice per week using a digital caliper; tumor volumes were calculated using the equation (length x width x depth) x 0.52. Tumor responses were categorized as follows: cure (C, tumor disappeared and did not re-grow by the end of the 84-day study); complete regression (CR, tumor disappeared for at least 7 days but later regrew); partial regression (PR, tumor volume decreased by 50% or more for at least 7 days then re-grew).
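The volume formula translates directly into code; a minimal helper using the study's (length x width x depth) x 0.52 approximation is shown below with hypothetical caliper readings in mm.

```python
def tumor_volume_mm3(length_mm, width_mm, depth_mm):
    """Caliper-based tumor volume: (length x width x depth) x 0.52."""
    return length_mm * width_mm * depth_mm * 0.52

print(f"volume = {tumor_volume_mm3(10.2, 8.5, 6.8):.0f} mm^3")  # ~307 mm^3
```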
Mice were euthanized when the tumor reached 15 mm in any dimension, if they showed signs of distress, or at the end of the 84-day study. Toxicity was assessed by twice-weekly measurement of weight, activity, and blood counts for the first 28 days, then weekly for the rest of the 84-day study period. Standard assessment of toxicity was performed by the UC Davis School of Veterinary Medicine Laboratory Animal Clinic.
For xenograft experiments with NK cell-depleted animals, female 6-8-week-old nu/nu mice were implanted with Raji cells as above and treatment with FWGP started once tumors had been established (defined as day 0). Anti-asialo-GM1 (Wako, Richmond, VA) was administered as 25 μl (according to lot-specific titration by the manufacturer) intraperitoneal injections on days 0, 10, 20 and 30.
For studies of FWGP in immunocompetent animals, BALB/c 8-month-old female mice (Envigo) were treated with FWGP (140 mg/kg) by daily gavage for 3 days. On day 4, splenocytes were collected by dissecting spleens into 3 ml of cold RPMI and disrupting the tissue through 100-μm mesh. The suspension was further dissociated by passing sequentially through 20, 21 and 23 gauge needles, 3 times each. Red blood cells were lysed with ACK buffer (Thermo Fisher) for 5 minutes at room temperature, washed and resuspended in culture medium.
Statistical analysis
In vitro cytotoxicity data were analyzed by a two-tailed, unpaired Student's t-test. Experiments with 3 or more groups were analyzed using ANOVA or 2-way ANOVA with post-tests for multiple comparisons as indicated in each figure. For Kaplan-Meier curves, an "event" was defined as tumor volume reaching at least 1,500 mm^3. Each individual mouse was ranked as 1 (event occurred) or 0 (event did not occur) and the time to event (in days) was determined.
When an individual was ranked as 0, a time to event of 88 days was recorded. Chi-squared and p values were determined by the log-rank test. All statistical analysis was performed using GraphPad Prism software (San Diego, CA). Statistical significance is indicated as * p<0.05, ** p<0.01, *** p<0.001 and **** p<0.0001.
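The event coding described above maps directly onto a standard log-rank comparison; the sketch below uses the lifelines package with entirely hypothetical times-to-event (censored animals recorded at 88 days with event flag 0, as in the study).

```python
import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical days to event (event = tumor volume >= 1500 mm^3)
t_control = np.array([35, 42, 49, 56, 60, 70, 88, 88])
e_control = np.array([1, 1, 1, 1, 1, 1, 0, 0])   # 0 = censored at day 88
t_treated = np.array([60, 74, 88, 88, 88, 88, 88, 88])
e_treated = np.array([1, 1, 0, 0, 0, 0, 0, 0])

result = logrank_test(t_control, t_treated,
                      event_observed_A=e_control,
                      event_observed_B=e_treated)
print(f"chi2 = {result.test_statistic:.2f}, p = {result.p_value:.4f}")
```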
Ethics
All animal work was conducted according to relevant national and international guidelines under approved protocols from the University of California Davis Institutional Animal Care and Use Committee (AAALAC accreditation #000029; PHS Animal Assurance #A3433-01; USDA Registration #93-R-0433). Human cells were collected from discarded leukapheresis bags under protocols approved by the University of California Davis Institutional Review Board Administration. Informed written consent was obtained at the time of collection. The need for consent for the use of discarded, anonymized leukapheresis bags was waived by the ethics committee. No patient was recruited or sample collected for the sole purpose of this study.
FWGE has potent in vitro lymphomacidal activity
The FWGE used in these studies was produced in-house by fermenting raw wheat germ with Saccharomyces cerevisiae. To assess whether the in vitro activity of FWGE was equivalent to the commercially available product (AVEMAR®), in vitro cytotoxicity assays with both agents were done; the killing activity was equivalent (see S6 Fig). We initially assessed the cytotoxic activity of FWGE on two Burkitt lymphoma cell lines (Raji and Ramos) and the Jurkat T-cell leukemia cell line as compared to primary human B cells (Fig 1A). After 72 hours, FWGE showed considerable cytotoxic activity in the three cancer-derived cell lines, with IC50 = 120, 250, and 275 μg/ml for Jurkat, Ramos and Raji, respectively. However, FWGE was substantially less cytotoxic to normal human primary B cells, as evidenced by an IC50 = 582 μg/ml, 2-5 times higher than that observed in malignant cells. Pretreatment of FWGE with proteinase K or heat completely abrogated its cytotoxic activity (Fig 1B), suggesting that the active component(s) of FWGE is a peptide. Moreover, the cytotoxicity of FWGE was dependent on a minimum of 8 hours of fermentation (data not shown).
Direct cytotoxic activity of fermented wheat germ proteins
Soluble proteins in FWGE were ethanol-precipitated, dissolved in PBS, and passed through a Sephadex G50 column. The eluate fractions were assessed using SDS-PAGE and found to be between 10 and 200 kDa (not shown). When assessed for cytotoxicity using Raji cells, fractions 4-8 were the most potent; 50 μg/ml killed 70-90% of Raji cells (Fig 1C). These fractions were collected, vacuum-dried overnight, re-dissolved in PBS and further size-separated using Superdex S200. Eluted fractions were again assessed using SDS-PAGE and most were 10-100 kDa (not shown). All eluted fractions were assessed for cytotoxicity; fractions 3-6 killed 80-90% of Raji cells (Fig 1D). These fractions were combined for further analysis and termed FWGP. FWGP was then assessed for cytotoxic activity in a dose-response experiment using a panel of malignant NHL cell lines representing a broad array of the most common B-cell NHL subtypes; IC50 values ranged from 20-150 μg/ml (Table 1). FWGP also showed cytotoxic activity against H1650 and A549 (lung carcinoma, IC50 = 144 and 70 μg/ml, respectively) and HepG2 cells (hepatic carcinoma, IC50 = 245 μg/ml), but very modest or no activity against MCF-7 cells (breast cancer, IC50 = 630 μg/ml). Comparison of the IC50 of FWGE and FWGP in Raji and Ramos cells suggests that FWGP is significantly more potent in both cell lines (120 vs 39 and 250 vs 70 μg/ml, respectively). Since wheat germ agglutinin (WGA) is known to be cytotoxic [36], we sought to determine if WGA in these preparations was contributing to the lymphomacidal effect. WGA depletion by immunoprecipitation had no effect on the cytotoxic activity of FWGP (S1 Fig). WGA depletion was confirmed by immunoblot analysis (not shown). To investigate the mechanisms by which FWGP exerts direct lymphomacidal activity, we assessed FWGP-treated Ramos cells for apoptosis by staining with Annexin V, with Sytox Green counterstaining to differentiate late apoptotic/necrotic cells. A significant increase in the apoptotic population was observed in Ramos cells treated with FWGP for as little as 1 hour (35.1 ± 3.8%) when compared to untreated controls (12.8 ± 6.4%), reaching a maximum (46.8 ± 10.4%) after 24 hours of treatment (Fig 2A). The late apoptotic/necrotic population consistently increased over time, and was significantly higher than untreated controls after 24 and 48 hours of treatment. Similar results were obtained for Raji cells (not shown). Activated caspases were detected in 47.25 ± 17.75% of Ramos cells treated with FWGP for 48 hours, versus 9.24 ± 0.46% of untreated (control) cells (p<0.01, Fig 2B). This increase was maintained after 72 hours of incubation (p<0.05). No statistically significant difference was observed at early time points; however, caspase activation was apparent as early as 24 hours (16.15 ± 0.07% vs 9.52 ± 0.06% for treated vs control, respectively). Cell cycle analysis indicated a decrease in the G0/G1 population with a concomitant increase in the S population (Fig 2C and S2 Fig); this became more evident after 24 hours of incubation with FWGP and was maintained, albeit to a lesser degree, through 72 hours. There was no significant change in the G2/M population. These results suggest that FWGP blocks progression through the S phase of the cell cycle. In agreement with the apoptosis results previously described, there was a marked increase in the subG1 population, indicative of dead or dying fragmented cells.
To further examine the direct effects of FWGP on cancer cells at the molecular level, we performed qPCR on treated (2, 6, 12, 48 h) and control Raji cells using an apoptosis and survival pre-designed panel. Consistent with the apoptotic phenotype presented above, treatment with FWGP resulted in early (2h) upregulation of pro-apoptotic genes of the BCL2 family (BAK1, BAD, BAX, BCL10), followed by downregulation of anti-apoptotic AKT1 and upregulation of tumor suppressor TP53 (6h), and upregulation of caspase genes (12-48h, S3 Fig).
Pro-apoptotic members of the tumor necrosis factor superfamily (TRAIL receptors 1 and 2, TNF) were also upregulated at early time points, as were the Fas receptor and FADD. In agreement with cell cycle arrest at G1, we observed downregulation of the cyclin-dependent kinase CDK1 and upregulation of the CDK inhibitors p21, p27 and p16 (S3 Fig; see also S1 Table for complete qPCR data). To validate some of the qPCR data, immunoblot analysis of selected proteins was performed; FWGP induced a marked decrease in AKT and increases in BAK, BAD and p53 protein levels by 24-48 h (Fig 2D), consistent with the earlier changes in message levels.
In vivo lymphomacidal activity of fermented wheat germ extract and fermented wheat germ proteins

The in vivo lymphomacidal effects of FWGE were assessed using nude mice bearing Raji xenografts. Mice with established tumors (>100 mm^3) were treated with FWGE (250, 500 and 1000 mg/kg); after 12 weeks of treatment there was a significant reduction in tumor volume in the treated groups when compared to untreated controls (average tumor volume ± SEM for increasing doses and control = 1166±324, 944±404, 1064±383, and 1475±287 mm^3, respectively; Fig 3A). To examine how the initial tumor volume influenced FWGE efficacy, treatment was initiated on the same day the xenografts were implanted (pre-emptive). This resulted in significantly less tumor growth in mice treated at the same doses (250, 500, and 1000 mg/kg) compared to untreated controls (587±274, 24±18, and 903±381 vs 1475±287 mm^3, respectively; Fig 3B). Interestingly, the intermediate dose (500 mg/kg) was consistently (S4 Fig) the most effective in both treatment schemas. Our in-house produced FWGE had in vivo activity similar to the commercial product (S4 Fig). Survival was 100% at the end of the 12-week study when animals were treated preemptively and 50% when treated after tumors were established, compared to 18% for the untreated control (Fig 3C). In preemptive studies, 8/8 animals treated with the 500 mg/kg dose showed at least partial regression compared to 5/8 animals at the higher and lower doses and 1/8 in the control group (Fig 3D). No toxicity was observed at any dose, as evidenced by no changes in activity or body weight (Fig 3E) as well as normal renal function.
To compare the in vivo efficacy of our semi-purified protein extract (FWGP) to FWGE, Raji-bearing mice were treated with FWGP (140 mg/kg) or FWGE (500 mg/kg). Since 1 g of FWGE typically yielded ~280 mg of total protein after ethanol precipitation and size exclusion chromatography, the FWGP dose of 140 mg/kg was chosen as equivalent to the most efficacious dose of FWGE. As shown in Fig 3F, FWGP was found to have comparable in vivo activity at roughly one third the dose (by total protein) of crude FWGE. The tumor volumes at 12 weeks were 673±218 and 833±308 mm^3 for FWGP and FWGE, respectively, versus 1411±323 mm^3 for untreated controls. This result not only confirms the in vivo activity of the protein fraction but also indicates increased potency. Previous reports suggested small molecules such as benzoquinones were responsible for FWGE activity [20]. Our process to produce FWGP from FWGE eliminates small molecules and leaves primarily proteins. When we examined the FWGE fraction that included everything below 3.4 kDa, no efficacy was found in the in vivo models (Fig 3F), supporting the hypothesis that benzoquinones do not play a major role in the efficacy of FWGE or FWGP.

R-CHOP (rituximab + cyclophosphamide + doxorubicin + vincristine + prednisone) is the current standard of care for aggressive NHL. We compared R-CHOP to FWGP, as well as the combination of R-CHOP and FWGP, in mice with established Raji tumors. As shown in Fig 3G, 140 mg/kg FWGP was as effective as the R-CHOP regimen (tumor volume at 10 weeks = 782±134 and 665±177 mm^3, respectively, compared to 1703±150 mm^3 for controls). Nine of 10 animals treated with the combination of FWGP + R-CHOP showed complete regression with no palpable tumor; the mean tumor volume at the end of the experiment was 91± mm^3, representing only the 1 animal that did not achieve a CR.
FWGP enhances NK cell-mediated tumor eradication
Although rigorous studies are lacking, FWGE has been reported to have immunomodulatory properties, including stimulatory effects on mouse lymphocytes in vitro [11], immune-restoring effects in thymectomized animals [9] and decreased expression of MHC-I on the tumor cell surface [13]. Since we demonstrated efficacy in the nu/nu xenograft model, which lacks T cells but retains natural killer (NK) cell numbers and activity, we hypothesized that FWGP could make cancer cells more susceptible to NK cell surveillance. To test the hypothesis that the observed in vivo efficacy of FWGP is, at least in part, due to increased NK anti-tumor activity, we performed xenograft experiments combining FWGP treatment and NK cell depletion. Consistent with previous results, FWGP treatment resulted in significant tumor reduction (tumor volume at 5 weeks = 877±280 versus 2093±395 mm^3 for FWGP and PBS, respectively). However, animals treated with FWGP and concomitantly depleted of NK cells had tumor volumes of 2104±541 mm^3, no different from PBS-treated controls (Fig 4). The depleting antibody (anti-ASGM1) alone had no effect (tumor volume = 2161±571 mm^3). NK cell depletion was confirmed by flow cytometry (S6 Fig). To investigate the effects of FWGP on the intact immune system, we treated immunocompetent BALB/c mice with FWGP for 3 days and examined PBMC subsets isolated from the spleen on day 4. Immune cell subpopulations (T cells, B cells, granulocytes, monocytes) in treated animals were not significantly different from controls, except for a modest increase in NK cell numbers (10.5±0.8% vs 7.4±0.3%, p = 0.0039, S7 Fig). However, we observed a significant increase in NK-cell killing activity, as assayed by flow cytometry of T-cell depleted splenocytes (effector) and CFSE-labeled YAC-1 (target) cells (Fig 5A). NK cells from treated animals killed 59.1±5.5 and 25.4±2.4% of target cells at E:T ratios of 50:1 and 10:1, respectively, while cells from control animals killed 39.5±2.8 and 13.1±5.7% (p = 0.0020 and 0.0324 for the 50:1 and 10:1 ratios, respectively). This result is further supported by increased degranulation of NK cells from treated vs control animals (106.5±24.0 vs 54.4±6.4% at E:T = 0.1:1), measured by CD107a staining intensity in the CD3− CD49b+ subpopulation of T-cell depleted splenocytes (Fig 5B).
FWGP stimulates human NK cells
To test whether the results obtained in mice can be extrapolated to human NK cells, we performed an ex vivo experiment by incubating PBMCs from healthy donors with 1 or 10 ng/μl FWGP overnight. In agreement with the results from the mouse experiments, incubation of human PBMCs with FWGP resulted in an increase in the CD3− CD56+ population when compared to untreated controls (2.2±0.4 vs 2.6±0.5 vs 5.1±0.6 for 0, 1 and 10 ng/μl, respectively; p<0.01), with no changes in the CD3+ CD56−/+ populations. The increase in the NK cell compartment was driven by an increase in the CD56dim subset, with no changes in CD56bright cells (Fig 6A and 6B). Importantly, FWGP caused increased production of IFNγ in human NK cells (MFI = 9305±694 vs 10733±1358 vs 19000±1010, p<0.01), and increased surface levels of the early activation marker CD69 (MFI = 2982±669 vs 5738±1283 vs 12091±899, p<0.01 and p<0.05; Fig 6C and 6D). Finally, to examine the effect of FWGP on killing activity, human PBMCs were incubated with increasing concentrations of FWGP overnight, washed, and incubated with target cells (Ramos) for 24 hours. There was a dose-dependent increase in the killing activity of PBMCs (Fig 6E). The lowest dose tested (12.5 ng/μl) resulted in 87.88±3.22% viable cells, a small but significant decrease when compared to controls (99.97±2.6%, p<0.05). At the highest dose tested, only 38.0±3.8% (p<0.0001) of target cells remained alive, representing a 2.6-fold increase in killing activity. To ensure that the effects of FWGP on PBMC numbers did not confound the interpretation, PBMC cell numbers were examined after incubation with the indicated doses of FWGP. As seen with B lymphocytes (see Fig 1A), the effect of FWGP on PBMC numbers was modest; nevertheless, PBMC numbers were adjusted and normalized prior to each experiment so that this did not confound interpretation of the cytotoxicity assay.
Discussion
Cancer patients are increasingly turning to complementary medicine and nutraceuticals, especially when standard treatment options fail. FWGE is a nutraceutical that has been reported to possess unique "cancer-fighting" characteristics [28]. Since its first description in the late 1990s [9][10][11], anti-cancer activity has been reported for a variety of human tumors [15,17,19,20,[22][23][24][37][38][39]. Most of these studies evaluate cytotoxic activity on cancer-derived cell lines either in vitro or in xenograft models. A few studies report that FWGE has immunomodulatory properties [11,13,[27][28][29]40]; however, these studies lack a rigorous analysis. Here we provide an in-depth examination of the cytotoxic effects of FWGE on lymphoma cells in vitro and in vivo, explore the mechanisms of action, and present evidence that it enhances NK-cell mediated tumor eradication. We further found that these activities are present in a protein subfraction (FWGP), contradicting the current paradigm that benzoquinones are responsible for FWGE anti-cancer properties.
FWGE produced in-house by fermenting wheat germ with S. cerevisiae had lymphomacidal activity equivalent to commercially available AVEMAR® and demonstrated significant in vitro activity in a panel of 9 NHL cell lines that represent the majority of clinically relevant subtypes of NHL. The LD50 nearly doubled for primary human B cells, suggesting FWGE has greater activity in malignant versus nonmalignant B cells. While there have been, to our knowledge, no attempts to purify and identify the active components of FWGE, a benzoquinone (DMBQ) has been suggested to be the active constituent [9,10,27,39] and is used to quantify and standardize the activity of AVEMAR®. However, this has not been proven beyond an observational correlation, and indeed early studies indicated that DMBQ alone cannot be responsible for the immunostimulatory properties of FWGE [11]. Our results suggest that peptide components of FWGE are responsible for the anti-cancer activities we report here, since: i) these activities are present in an ethanol-insoluble extract, further purified to 10-100 kDa molecular weight components; ii) the activity is lost upon treatment with proteinase K; and iii) the active component(s) are heat-sensitive. WGA is known to be cytotoxic [36]; however, WGA-depleted FWGP remained highly effective, demonstrating that the lymphomacidal effects of FWGP are not mediated by this agglutinin. Previous studies have suggested that FWGE mediates cell killing, in part, by directly inducing apoptosis [12,41]. We confirmed the apoptosis-inducing activity of FWGP by increased Annexin-V staining and increased caspase activity in NHL cells that had been treated with FWGP. FWGE has been reported to induce cell cycle arrest [39] by blocking progression through the G1 phase [14], and possibly by downregulating cyclin D1 [22]. Our results, however, suggest that FWGP blocks successful completion of the S phase, as the treatment of NHL cells with FWGP resulted in a decrease in the G1 population with a concomitant increase in the S-phase population. It may be argued that FWGE components absent in FWGP are responsible for the G1 blockade previously reported. Regardless, our results of in vitro cytotoxicity, apoptosis and cell cycle analysis confirm that the cytostatic/cytolytic properties previously reported for FWGE are present in a protein subfraction, FWGP.
FWGE and FWGP reproducibly demonstrated effective in vivo lymphomacidal activity at several doses. Importantly, our semi-purified fraction FWGP showed higher potency. While there was a clear and reproducible dose-response effect, it was interesting that the intermediate dose was the most effective, producing a greater than 10-fold reduction in tumor volume after 24 weeks of therapy when compared to the control; when compared to the lowest dose there was a 5-fold reduction in tumor volume. Furthermore, the FWGE purification fraction that contained small molecules (<3400 Da) had no significant efficacy, again suggesting that small molecules such as DMBQ are not the active components responsible for the response seen in vivo.
Many immune-based therapeutics are more effective with lower tumor burdens [42]; thus, xenograft studies using FWGE were repeated using a preemptive approach, which demonstrated even greater efficacy. However, higher doses were consistently inferior in efficacy. FWGE has a wide therapeutic window; however, cytotoxic effects are indeed seen at very high doses in normal lymphocytes in vitro (this study and [12]). While PD/PK measurements were beyond the scope of this study, it is possible that higher oral doses of FWGP result in blood levels high enough to offset anti-tumor activity through immune cell toxicity. Although merely hypothetical, this possibility remains appealing in view of our results indicating that NK-cell-mediated tumor eradication is a strong component of FWGP's mechanism of action in vivo. This has important implications considering that this agent, in the form of AVEMAR, is available to the public and no dose-finding studies have been done.
These results support the hypothesis that FWGP enhances innate anti-tumor immunity. FWGE has been reported to increase blastic transformation of peripheral blood T cells by concanavalin A [11], to reduce graft survival in a coisogenic skin transplantation model [11] and to reduce production of IL-4 and IL-10 in a systemic lupus erythematosus model [27], supporting its immunomodulatory properties.
Of particular interest to this work, FWGE has been reported to induce downregulation of MHC-I proteins in tumor T and B cell lines [13], leading to the hypothesis that this would make tumor cells more "visible" to NK cells and hence improve immune tumor eradication. While this may indeed be true, our results further suggest that FWGP activates NK cells per se, as we observed an increased degranulation response and increased NK-mediated killing activity in tumor-free, immunocompetent BALB/c mice treated with FWGP. Although NK cells were initially thought to recognize and eliminate their targets with fast kinetics and without prior sensitization, it is now recognized that they attain full effector function only after they have been licensed by engaging self MHC-I [43]. In addition, NK cells need to be primed, for example by transpresentation of interleukin 15 [44], and interleukin 18 has been reported to regulate NK cell IFNγ production [45]. "Conditioning" through constant triggering of Toll-like receptor 3 has been proposed to ensure an immediate, potent NK cell response to cytokine stimulation [46], although the ligands required for such conditioning are unknown. Whatever the mechanisms of NK cell hyporesponsiveness to tumors may be, it is tempting to hypothesize that components of FWGP promote a more responsive state of NK cells. Further studies focusing on the ability of FWGP components to trigger or block NK cell stimulatory and/or inhibitory receptors may answer this question. Finally, our in vivo studies used oral administration of FWGP. The gut microbiota influences both local and systemic immune function [47,48] and has been shown to influence cancer response to immunotherapy [49]. Therefore, the effects of FWGP on gut microbiota and immunity warrant further investigation.
Conclusions
While novel targeted chemotherapy and immune-therapeutic approaches have revolutionized the way lymphoma is treated, many patients will eventually succumb to it. The toxicity of many of the currently available drugs limits their efficacy, particularly in the elderly, whom lymphoma most commonly afflicts. Here we present evidence that a protein fraction from fermented wheat germ has direct lymphomacidal activity in vitro. This activity is dependent on protein components and not DMBQ as previously reported, since protease or heat treatment resulted in loss of activity. Importantly, a protein extract from fermented wheat germ has in vivo lymphomacidal activity, yet no appreciable toxicity even at the highest doses tested. Remarkably, treatment with FWGP alone was as effective as the R-CHOP regimen, which is the standard of care for many patients with lymphoma. This activity was dependent on NK cells, as efficacy was lost upon NK cell depletion. Furthermore, treatment of tumor-free, immunocompetent animals resulted in increased NK cell killing activity, increased degranulation, and increased IFNγ production upon ex vivo stimulation. Translation of this product into allopathic medicine could constitute a novel non-toxic alternative for NHL patients. Furthermore, its use in conjunction with the current standard of care could allow for lower doses of chemotherapy, thereby overcoming toxicity limitations, which would have a significant impact on patients' outcomes and quality of life. Clinical studies should assess the efficacy of the current formulation of this promising therapeutic. It is clear that it will be necessary to identify the active compound(s) of FWGP. Studies are currently ongoing in this regard.
Antihyperglycemic effect of thymoquinone and oleuropein, on streptozotocin‑induced diabetes mellitus in experimental animals
Background: Diabetes mellitus is one of the most important endocrine diseases. Its main manifestations include abnormal metabolism of carbohydrates and lipids and inappropriate hyperglycemia caused by absolute or relative insulin deficiency. It affects humankind worldwide. Objectives: Our research was aimed at observing the antihyperglycemic activity of thymoquinone and oleuropein. Materials and Methods: In this study, rats were divided into six groups of 6 rats each. Diabetes was induced by streptozotocin (STZ). The level of fasting blood glucose was determined for each rat during the experiment; doses of thymoquinone and oleuropein (3 mg/kg and 5 mg/kg of each) were injected intraperitoneally. Pancreatic tissues were investigated to compare β-cells in diabetic and treated rats. Result and Conclusion: It was found that thymoquinone and oleuropein significantly decrease serum glucose levels in STZ-induced diabetic rats.
INTRODUCTION
The increasing size of the aging population, consumption of high-calorie diets, obesity, and a sedentary lifestyle have significantly increased the number of diabetics worldwide in general and among Saudis in particular. [1] Diabetes mellitus (DM) is characterized by increased plasma glucose concentrations resulting from absolute or relative deficiency of insulin, insulin resistance, or both, leading to metabolic abnormalities in carbohydrates characterized by hyperglycemia. [2] DM is also associated with enhanced production of free radicals that further complicates the condition, leading to oxidative stress, cardiovascular abnormalities, renal failure, neurodegeneration, and immune dysfunction. [3] Type I DM is notorious for irreversibly damaging the pancreatic β islets, which produce insulin. Prevention and control of DM is a major challenge and requires a change in lifestyle towards more physical activity and low-calorie intake, avoiding sedentary habits. However, many people find it difficult to change their lifestyle and search for easy alternatives. Some traditional constituents of food that can reduce appetite, glucose absorption from the gastrointestinal tract, glucose synthesis in the liver, serum glucose level, and body weight, and can augment glucose-induced secretion of insulin from β islets in the pancreas, may prove useful for prevention and control of DM. [4] Use of an antioxidant-rich diet may improve antioxidant defense mechanisms and protect against oxidative damage caused by free radicals. [5] During the past few years, some of the newly discovered bioactive drugs isolated from glucose-reducing plants have shown antihyperglycemic activity with promising efficacy comparable to currently available oral hypoglycemic agents used in clinical therapy. [6] The World Health Organization recommends the use of traditional medicine to satisfy principal health needs. [7] It is reported that a great number of medicinal plants are in use to control DM. [8,9] Nigella sativa L. belongs to the family Ranunculaceae, and different parts of the plant are used for medicinal purposes to cure various diseases. [10] N. sativa is synonymous with Nigella cretica and is commonly known as black cumin, fennel flower, or nutmeg flower, despite being unrelated to common cumin (Cuminum cyminum), fennel (Foeniculum vulgare), and nutmeg (the Myristica genus). Other names of the plant include kalonji seeds [11] and ajaji, black caraway seed, and Habbatu Sawda. [12] It appears to be a fairly well-regarded medicinal herb, with some religious usage calling it 'the remedy for all diseases except death' (prophetic hadith) [13] and Habatul Baraka, "The Blessed Seed". [12] The seeds are the main medicinal component, although a seed oil taken from the seeds (black cumin oil or black seed oil) also possesses the same bioactives. The currently available literature reports that the plant has antioxidant activity due to the presence of bioactive molecules concentrated mainly in its fixed or essential oil, including tocopherols, phytosterols, polyunsaturated fatty acids, thymoquinone, ρ-cymene, carvacrol, t-anethole, and 4-terpineol. [14] Several studies on the hypoglycemic effect of N. sativa and thymoquinone in diabetic animals have shown positive results. [15] Thymoquinone has been demonstrated to possess strong antioxidant properties [16] and, moreover, suppresses expression of inducible nitric oxide synthase in rat macrophages. [17] Olive tree (Olea europaea L.)
leaves have long been widely used in treatments in European and Mediterranean countries. They have been used in human foods as extracts, herbal teas, and powders, and contain many potentially bioactive compounds that may possess antioxidant, antihypertensive, antiatherogenic, anti-inflammatory, hypoglycemic, and hypocholesterolemic properties. [18] The bioactivity of olive tree byproduct extracts may be related to antioxidant and phenolic components such as oleuropein, hydroxytyrosol, oleuropein aglycone, and tyrosol. [19] Many studies have shown that oleuropein, the main constituent of olive leaf extract (up to 6-9% of dry matter in the leaves), has a wide range of pharmacologic and health-promoting properties. [20] Specifically, oleuropein has been related to improved glucose metabolism. It is also reported to possess an antihyperglycemic effect in diabetic rats. [21,22] The hypoglycemic and antioxidant effects of oleuropein have been reported in alloxan-diabetic rabbits. [23] In streptozotocin (STZ)-induced diabetic rats, olive leaf extract has decreased serum concentrations of glucose, lipids, uric acid, creatinine, and liver enzymes. [24] The mechanism through which olive leaf extract reduces hyperglycemia is still not well understood. [23]
MATERIALS AND METHODS

Preparation of plant material
Leaves of O. europaea L. (IDC 4.3), family Oleaceae, cultivated in Rafha, KSA, were collected in October 2014 from the gardens of Rafha. The plant was identified and authenticated by the Department of Natural Products and Alternative Medicine, Faculty of Pharmacy, Northern Border University. The active constituents of olive leaf comprise a wide number of ingredients, with oleuropein as the chief constituent (60-90 mg/g). [25] The air-dried powder of O. europaea L. leaves was extracted by percolation with 70% ethyl alcohol. The combined ethanolic extracts were concentrated in vacuo at 40°C to dryness. The concentrated ethanolic extract was suspended in distilled water and defatted with hexane and then with ethyl acetate to give a crude ethyl acetate extract rich in oleuropein.
N. sativa (IDC 221.5) seeds were purchased from the Al Qassim area, Kingdom of Saudi Arabia. These were identified and authenticated by the Department of Natural Products and Alternative Medicine, Faculty of Pharmacy, Northern Border University.
The essential oil of N. sativa seeds was prepared by hydrodistillation using the standard method according to the Saudi Pharmacopoeia (Clevenger apparatus). The obtained oil was dried over anhydrous sodium sulfate and stored at 20°C in a dark bottle until analysis.
The volatile oil was analyzed by gas chromatography/mass spectrometry (GC/MS), and the identification of its components was done by comparing their retention times and mass fragmentation patterns to those of available reference samples and/or Wiley's mass spectral database, in addition to published ones. [26] The percentage composition of the essential oil components was determined by computerized peak area measurements.
STZ was obtained commercially from Sigma-Aldrich Co., Germany, for the induction of DM in experimental animals.
Experimental animals
Adult male Wistar rats (body weight range 250-300 g), 10-11 weeks of age, were obtained from the Animal House of King Fahd Medical Research Center, King Abdul Aziz University, Jeddah, Saudi Arabia. They were housed and maintained at 22°C under a 12-h light/12-h dark cycle, with free access to food and water.
Induction of diabetes
The rats were made to fast overnight before the induction of diabetes by a single intraperitoneal injection of 60 mg/kg STZ freshly dissolved in distilled water. [27] Hyperglycemia was confirmed 4 days after injection by measuring the tail vein blood glucose level with an Accu-Chek Sensor Comfort glucometer. Only the animals with fasting blood glucose levels ≥250 mg/dl were selected for the study.
Experimental design
In this study, a total of 36 rats were used, of which 30 were diabetic and 6 were normal. They were divided into 6 groups of 6 rats each as follows: Group 1 was the control group, given only normal saline without induction of diabetes and fed a normal diet. Group 2 was given STZ but no treatment (diabetic control) and the same diet as Group 1. Group 3 was given STZ and treated with 3 mg/kg oleuropein. Group 4 was given STZ and treated with 5 mg/kg oleuropein. Group 5 was given STZ and treated with 3 mg/kg thymoquinone. Group 6 was given STZ and treated with 5 mg/kg thymoquinone. Treatments were given for 56 days by intraperitoneal injection.
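For clarity, the six-group design above can be encoded as a small configuration table; this is a minimal sketch, and the field names are illustrative rather than taken from the study.

```python
# Experimental groups: STZ status and intraperitoneal treatment (agent, mg/kg)
GROUPS = {
    1: {"stz": False, "treatment": None},                # negative control, normal diet
    2: {"stz": True,  "treatment": None},                # diabetic control, same diet
    3: {"stz": True,  "treatment": ("oleuropein", 3)},
    4: {"stz": True,  "treatment": ("oleuropein", 5)},
    5: {"stz": True,  "treatment": ("thymoquinone", 3)},
    6: {"stz": True,  "treatment": ("thymoquinone", 5)},
}
N_PER_GROUP, DURATION_DAYS = 6, 56
```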
Blood glucose was determined using a glucose (HK) assay kit (Sigma-Aldrich Co., Germany).
Histopathological examination
The expert histopathologist at King Fahad Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia, performed the histological examination. The experimental animals were euthanized with a lethal dose of sodium pentobarbital, and histological samples of the pancreas were fixed. The hematoxylin and eosin staining technique was used to prepare the histopathology slides. Under the microscope, the number and status of the β islets of Langerhans were assessed.
Statistical analysis
The data were expressed as mean ± standard deviation (SD). Statistical analysis of the blood glucose data was performed using one-way analysis of variance (ANOVA) followed by Tukey's post-hoc test to compare differences among the experimental groups. The statistical analysis was performed using IBM SPSS Statistics for Windows, Version 19.0 (IBM Corp., Armonk, NY; released 2010).
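For readers who wish to reproduce this type of analysis outside SPSS, a minimal sketch in Python is shown below; the glucose values and group labels are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical fasting blood glucose values (mg/dl), n = 6 per group
groups = {
    "control":  [92, 88, 95, 90, 93, 89],
    "diabetic": [455, 470, 448, 462, 451, 467],
    "TQ_5mg":   [210, 198, 225, 205, 218, 200],
}

# One-way ANOVA across the groups
fvalue, pvalue = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {fvalue:.2f}, p = {pvalue:.4f}")

# Tukey's post-hoc test for pairwise comparisons among groups
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```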
RESULTS
The effects of oleuropein and thymoquinone on blood glucose are presented as mean values ± SD in Table 1. The negative control (nondiabetic) and positive control (diabetic) groups showed changes in blood glucose levels over the course of the experiment, with an increase in the positive control group [Figure 4].
In the 1st and 8th weeks, there was a gradual increase in mean blood glucose levels in the positive control group due to the administration of STZ (451 mg/dl and 462 mg/dl, respectively). The mean ± SD blood glucose concentration in Group 5 (3 mg/kg TQ) decreased in the 8th week, and in Group 6 (5 mg/kg TQ) it decreased in the 4th week, indicating that the 5 mg/kg dose is more effective in decreasing STZ-induced hyperglycemia. Moreover, this result was significant when compared to the positive control group, P < 0.05 [Figure 5]. (Table 1 note: Group 1 is the negative control and Group 2 the positive control; values are mean ± SD (n = 6); *P < 0.05 compared to positive control. SD: standard deviation; STZ: streptozotocin.)
The mean ± SD blood glucose concentration in Group 3 (oleuropein 3 mg/kg) decreased in the 4th week after an increase in the 2nd week, and this was significant compared to the positive control group, P < 0.05. Regarding Group 4 (oleuropein 5 mg/kg), there was a decrease in blood glucose concentration in the 4th week [Figure 6]. Table 2 shows the mean body weight ± SD of all experimental groups. There was a significant decrease in body weight over the duration of the experiment, although an increase in body weight was observed for the positive control (diabetic) group in week 8 [Figures 7-9].
DISCUSSION
Diabetes mellitus is a chronic metabolic systemic disease characterized by hyperglycemia. Oxidative stress is thought to increase in a system where the rate of free radical production increases and/or the antioxidant mechanisms are impaired. In recent years, oxidative stress-induced free radicals have been implicated in the pathology of insulin-dependent DM. [16,28,29] In the present research, we examined the effect of thymoquinone and oleuropein on STZ-induced DM in rats. Results of this study showed that diabetic rats exhibited a significant increase in blood glucose level after injection of STZ. This is consistent with studies done worldwide for the induction of diabetes. [27] In this study, we used STZ (60 mg/kg), which has been used by many researchers to induce experimental diabetes. [27] A study on the induction of diabetes by STZ in rats showed that after 3 days, a dose of 60 mg/kg STZ caused pancreatic swelling and degeneration of the β-cells, leading to experimental diabetes in rats [Figure 1]. [30] Insulin is the main recognized hormone that keeps serum glucose levels in the normal range. The normal function of insulin-releasing cells (β-cells in the pancreas), as well as other cellular mechanisms, is distorted by oxidative stress, which contributes to the induction of DM. [31,32] In our study, we found that a 5 mg/kg dose of thymoquinone showed a significant hypoglycemic effect in STZ-induced diabetic rats by decreasing fasting blood glucose levels, in agreement with Alimohammadi et al. [27] The most probable mechanism of the effect may be the preservation of β-cells in the pancreas, as found in the histopathological examination shown in Figure 2.
Several studies support the fact that olive leaf extracts are rich in oleuropein and hydroxytyrosol and that these compounds confer antioxidant activity on the olive leaves. [19] Thus, these phenolic compounds could prove beneficial in the protection against metabolic diseases associated with oxidative stress, such as diabetes. [20] In the present study, we found a significant decrease in blood glucose level in the 4th week with 3 mg/kg and 5 mg/kg of oleuropein.
In previous studies, two possible mechanisms have been suggested to explain the hypoglycemic effect of the olive leaf extract constituent oleuropein: [33] (1) improved glucose-induced insulin release, and (2) increased peripheral uptake of glucose. Oleuropein in olive leaves has been shown to accelerate the cellular uptake of glucose, leading to reduced plasma glucose. Oleuropein is a glycoside that potentially enters cells via a sodium-dependent glucose transporter (SGLT1) found in the epithelial cells of the small intestine. Experimental data indicated an interaction between dietary flavonol monoglucosides and the intestinal SGLT1, inhibiting Na-independent glucose uptake. Another mechanism through which olive leaf extract might produce its hypoglycemic effect is the inhibition of pancreatic amylase activity. [33] Since glucose-induced release of insulin is directly proportional to the preservation of β islets in the pancreas, and histopathological examination [Figure 3] shows no improvement in the STZ-induced damage to the β islets after treatment with olive leaf extract, the first mechanism cannot be associated with the hypoglycemic effect of olive leaf extract.
Our findings are in agreement with other reports regarding the hypoglycemic effect of oleuropein. It has been observed that the decreased activities of the hepatic antioxidant enzymes superoxide dismutase and catalase found in diabetic rats were restored by the use of oleuropein and hydroxytyrosol, thereby attenuating the oxidative stress associated with diabetes. [34] The intraperitoneal injections of thymoquinone caused peritoneal inflammation in the experimental rats, along with an increase in the weight and size of the liver.
CONCLUSIONS
Based on the findings of this study, it was revealed that intraperitoneal administration of N. sativa (thymoquinone) significantly decreases hyperglycemia in STZ-induced DM in rats. However, it was also observed that intraperitoneal administration of thymoquinone produced peritonitis and hepatomegaly in the experimental animals. A reduction in glucose levels was also found with the olive leaf extract constituent oleuropein.
Further studies with larger sample sizes and different parameters are suggested to determine the exact antidiabetic dose, the mechanism of action, and a safer route of administration, especially for thymoquinone, as it has the potential to be used as a hypoglycemic agent in humans.
Assessment of electrical dyssynchrony in cardiac resynchronization therapy: 12-lead electrocardiogram vs. 96-lead body surface map
Abstract Aims The standard deviation of activation time (SDAT) derived from body surface maps (BSMs) has been proposed as an optimal measure of electrical dyssynchrony in patients with cardiac resynchronization therapy (CRT). The goal of this study was two-fold: (i) to compare the values of SDAT in individual CRT patients with reconstructed myocardial metrics of depolarization heterogeneity using an inverse solution algorithm and (ii) to compare SDAT calculated from 96-lead BSM with a clinically easily applicable 12-lead electrocardiogram (ECG). Methods and results Cardiac resynchronization therapy patients with sinus rhythm and left bundle branch block at baseline (n = 19, 58% males, age 60 ± 11 years, New York Heart Association Classes II and III, QRS duration 167 ± 16 ms) were studied using a 96-lead BSM. The activation time (AT) was automatically detected for each ECG lead, and SDAT was calculated using either 96 leads or the standard 12 leads. Standard deviation of activation time was assessed in sinus rhythm and during six different pacing modes, including atrial pacing, sequential left or right ventricular pacing, and biventricular pacing. Changes in SDAT calculated both from BSM and from 12-lead ECG corresponded to changes in reconstructed myocardial ATs. A high degree of reliability was found between SDAT values obtained from 12-lead ECG and BSM for different pacing modes, and the intraclass correlation coefficient varied between 0.78 and 0.96 (P < 0.001). Conclusion Standard deviation of activation time measurement from BSM correlated with reconstructed myocardial ATs, supporting its utility in the assessment of electrical dyssynchrony in CRT. Importantly, 12-lead ECG provided similar information as BSM. Further prospective studies are necessary to verify the clinical utility of SDAT from 12-lead ECG in larger patient cohorts, including those with ischaemic cardiomyopathy.
• The standard deviation of activation time (SDAT) derived from body surface mapping (BSM) is considered a reliable tool for assessing electrical dyssynchrony.
• This was confirmed in patients with non-ischaemic cardiomyopathy and cardiac resynchronization therapy (CRT) by the observation that changes in SDAT calculated from BSM corresponded to changes in reconstructed myocardial metrics of depolarization heterogeneity (total activation time, SDATm).
• Importantly, we demonstrated that the measurement of SDAT using 12-lead electrocardiogram (ECG) provides similar results to 96-lead BSM.
• The lowest SDAT values corresponded to the narrowest QRS duration during CRT with a pacing configuration that enables fusion with normal conduction (i.e. AV delay 20 ms shorter than spontaneous PQ interval).
• This observation warrants further clinical studies on the utility of 12-lead ECG for SDAT measurement.
Introduction
Cardiac resynchronization therapy (CRT) is a recommended nonpharmacological therapy for patients with heart failure (HF) and reduced ejection fraction (HFrEF) and intraventricular conduction abnormalities. It provides both symptomatic relief and survival benefit and is associated with fewer hospitalizations for HF. 1,2 However, not all patients respond favourably to CRT. Besides clinical parameters such as HF aetiology or gender, QRS duration and morphology are further predictors of outcome and are used as inclusion criteria in all available clinical trials. Given that the primary goal of CRT is to restore electrical synchrony through optimally timed biventricular pacing, quantifying electrical dyssynchrony appears essential for predicting CRT outcomes and optimizing the pacing settings. The clinically used method to measure electrical dyssynchrony is the QRS duration derived from the standard 12-lead electrocardiogram (ECG), but the correlation of QRS duration with the response to CRT is generally not very high. 3 More recently, body surface mapping (BSM) or the ECG imaging (ECGi) approach has been proposed as a better alternative. In this respect, different indices for assessing depolarization heterogeneity have been suggested to improve CRT programming and LV lead placement. 4 Among them, the standard deviation of activation time (SDAT) calculated from BSM is one of the most promising indices of electrical dyssynchrony. Recent studies showed that SDAT might help guide CRT LV lead placement and device optimization. 5,6 Some studies have explored the feasibility of a limited array of leads, using a special belt with 40 leads. 7 Interestingly, no data directly comparing the diagnostic yield of 12-lead ECG with BSM are available. Therefore, the goal of this study was two-fold: (i) to compare the values of SDAT in individual CRT patients with reconstructed myocardial metrics of depolarization heterogeneity using an inverse solution algorithm and (ii) to compare SDAT calculated from 96-lead BSM with a clinically easily applicable 12-lead ECG.
Study population
We studied 19 patients with non-ischaemic dilated cardiomyopathy and HFrEF. They all had CRT systems implanted at least 6 months before BSM measurements. All patients were in sinus rhythm with QRS duration ≥120 ms and New York Heart Association (NYHA) Classes II and III HF, and were on optimal medical therapy prior to CRT for at least 3 months.
The local ethics committee approved this study protocol and all patients gave informed consent. Patients were evaluated during a regular outpatient visit.
Body surface map and 12-lead electrocardiogram
Multichannel ECG signals were recorded from the standard limb leads and 96 unipolar chest leads (interelectrode distance 3-5 cm) using the computer mapping system ProCardio-8 (16 bits; bandwidth 0.05-200 Hz; sampling frequency 1 kHz). 8 The front and back thorax ECG electrodes were organized in 12 strips of 8 leads each (Figure 1). Six of the chest leads corresponded to the precordial leads and were used for the analysis of the standard 12-lead ECG.
All ECG signals were prefiltered with a digital bandpass bidirectional Butterworth filter with cut-off frequencies of 0.5 and 40 Hz. R-peak positions were computed by the Pan-Tompkins algorithm 9 on limb Lead II. Each lead was segmented into N (N being the number of beats in the recording) ECG complexes, defined as segments extending 350 ms before and 450 ms after each R-peak position. Then, in order to remove possible ectopic beats, all ECG beats were compared with each other by computing Pearson's correlation coefficient; only ECG complexes with a Pearson's correlation coefficient >0.85 were selected and used to obtain the median ECG beat.
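A sketch of this preprocessing pipeline is shown below. It is an illustrative approximation under stated assumptions, not the authors' ProCardio-8 implementation: R-peak indices are taken as given (standing in for the Pan-Tompkins detector), and beat rejection uses correlation against an ensemble template as a simplification of the pairwise comparison described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling frequency, Hz

def median_beat(signal, r_peaks, corr_thresh=0.85):
    """Band-pass filter one ECG lead, segment it into beats around the
    R-peaks, reject dissimilar (e.g. ectopic) beats, and return the
    median beat."""
    # Zero-phase (bidirectional) 0.5-40 Hz Butterworth band-pass filter
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, signal)

    pre, post = int(0.350 * FS), int(0.450 * FS)  # 350/450 ms windows
    beats = np.array([filtered[r - pre:r + post] for r in r_peaks
                      if r - pre >= 0 and r + post <= len(filtered)])

    # Keep only beats well correlated with the ensemble average
    template = beats.mean(axis=0)
    corr = np.array([np.corrcoef(beat, template)[0, 1] for beat in beats])
    return np.median(beats[corr > corr_thresh], axis=0)
```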
The median ECG beat was processed in order to compute the QRS onset and the QRS end automatically. 10 The activation time (AT) point was defined as the minimum of the first time derivative of the potential during the QRS complex. Electrocardiogram signals of inappropriate quality due to poor skin contact were discarded.
Thus, AT was automatically determined for each ECG lead in reference to the earliest AT in the set of 96 or 12 leads. Calculation of SDAT was performed automatically using either the 96 thorax leads or the standard 12 leads. QRS duration was measured manually in each of the three standard limb leads by an independent observer blinded to the other measurements and the patient's status. The averaged value of QRS duration was used for the analysis.
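The AT and SDAT computations reduce to a few lines; below is a minimal sketch, assuming each lead's median beat and its automatically detected QRS onset/end (as sample indices) are already available.

```python
import numpy as np

def activation_time(beat, qrs_on, qrs_end):
    """AT = instant of the steepest negative slope, i.e. the minimum of the
    first time derivative of the potential within the QRS complex."""
    return int(np.argmin(np.diff(beat[qrs_on:qrs_end])))

def sdat(median_beats, qrs_on, qrs_end):
    """SDAT over a lead set (96-lead BSM or the standard 12 leads), with
    ATs referenced to the earliest AT in the set. At 1 kHz sampling,
    one sample equals one millisecond."""
    ats = np.array([activation_time(b, qrs_on, qrs_end) for b in median_beats])
    return float(np.std(ats - ats.min()))
```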
Computed tomography examination
All patients underwent a non-contrast computed tomography scan of the chest after BSM recordings with the array of BSM electrodes in situ (SOMATOM Flash, dual-source scanner, 128-slice; Siemens Healthineers, Erlangen, Germany). The resulting images were used subsequently for the reconstruction of patient-specific models, incorporating thorax, myocardial surface, blood cavities, and the 96 electrode positions of BSM.
Reconstructed myocardial activation times
To compare the SDAT values with the calculated myocardial activation sequences in each patient [i.e. total AT (TAT)], the latter parameter was noninvasively determined using the method of ECGi described by Boonstra et al. 11 For this purpose, the above patient-specific models of the chest with electrode positions were employed. The estimation procedure to localize the activation sequence is a two-step process. In the first step, the stimulation site is roughly determined by the 3D direction of the QRS axis. In the subsequent step, the option of fusion activation with the intrinsically activated His-Purkinje system is added to the initial estimate. 11 Both the foci positions and timing are optimized in a subsequent iterative procedure, such that the correlation between simulated and measured ECG signals is optimal. The resulting myocardial sequence of ventricular depolarization was used to obtain the myocardial parameters of ventricular depolarization heterogeneity. Total AT was calculated as the difference between the maximal and minimal myocardial ATs. The standard deviation of the reconstructed myocardial ATs (SDATm) was also computed.
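Given the reconstructed activation map, the myocardial metrics follow directly; a minimal sketch, assuming the inverse solution returns one AT value (in ms) per myocardial node:

```python
import numpy as np

def myocardial_heterogeneity(node_ats_ms):
    """TAT = difference between maximal and minimal reconstructed
    myocardial ATs; SDATm = their standard deviation."""
    ats = np.asarray(node_ats_ms, dtype=float)
    return ats.max() - ats.min(), float(np.std(ats))
```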
Study protocol
The BSM data were recorded in sinus rhythm (pacing off) and during six distinct pacing configurations in each patient, including atrial pacing, sequential left ventricular (LV) or right ventricular (RV) pacing, and biventricular pacing (BVP). All pacing modes were set at the same rate, 10 b.p.m. above the sinus rhythm rate. Sequential LV or RV pacing modes were programmed with an AV delay of 120 ms. Three BVP modes were programmed: (i) AV delay 120 ms and VV delay 0 ms; (ii) AV delay 20 ms shorter than the spontaneous PQ interval and VV delay 0 ms; and (iii) AV delay 20 ms shorter than the spontaneous PQ interval and VV delay −40 ms. The BSM data were also obtained at the baseline programmed setting before and after applying the experimental pacing protocol to exclude the mutual influence of pacing modes. The BSM data were recorded for 30 s for each programmed pacing mode, starting 15 s after initiation of pacing.
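For clarity, the recording conditions can be written out as a small configuration table; the encoding below is illustrative, with 'PQ-20' denoting an AV delay 20 ms shorter than the spontaneous PQ interval.

```python
# Recording conditions of the protocol (illustrative encoding)
PACING_MODES = [
    {"mode": "sinus rhythm", "av_delay_ms": None,    "vv_delay_ms": None},
    {"mode": "atrial",       "av_delay_ms": None,    "vv_delay_ms": None},
    {"mode": "RV",           "av_delay_ms": 120,     "vv_delay_ms": None},
    {"mode": "LV",           "av_delay_ms": 120,     "vv_delay_ms": None},
    {"mode": "BVP",          "av_delay_ms": 120,     "vv_delay_ms": 0},
    {"mode": "BVP",          "av_delay_ms": "PQ-20", "vv_delay_ms": 0},
    {"mode": "BVP",          "av_delay_ms": "PQ-20", "vv_delay_ms": -40},
]
```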
Statistics
Data are expressed as mean and standard deviation, or as median and interquartile range (IQR) in case of asymmetric distribution. Statistical analysis was performed with the SPSS package (IBM SPSS Statistics 23). A paired Student's t-test and repeated measures analysis of variance with post hoc analysis by Bonferroni adjustment were applied for paired and multiple comparisons, respectively. Reliability analysis with the intraclass correlation coefficient (ICC, two-way mixed model, absolute agreement type) was performed to assess the similarity of electrophysiological parameters obtained from 12-lead ECG and 96-lead BSM. Pearson's pairwise test was applied to find a relation between body surface parameters of dyssynchrony and myocardial depolarization heterogeneity. Differences were considered significant at P < 0.05.
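As an illustration of the reliability analysis, the sketch below computes ICCs with the pingouin package (the original analysis used SPSS); the patient IDs and SDAT values are hypothetical placeholders.

```python
import pandas as pd
import pingouin as pg

# Hypothetical paired SDAT measurements (ms) for the same patients
df = pd.DataFrame({
    "patient": list(range(1, 6)) * 2,
    "method":  ["BSM96"] * 5 + ["ECG12"] * 5,
    "sdat":    [31, 28, 35, 24, 30,  32, 27, 36, 25, 29],
})
icc = pg.intraclass_corr(data=df, targets="patient",
                         raters="method", ratings="sdat")
# Inspect the table and select the row matching the two-way mixed,
# absolute-agreement model used in the paper
print(icc[["Type", "Description", "ICC", "pval", "CI95%"]])
```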
Patient data
The clinical characteristics of the 19 patients studied (58% male) are summarized in Table 1. The patients were 60 ± 11 years old, with a QRS duration of 167 ± 16 ms and a median LVEF of 25% (IQR 10%) prior to CRT implantation. All were in sinus rhythm with a true left bundle branch block (LBBB) pattern on ECG. The patients had BVP set empirically after implant with an AV delay 20 ms shorter than the intrinsic PQ interval and a VV delay of 0 ms. At the time of the study, all patients had had a CRT device implanted for a minimum of 6 months (median period after implant 13 months, IQR 7-66). Twelve patients improved their functional status by one NYHA class, and the rest remained unchanged (2.47 ± 0.8 vs. 1.74 ± 0.7, P < 0.00002, Figure 2). Left ventricular ejection fraction (LVEF) improved in all subjects: in 12 by 10%, in 5 by 5%, and in 2 by 20% or more.
Standard deviation of activation time as a measure of electrical dyssynchrony
During intrinsic sinus rhythm, atrial pacing, and both sequential RV and LV pacing, the averaged SDAT was predictably higher than the averaged SDAT value obtained in the BVP modes. For BSM, it was 32.5 ± 5 vs. 24.2 ± 5 ms (P < 0.0001), and for 12-lead ECG, it was 33.5 ± 7 vs. 24.8 ± 6 ms (P = 0.0002). The lowest value of SDAT was found for two BVP configurations: sequential BVP (AV delay 120 ms, VV delay 0 ms) and sequential BVP (AV delay 20 ms shorter than intrinsic PQ, VV delay 0 ms; Figure 3A). The values of SDAT obtained from 96-lead BSM did not differ from SDAT derived from 12-lead ECG in any pacing mode.
To validate SDAT as a measure of electrical dyssynchrony, it was compared with TAT and SDATm reconstructed from ECGi. The changes of TAT and SDATm during the different pacing configurations were similar to those of body surface-derived SDAT (Figure 3). In intrinsic rhythm, the LV activation was delayed due to LBBB, and this was reflected by a high dispersion of AT values. Similarly, AT dispersion persisted to a similar extent during RV or LV sequential pacing. However, myocardial dispersion of depolarization was low during the different protocols of BVP, especially during sequential BVP with AV delay 20 ms shorter than intrinsic PQ and VV delay 0 ms (Figure 3C and D). A statistically significant correlation was found between 96-lead SDAT and both TAT (r = 0.539, P < 0.0001) and SDATm (r = 0.510, P < 0.0001). Standard deviation of activation time derived from 12-lead ECG also demonstrated significant correlation with TAT (r = 0.555, P < 0.0001) and SDATm (r = 0.513, P < 0.0001).
The QRS duration measured from the standard limb leads significantly decreased during BVP in comparison with sinus rhythm, atrial pacing, or sequential RV or LV pacing (Figure 3B). The most pronounced shortening of QRS duration was found for sequential BVP with AV delay set 20 ms shorter than the spontaneous PQ interval and VV delay of 0 ms. This set-up corresponds to the configuration with the highest degree of fusion with spontaneous intraventricular conduction, and for this configuration the lowest value of SDAT was found. However, the correlation of QRS duration with SDAT derived from 96-lead BSM or 12-lead ECG was only moderate (r = 0.500, P < 0.0001 and r = 0.473, P < 0.0001, respectively).
96-Lead body surface map vs. 12-lead electrocardiogram
Standard deviation of activation time values obtained from 96-lead BSM were close to those derived from 12-lead ECG in sinus rhythm. Analogous agreement between the two measurement methods was demonstrated for SDAT values during the different pacing modes, including atrial pacing, sequential LV or RV pacing with AV delay 120 ms, and the BVP modes (Figure 4).
Reliability analysis with the intraclass correlation coefficient (ICC) was performed to test how strongly the SDAT values quantified from the 96-lead BSM mirror the SDAT values from the 12-lead ECG. The ICC ranged between 0.78 and 0.96 (P < 0.001).
Discussion
The results of this study can be summarized as follows: (i) a significant correlation was found between reconstructed myocardial metrics of depolarization heterogeneity (TAT, SDATm) and SDAT derived both from 96- and 12-lead ECG in sinus rhythm and during different pacing configurations, thus validating the SDAT parameter as a measure of dyssynchrony; (ii) compared with spontaneous activation and RV or LV pacing, BVP with the two specific pacing regimes resulted in the lowest value of SDAT, implying that this strategy provides the most efficient electrical resynchronization; (iii) although the QRS duration was shortest for these two BVP configurations, only a moderate correlation was found between the QRS duration and SDAT, supporting the use of SDAT for assessment of electrical dyssynchrony; and, most importantly, (iv) SDAT values obtained from 96-lead BSM were similar to SDAT calculated from the 12-lead ECG, demonstrating strong agreement between both ECG recording methods. If confirmed, this may simplify non-invasive assessment of electrical dyssynchrony.
Standard deviation of activation time as a measure of electrical dyssynchrony
Earlier studies have documented that SDAT derived from BSM has the potential to predict clinical response to CRT. More recently, SDAT obtained from limited body surface multichannel ECGs (ECG Belt) has been proposed as an equally reliable parameter, 12 which was validated through comparison with the ECGi method of reconstructed ATs on the surface of the heart. 7 This parameter could be used for guidance of LV lead placement 5 and/or for optimization of CRT programming. 6 It was also found useful in identifying patients less likely to be improved by CRT. 13 In our study, we performed a similar comparison of ECGi-derived parameters of electrical dyssynchrony, such as TAT or SDATm, with SDAT from 96-lead BSM and also from 12-lead ECG. Compared with a previous study, which found a strong correlation between values of SDAT calculated from reconstructed epicardial potentials (ECGi) and body surface potentials, 7 our findings indicated a lower strength of the relationship between SDAT values derived from BSM and reconstructed myocardial ATs. These differences may relate to the different inverse solution approaches. An alternative explanation for the differing results may be another difference in methodology: while the above study used only reconstructed epicardial potentials, 7 our technique also employed endo- and epicardial myocardial ATs, including septal ones. It is also important to emphasize that SDAT as a metric of electrical dyssynchrony has not been validated by direct measurements of myocardial ATs on the surface of the heart. On the other hand, other studies demonstrated the correlation of SDAT with acute haemodynamic response to pacing. 5 Also, a recent comprehensive review of the literature identified SDAT as the most promising non-invasive parameter for assessment of electrical dyssynchrony. 4 Importantly, our study compared SDAT values across individual pacing setups in a cohort of patients with non-ischaemic cardiomyopathy and found the lowest values for two of them, both BVP regimes, supporting the previous studies. Interestingly, these two BVP setups used a VV delay of 0 ms and an AV delay fixed at 120 ms or set 20 ms shorter than the spontaneous PQ interval. Such settings should allow, in patients with preserved AV conduction, a relatively high degree of fusion of the paced wavefronts with spontaneous activation via the conduction system. This presumption appears to be confirmed by the parallel shortening of the mean QRS duration in those two pacing configurations and by some correlation between SDAT values and the QRS duration. Interestingly, the BVP configuration with an advanced LV wavefront (i.e. VV delay −40 ms) resulted in higher SDAT values. This configuration is also characterized by a broader QRS complex.
Based on our previous experience, we use a pragmatic approach for the setting of the AV delay in CRT patients without AV block in our practice: an AV delay 20 ms shorter than the spontaneous AV interval or, alternatively, a fixed AV delay of 120 ms. Interestingly, this study confirmed that these configurations provided the lowest SDAT. Many patients with this setting had improvement in functional status, and all had some improvement in LVEF. Based on these findings, further studies of SDAT as a metric of electrical dyssynchrony have to be performed, evaluating this parameter in different categories of patients, including responders and non-responders to CRT.
Body surface map vs. 12-lead electrocardiogram
The most important finding of our study is that SDAT can be calculated from the 12-lead ECG instead of 96-lead BSM with the same diagnostic yield. The 12-lead ECG-derived SDAT values were in good agreement with BSM-derived SDAT values for all pacing configurations and for intrinsic ventricular activation, i.e. sinus rhythm or atrial pacing (Figure 4). Further prospective studies of SDAT derived from 12-lead ECG as a metric of electrical dyssynchrony need to be conducted to confirm our preliminary results; in particular, subjects with ischaemic cardiomyopathy need to be studied.
Interestingly, the advantage of SDAT obtained from the BSM system over the QRS duration was demonstrated in an earlier study using native rhythm and baseline CRT settings. 6 The authors did not find a correlation between these two metrics of electrical synchrony. When compared with QRS duration measured from the 12-lead ECG, SDAT from BSM demonstrated better predictive ability for LV remodelling in response to CRT. 12,14 These observations indicating SDAT as a more sensitive metric of electrical dyssynchrony than QRS duration were not supported by our results. Changes in QRS duration paralleled SDAT changes during the different pacing modes, and there was a reasonable correlation between SDAT and QRS duration. This controversy may reflect differences in the studied cohorts. Our study specifically included non-ischaemic patients with LBBB, who have more homogeneous intramyocardial conduction compared with ischaemic patients and a higher probability of being responders.
Limitations
The study was performed on a relatively small population of patients with non-ischaemic cardiomyopathy and true LBBB. This limits the generalization of the results to patients with ischaemic cardiomyopathy or patients with other patterns of intraventricular conduction abnormalities. On the other hand, we selected non-ischaemic cardiomyopathy as a model for such studies focused on proof of concept. Notably, the measured parameters in the studied population had normal distribution, and data analysis demonstrated statistically significant results. Another limitation of this study relates to 12-lead ECG electrode positions, since precordial unipolar leads (V1-V6) were selected from the array of 96-lead BSM, and thus, the position of these electrodes slightly varied from the standard position of these leads. Finally, we do not have measurements that would allow a comparison of SDAT values in individual pacing configurations with their haemodynamic benefit.
Conclusions
Using BSM for the non-invasive assessment of electrical dyssynchrony, we found the lowest SDAT values for the two BVP configurations, suggesting the most efficient electrical resynchronization. In addition, a significant correlation was found between simulated myocardial metrics of depolarization heterogeneity obtained from ECGi and SDAT derived from BSM, supporting the usefulness of SDAT. Most importantly, SDAT calculated from the 12-lead ECG provided similar results as BSM.
Funding
This study was supported by the research grant NV18-02-00080 from the grant agency AZV (Ministry of Health of the Czech Republic).
An attempt of CNC machining cycle's application as a tool of the design feature library elaboration
This paper presents a novel approach to the problem of design feature library elaboration. CNC machining cycles are proposed as a tool for design feature library development. Because of the great number of commercially available CNC machine controllers, with different CNC machining cycle definitions, it was necessary to decide on a research methodological framework, i.e. the selected CNC machine controller. Using the criterion of popularity, the group of Sinumerik CNC machine controllers was chosen as the research framework. The idea of feature library development presented in the paper is based on the assumption that it is possible to find a relationship between a particular CNC machining cycle and a simple design feature or even compound design features. The set of design features identified thanks to this assumption could be the base for elaboration of the design feature library. This feature library, in turn, gave the opportunity to elaborate a feature-based design modelling module (FBDMM) working in the SIEMENS NX system environment. Hence, the FBDMM module can support both a designer and a CNC machine programmer, which is possible due to the modelling paradigm adopted in the module. In the FBDMM module, the removal (subtractive) feature-based modelling technique is adopted.
Methods of the design modelling with design features
Whilst considering design modelling methods with application of a design feature library implemented in a CAD/CAM/CAE system, it is possible to distinguish the following three modelling methods:
• the so-called removal method (technological), which mimics removal manufacturing processes like turning, milling, drilling, reaming, and so on. In this method a design is modelled by performing subsequent Boolean operations, so in each single operation a particular design feature is subtracted from the blank in order to get a finished part model. The schema of the modelling process with the technological method is shown in Figure 1a. One of the most important drawbacks of this method is that the design feature library cannot be prepared only by means of design features with a simple design shape structure; in most cases it is necessary to work out at least a part of the library with compound design shape features. The biggest advantage of the technological method is its similarity to real manufacturing processes, and it gives the possibility of working out a CAPP system with a really simple inference mechanism.
A well-designed structure of the original modelling module implemented in a CAD/CAM/CAE system environment, with a properly worked-out set of design rules, allows controlling the design modelling process to a certain degree. Consequently, a designer is supervised during the modelling process in order to follow given modelling standards. In the considered modelling standard, the particular design process stages should correspond to the subsequent manufacturing process stages. It means that consecutively performed Boolean operations would correspond to certain manufacturing process operations or cuts. As a result, at the first stage of the modelling process the "rough" shape of the model, which corresponds to the product stage after roughing, is achieved. At the second stage of the modelling process the shaped features of the model are made (the stage after profiling); taking into consideration that in CAD systems the product model is always modelled with nominal dimensions, in principle, after the second modelling stage the final shape of the product is achieved.
• the so-called additive method (constructional): in opposition to the removal modelling method, the model design shape comes into being as a result of performing subsequent unite operations in the sense of Boolean operations. A product model is built by adding design features together. From the manufacturing technology point of view it corresponds to additive manufacturing, in which the product is manufactured with one of the 3D printing (rapid prototyping) methods or with classic manufacturing technologies like bonding, welding, etc. The schema of the constructional modelling process is shown in Figure 1b. As in the case of the technological method, the biggest drawback of the constructional method is the necessity of creating design features with a compound design shape. This problem can be illustrated by the example of a shaft pin with a splineway manufactured on its surface. In the case of the constructional method it would be necessary, at the design feature geometrical shape identification stage, to create a compound shape; in the considered case it would be the difference between the pin geometrical shape and the splineway shape in the sense of the Boolean operation.
As a consequence, it is easy to notice that it is almost impossible to work out a design feature library including all possible combinations of compound design feature structures.
• the so-called hybrid method (design-technological): the product model is created by uniting or subtracting particular objects from each other. The schema of the modelling process with the hybrid method is shown in Figure 1c. Looking for similarities in the manufacturing realm, it corresponds to manufacturing using both removal and bonding manufacturing techniques.
Taking into consideration the drawbacks and merits of the particular methods, the biggest usefulness in the process of CAPP system creation is shown by the first above-mentioned method, i.e. the technological modelling method. This comes from the fact that this method is, in its essence, similar to manufacturing processes realized in real industrial conditions. In addition, as a consequence of deploying the technological method, it would be possible to work out such a structure of the CAPP system user interface that the manufacturing process would be designed somewhat in the background. It means that a designer making a design and its design structure would simultaneously make a process plan. In such a case, the order of the design features in the design structure would correspond to the order of the manufacturing operations or cuts in the process plan structure. This assumption lets us make a CAPP knowledge-based system with a simple inference mechanism. Consequently, the inference mechanism is built of a set of inference rules that interlink a certain design feature with a strictly ascribed manufacturing feature. Despite the unquestionable advantages of such a solution, its effectiveness is in principle limited to a group of products such as shafts, sleeves, and disks.
Fig. 1. Schemas of the modelling methods (legend: additive design features, removal design features).
Machining cycles as a basis for design feature library elaboration
Machining cycles are commonly considered an important element of CNC control programs. They allow the CNC programming process to be automated, and they also reduce program size. Thanks to them it is possible to change manufacturing parameters in an easy and quick way. A machining cycle is understood as a constant, parameterized subprogram stored in the control system memory and used for programming typical manufacturing operations such as turning, drilling, milling, threading, etc. Machining cycle programming is very often supported by a dialog programming module, as in the case of the Sinumerik control system. In such a case a CNC programmer is visually assisted in order to become familiar with the particular machining cycle parameters and their permissible values [5][6]. Taking their characteristics into account, machining cycles can be roughly divided into three groups: (i) drilling cycles, (ii) milling cycles and (iii) turning cycles. As mentioned in the abstract, because of the great number of commercially available CNC machine controllers, with different CNC machining cycle definitions, it was necessary to define a methodological framework for the research, i.e. to select a group of CNC machine controllers. Taking into account the criterion of popularity, in the considered case under Polish industry conditions, a group of Sinumerik CNC machine controllers was chosen as the research framework. The results presented in the paper can, however, be generalized to other CNC machine controllers such as those offered by Fanuc, Heidenhain or Okuma. Our work focused on the group of Sinumerik 810D, 840D and 840Di CNC controllers. The idea of the feature library development is based on the general assumption that it is possible to find a relationship between a particular CNC machining cycle and a simple or compound design feature.
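As a sketch of the assumed cycle-to-feature relationship, the snippet below represents a machining cycle as a named, parameterized subprogram and ties it to a design feature class; the parameter names are illustrative assumptions and do not reproduce the actual Sinumerik cycle signatures.

```python
from dataclasses import dataclass, field

# Sketch: a machining cycle as a constant, parameterized subprogram
# stored under a unique name, linked to a design feature class.
# Parameter names here are illustrative, not Sinumerik's actual ones.

@dataclass
class MachiningCycle:
    name: str                      # e.g. "CYCLE81"
    group: str                     # "drilling", "milling" or "turning"
    parameters: dict = field(default_factory=dict)

@dataclass
class DesignFeature:
    feature_class: str             # e.g. "CenterHole"
    cycle: MachiningCycle          # the cycle that manufactures this feature

# One feature <-> one cycle, the core assumption of the library.
center_hole = DesignFeature(
    feature_class="CenterHole",
    cycle=MachiningCycle(
        name="CYCLE81",
        group="drilling",
        parameters={"final_depth_mm": 12.0, "spindle_rpm": 1800,
                    "feedrate_mm_per_min": 120.0},
    ),
)

print(f"{center_hole.feature_class} -> {center_hole.cycle.name} "
      f"({center_hole.cycle.group}): {center_hole.cycle.parameters}")
```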
An overview of the machining cycles for Sinumerik 810D/840D/840Di
By definition, machining cycles are motion sequences defined for drilling, boring/reaming, tapping, etc. according to the DIN 66025 standard. These machining cycles are called as subroutines with a unique cycle name and an appropriate set of parameters. In the Sinumerik 810D/840D/840Di control systems there are five drilling and five boring/reaming machining cycles available. A short description of these cycles is given in Table 1 [1,2].
Table 1. Drilling and boring/reaming machining cycles in Sinumerik 810D/840D/840Di:
• Drilling, centering (CYCLE81): the tool drills at the programmed spindle speed and feedrate to the programmed final drilling depth.
• Drilling, counterboring (CYCLE82): the tool drills at the programmed spindle speed and feedrate to the programmed final drilling depth; a dwell time can be allowed to elapse when the final drilling depth has been reached.
• Deep-hole drilling (CYCLE83): the tool drills at the programmed spindle speed and feedrate to the programmed final drilling depth; the operation is performed with a depth infeed of a maximum definable depth executed several times, and the tool can be retracted for swarf removal or chip breaking.
• Rigid tapping (CYCLE84): rigid tapping at the programmed spindle speed and feedrate to the programmed thread depth.
• Tapping with compensating chuck (CYCLE840): tapping with a compensating chuck at the programmed spindle speed and feedrate to the programmed thread depth.
• Boring 1 (CYCLE85): different feedrates for boring and retraction.
• Boring 2 (CYCLE86): oriented spindle stop, definition of the retraction path, retraction in rapid traverse, definition of the spindle direction of rotation.
• Boring 3 (CYCLE87): spindle stop M5 and program stop M0 at drilling depth; machining continues after NC Start; retraction in rapid traverse; definition of the spindle direction of rotation.
• Boring 4 (CYCLE88): as for CYCLE87, plus a dwell time at drilling depth.
• Boring 5 (CYCLE89): boring and retraction at the same speed.
• Row of holes (HOLES1): drilling a row of holes, i.e. a number of holes that lie along a straight line, or a grid of holes.
• Hole circle (HOLES2): drilling a circle of holes; the type of hole is determined by the drilling cycle.

Turning cycles in most CNC machine controllers are used for programming the following manufacturing cuts: turning, boring, grooving, threading and manufacturing undercuts. In addition, there are some hole manufacturing cycles in the Sinumerik 810D/840D/840Di control systems; they are derived from the above-mentioned boring/reaming cycles, excluding milling-specific manufacturing cycles such as machining of hole patterns. There are seven turning machining cycles available in the Sinumerik 810D/840D/840Di control systems. A short description of these cycles is given in Table 2.
Table 2. Turning machining cycles in Sinumerik 810D/840D/840Di:
• Grooving (CYCLE93): turning of symmetrical and asymmetrical grooves for longitudinal and facing machining on straight contour elements.
• Undercut (CYCLE94): machining of undercuts of forms E and F in accordance with DIN 509, with the usual load on a finished part.
• Stock removal (CYCLE95): with this cycle, contours can be machined in the longitudinal and facing directions, inside and outside; the cycle is freely selectable for roughing, finishing or complete machining.
• Thread undercut (CYCLE96): machining of thread undercuts of forms A, B, C and D in accordance with DIN 76 on parts with a metric ISO thread.
• Thread cutting (CYCLE97): turning of threads.
• Thread chaining (CYCLE98): allows several concatenated cylindrical or tapered threads with constant lead to be produced in longitudinal or face machining, all of which can have different thread leads.
• Extended stock removal (CYCLE950): the same as CYCLE95, but the finished-part contour profile cannot have any relief cuts and must be continuous; for roughing the programmed infeed is maintained precisely, and the last two roughing steps are divided equally; roughing is performed to the programmed final machining allowance, and finishing is performed in the same direction as roughing.
Because at the current stage of our research we focus only on rotationally symmetric parts such as shafts, sleeves and disks, the machining cycle analysis was limited to the groups of boring/reaming and turning cycles. The analysis of the usability of milling cycles for design feature identification was omitted. Taking into account the set of available machining cycles, in the considered case those of the Sinumerik 810D/840D/840Di machine controllers, it can be stated that it is possible to work out a set of design features that can then be treated as the basis for the elaboration of a feature-based design modelling module.
In Figure 2 the proposed structure of the design feature classes, worked out on the basis of the usability of the machining cycles, is shown. It can easily be noticed that this structure is strictly oriented towards machining cycles. Moreover, its characteristic feature is its simplicity compared to other structures, for example the one presented in [3]. This simplicity comes at a price: the internal structure of the particular classes is more complex than in the case of richer class taxonomies, which comprise a larger number of classes with simpler internal structures.
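A minimal Python rendering of the machining-cycle-oriented class structure of Fig. 2, restricted to the Groove branch named in the Fig. 2 caption; the attributes are assumed for illustration.

```python
# Sketch of the machining-cycle-oriented feature class structure
# from Fig. 2. Only the Groove branch named in the caption is shown;
# attributes are assumed for illustration.

class DesignFeatureBase:
    """Root of the design feature class structure."""
    cycle = None  # Sinumerik cycle realizing the feature

class Groove(DesignFeatureBase):
    cycle = "CYCLE93"
    def __init__(self, width, depth):
        self.width = width
        self.depth = depth

class GrooveLongitudinal(Groove):
    machining_direction = "longitudinal"

class GrooveFacing(Groove):
    machining_direction = "facing"

g = GrooveLongitudinal(width=4.0, depth=2.5)
print(type(g).__name__, g.cycle, g.machining_direction)
```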
Conclusions
In the paper an attempt to apply machining cycles to build a design feature library is presented. At the current stage of the research the proposed structure of the design features is strictly oriented towards manufacturing operations that can be performed with lathes and milling machines equipped with Sinumerik 810D/840D/840Di CNC controllers, but the achieved results are promising enough to state that they can be adapted to any other CNC controller. These results will be used for creating a prototype of the CAPP system.
Fig. 2. The proposed structure of the design feature classes. Grooves in the design feature class structure are represented by the Groove class and its derivative classes GrooveLongitudinal and GrooveFacing.
A Mechanism Design Approach for Coordination of Thermostatically Controlled Loads with User Preferences
This paper focuses on the coordination of a population of thermostatically controlled loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the device bidding and market clearing strategies to motivate self-interested users to realize efficient energy allocation subject to a feeder capacity constraint. This coordination problem is formulated as a mechanism design problem, and we propose a mechanism to implement the social choice function in dominant strategy equilibrium. The proposed mechanism consists of a novel bidding and clearing strategy that incorporates the internal dynamics of TCLs in the market mechanism design, and we show it can realize the team optimal solution. This paper is divided into two parts. Part I presents a mathematical formulation of the problem and develops a coordination framework using the mechanism design approach. Part II presents a learning scheme to account for the unknown load model parameters, and evaluates the proposed framework through realistic simulations.
I. INTRODUCTION
Demand response has attracted considerable research attention over the recent years, and is regarded as one of the most important means to improve the efficiency and reliability of the future smart grid. A natural way to achieve demand response is through various pricing schemes, such as Real Time Pricing (RTP), Time of Use (TOU) and Critical Peak Pricing (CPP) [1], [2]. Many validation projects [3] have been carried out to demonstrate the performance of these pricing schemes in terms of payment reduction, load shifting, and peak shaving. These price-based methods either directly pass the wholesale energy price to end-users [2] or design pricing strategies in heuristic ways [4]. It is thus hard to achieve predictable and reliable aggregated response, which is essential in various demand response applications, such as energy capping, load following, frequency regulation, among others.
To achieve accurate and reliable load response, aggregated load control has been extensively studied in the literature. A simple form of aggregated load control is direct load control (DLC), in which the aggregator can remotely control the operation of residential appliances based on an agreement between customers and the utility company. While traditional DLC is mainly concerned with peak load management [5], [6], recent research effort focuses more on the modeling and control of different kinds of aggregated loads, such as data center servers [7], [8], hybrid electrical vehicles [9], [10] and thermostatically controlled loads [11]-[15], to participate in various demand response programs. Some of these DLC methods require fast communications between the aggregator and individual loads. The communication overhead can be reduced using advanced state estimation algorithms [16], [17] that can accurately estimate load state information without frequently collecting measurements from the loads.
Another important paradigm of aggregated load control is market-based coordination. It borrows ideas from economics [18] to coordinate a group of self-interested users to achieve a desired aggregated load response [19], [20]. Different from DLC, market-based coordination affects the load response indirectly via an internal price signal. The internal price can be dramatically different from the wholesale price due to specific group objectives. For instance, in [21] and [22], a market-based approach is proposed to efficiently allocate thermal resources among offices based only on local information. In [23] and [24], a multiagent-based control framework is proposed to integrate distributed energy resources for various coordination objectives. A distributed algorithm is developed in [25] and [26] for the utility company and users to jointly determine optimal prices and demand schedules via an iterative bidding and clearing process. In [27], [28], a group of smart buildings is coordinated through an internal price signal to provide frequency regulation services to the ancillary market. In addition, the Pacific Northwest National Laboratory launched the GridWise® demonstration project to validate market-based coordination strategies for residential loads [29]. The demonstration project involved 112 residential houses in Washington and Oregon, and showed that market-based coordination strategies could reduce the utility demand and congestion at key times.
Although the aggregated dynamics of TCLs may significantly affect the performance of the control strategies, many existing market-based coordination strategies either neglect these internal dynamics or use a simplified model to characterize them. In this paper, we consider the coordination of a group of TCLs to maximize the social welfare subject to a peak energy constraint, where the internal dynamics of TCLs are taken into account. This coordination problem poses several challenges. First, the user utilities are private information, making it rather challenging for the coordinator to achieve group objectives with incomplete information. Second, many existing works adopt the Nash equilibrium concept [30], [31], which requires multiple iterations between the agents and the coordinator to achieve the optimal social outcome. The real-time implementation of such coordination algorithms requires considerable communication resources. Third, much of the existing literature [19] assumes accurate load models with known parameters. However, the GridWise® demonstration project [29] suggests this is not always the case. In practice, the information each user sends to the coordinator can only depend on local measurements, such as room temperature and the "on/off" state. Therefore, an estimation scheme is needed for the users to compute their bids based only on online measurements.
The key contribution of this paper lies in the development of a market-based coordination framework for residential air conditioning loads with a systematic consideration of all the aforementioned challenges. In this paper, we formulate the coordination problem as a mechanism design problem [18], [32]. The price-responsive loads are modeled as individual utility maximizers, while the group objective is encoded in the social choice function, which is to maximize the social welfare subject to a peak energy constraint. We propose a mechanism and show that it can implement the social choice function in dominant strategy equilibrium. This solution concept does not require iterative information exchanges between the coordinator and the individual loads, and can be implemented with limited communication resources. The proposed mechanism contains a novel bidding and clearing strategy that incorporates the internal dynamics of the thermostatically controlled loads into the market mechanism design, and we show that it can realize the team optimal solution. Different from many existing works [29], [26], the problem is addressed with a systematic consideration of various practical factors, such as heterogeneous load dynamics, private information of individual users, unknown parameters of the load model, and the communication resources needed for the information exchange. All these factors are brought up based on observations in the GridWise® demonstration project [29]. They are important not only for customer privacy protection and end-user engagement, but also for the cost-effective implementation of real-time control strategies. Once our framework is properly implemented, it can accurately achieve the desired load responses and improve the operational efficiency of the distribution system in an economically feasible way.
The rest of the paper proceeds as follows. A motivating example based on a real-world demonstration project is presented in Section II, followed by the problem formulation in Section III. A mechanism is constructed in Section IV to implement the optimal energy allocation. Simulation results and the joint state-parameter estimation framework are presented in the companion paper [33].
II. MOTIVATING EXAMPLE
The framework proposed in this paper is largely motivated by the Pacific Northwest GridWise® demonstration project [29], where a 5-minute double-auction market is created to coordinate a group of TCLs to cap the aggregated peak energy. Each device is equipped with a smart thermostat that can measure the room temperature and communicate with the coordinator. Before each market period, the device measures its room temperature, T_c, and submits a bid to the coordinator. The bid consists of the load power and the bidding price. Since the rated power of the load differs from its actual power due to environmental disturbances, in practice each device is required to bid the measured average power of the most recent market period during which the load was on. The bidding price is determined by a bidding curve as shown in Fig. 1, where P_avg is the average clearing price over a certain price history (e.g., 24 hours), σ is the standard deviation of the clearing prices over the given history, and T_min, T_desired and T_max are the user-specified minimum, desired, and maximum temperatures, respectively. We denote the bidding power and price as Q_bid and P_bid, respectively. In addition, each user can specify energy use preferences through a smart thermostat interface (see Fig. 2). This user preference affects the slope of the bidding curve.
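The exact bidding curve is given only graphically in Fig. 1; the following sketch assumes a piecewise-linear curve anchored at (T_min, P_avg − Kσ), (T_desired, P_avg) and (T_max, P_avg + Kσ), with K encoding the user's comfort/cost slider. This is one plausible reading of the figure rather than the project's exact formula.

```python
# Hedged sketch of the device bidding curve of Fig. 1: piecewise-linear
# in room temperature, anchored at the desired temperature.  The exact
# anchor points are an assumption read off the figure, with K encoding
# the user's comfort/cost preference from the thermostat slider.

def bid_price(T_c, T_min, T_desired, T_max, P_avg, sigma, K):
    """Map measured room temperature T_c to a bidding price (cooling mode)."""
    if T_c <= T_min:          # cold enough: bid nothing, willing to stay off
        return 0.0
    if T_c >= T_max:          # too hot: bid high enough to always run
        return P_avg + K * sigma
    if T_c <= T_desired:      # interpolate on the low side of the curve
        frac = (T_c - T_min) / (T_desired - T_min)
        return (P_avg - K * sigma) + frac * (K * sigma)
    # interpolate on the high side of the curve
    frac = (T_c - T_desired) / (T_max - T_desired)
    return P_avg + frac * (K * sigma)

# Example: desired 72F, band [68F, 78F], mildly price-sensitive user.
print(bid_price(T_c=74.0, T_min=68.0, T_desired=72.0, T_max=78.0,
                P_avg=30.0, sigma=5.0, K=1.5))
```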
The coordinator collects all the bids and orders them in a decreasing sequence, P_bid^1, ..., P_bid^N. With the associated power sequence, Q_bid^1, ..., Q_bid^N, a demand curve can be constructed to map the clearing price to aggregated power. Fig. 3 illustrates how the demand curve is constructed. This curve is then used to determine the market clearing price that respects the feeder capacity constraint: when the total demand is less than the feeder capacity, the market clearing price equals the base price, P_base (Fig. 4), which is the wholesale energy price plus a retail modifier as defined by the tariff of American Electric Power (AEP) [34]; otherwise the market price, P_c, is determined by the intersection of the demand curve and the feeder capacity constraint (Fig. 5).
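A minimal sketch of the clearing rule just described: bids are sorted by decreasing price into a step demand curve, and the market clears at P_base when the total demand (including the estimated unresponsive load Q_uc) fits under the feeder capacity D, and otherwise at the intersection price. The tie-breaking at the intersection is an assumption.

```python
# Sketch of the double-auction clearing described above.  Bids are
# (price, power) pairs; D is the feeder capacity and Q_uc the estimated
# unresponsive load.  Tie-breaking at the intersection is an assumption.

def clear_market(bids, P_base, D, Q_uc=0.0):
    """Return the market clearing price."""
    # Order bids by decreasing price to build the step demand curve.
    bids = sorted(bids, key=lambda b: b[0], reverse=True)
    total = sum(q for _, q in bids) + Q_uc
    if total <= D:
        return P_base            # capacity not binding: pass the base price
    # Walk down the demand curve until adding the next bid would exceed D.
    supplied = Q_uc
    for price, power in bids:
        if supplied + power > D:
            return price         # clearing price at the intersection
        supplied += power
    return P_base                # unreachable given total > D

bids = [(45.0, 3.5), (28.0, 4.0), (33.0, 2.5), (18.0, 3.0)]
print(clear_market(bids, P_base=25.0, D=9.0, Q_uc=1.0))
```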
After the market is cleared, each device receives the energy price and adjusts its setpoint, T_set, according to a response curve as shown in Fig. 6. This setpoint modifies the system dynamics and affects the temperature trace of the TCL, and therefore affects the bid of each user for the next market period. Notice that all the bidding and user response processes are executed by a programmable controller, and the user only needs to specify his/her preferences via the thermostat interface. To initialize the market process, the user needs to specify T_min, T_max, T_desired and K, the device needs to measure the temperature and the power of the last "on" cycle, and the coordinator needs to collect all the bids, estimate the power of the unresponsive loads, Q_uc, and the feeder capacity constraint, D.
Figure 5. When the total demand is greater than the feeder capacity constraint, the clearing price is determined by the intersection of the demand curve and the feeder capacity constraint.

Figure 6. The user response to the price. For any given price, the device determines the temperature setpoint according to this curve.

Apart from the GridWise® project, a similar demonstration project is also implemented in AEP, Ohio [35], which involves more households and a more sophisticated market bidding design. These projects provide insights into the coordination of residential loads from the practical point of view. However, the bidding and pricing strategies are designed in a heuristic way, which may result in constraint violations and market inefficiencies. To address these challenges, there is a strong need to develop a general coordination framework that can serve as a theoretical foundation to improve the performance of the control scheme and help design other similar market-based coordination strategies.
III. PROBLEM FORMULATION
Consider a coordination problem for a group of TCLs, where the coordinator allocates energy to users to maximize social welfare subject to a feeder capacity constraint. Each device is assumed to be equipped with a smart thermostat that has two main functions. First, it allows the user to specify energy use preferences via an interface such as the sliding bar shown in Fig. 2 to indicate one's trade-off between comfort and cost. Second, before each market period it submits a bid to the coordinator based on user's preference and local device measurement, such as power consumption, "on/off" states, and local temperature. The coordinator collects the user bids, determines the energy price, and broadcasts the price to all the devices. Each device will then adjust the temperature setpoint in response to the energy prices to maximize the individual utility, which modifies the system dynamics and therefore affects the user bids for the next period. In the considered scenario, we assume that each user is a price taker, indicating that there is no single TCL that has large enough power to significantly affect the market price. This is a standard assumption when the market involves a large number of players [18, chap. 12.F], [36], [37].
The rest of this section provides formal mathematical descriptions of the main components of the proposed framework.
A. User Preferences and Utility
Assume that there are N self-interested users. Each user needs to determine the temperature setpoint to obtain an energy allocation that maximizes his individual utility (the user's comfort minus the electricity cost). In other words, each user is confronted with a trade-off between comfort and electricity cost: when the electricity price is high, the device adjusts the temperature setpoint to save electricity cost at the sacrifice of some user comfort. Formally, a function V_i : R → R can be used to represent the comfort level of each user with energy allocation a_i. Assume that V_i is concave and continuously differentiable (as used later in Section V), and let θ_i represent the private information of user i. Denote E_i^m as the energy consumption of the ith load if it is "on" during the entire period, which gives a_i ≤ E_i^m. The individual utility maximization problem can be formulated as follows:

max_{0 ≤ a_i ≤ E_i^m} V_i(a_i; θ_i) − P_c a_i,   (1)

where P_c is the energy price. Let h_i : R → R be the optimal solution to the optimization problem (1), i.e.,

h_i(P_c; θ_i) = argmax_{0 ≤ a_i ≤ E_i^m} V_i(a_i; θ_i) − P_c a_i.   (2)

We assume that h_i is continuous and non-increasing with respect to P_c for each i = 1, ..., N. Notice that the user cannot directly choose his optimal energy allocation. Instead, he can only determine the temperature setpoint, which affects the energy consumption through the load dynamics.
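As a concrete instance of (1)-(2), the sketch below computes h_i in closed form for an assumed concave quadratic valuation; the first-order condition gives the unconstrained optimum, which is then clipped to [0, E_i^m], producing a response that is continuous and non-increasing in P_c as required.

```python
# Sketch: the optimal allocation h_i(P_c) of problem (1) for an assumed
# quadratic valuation V_i(a) = -0.5*kappa*a**2 + beta*a.  The first-order
# condition V_i'(a) = P_c gives a = (beta - P_c)/kappa, clipped to the
# feasible interval [0, E_m]; the result is non-increasing in P_c.

def h_i(P_c, beta=10.0, kappa=2.0, E_m=4.0):
    a_star = (beta - P_c) / kappa
    return min(max(a_star, 0.0), E_m)

for price in (0.0, 4.0, 8.0, 12.0):
    print(price, h_i(price))   # -> 4.0, 3.0, 1.0, 0.0
```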
B. Individual Load Dynamics
Let η_i(t) ∈ R^n be the continuous state of the ith load. Denote q_i(t) as the "on/off" state: q_i(t) = 0 when the TCL is off, and q_i(t) = 1 when it is on. For both the "on" and "off" states, the thermal dynamics of a TCL system can typically be modeled as a linear system, referred to below as model (3). Many existing works use a first-order linear system to describe the TCL dynamics [11], [16], [17], where η_i(t) consists only of the room temperature. Although the first-order model is adequate for small TCLs such as refrigerators, it is not appropriate for residential air conditioning systems, which require a 2-dimensional linear system model considering both air and mass temperature dynamics [12]. Such a second-order model is typically referred to as the Equivalent Thermal Parameter (ETP) model [38]. In this paper we focus on the second-order ETP model, which includes the first-order model as a special case.
Typical values of these parameters and the factors that affect these parameters can be found in [12].
The power state of the TCL is typically regulated by a hysteretic controller based on the control deadband [u_i(t) − δ/2, u_i(t) + δ/2], where u_i(t) is the temperature setpoint of the ith TCL and δ is the deadband. Let T_c^i(t) denote the room temperature of the ith load. In the cooling mode, the load is turned off when T_c^i(t) ≤ u_i(t) − δ/2, turned on when T_c^i(t) ≥ u_i(t) + δ/2, and remains in the same power state otherwise; this hysteretic control policy is referred to below as (4). For notational convenience, we define a hybrid state z_i(t) = [η_i(t), q_i(t)]^T, which consists of both the temperature and the "on/off" state of the load. Let [t_k, t_k + T] be the kth market period; then the energy consumption of each load during the kth period depends on the system state and the setpoint control u_i(t). In this case, the private information consists of the system state and the model parameters. Therefore, the energy consumption of each load can be represented as e_i(u_i(t_k), z_i(t_k), φ_i). This energy consumption function can be derived by calculating the portion of time that the system is on over the entire market period (details of this calculation are presented in Section IV). An example is shown in Fig. 7, where a second-order ETP model is used and the initial room temperature is 72°F. Collecting the system state and model parameters into the user type θ_i(t_k), the energy function can be written as e_i(u_i(t_k), θ_i(t_k)). Notice that the private information of the users is time varying, as it contains the system state.
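To illustrate the hysteretic deadband control described above, here is a sketch that simulates a TCL in cooling mode using the first-order model (a special case of the ETP model, as noted earlier); all parameter values are assumed for illustration.

```python
# Sketch: hysteretic on/off control of a cooling TCL around the
# deadband [u - delta/2, u + delta/2], using the first-order model
# (a special case of the second-order ETP model).  Parameters assumed.

def simulate_tcl(T0, q0, u, delta, T_out=95.0, a=0.05, b=12.0,
                 dt=1.0/60, horizon_h=2.0):
    """Euler simulation; returns the fraction of time the unit is 'on'."""
    T, q, on_time, t = T0, q0, 0.0, 0.0
    while t < horizon_h:
        # dT/dt = -a*(T - T_out) - b*q  (cooling removes heat when on)
        T += dt * (-a * (T - T_out) - b * q)
        if T >= u + delta / 2:
            q = 1                      # too warm: switch on
        elif T <= u - delta / 2:
            q = 0                      # cool enough: switch off
        on_time += q * dt
        t += dt
    return on_time / horizon_h         # duty cycle ~ energy use per period

print(simulate_tcl(T0=72.0, q0=1, u=72.0, delta=1.0))
```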
After the market is cleared, each user wants to determine the control action u_i(t_k) such that the resulting energy consumption equals the optimal solution to (1). Since the optimal control depends on the energy price, we can define a user response function, here denoted r_i, that maps the cleared price to the setpoint, u_i(t_k) = r_i(P_c; θ_i(t_k)). Therefore, the optimal energy allocation function h_i as defined in (2) should satisfy the following:

h_i(P_c; θ_i(t_k)) = e_i(r_i(P_c; θ_i(t_k)), θ_i(t_k)).   (5)

The left-hand side of equation (5) represents the optimal energy allocation for a given price, while the right-hand side arises from the physical properties of the individual loads, and indicates that the user can specify the control action u_i to match the actual energy consumption to the optimal allocation. An example of the function h_i is shown in Fig. 8, where the response curve is piecewise linear (as shown in Fig. 1) and the initial room temperature is 72.8°F. To derive the function h_i(·; θ_i(t_k)), we first determine the control setpoint based on the market price using the response curve (Fig. 1), then calculate the corresponding energy consumption based on the energy function e_i(·, θ_i(t_k)). Since the energy function e_i(·, θ_i(t_k)) depends on the system dynamics (3) and the control policy (4), the load dynamics are incorporated in the function h_i through this process.
C. Problem Statement
The coordinator obtains energy from the wholesale market at a cost denoted as C(Σ_{i=1}^N a_i). We assume that C(·) is differentiable and convex. The energy is then allocated to users via a price signal to maximize the social welfare, which can be defined as

Σ_{i=1}^N V_i(a_i; θ_i) − C(Σ_{i=1}^N a_i).

Therefore, the coordinator's optimization problem can be formulated as follows:

max_{P_c} Σ_{i=1}^N V_i(a_i; θ_i) − C(Σ_{i=1}^N a_i)   (6)

subject to: a_i = h_i(P_c; θ_i), i = 1, ..., N, and Σ_{i=1}^N a_i ≤ D,

where D is the maximum energy for the aggregated loads. Without loss of generality, we assume that D ≤ Σ_{i=1}^N E_i^m. Note that the feeder capacity constraints considered in the GridWise® demonstration project can be represented by this total energy constraint. This is because the feeder capacity constraint is mainly due to the thermal characteristics of the feeder: the instantaneous power can exceed the feeder power limit without causing damage to the grid, as long as the energy over a certain period is effectively capped to protect the feeder from overheating.
The optimization problem (6) defines a Stackelberg game [39], where the coordinator first makes a control decision to maximize the social welfare, and then the individual users choose their energy consumption to maximize individual utility based on the coordinator's decision. In such Stackelberg games, the upper bound on the social welfare can typically be characterized by the team optimal solution [39], which is the optimal solution to the following team problem:

max_{a_1, ..., a_N} Σ_{i=1}^N V_i(a_i; θ_i) − C(Σ_{i=1}^N a_i)   (7)

subject to: 0 ≤ a_i ≤ E_i^m, i = 1, ..., N, and Σ_{i=1}^N a_i ≤ D.

In the above team problem, the coordinator and the users cooperatively maximize the social welfare subject to the feeder capacity constraint. In general, the team solution results in a higher social welfare than the solution to (6), since the coordinator's optimization problem (6) is more restrictive: one only needs to find an energy allocation that maximizes the social welfare to solve the team problem, while in the coordinator's optimization problem we also need to find a price that satisfies the additional constraint in (6). However, such a clearing price may not always exist for an arbitrarily given team optimal solution.
Example 1: As an example, consider two users with V_1(a_1; θ_1(t_k)) = a_1 and V_2(a_2; θ_2(t_k)) = 3a_2, each with maximum energy consumption E_i^m = 2. The energy cost for the coordinator is C(a_1 + a_2) = 2a_1 + 2a_2. The team problem is to maximize the social welfare subject to an energy constraint, i.e.:

max_{a_1, a_2} a_1 + 3a_2 − (2a_1 + 2a_2)

subject to: a_1 + a_2 ≤ 1, 0 ≤ a_i ≤ 2, i = 1, 2.

The team optimal solution is a_1 = 0, a_2 = 1. However, according to (1), given any energy price, a_i is either 0 or 2.
Therefore, the coordinator cannot find a price to realize the team optimal solution.
To address this concern, we introduce the concept of a realizable energy allocation. Definition 1: The energy allocation vector a = (a_1, ..., a_N) can be realized by P_c if a_i = h_i(P_c; θ_i(t_k)) for all i = 1, ..., N.
It is clear that not all energy allocations can be realized. In this paper, we have assumed that V_i is concave and continuously differentiable, and that h_i is continuous and non-increasing. We will show in Section V that under these conditions there is always a price that realizes the team optimal solution. In other words, the upper bound given by the team optimal solution is tight. Therefore, the problem of the paper can be formulated as follows. Problem 1: Design the bidding and clearing strategy such that the cleared price realizes the team optimal solution a*.

The coordinator's optimization problem (6) cannot be directly addressed using standard optimization techniques, since the individual valuations are unknown to the coordinator. For this reason, to achieve the group objectives, the coordinator needs to design a bidding strategy to collect information from the individual users, and then determine the price based on the user bids.
Remark 1: The market design for many traditional assets are well-understood. For instance, in energy market, generators can be simply characterized by an output range depending on its ramp rate during each market period. However, the internal dynamics of TCLs are more complex and depend more on the environment, and thus cannot be handled in the same way. Therefore, an important contribution of this paper is to incorporate the dynamics of TCLs in the energy market design. In addition, although this paper only considers the load dynamics within one market period, it is the preliminary step towards establishing a fully dynamic version of the problem where multiple market periods are taken into account.
IV. A MECHANISM DESIGN FRAMEWORK
In this section, we adopt the mechanism design approach to solve Problem 1. First the problem is formulated as a mechanism design problem, then a mechanism is constructed to implement the desired social outcome. In addition, a realistic bidding strategy with a simplified message space is proposed to reduce the communication overhead.
A. The Mechanism Design Problem
Mechanism design studies how to aggregate individual preferences into a social choice when the individuals' actual preferences are not publicly observable. In a mechanism design problem, each user is assumed to selfishly take actions to maximize his individual utility, while the coordinator makes the collective choice that achieves various group objectives. Since the individual utilities are unknown to the coordinator, he can require each user to submit a bid to collect information. In this case, the key problem for the coordinator is to align individual objectives with system-level objectives: a proper bidding and pricing strategy needs to be designed such that, when each user selfishly maximizes his individual utility, the resulting outcome also achieves the desired group objectives (for example, maximizes the social welfare). The rest of this subsection introduces basic concepts in mechanism design. Let x ∈ X be the outcome of the mechanism, which consists of the energy allocation and the energy price, i.e., x = (a_1, ..., a_N, P_c). The utility of each user (comfort minus electricity cost) depends on the outcome. Moreover, we assume that at time t_k each user can privately observe his utility, U_i, over different outcomes. In other words, we can model this by supposing that user i privately observes a parameter θ_i that determines his utility. Notice that we drop the dependence of θ_i on t_k throughout the rest of the paper for notational convenience. In mechanism design, θ_i ∈ Θ_i is usually referred to as user i's type [18, p. 858], where Θ_i denotes the set of all possible types. In our problem, the user type contains the system state, z_i(t_k), and the model parameter, φ_i; in particular, θ_i = (z_i(t_k), φ_i). As the user preferences are private, to determine the optimal energy price the coordinator needs to require each user to submit a bid to reveal some information. Formally, this can be formulated as a message space M = M_1 × ... × M_N, where M_i denotes the space of messages (bids) the ith user can communicate to the coordinator. The structure of M_i depends on the particular application. For example, in the demonstration project each device submits a price and a quantity, so we have (P_bid^i, Q_bid^i) ∈ M_i. In [25] each device submits the slope of the demand curve, β_i, in which case β_i ∈ M_i. After collecting all the user bids, the market is cleared with an energy price and a corresponding energy allocation. The clearing strategy can be represented by an outcome function, g : M → X, that maps the user bids to an outcome x. The message space and the outcome function together fully characterize the rules governing the procedure for making the collective choice. This is typically referred to as a mechanism [18], denoted as Γ = (M_1, ..., M_N, g(·)).
Each user observes θ_i privately and determines what to bid to maximize his utility. This process can be represented by a bidding strategy m_i : Θ_i → M_i that maps the user type to a message. There are many solution concepts for a mechanism, such as Nash equilibrium, Bayesian Nash equilibrium, etc. Of particular interest to our framework in this paper is the dominant strategy equilibrium. Denote m_{-i} as the collection of strategies of all the users other than i; then the dominant strategy equilibrium is defined as follows. Definition 2 (Dominant Strategy Equilibrium [18]): The strategy profile (m_1^*(·), ..., m_N^*(·)) is a dominant strategy equilibrium of mechanism Γ = (M_1, ..., M_N, g(·)) if for all i, all θ_i ∈ Θ_i, all messages m̂_i ∈ M_i and all strategies m_{-i} of the other users,

U_i(g(m_i^*(θ_i), m_{-i}), θ_i) ≥ U_i(g(m̂_i, m_{-i}), θ_i).
Remark 2:
In a Nash equilibrium, each agent plays the equilibrium strategy only when he has correct forecast of the actions of other agents. When such knowledge is unavailable, it usually takes multiple iterations for the coordinator and the users to reach the equilibrium strategy of the game. In contrast, dominant strategy equilibrium is a very strong and robust solution concept, where a rational agent always follows the equilibrium strategy regardless of other agent's action. In other words, even when one does not know the actions of others, he still plays the equilibrium strategy. This enables each user to only bid once at each market period, which significantly reduces the communication overhead of the proposed framework.
The equilibrium strategy characterizes the individual's self-interested behavior: each user is an individual welfare maximizer. However, from the coordinator's point of view, a more interesting question is to find the best choice for the overall social welfare. For this reason, a social choice function f : Θ → X can be defined to represent the desired social outcome of the coordinator. More specifically, f(·) determines what outcome will be chosen by the coordinator when he knows all the private information. In our problem, f consists of the optimal price for the optimization problem (6) and the resulting energy allocation. If we define θ = (θ_1, ..., θ_N), the conflict between the personal interest and the social interest can be captured by the concept of implementation. Definition 3 (Implementation [18]): A mechanism Γ = (M_1, ..., M_N, g(·)) implements the social choice function f(·) in dominant strategies if there exists a dominant strategy equilibrium m*(·) of Γ such that g(m_1^*(θ_1), ..., m_N^*(θ_N)) = f(θ) for all θ ∈ Θ. In the above definition, g(m_1^*(θ_1), ..., m_N^*(θ_N)) represents the resulting outcome of individual maximization, while f(θ) denotes the desired social outcome. The concept of implementation characterizes the social choice that can be realized when all the users take actions to selfishly maximize their individual utilities. To this end, Problem 1 can be equivalently stated as follows. Problem 2: Design a mechanism to implement the social choice function f(·) that maximizes the social welfare subject to a feeder capacity constraint, i.e., f(θ) = (h_1(P_c^*; θ_1(t_k)), ..., h_N(P_c^*; θ_N(t_k)), P_c^*), where P_c^* is the solution to the optimization problem (6). Furthermore, P_c^* realizes the team solution.
The design of a mechanism includes specifying the message space and the outcome function for each user. In the mechanism design problem, the coordinator needs to design the message space and the market clearing rule such that the optimal social welfare can be implemented when each user selfishly maximizes his individual utility. Meanwhile, the feeder capacity constraint needs to be respected.
B. Constructing the Mechanism
Let f(θ) = (a_1^*, ..., a_N^*, P_c^*) be the social choice function that maximizes the social welfare subject to the feeder capacity constraint. Specifically, P_c^* is the optimal solution to (6), and f(θ) satisfies a_i^* = h_i(P_c^*; θ_i) for all i = 1, ..., N. This subsection constructs a mechanism to implement f(·). Consider a mechanism Γ*, where each device is asked to submit the function h_i(·; θ_i). Due to the assumed concavity of V_i, it can be verified from (1) that the curve h_i(P_c; θ_i) is non-increasing with respect to P_c. In this case, the message space is the function space of all possible h_i (non-increasing and continuous functions). Notice that the user's actual bid may deviate from the function h_i, unless the user is motivated to bid h_i. Let b_i(·; θ_i) be a non-increasing and continuous function that represents the user's actual bid. The aggregated demand curve b(·; θ) can be obtained by adding the individual bidding functions, i.e., b(·; θ) = Σ_{i=1}^N b_i(·; θ_i). In this mechanism each user is required to submit a function, which requires considerable communication resources. This bidding strategy will be simplified in the next subsection to reduce the communication overhead.
Here we propose the following outcome function g(b_1, ..., b_N) = (a_1^*, ..., a_N^*, P_c^*) to clear the market:

a_i^* = b_i(P_c^*; θ_i), i = 1, ..., N,   (11)

P_c^* = max{P^*, P̄},   (12)

P^* = C′(Σ_{i=1}^N a_i^*),   (13)

b(P̄; θ) = D,   (14)

where C′ represents the derivative of the cost function C(·), and the market price P_c^* is the larger of P^* and P̄, so that the capacity constraint is respected. According to (13) and (14), P^* is the marginal production cost of procuring Σ_{i=1}^N a_i^* units of energy, while P̄ is the energy price at which the aggregated demand equals the maximum allowed amount. Since b_i is continuous and non-increasing, and we have assumed that D ≤ Σ_{i=1}^N E_i^m, P̄ exists. Intuitively, the social welfare is maximized when the market price equals the marginal production cost, i.e., P_c^* = P^*. However, in equation (14) the function b is non-increasing with respect to price, indicating that any feasible price that respects the feeder capacity constraint must be no smaller than P̄. Therefore, in the proposed outcome function, the clearing price equals P^* whenever P^* > P̄, and equals P̄ otherwise. Once the energy price is determined, the allocation exactly follows the user bids, i.e., a_i^* = b_i(P_c^*; θ_i). For illustration purposes, we construct the following example to show how to derive the optimal solution from the proposed clearing strategy.
Example 2: Consider 100 users with V_i(a_i) = −(1/2)a_i² + i·a_i. Assume that after proper scaling the maximum energy consumption of each user is 1. The individual utility maximization problem can be formulated as follows:

max_{0 ≤ a_i ≤ 1} −(1/2)a_i² + (i − P_c)a_i.

The optimal solution to this problem is:

a_i^* = min{max{i − P_c, 0}, 1}.   (17)

In addition, let us assume that the real-time price is 20 and that the maximum 5-minute energy due to the feeder capacity constraint is 50, i.e., P^* = 20 and D = 50. According to (17), when P_c = 99, only the 100th user consumes 1 unit of energy, and the aggregated energy is 1. When P_c = 98, the 99th and 100th users consume 1 unit of energy each, and the corresponding aggregated energy is 2, and so forth. Therefore, the price that corresponds to the energy limit is 50, i.e., P̄ = 50. Since P̄ > P^*, we conclude that P_c^* = P̄. The rest of this subsection discusses some properties of the proposed mechanism.
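The numbers in Example 2 can be verified with a few lines of Python; the integer price grid mirrors the counting argument in the text.

```python
# Numerical check of Example 2: 100 users with
# a_i*(P) = clip(i - P, 0, 1), feeder limit D = 50, marginal cost P* = 20.

def allocation(i, P):
    return min(max(i - P, 0.0), 1.0)          # optimal response (17)

def aggregate_demand(P):
    return sum(allocation(i, P) for i in range(1, 101))

D, P_star = 50.0, 20.0
# P_bar: smallest price at which aggregate demand fits under D.
P_bar = next(P for P in range(101) if aggregate_demand(P) <= D)
P_clear = max(P_star, P_bar)                   # clearing rule: larger price
print(aggregate_demand(99), aggregate_demand(98), P_bar, P_clear)
# -> 1.0 2.0 50 50, matching the worked example
```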
Proposition 1: Under the proposed mechanism Γ*, the strategy profile (h_1(·; θ_1), ..., h_N(·; θ_N)) is a dominant strategy equilibrium; that is, truthfully bidding h_i is optimal for each user regardless of the other users' bids.

This result follows easily from the price-taker assumption. For completeness, we provide the proof of Proposition 1 in the Appendix. In the proposed mechanism, the optimal bid of each user does not depend on the bidding decisions of others. This is a very important property, since in our particular problem each user does not know the other users' preferences or actions. Therefore, if the bidding decision of one user had to depend on the action of another, the equilibrium strategy could not be achieved unless all the users had accurate predictions of the other users' actions, which may not be a reasonable assumption. In addition, we also want to comment that the result of Proposition 1 only holds when there is a large population of users, such that the influence of an individual user on the market price is negligible. In other cases (such as an oligopolistic market), the mechanism needs to be designed differently. Now we can establish the following key property of the proposed mechanism. Proposition 2: The proposed mechanism Γ* implements the social choice function f(·). Furthermore, the resulting market clearing price realizes the team optimal solution.
The detailed proof of this proposition can be found in the Appendix.
C. Realistic Bidding Strategy
The proposed mechanism provides a general solution to the coordination problem formulated in this paper. In real-world applications, directly submitting the function h_i requires considerable communication resources and might impinge on customer privacy. Therefore, in this subsection we explore the structure of the functions e_i(·; θ_i) and h_i(·; θ_i) to simplify the message space and reduce the communication overhead.
In this paper we assume that the TCL consumes a constant power when it is "on" and consumes no energy when it is "off". For this reason, the energy consumption function e_i(·, θ_i(t_k)) can be derived by calculating the portion of time that the system is on during the entire market period. For example, assume that the system is "on" at the end of the (k−1)th period. When the initial temperature η_i(t_k) is given, the state trajectory of the linear dynamic model (3) can be derived in closed form via the matrix exponential e^{A_i t}. When the trajectory hits the boundary of the control deadband defined in (4), the power state switches and the system turns off. Therefore, the trajectories of the system state η_i(t) and the power state q_i(t) for the entire period can be derived, and the portion of time that the system is "on" can be calculated based on q_i(t). In particular, consider a system in cooling mode. If the load is "on" at the end of the (k−1)th period, i.e., q_i(t_k^−) = 1, the energy consumption over the period is proportional to α = ∫_{t_k}^{t_k+T} q_i(t) dt, the time that the system is on (the case q_i(t_k^−) = 0 can be derived similarly). Here T_f^i(t_k) denotes the room temperature at t_k + T given that the system is on during the entire period between t_k and t_k + T; T_f^i is defined to characterize the condition in which the load is "on" for the entire period and therefore consumes the maximum energy. Intuitively, if the room temperature at t_k is less than the lower bound of the control deadband (T_c^i(t_k) ≤ u_i(t_k) − δ/2), the power state will be "off" until the room temperature hits the boundary of the deadband. On the other hand, if u_i(t_k) ≤ T_f^i(t_k) + δ/2, the load is always "on", and the room temperature does not hit the boundary during the entire period.
Due to the complicated nature of the hybrid system dynamics, directly submitting the function h_i may require considerable communication resources in a real-time implementation. To reduce the message space, we approximate h_i with a step function as illustrated in Fig. 9, where c_1 and c_2 are computed based on the control setpoint and the user type. For notational convenience, define c_1 = e_i(u_1, θ_i) and c_2 = e_i(u_2, θ_i), where u_1 and u_2 are the temperature setpoints corresponding to c_1 and c_2, respectively. For example, using the second-order ETP model (3) and the control policy (4), u_1 and u_2 for the ith device can be obtained from the predicted temperature trajectory, where the room temperature is the first component of the state, T_c^i(t) = [1, 0]η_i(t), and the power state of the ith TCL is "on" at t_k^−.
Figure 9. The energy response curve h_i and its step-function approximation.
The step function in Fig. 9 can be fully characterized by two scalars, P_bid^i and Q_bid^i, where P_bid^i is the midpoint of the step between c_1 and c_2, and Q_bid^i is the power consumption when the device is on during the market period. In this case, the message space of each user, M_i, is reduced from a function space to R_+^2, and each bid is of the form [P_bid^i, Q_bid^i]. Remark 3: Compared to DLC strategies, the proposed approach has both advantages and disadvantages. In many demand response applications, DLC strategies can achieve the group objectives with limited communication resources. Some can even learn the user response behaviors via the input/output signals. On the other hand, the aggregate load response in the considered problem depends on the time-varying outside temperature, the solar radiation, and the distribution of the room temperatures, which have rather complicated dynamics. The proposed market-based coordination approach enables the coordinator to produce a very accurate aggregated response. This is very important for power system applications, where the safe operation of the grid is critical. Therefore, DLC and the proposed coordination strategy are complementary to each other, and can be applied to different scenarios according to the practical considerations of the particular problem.
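As a rough sketch of how the two-scalar bid could be assembled from the step-function approximation, the snippet below maps the two setpoints bounding the step back to prices through an inverse response curve and takes their midpoint as P_bid; the inverse_response function and the linear example are assumptions, since the paper defines the construction only through Fig. 9.

```python
# Sketch: compress the energy response curve h_i into the two-scalar
# bid [P_bid, Q_bid] of the step-function approximation (Fig. 9).
# inverse_response maps a setpoint back to a price via the response
# curve of Fig. 1; its use here is an assumed reading of the figure.

def make_bid(inverse_response, u1, u2, q_on):
    """u1, u2: setpoints bounding the step; q_on: power when 'on'."""
    p1, p2 = inverse_response(u1), inverse_response(u2)
    P_bid = 0.5 * (p1 + p2)     # middle point of the step on the price axis
    Q_bid = q_on                # power consumed while the device runs
    return [P_bid, Q_bid]

# Example with a linear (assumed) inverse response curve.
inv = lambda u: 30.0 + 4.0 * (u - 72.0)
print(make_bid(inv, u1=71.0, u2=73.0, q_on=3.5))
```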
Remark 4: The proposed bidding strategy assumes the knowledge of ETP model parameters. In practice these parameters are difficult to derive, and the ETP model used in the framework may be inaccurate in terms of characterizing the real energy consumption of the TCLs. To address these challenges, we present a joint state and parameter estimation framework in our companion paper [33], which enables users to compute bidding prices only based on local measurements.
V. CONCLUSION
This paper presents a market mechanism for the coordination of thermostatically controlled loads, where a coordinator manages a group of TCLs using pricing incentives to maximize the social welfare subject to a feeder capacity constraint. In the paper, a mechanism is proposed to implement the desired social choice function in dominant strategy equilibrium. This mechanism consists of a novel bidding strategy that incorporates information on both the load dynamics and the time-varying user preferences. It is proven that under the proposed mechanism, the coordinator can not only maximize the social welfare but also realize the team optimal solution. Future work includes formulating the fully dynamic market-based coordination framework with multiple periods and extending the results to energy storage devices and deferrable loads such as plug-in electric vehicles, washers, dryers, among others.
A. Proof of Proposition 1
When each device submits h_i as its bid, we have b_i(·; θ_i) = h_i(·; θ_i). According to (11), each user will receive an energy allocation that satisfies a_i^* = h_i(P_c; θ_i). Based on (2), we have a_i^* = argmax_{0 ≤ a_i ≤ E_i^m} V_i(a_i; θ_i) − P_c a_i. Therefore, when b_i(·; θ_i) = h_i(·; θ_i), the resulting energy allocation maximizes the utility of each user. According to Definition 2, the strategy profile (h_1(·; θ_1), ..., h_N(·; θ_N)) is a dominant strategy equilibrium of the proposed mechanism.
B. Proof of Proposition 2
Notice that the social choice function characterizes the optimal solution to the coordinator's optimization problem (6), and the team solution provides an upper bound on the social welfare for (6). Therefore, to prove Proposition 2, it is sufficient to show that the proposed pricing strategy realizes the team solution.
Based on Proposition 1, b_i = h_i. Therefore, we have the following relations:

a_i^* = h_i(P_c^*; θ_i), i = 1, ..., N, and Σ_{i=1}^N a_i^* ≤ D.   (20)

In addition, the KKT condition for the ith user's individual utility maximization problem (1) is as follows:

V_i′(a_i^*; θ_i) − P_c^* + u_1^i − u_2^i = 0,   (21)

where u_1^i and u_2^i are the Lagrangian multipliers satisfying u_1^i ≥ 0, u_1^i a_i^* = 0, u_2^i ≥ 0 and u_2^i (a_i^* − E_i^m) = 0. Define u = P_c^* − C′(Σ_{i=1}^N a_i^*); then equation (21) becomes:

V_i′(a_i^*; θ_i) − C′(Σ_{i=1}^N a_i^*) − u + u_1^i − u_2^i = 0.   (23)

According to (20), when Σ_i a_i^* < D we have P_c^* = P^* = C′(Σ_{i=1}^N a_i^*), and therefore u = 0. When Σ_i a_i^* = D we have P_c^* = P̄, and therefore u = P̄ − P^*. Since h_i is non-increasing, we have u ≥ 0. This indicates that u, u_1^i and u_2^i are the Lagrangian multipliers of the team problem, and (23) is exactly the KKT condition for the team problem (7). Since the team problem is a concave optimization problem, the KKT conditions are also sufficient. Thus a^* = (a_1^*, ..., a_N^*) is the team solution. This completes the proof.
Calpain: a molecule to induce AIF-mediated necroptosis in RGC-5 following elevated hydrostatic pressure
Background: The RIP3 (receptor-interacting protein 3) pathway has mainly been described as the molecular mechanism of necroptosis (programmed necrosis), but recently non-RIP3 pathways were also found to mediate necroptosis. We set out to investigate the effect of calpain, a molecule reported to induce necroptosis (Cell Death Differ 19:245–256, 2012), in RGC-5 cells following elevated hydrostatic pressure. Results: First, we identified the existence of necroptosis of RGC-5 after the insult by using necrostatin-1 (Nec-1, a necroptosis inhibitor) and flow cytometry. Immunofluorescence staining and western blotting were used to detect the expression of calpain. Western blot analysis was carried out to describe truncated AIF (tAIF) expression with or without pretreatment with ALLN (a calpain activity inhibitor). Following elevated hydrostatic pressure, necroptotic cells pretreated with or without ALLN were stained with Annexin V/PI. The activity of calpain was also examined to confirm the inhibitory effect of ALLN. The results showed that after cell injury there was an upregulation of calpain expression. Upon adding ALLN, calpain activity was inhibited, tAIF production upon injury was reduced, and the number of necroptotic cells decreased. Conclusion: Our study found that calpain may induce necroptosis via tAIF modulation in RGC-5 cells following elevated hydrostatic pressure.
Background
Calpains are calcium-activated neutral proteases belonging to the family of cytosolic cysteine proteinases. They form heterodimers composed of a large 80 kDa catalytic subunit and a common 30 kDa regulatory subunit [1]. Calpains are widely distributed in most mammalian tissues. They are implicated in physiological and pathophysiological processes such as cytoskeletal reorganization, signal transduction pathways, cell cycle regulation and certain apoptosis pathways. Calpain dysfunction is related to diseases such as cataract, Parkinson's disease and Alzheimer's disease [2][3][4][5].
During cerebral hypoxia-ischemia, there is an overload of intracellular calcium which activates calpains; as a result, neuronal apoptosis is triggered via a caspase-3-activated pathway [6]. Recent studies show that calpains, caspase-3, caspase-8 and caspase-9 are all up-regulated in experimental retinal detachment, which suggests that calpains are involved in caspase-dependent photoreceptor death [7]. Pharmacological inhibition of phosphodiesterase 6 (PDE6) induces retinal degeneration in rod- and cone-enriched retinal explants with activation of caspase-3 and calpain and accumulation of poly(ADP-ribose), which suggests a potential connection between calpain activation and apoptosis [8]. However, besides its role in apoptosis, a new feature of calpains has been found recently. Cellular necrosis mediated by recombinant Clostridium perfringens β-toxin (rCPB) occurs upon the activation of host cell calpains [9]. Another study reported that calpains may be involved in necroptosis as well [10]. Calcium-dependent calpain is activated by an increasing calcium concentration in the cytoplasm of N-methyl-N'-nitro-N'-nitrosoguanidine (MNNG)-treated cells. The activated calpain cleaves BID (BH3 interacting domain death agonist) to truncated BID (tBID); subsequently, tBID redistributes from the cytosol to the mitochondria, where it regulates BAX (Bcl-2-associated X protein) activation. Once activated, BAX provokes mitochondrial tAIF release, resulting in necroptosis [10,11].
High intra-ocular pressure (HIOP) is identified as one of the characteristics of glaucoma and is the main factor that causes visual functional damage [12]. Related studies have confirmed that elevation, volatility and a continuous rise of intraocular pressure (IOP) can cause the death of retinal ganglion cells (RGCs), retinal pigment epithelium cells, etc., and eventually lead to vision loss [13,14]. Rapid elevation of IOP is a critical susceptibility factor in acute glaucoma. A previous study indicated that RGC necroptosis can also occur at the early stage of aHIOP (acute high intra-ocular pressure) [15]. Our further investigation suggested that up-regulation of receptor-interacting protein 3 (RIP3) might be involved in the cellular mechanism of RGC necroptosis [16]. As reported previously, the RIP pathway is not the only cellular pathway modulating early neuronal necroptosis; other cellular pathways may also participate in this process [17][18][19]. The potential role of calpain in necroptosis of RGCs after aHIOP is still unknown, and further studies are needed to evaluate it. Moreover, the various complex pathophysiological mechanisms of aHIOP-induced retinal injury, including retinal hypoxia-ischemia and the accumulation of excitatory amino acids and inflammatory molecules, need to be considered [20,21]. Little is currently known about whether necroptosis occurs in RGCs under elevated hydrostatic pressure, which is an initial factor in aHIOP-induced retinal injury in vivo. Hence, we investigated the existence of necroptosis under elevated hydrostatic pressure by PI staining and flow cytometry, and then evaluated the effect of calpain on necroptosis of RGCs under this condition. We expect that the results will lead to a better understanding of the cellular mechanism of early RGC necroptosis and help in the search for rational targeted interventional therapy in the future.
Cell culture
The mouse retinal ganglion cell line (RGC-5) was provided by the Department of Ophthalmology, Second Hospital of Jilin University, China [22]. RGC-5 cells were grown in Dulbecco's Modified Eagle Medium (DMEM) (HyClone Laboratories, Inc., UT) supplemented with 10% fetal bovine serum (FBS, HyClone Laboratories, Inc., UT), 100 U/ml penicillin and 100 μg/ml streptomycin (HyClone Laboratories, Inc., UT). The RGC-5 cells used in the experiments were within 2-3 passages post-thaw to minimize variability in the assays, based on our observations. The density of RGC-5 cells was around 80% in 6 ml of culture medium in a 50 ml flask before EHP (elevated hydrostatic pressure).
Cell injury and ALLN or Nec-1 usage
A pressurized incubator was designed to expose the cells to elevated hydrostatic pressure, as described in Ju's study [23]. Cells were exposed for 2 hr in this pressure system at one of three pressure values (100 mmHg, 60 mmHg or 30 mmHg), and were then returned to a conventional culture incubator to recover until each recovery time point (6 hr, 12 hr and 24 hr). ALLN (Merck, Germany) and Nec-1 (Sigma-Aldrich, USA) were dissolved in dimethyl sulfoxide (DMSO) for storage at 10 mmol/L and 1 mg/mL, respectively; cells were pretreated with 10 μmol/L of each compound for 24 hr before injury.
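As a quick sanity check on the pretreatment concentration described above, the working dilution follows the standard C1V1 = C2V2 relation. The sketch below is purely illustrative: the 10 mmol/L ALLN stock, the 10 μmol/L working concentration and the 6 ml medium volume are taken from this section and the cell-culture description, while the helper function is our own convenience wrapper, not part of any published protocol.

```python
# Minimal sketch of the C1*V1 = C2*V2 dilution arithmetic for the ALLN
# pretreatment described above; the function is a hypothetical helper.

def stock_volume_needed(c_stock, c_final, v_final):
    """Volume of stock solution required to reach c_final in a total volume v_final."""
    return c_final * v_final / c_stock

# ALLN: 10 mmol/L stock -> 10 umol/L working concentration in 6 ml of medium.
v_stock_ul = stock_volume_needed(c_stock=10_000.0,  # umol/L (= 10 mmol/L)
                                 c_final=10.0,      # umol/L target
                                 v_final=6_000.0)   # ul (= 6 ml)
print(f"Add {v_stock_ul:.1f} ul of ALLN stock per 6 ml of medium")  # -> 6.0 ul
```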
PI staining
At each recovery time point (6 hr, 12 hr and 24 hr), the coverslips were washed in 0.01 M PBS for 3 min and incubated in 10 μg/ml PI dye solution at 37°C. Cells were then fixed in 4% PF (paraformaldehyde), washed in PBS, counterstained with DAPI, and covered with anti-fading mounting medium before microscopic examination. Control RGC-5 cells were incubated simultaneously in a conventional incubator at 37°C. Quantitative analyses were conducted on approximately 20 merged images (magnification = 40×) to estimate the frequency of cell necrosis.
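The per-image quantification just described reduces to the fraction of DAPI-positive nuclei that are also PI-positive, averaged across the ~20 merged images. Below is a minimal sketch of that bookkeeping, assuming the per-image counts have already been tallied; the counts shown are placeholders, not data from this study.

```python
import numpy as np

# Hypothetical per-image counts (one entry per merged 40x image);
# PI-positive cells are a subset of DAPI-positive nuclei in each image.
pi_pos   = np.array([12,  9,  15,  11, 13])   # PI-positive (necrotic) cells
dapi_pos = np.array([98, 87, 105, 92, 99])    # all DAPI-stained nuclei

per_image_pct = 100.0 * pi_pos / dapi_pos     # necrosis frequency per image
print(f"necrosis: {per_image_pct.mean():.2f} +/- {per_image_pct.std(ddof=1):.2f} %")
```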
Flow cytometry
The cells attached to the flasks were trypsinized followed by a gentle wash. Resuspending the cells in 200 μl of 1× binding buffer, and then added 5 μl of 20 μg/ml Annexin V and 10 μl of 50 mg/ml PI, incubated at RT for 15 min in the dark. After the cells were washed and analyzed by FACS Calibur (Becton, Dickinson Company, USA). The percentages of cells in each quadrant were analyzed using ModFit software (Verity Software House Topsham, USA). Statistical results of flow cytometry were conducted by calculating the PI+ cells numbers. All the results were repeated for three times.
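Quadrant statistics were computed here with ModFit; purely as an illustration of the underlying gating logic, the sketch below classifies simulated Annexin V/PI intensities against assumed thresholds and reports per-quadrant fractions. The thresholds, distributions and event counts are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder fluorescence intensities for 10,000 simulated events.
annexin = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)
pi      = rng.lognormal(mean=0.5, sigma=0.9, size=10_000)

ANNEXIN_GATE, PI_GATE = 5.0, 5.0  # assumed gating thresholds

quadrants = {
    "viable (Annexin-/PI-)":          np.mean((annexin < ANNEXIN_GATE) & (pi < PI_GATE)),
    "early apoptotic (Annexin+/PI-)": np.mean((annexin >= ANNEXIN_GATE) & (pi < PI_GATE)),
    "necrotic/late (PI+)":            np.mean(pi >= PI_GATE),
}
for name, frac in quadrants.items():
    print(f"{name}: {100 * frac:.1f} %")
```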
Calpain activity assay
Calpain activity was determined by cleavage of the substrate Ac-LLY-AFC (Abcam, USA). The attached cells were digested and then centrifuged for 1 min in a microcentrifuge (10,000 × g); the supernatant was transferred to a fresh tube, and part of it was used to measure protein concentration. Fluorescence was measured after 60 min of incubation at 37°C with the substrate in the reaction buffer and recorded on a Fluoroskan Ascent fluorimeter (Labsystems, Eragny-Parc, France), and the final results were expressed as Relative Fluorescence Units (RFU) per milligram of protein in each sample. One-way analysis of variance (one-way ANOVA) was performed to test differences in mean values between groups. All results are presented as mean ± SD. A value of p < 0.05 was considered statistically significant.
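To make the normalization and group comparison explicit, the following sketch converts raw fluorescence readings to RFU per milligram of protein and runs the stated one-way ANOVA with scipy; every number in it is a placeholder rather than a measurement from this study.

```python
import numpy as np
from scipy import stats

# Placeholder raw readings: (fluorescence RFU, protein mg) per sample.
groups = {
    "control": [(1200, 0.50), (1150, 0.48), (1260, 0.52)],
    "6 hr":    [(2100, 0.49), (2240, 0.51), (2050, 0.50)],
    "12 hr":   [(3050, 0.50), (3180, 0.53), (2960, 0.49)],
}

# Normalize each sample to RFU per mg protein, then compare group means.
rfu_per_mg = {g: np.array([rfu / mg for rfu, mg in vals]) for g, vals in groups.items()}
f_stat, p_val = stats.f_oneway(*rfu_per_mg.values())
for g, vals in rfu_per_mg.items():
    print(f"{g}: {vals.mean():.0f} +/- {vals.std(ddof=1):.0f} RFU/mg")
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
```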
RGC-5 cells display the features of retinal ganglion cells
First, we verified that the RGC-5 cells in our culture conditions expressed BDNF and NGF, but not GFAP (Figure 1A). As shown in Figure 1B, the cells expressed the RGC markers Brn3a and Thy1.1. The transcription factor Brn3a was expressed and appeared to localize to the nuclei of RGC-5 cells, whereas Thy1.1 was detected in the cytoplasm. Negative control immunostaining (secondary antibody only) showed no positive signal (data not shown), confirming the specificity of the antibodies. Expression of these RGC marker genes and proteins therefore indicated that the cultured cells were RGCs.
Necroptosis occurred at the early stage of EHP
Propidium iodide (PI) is a dye that emits red fluorescence when excited at 535 nm upon binding double-stranded DNA; it enters cells only through the compromised membranes of necrotic cells and does not penetrate live cells. PI staining can thus distinguish necrotic cells from normal ones [24][25][26][27]. DAPI emits blue fluorescence when excited at 340 nm upon binding double-stranded DNA after 4% PF fixation; DAPI counterstaining confirms that PI staining marks a genuine cell rather than false-positive debris [28]. Double labeling with PI and DAPI showed no significant PI staining after the 30 mmHg insult (data not shown). With DAPI counterstaining, a few PI-positive cells were observed under the 60 mmHg condition (Figure 2). Within our observation window, the proportion of PI-positive cells gradually increased to 13.45% at 12 hr and decreased to 4.32% by 24 hr (Figure 3B). Under the 100 mmHg high-pressure condition, necrosis also occurred at 6 hr (Figure 3A); the proportion of PI-positive cells reached 15.35% at 12 hr and gradually decreased by 24 hr (Figure 3B). These results indicate that, within our observation window, a small number of RGC-5 cells underwent necrosis under 60 mmHg and that the degree of cell necrosis increased with pressure elevation (Figure 3B). At any given time point, the number of necrotic cells increased as the pressure level rose, indicating that RGC-5 necrosis occurs in a pressure-dependent manner, with correspondingly more necrotic cells under 100 mmHg. To be consistent with conditions in vivo [29], we considered the 100 mmHg insult suitable for the subsequent necroptosis experiments.
Until recently, it was difficult to distinguish precisely among necroptosis, apoptosis and necrosis with the available methods [30]. Nec-1 is a specific inhibitor of necroptosis, and other studies have reported that the number of necrotic cells decreases upon its use [31,32]. To analyze necroptosis, we quantified PI-positive cells by flow cytometry with PI/Annexin V double staining in the presence and absence of Nec-1. In our cell injury model (100 mmHg, 24 hr), the ratio of necrotic cells (PI+) was about 13% (Figure 4B) but decreased to nearly 5% with Nec-1 (Figure 4C), a statistically significant difference (Figure 4D). These results indicate that necrosis is inhibited by Nec-1 exposure and, therefore, that necroptosis occurred after EHP.
Calpain is up-regulated following elevated hydrostatic pressure
Immunofluorescence staining showed that calpain was mainly present in the cytoplasm of RGC-5 cells in the normal control group (Figure 5A), and no difference in distribution was observed between the injury groups and normal controls. Compared with the normal controls, markedly more distinct and stronger cellular calpain immunoreactivity was found in the 6 hr and 12 hr high-pressure groups, with weaker labeling in the 24 hr group. Western blotting showed that calpain appeared mainly as a single 75 kDa band in all groups (Figure 5B). The bands in the high-pressure groups were clearly thicker and denser than those of the normal controls, whereas the bands in the 24 hr group were thinner than those in the earlier injury groups and tended toward normal. Statistical analysis of the integrated density values (IDV) indicated that high pressure up-regulated calpain expression at the early stage (Figure 5C), with a significantly more distinct calpain band in the 12 hr injury group (p < 0.05). Thus, calpain distribution did not differ between injury groups and normal controls, while protein expression first increased and then decreased within 1 day, reaching its maximum in the 12 hr group.
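Band comparisons of this kind are typically made on integrated density values (IDV) normalized to a loading control. The study does not name its loading control, so the sketch below assumes a hypothetical β-actin control and made-up densities; it only illustrates the normalization and a simple two-group test, not the exact analysis performed here.

```python
import numpy as np
from scipy import stats

# Hypothetical IDV readings (arbitrary units) for the 75 kDa calpain band
# and an assumed beta-actin loading control; not the study's data.
calpain = {"control": [1.00, 0.95, 1.05], "12 hr": [1.90, 2.10, 1.95]}
actin   = {"control": [1.00, 1.02, 0.98], "12 hr": [1.01, 0.99, 1.03]}

# Normalize calpain IDV to the loading control, then compare groups.
rel = {g: np.array(calpain[g]) / np.array(actin[g]) for g in calpain}
t_stat, p_val = stats.ttest_ind(rel["control"], rel["12 hr"])
print({g: round(v.mean(), 2) for g, v in rel.items()}, f"p = {p_val:.4f}")
```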
AIF cleavage product decreased after calpain inhibition
The tAIF bands in the 6 hr, 12 hr and 24 hr high-pressure groups first increased and then decreased over time, with the maximum detected in the 6 hr group (Figure 6A). Statistical analysis of the IDV suggested that high pressure up-regulated tAIF cleavage at the early stage (Figure 6B), with a significantly more distinct tAIF band in the 12 hr injury group (p < 0.05). After ALLN addition, no significant changes in the tAIF band were detected in any of the four groups (Figure 6C), which the statistical analysis of the IDV confirmed (Figure 6D), indicating no difference in tAIF production between groups.
Calpain activity assay
With the inhibitor group pretreated with ALLN for 24 hr before injury, the calpain activity assay showed a difference between the inhibitor and injury groups (Figure 7); at 12 hr this difference was significant (p < 0.05). This suggests that ALLN may effectively inhibit the up-regulation of calpain activity following hydrostatic pressure treatment.
ALLN may decrease the rate of RGC-5 necrosis
Under the 100 mmHg insult (2 hr exposure followed by recovery up to 24 hr), our study showed the occurrence of necroptosis in RGC-5 cells. The cells were therefore treated under this condition with the addition of ALLN, and cellular necroptosis was analyzed by flow cytometry with PI/Annexin V double staining to determine whether inhibiting calpain activity could decrease the rate of RGC-5 necrosis under high pressure. The ratio of necrotic cells in the 24 hr injury group was about 12% (Figure 8B) and decreased to nearly 8% upon adding ALLN (Figure 8C). As with Nec-1 treatment, this demonstrates that the ratio of RGC-5 necrosis decreased when cells were treated with ALLN under high pressure. Statistical analysis indicated significant changes in PI-positive cells upon adding ALLN compared with the normal control and EHP groups (Figure 8D). These results indicate that early RGC-5 necroptosis may be related to up-regulated calpain activity.
Discussion
HIOP is one of the main features of glaucoma and a major risk factor for visual impairment with ganglion cell death [33][34][35]. Research has demonstrated that an acute increase or continuous rise of intra-ocular pressure (IOP) can lead to visual injury or cell death in retinal ganglion cells and pigment epithelial cells, finally resulting in vision loss; in acute glaucoma, retinal ganglion cell death is the most important event [36,37]. Pathological research models of acute glaucoma comprise animal models and cell models [16,38,39], and elevated hydrostatic pressure (EHP) applied to cultured cell lines is commonly used as an in vitro glaucoma cell model [40]. In this study, we used an open-cycle air-pressure culture system to generate hydrostatic pressure; the system can be set to any required pressure value and readily yields more information via high-throughput experiments [41].
In our study, we chose three pressure values (100 mmHg, 30 mmHg and 60 mmHg) to probe the extent of necrosis after injury. Among these, 100 mmHg represented acute glaucoma (high pressure), 30 mmHg represented low pressure in glaucoma [40], and 60 mmHg was a maximum pressure value for human IOP [42]. The results indicated that the number of PI-positive cells first increased and then decreased over time at 100 mmHg and 60 mmHg, which is consistent with our previous study in rats (110 mmHg for 1 hr) [16], whereas only a small number of PI-positive cells appeared at 30 mmHg (data not shown). This is inconsistent with the chronic glaucoma mouse model at 20-30 mmHg [43]; one possible explanation is that glaucoma is a neuronal degenerative disease whose symptoms may take longer to manifest, so it seems reasonable that only a few clearly necrotic cells appeared after our short acute insult. Joo's study found necrotic cells in the ganglion cell layer at 4 hr after a 160-180 mmHg HIOP insult applied for 90 min, but few necrotic cells after 24 hr [44]; by contrast, we found some necrotic cells at 24 hr, which might be due to the higher pressure and longer maintenance of the cells in their study, or to the complexity of the micro-environment (neighboring cells and the various factors secreted by interstitial cells). Our experiment demonstrated that the EHP cell model can accurately reflect the degree of RGC injury. Moreover, Nec-1 has become one of the widely recognized inhibitors of necroptosis [31]. Based on our data, necroptosis can be detected after a high-pressure insult (100 mmHg), as reflected by the necrosis rate decreasing to about 5% after adding Nec-1. We therefore turned to signaling molecules that may induce necroptosis of RGC-5 cells after EHP.
Previous studies have shown that, besides calpain-mediated apoptosis, calpain can also induce necroptosis through intracellular signaling pathways. Calpain facilitates BAX activation, and activated BAX favors the release of tAIF from mitochondria to the cytosol, which can induce necroptosis [10,11]. Calpain may be one of the important regulatory molecules in necroptosis in cells such as fibroblasts, nephrocytes, HeLa cells and vascular endothelial cells, and some early necroptosis may be calpain-mediated [9][10][11]45]; however, the biological role of calpain in early neuronal necroptosis in the nervous system, particularly the visual nervous system, remains largely unknown. Our western blot results showed that calpain protein levels in RGC-5 cells decreased after an initial increase, reaching a maximum at 12 hr under elevated hydrostatic pressure. In this study, immunofluorescence staining showed that calpain was mainly present in the cytoplasm of RGC-5 cells and, in contrast with the other groups, markedly more distinct and stronger calpain immunoreactivity was seen in the 12 hr group. Together, these results show that calpain expression was significantly up-regulated in RGC-5 cells under elevated hydrostatic pressure, accompanied by an increased ratio of necroptosis. This expression profile of calpain is consistent with our gene chip detection in RGC-5 cells following elevated hydrostatic pressure in a previous experiment (our unpublished data). Conversely, necroptosis measured by flow cytometry was reduced to a certain extent upon exposure to ALLN, a specific calpain inhibitor that targets activity rather than expression [46]. Taken together, these results suggest that calpain may play an important role in early RGC-5 necroptosis under elevated hydrostatic pressure.
In our experiment, with ALLN intervention, calpain expression remained high while its activity was markedly inhibited in RGC-5 cells under elevated hydrostatic pressure. Moreover, tAIF (the cleaved form of AIF) did not significantly increase, lessening its effect on RGC-5 cells, and the number of necroptotic cells likewise did not significantly increase. This further indicates that calpain modulates the necroptosis pathway in RGC-5 cells through its downstream molecule tAIF. Overall, calpain-regulated, tAIF-mediated necroptosis may be one mode of RGC death induced by elevated hydrostatic pressure. Regarding another risk factor, cellular hypoxic exposure is not only a widely used nervous-system injury model [47] but also one of the important pathophysiological mechanisms of aHIOP [48]. It is therefore worthwhile to investigate whether necroptosis is mediated by calpain under hypoxic exposure, which would contribute to a comprehensive understanding of RGC-5 necroptosis in aHIOP. Taken together, these results provide novel evidence on the molecular mechanism (a non-RIP3 pathway) of early RGC necroptosis in aHIOP and a new interventional target for reducing early RGC necroptosis in aHIOP patients.
Conclusion
Our study found that calpain may induce necroptosis via tAIF modulation in RGC-5 cells following elevated hydrostatic pressure.
Tissue immunostaining of candidate prognostic proteins in metastatic and non-metastatic prostate cancer
Prostate cancer (PCa) lacks specific markers capable of distinguishing aggressive tumors from those with indolent behavior. Therefore, the aim of this study was to evaluate the immunostaining of candidate proteins (PTEN, AKT, TRPM8, and NKX3.1) through the immunohistochemistry technique (IHC) in patients with metastatic and non-metastatic PCa. Tissues from 60 patients were divided into three groups categorized according to prognostic parameters: better prognosis (n = 20), worse prognosis (n = 23), and metastatic (n = 17). Immunostaining was analyzed by a pathologist, and staining classifications were assigned according to signal intensity: (0) no staining, (+) weak, and (++ and +++) intermediate to strong. AKT protein was associated (p = 0.012) and correlated (p = 0.014; Tau = −0.288) with the prognostic groups. Immunostaining for the TRPM8 (p = 0.010) and NKX3.1 (p = 0.003) proteins differed between malignant tumor and non-tumoral adjacent tissue, as well as across cellular locations (nucleus and cytoplasm). TRPM8 was independently associated with ISUP grade ≥ 4 (p = 0.024; OR = 8.373; 95% CI = 1.319–53.164). NKX3.1 showed positive and predominantly strong immunostaining in all patients, in both tumoral and non-tumoral adjacent tissues. All metastatic samples had positive immunostaining, with strong intensity for NKX3.1 (p = 0.021; Tau = −0.302). In the non-metastatic group, this strong protein staining was not observed in any patients. This study confirmed that NKX3.1 is highly specific for prostate tissue and indicated that NKX3.1, AKT, and TRPM8 may be candidate markers for prostate cancer prognosis.
Introduction
According to the World Health Organization (WHO), prostate cancer (PCa) is the fourth most common cancer, with approximately 1.4 million new cases worldwide. In men, it is the second most common type after lung cancer (Global Cancer Observatory 2020; Culp et al. 2020; Sung et al. 2021) and the fifth leading cause of death (Bray et al. 2018; INCA 2019; Sung et al. 2021). For each year of the 2020-2022 triennium, 625,000 new cases of cancer are estimated in Brazil (INCA 2019), with prostate cancer being the second most common type (Global Cancer Observatory 2020; INCA 2019). Currently, in Brazil, digital rectal examinations and prostate-specific antigen (PSA) measurements are used as screening methodologies for PCa, and patients with abnormalities on examination and/or PSA levels above 10 ng/mL are referred for a transrectal ultrasound-guided needle biopsy (Sociedade Brasileira de Urologia 2018; Porcaro et al. 2019; Vendrami et al. 2019).
In addition, PSA is an excellent marker for identifying prostatic alterations; however, it is neither specific nor exclusive to malignant alterations (Vendrami et al. 2019; Lomas and Ahmed 2020). In this context, the search for specific biomarkers capable of predicting clinical and pathological complications, and thereby becoming potential molecular markers for PCa, is important and ongoing. Immunohistochemistry (IHC) is a tool widely used in clinical routine to confirm diagnoses with tissue markers (Giannico et al. 2017; Orakpoghenor et al. 2018; Comperát 2019).
Several molecules are already being studied by IHC as potential candidate markers of PCa. Among them, those involved in the phosphatidylinositol-3-kinase/serine-threonine kinase/mammalian target of rapamycin (PI3K/AKT/mTOR) cell survival pathway, as well as transient receptor potential melastatin 8 (TRPM8) and NK3 homeobox 1 (NKX3.1), stand out, as they play important roles in prostate carcinogenesis.
The PI3K/AKT/mTOR signaling pathway is among the most frequently dysregulated pathways in cancer (Koundouros and Poulogiannis 2018), and aberrant expression of this pathway has been demonstrated in both the early and late phases of PCa (Taylor et al. 2010; Sreenivasulu et al. 2018). Studies also show that deletion of the PTEN tumor suppressor gene is very common in PCa, being frequently present in metastatic and castration-resistant tumors (Robinson et al. 2015; Wozniak et al. 2017; Jamaspishvili et al. 2018).
The TRPM8 channel is a homotetramer formed by subunits bearing 8 putative glycosylation sites and an immunogenic epitope. It is highly expressed in the prostate, where its ion channel functions as a testosterone receptor, suggesting a role in the regulation of androgenic responses (Asuthkar et al. 2015b). Furthermore, evidence demonstrates an important role in the development and progression of neoplasms, especially PCa, with overexpression in malignant tumor tissue compared to nonmalignant tissue. This protein is present in hormone-refractory PCa and in tumors with a high Gleason score (Yee 2015).
The tumor suppressor gene NKX3.1 is a member of the NK family of homeobox genes, participating in cell specification and organogenesis processes in several species. In humans, this gene is primarily related to normal prostate development. Its loss of expression leads to defects in prostate protein secretion and ductal morphogenesis and contributes to prostate carcinogenesis (Abate-Shen et al. 2008).
Therefore, the current study aimed to evaluate tissue immunostaining of the tumor suppressor proteins PTEN and NKX3.1, of the oncogenic AKT protein involved in a cell survival pathway, and of the testosterone receptor TRPM8 in samples from patients with metastatic and non-metastatic PCa, in the search for candidate markers for prostate cancer.
Study group and sample characterization
In this retrospective longitudinal study, 60 prostatic paraffin-embedded samples of malignant tissue and their respective adjacent non-tumor tissues were evaluated. Samples were randomly selected from male patients with a confirmed diagnosis of PCa after radical prostatectomy at the Hospital do Câncer de Londrina (HCL) between 2006 and 2016. Of these, 51 samples were radical prostatectomy (RP) products; in the metastatic group, however, 5 samples came from biopsy and 3 from transurethral resection (TUR), and in the worse-prognosis group, 1 sample was from TUR.
The study was approved by the Research Ethics Committee Involving Human Beings of the State University of Londrina-Brazil, under number 176/2013. Patients participated voluntarily and signed a free and informed consent form and answered a modified personal questionnaire based on Carrano and Natarajan (1988).
Histopathological data were obtained from medical records and used, together with the guidelines of the National Comprehensive Cancer Network (NCCN version 4.2019), to classify patients into three experimental groups: (1) PCa with a better prognosis (n = 20); (2) PCa with a worse prognosis (n = 23); and (3) metastatic PCa (n = 17). Patients with an ISUP grade ≤ 2 (3 + 4), staging ≤ T2b, and PSA ≤ 10 ng/mL were considered to have better-prognosis PCa. Patients with an ISUP grade ≥ 3 (4 + 3), staging ≥ T3a, and PSA ≥ 20 ng/mL were considered to have worse-prognosis PCa. Patients with metastasis were classified according to the presence of lymph node invasion and/or distant metastasis and/or positive bone scintigraphy. A table containing the clinical and pathological characteristics of all patients is included as Online Resource 1.
The samples' protein profiles were compared between the metastatic and non-metastatic (better- and worse-prognosis) groups, as well as between their malignant and adjacent non-tumor tissues. All samples in the present study were from biopsy or radical prostatectomy, without neoadjuvant chemotherapy.
Histopathological analysis
Tissues obtained from the biopsy were stained with hematoxylin and eosin to confirm the clinical diagnosis of PCa and to verify the presence of tumor and adjacent non-tumor tissue for further analysis and comparison of immunostaining of proteins in the tissues. This step was performed by pathologists from the HCL. The histopathological classification used was based on international standards established by the WHO, such as ISUP grade (European Association of Urology 2022) and clinical staging determined by the Tumor/Node/Metastasis (TNM) system, following the recommendations of the AJCC (American Joint Committee on Cancer).
Immunohistochemistry
Experiments were carried out according to Guembarovski et al. (2018), with modifications regarding antigen retrieval and the background blocker. Formalin-fixed, paraffin-embedded tissue samples of metastatic and non-metastatic malignant and adjacent non-tumor tissues were obtained. Sections 5-6 µm thick were cut and fixed on silanized StarFrost® slides (Knittel glass, ALE).
Negative controls were performed in all slide batches to verify the specificity of the primary antibody, which was replaced by phosphate-buffered saline (PBS). The secondary antibody kit (mouse/rabbit detection kit HRP/DAB ABC, Abcam, Cambridge, MA, USA) was used according to the manufacturer's instructions, with the Pierce™ DAB substrate kit (Thermo Fisher Scientific, Rockford, IL, USA) as chromogen, using concentrated DAB ([2×]) for NKX3.1 and ([4×]) for the other antibodies (AKT, PTEN, and TRPM8), based on the manufacturer's protocol.
Immunostaining for protein profiles in experimental groups was analyzed by an experienced pathologist. In the analysis of adjacent non-tumor tissue, only normal glands and areas of benign hyperplasia were considered, excluding areas of atrophy. The classifications were considered according to staining signal strength: (0) no staining, (+) weak, and (++ and +++) strong, according to Figs. 1 and 2.
Statistical analysis
The comparison of the mean ages of the experimental groups was performed using the Student's t test. To compare the staining in tumor and adjacent non-tumor tissues for each protein evaluated, the McNemar test for related samples was used.
Kendall's Tau test was used to analyze correlations between the proteins' immunostaining by IHC and clinicopathological parameters, and logistic regression was performed for the variables that showed significance, to verify whether they were independently associated with protein staining. The Kendall Tau correlation test was also used to analyze the interaction between protein stainings.
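For concreteness, the sketch below shows how the paired McNemar test and Kendall's Tau correlation named above can be run with statsmodels and scipy; the 2×2 contingency table and the ordinal staining/grade vectors are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import kendalltau
from statsmodels.stats.contingency_tables import mcnemar

# McNemar test for paired tumor vs. adjacent non-tumor staining
# (rows: tumor positive/negative; columns: adjacent positive/negative).
table = np.array([[30, 18],
                  [ 6,  5]])   # placeholder counts
res = mcnemar(table, exact=True)
print(f"McNemar: statistic = {res.statistic:.0f}, p = {res.pvalue:.4f}")

# Kendall's Tau between ordinal staining intensity (0..3) and a
# clinicopathological grade (placeholder values).
staining = [0, 1, 1, 2, 3, 2, 0, 3, 1, 2]
grade    = [1, 1, 2, 2, 3, 3, 1, 3, 2, 2]
tau, p = kendalltau(staining, grade)
print(f"Kendall: tau = {tau:.3f}, p = {p:.4f}")
```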
Some data were omitted from the statistical analyses due to missing information in patient records and because some paraffin blocks were worn; consequently, some patients lacked tumor tissue (2) or adjacent non-tumor tissue (3) on the same slide for comparison.
Fig. 1 Photomicrograph of weak-intensity immunostaining by the immunohistochemistry technique for the PTEN, AKT, and TRPM8 proteins, evaluated in tumor tissue samples and adjacent non-tumor tissue from patients with PCa. Arrows point to weak immunostaining of the proteins. Letters denote the evaluated proteins: a (negative control), b (PTEN), c (AKT), and d (TRPM8). 40× magnification. Source: the author.
Fig. 2 Photomicrograph of strong-intensity immunostaining by the immunohistochemistry technique for the NKX3.1 protein, evaluated in tumor tissue samples and adjacent non-tumor tissue from patients with PCa. Arrows point to strong immunostaining of NKX3.1. Letters denote: a (negative control) and b (NKX3.1). 40× magnification. Source: the author.
PTEN
PTEN protein did not show differences in immunostaining between tumor and adjacent non-tumor tissues (p = 0.647) or in the cellular locations (cytoplasm: p = 0.195; nucleus: p = 0.587) (Table 1). PTEN immunostaining did not demonstrate any significant association or correlation with the prognostic groups, the prognostic parameters, or biochemical recurrence and metastasis (Online Resource 8).
AKT
AKT protein was also not expressed differently in the tissues of the same patients (p = 0.552) or in relation to cellular locations: cytoplasm (p = 0.194) and nucleus (p = 0.526) (Table 1). Furthermore, it was associated (p = 0.012) and correlated (p = 0.014; Tau = −0.288) with the prognostic groups: better prognosis, worse prognosis, and metastatic (Online Resource 9).
TRPM8
TRPM8 protein was expressed differently in tumor and adjacent non-tumor tissue (p = 0.010), with higher immunostaining in the malignant tumor, albeit of low intensity; the same result was observed for nuclear immunostaining (p = 0.012) in both tissues (Table 1). In addition, protein immunostaining was associated with the ISUP grade parameter (p = 0.039) and perineural invasion (p = 0.020) (Online Resource 10).
NKX3.1
NKX3.1 protein was expressed differently between tumor and adjacent non-tumor tissue (p = 0.003), with higher immunostaining and strong intensity in malignant tumor tissue when compared to the adjacent non-tumor tissue; the same result was observed for cellular locations of the immunostaining: cytoplasm (p = 0.003) and nucleus (p = 0.008), in both tissues (Table 1).
There was no significant association between NKX3.1 immunostaining and the prognostic groups, prognostic parameters, or biochemical recurrence and metastasis. However, all patients in the metastatic group (17/17, 100%) presented positive and strong immunostaining, whereas in the non-metastatic group, although strong immunostaining was also verified, this result was not observed in all patients.
Multinomial and binary logistic regression
To verify whether the clinical-pathological variables that showed statistical significance were independently associated with tumor staining of the proteins (PTEN, AKT, TRPM8, and NKX3.1), multinomial or binary logistic regression analysis was used. TRPM8 was independently associated with ISUP grade (p = 0.024; OR = 8.373; 95% CI = 1.319-53.164), with tumor immunostaining being a risk factor in patients with ISUP grade ≥ 4. For the perineural invasion parameter, TRPM8 immunostaining was associated, but not independently. AKT, in turn, was associated with the prognostic groups (better prognosis, worse prognosis, and metastatic), but not independently, as shown in Table 3.
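An odds ratio with a 95% confidence interval of the kind reported for TRPM8 is obtained by exponentiating the logistic regression coefficient and its confidence bounds, OR = exp(β) with CI = exp(β ± 1.96·SE). A hedged sketch on simulated data follows; none of the numbers reproduce the study's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
staining = rng.integers(0, 2, n)                 # tumor staining (0/1), simulated
logit_p = -1.5 + 1.2 * staining                  # assumed true effect for the simulation
y = rng.random(n) < 1 / (1 + np.exp(-logit_p))   # outcome, e.g. ISUP grade >= 4 (0/1)

X = sm.add_constant(staining.astype(float))
fit = sm.Logit(y.astype(float), X).fit(disp=0)

beta, se = fit.params[1], fit.bse[1]
or_ = np.exp(beta)
lo, hi = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), p = {fit.pvalues[1]:.4f}")
```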
Protein interaction
To assess whether the immunostaining results are related to each other, an interaction analysis was performed, considering the staining only in the tumor tissue for all possible protein combinations. Significant interactions were observed between PTEN and AKT (p < 0.001; χ 2 < 0.001) and PTEN and TRPM8 proteins (p = 0.014; χ 2 = 0.095) (Table 4).
Discussion
The evaluation of the immunostaining of four proteins (PTEN, AKT, NKX3.1, and TRPM8) by IHC in malignant tumors and adjacent non-tumor tissues of patients with prostate cancer indicated that the TRPM8 protein was differentially expressed, with higher immunostaining in malignant tumor tissue. However, our most prominent result was that NKX3.1 immunostaining was observed in all samples in the present study, in both tumor tissue and adjacent non-tumor tissue, indicating high specificity for prostate tissue. In addition, there was strong tumor immunostaining for NKX3.1 in all patients of the metastatic group. Certain results obtained from biopsies may be inconclusive, as the biological material collected can be insufficient, with few atypical glands for analysis, and a repeat examination may be necessary. Therefore, the use of complementary techniques and new biomarkers is extremely important. A widely used and highly relevant tool is the IHC technique (Kristiansen 2018; Orakpoghenor et al. 2018), which allows the identification of the presence or absence of certain proteins in specific tissues, with staining intensity generally used as the gold standard (Jamaspishvili 2018).
Age is one of the main risk factors linked to PCa (Vaidyanathan et al. 2016; Junior et al. 2016; Tse et al. 2018; INCA 2019); aging therefore increases the risk of developing the disease and of its possible aggravation. The significant results obtained in this study when comparing the mean ages of patients in the better-prognosis and metastatic groups (p = 0.010), and when comparing the metastatic versus non-metastatic (better- and worse-prognosis) groups (p = 0.010), confirm that PCa is a disease of advanced age and that late diagnosis may be associated with a more severe condition, such as the development of metastases. Genomic deletion of PTEN is very common in PCa, as it is the tumor suppressor most frequently lost in the early stages of the disease (Lotan et al. 2011; Jamaspishvili et al. 2018; Hamid et al. 2019). Consequently, an increase in AKT expression would be expected, as these proteins participate in the same signaling pathway (Kurose et al. 2001). Low or absent PTEN expression was expected in the worse-prognosis and metastatic groups, with correspondingly higher AKT expression in the tumor tissue of these patients. However, we found that the immunostaining intensities of PTEN and AKT were similar, showing a similar pattern in many samples: in tumor tissue, PTEN 30/59 (50.8%) and AKT 30/59 (50.8%); in adjacent non-tumor tissue, PTEN 27/58 (46.5%) and AKT 36/58 (62.1%). Our data do not diverge from the literature: according to The Human Protein Atlas database (2021a), the PTEN protein shows predominantly cytoplasmic immunostaining and is expressed at low levels in prostate tissue, while the AKT protein shows nuclear immunostaining and medium expression in normal prostate tissue (The Human Protein Atlas 2021b).
Tumoral immunostaining of PTEN did not show any significant association with the groups, prognostic parameters, or recurrence and metastasis. AKT immunostaining, on the other hand, was associated and correlated with prognostic groups. Since this is an oncogenic protein, its presence may favor tumor growth.
Literature data show that PTEN and AKT are part of the same cell survival pathway, PI3K/AKT/mTOR, in which PI3K acts as an agonist, converting PIP2 into PIP3 and favoring the binding of AKT to PIP3, which in turn mediates, through protein activation, cell growth, proliferation, survival, and migration (Gonçalves et al. 2018). The PTEN protein acts as a direct antagonist of the pathway, converting PIP3 into PIP2 in its role as a lipid phosphatase (Gonçalves et al. 2018; Jamaspishvili et al. 2018). This antagonist (PTEN) and agonist (AKT) relationship within the cell survival pathway was confirmed in our protein interaction analysis, in which strong-intensity AKT immunostaining was present when PTEN immunostaining was absent or weak. Asuthkar et al. (2015a, b, c, 2017) suggested that TRPM8 is a key element in the testosterone-induced response pathway and that its activity can contribute significantly as an anti-tumor defense mechanism, serving as a new therapeutic target. Immunostaining of the TRPM8 protein was generally quite weak in the samples of the present study: 42/60 (70.0%) in tumor tissue and 39/59 (66.1%) in adjacent non-tumor tissue. It was nevertheless more present in tumor tissue than in adjacent non-tumor tissue, and a significant difference was observed in the cellular location of its immunostaining. According to the literature, this protein is highly expressed in the prostate, both in tumor-free individuals and in patients with PCa (Asuthkar 2017). Our results do not support that finding, but the stronger marking of this protein in tumor tissue than in adjacent non-tumor tissue suggests the need for future studies with new sample groups to verify whether it correlates with malignant change. Asuthkar et al. (2015b) found that the TRPM8 protein and testosterone are directly involved in localized interactions in the plasma membrane of cells at the periphery of the prostate and in the endoplasmic reticulum membrane of luminal cells. Furthermore, the authors suggested that testosterone-induced TRPM8 might be an important regulator of Ca2+ homeostasis and the cell cycle in prostate cells. Although TRPM8 mRNA levels increase during prostate tumor progression (Tsaveler et al. 2001), protein levels are not proportionally equal: Asuthkar et al. (2015c) found that TRPM8 is redirected to degradation in PCa, while protein recovery effectively suppresses tumor cell growth. This could explain the low expression of this protein in the samples of the present study.
One of the main prognostic factors described in the literature is the histological grade, with the Gleason score being the most widely used (Cambruzzi et al. 2010); it is a very important prognostic parameter in the assessment of tumor progression and aggressiveness (Löbler et al. 2012). TRPM8 was associated with perineural invasion (p = 0.020), but not with the groups or with biochemical recurrence and metastasis; it was, however, independently associated with ISUP grade ≥ 4 (p = 0.024; OR = 8.373; 95% CI = 1.319-53.164). Yu et al. (2014) and Yee et al. (2015) also demonstrated an association between TRPM8 and the Gleason score, associating the immunostaining of this protein with a high score.
Another interesting result was the interaction observed between the PTEN and TRPM8 proteins, in which the presence of one protein was associated with the presence of the other. When analyzing the interaction through the String Database (2021), however, we found no direct interaction between these proteins; rather, TRPM8 can interact with other peripheral proteins of the PI3K/AKT/mTOR and PTEN signaling pathway, such as PPP1CA, PPP1CB, and PPP1CC, which are catalytic serine/threonine-protein phosphatase subunits that associate with several regulatory proteins to form highly specific holoenzymes that dephosphorylate hundreds of biological targets.
NKX3.1 protein presented the most intense immunostaining in the tissues of the patients with PCa in the present study, with a significant difference being observed in the tumor tissue (48/59; 81.3%) versus adjacent non-tumor tissue (29/57; 50.9%). Differences in immunostaining regarding cellular locations (nucleus and cytoplasm) were also observed. These results corroborate the study of Gurel et al. (2010), which clearly showed the nuclear immunostaining of NKX3.1 present in practically all analyzed samples, presenting a high pattern of nuclear staining and a high rate of positivity in metastases. According to the database The Human Protein Atlas (2021c), NKX3.1 presents immunostaining both in the nucleus and in the cytoplasm, with nuclear staining being the most evident. In addition, this protein has a high expression rate in the prostate tissue.
NKX3.1 is an androgen-regulated homeobox gene, and its expression is almost exclusively restricted to the prostate (He et al. 1997). In adulthood, NKX3.1 stimulates the repair of DNA damage induced by transcription, through interaction with topoisomerase I, and this interaction is mediated by its homeodomain (Bowen et al. 2007;Puc et al. 2015). Therefore, NKX3.1 is a prostate-specific transcription factor/pioneer, which functions to specify prostate development in addition to its tumor suppressor role (Griffin et al. 2022).
It is well established that the clinicopathological parameters used when studying neoplasms are extremely important, as they help to confirm and classify patients and can guide more effective therapy. The prognosis of PCa is fundamentally related to histopathological data such as topography/laterality, tumor volume/size, histological type, degree of differentiation, presence of capsular or extraprostatic neoplastic invasion, state of the surgical margins, and the presence of metastases in regional or distant lymph nodes (Cambruzzi et al. 2010). Within this context, the PTEN and TRPM8 proteins were not significantly correlated with the prognostic parameters, biochemical recurrence, or metastatic cases evaluated in our samples. AKT, however, was positively correlated with the prognostic groups, but not with biochemical recurrence or metastatic cases.
NKX3.1 immunostaining was negatively correlated with the metastatic group, in which all patients had intense immunostaining. Previous studies found that NKX3.1 expression was strongly present in metastatic PCa samples, showing high sensitivity for the prostate; even when associated with PSA (Kristiansen 2017), PSMA, or HOXB13 (Abouhashem and Salah 2020), these were considered good markers for detecting metastases of prostatic origin. In addition, the International Society of Urological Pathology (ISUP) (Epstein et al. 2016) has already indicated the NKX3.1 protein as an excellent biomarker of prostate origin in PCa metastases, being highly specific for this tissue.
Conclusions
AKT and TRPM8 may be prognostic markers for prostate cancer that deserve future investigation. Our results also confirm the NKX3.1 specificity for prostate tissue and that it can be used to identify a primary site of metastasis. This marker needs further investigation for its validation as a candidate for early prediction of this phenomenon, given its immunostaining profile in metastatic patients.
Acknowledgements All the authors would like to thank the Hospital do Câncer de Londrina and Angela Navarro Gordan for providing the samples for this study.
Author contributions ÉRP participated in the study design and acquisition of data, experimental procedures, performed the statistical analysis and interpretation, and drafted the manuscript. ALF and LCLP participated in the collection of samples and medical records, and also participated in the study design and immunohistochemical reactions. CAM participated in the study design and experimental procedures. AFMLG participated in the histopathological assays for the selection of tumor tissues and adjacent non-tumor tissues and performed the immunohistochemical analysis of the samples. KBdO participated in the statistical analysis and data interpretation. PEF made the sample collection possible. IMdSC participated in the design of the study and reviewed the manuscript for important intellectual content. RLG participated in the design of the study, interpretation of data, and gave final approval of the version to be published. All the authors read and approved the final manuscript.
Aspects of Procurement Reforms that Influence Expenditure Management in Public Secondary Schools in Kenya : A Focus on Emergency Procurement
The Kenyan public procurement sector has gone through colossal reforms, ignited by the findings of the country procurement assessment reports of 1986 and 1997 and hallmarked by the formulation of the Procurement Regulations in 2001. This study aimed at establishing the extent to which public secondary schools in Nairobi City County had complied with the relevant legislative provisions guiding procurement reforms, as well as the effect of selected aspects of reforms on expenditure management. The article focuses on one aspect of the reforms, namely, the frequency of emergency procurement. The evaluation research model III guided the research process, and primary data were sourced in 2015 from 35 public secondary schools. Quantitative analysis included cross-tabulation with analysis of variance, the chi-square statistic, and the correlation coefficient, as well as multiple regression. About two-thirds of the schools had developed procurement plans, as required by the legislative and policy provisions, while another two-thirds 'occasionally' practised emergency procurement. Besides, the frequency of emergency procurement significantly correlated with variation in procurement expenditure, and further caused a significant increment in procurement expenditure (beta weight = 0.457, t-statistic = 3.240, p-value = 0.003), which signifies a negative influence on expenditure management. Limiting the frequency of emergency procurement is an important step towards effective expenditure management in public secondary schools.
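For readers unfamiliar with how a standardized beta weight and its t-statistic of the kind reported in this abstract are obtained, the sketch below fits an ordinary least squares regression on standardized, simulated variables with statsmodels. Only the sample size of 35 schools is taken from the study; the variables, values and coefficients are placeholders and do not reproduce the study's results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 35                                   # the study sampled 35 schools
emergency_freq = rng.normal(size=n)      # frequency of emergency procurement (simulated)
other_aspect   = rng.normal(size=n)      # a second reform aspect (placeholder)
expenditure    = 0.5 * emergency_freq + rng.normal(scale=0.8, size=n)

# Standardize all variables so the fitted coefficients are beta weights.
def z(x):
    return (x - x.mean()) / x.std(ddof=1)

X = sm.add_constant(np.column_stack([z(emergency_freq), z(other_aspect)]))
fit = sm.OLS(z(expenditure), X).fit()
print(fit.summary())   # beta weights, t-statistics and p-values per predictor
```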
Introduction
Public procurement is the acquisition by purchase, rental, lease, hire, license, tenancy, franchise or by any other contractual means of any type of goods, services, and works, by public institutions using public resources, as well as disposal of public assets (Kenya Anti-Corruption Commission [KACC] & Public Procurement Oversight Authority [PPOA], 2009). Public procurement is the main process through which government spends public money, making it central to expenditure management and national development. Through public procurement, circa 60% of government revenue is injected into the economy, which in turn creates employment opportunities and improves per capita income (Organisation for Economic Cooperation and Development [OECD], 2001).
Notably, public procurement plays a greater role in developing countries, where the value of procurement expenditure ranges between 9% and 13% of national Gross Domestic Product (GDP), than it does in developed nations, where the value varies between 5% and 8% of national GDP. In Kenya, the value of public procurement accounts for about 10% of GDP, making it a large market for suppliers and contractors, albeit with high opportunities for corruption (Kavula, Kalai & Migosi, 2014; KACC & PPOA, 2009). In addition, public procurement is categorised into contestable and non-contestable: whereas contestable procurement is subject to competitive bidding, non-contestable procurement is often single-sourced (Kavula et al., 2014; Kenya Institute for Public Policy Research and Analysis [KIPPRA], 2006).
The Kenyan public procurement sector has developed over the years since independence in 1963. In the first decade of independence, public procurement was predominantly undertaken by external agencies due to the inadequacy of supplies and competent suppliers in the local market. But as the economy expanded, procurement responsibilities passed down to ministries, leading to the establishment of supplies offices in each ministry in 1974. However, the system was faulted for not addressing the needs of decentralised government units, particularly at the then provincial and district levels, and for lacking an effective legislative framework, which made it vulnerable to irregularities, such as designing tender documents to favour particular bidders and fixing and inflating prices, leading to wastage of public resources (KIPPRA, 2006; Basheka, 2006; Aketch, 2005; OECD, 2001).
In view of the stated challenges, the procurement reform process was initiated in Kenya in response to pressure from local and international stakeholders, including the World Bank, the International Monetary Fund (IMF), the African Development Bank (ADB) and the International Trade Centre (ITC), among others (Aketch, 2005; Odhiambo & Kamau, 2003). Even though piecemeal reforms started way back in the 1980s and continued through the 1990s, following the findings of the country procurement assessment reports of 1986 and 1997, the formulation of the Exchequer and Audit Regulations (Procurement Regulations) in 2001 remains the most crucial turning point of procurement reforms in Kenya. Before the Procurement Regulations were formulated in 2001, public procurement in Kenya was carried out under unclear legislative frameworks, which in turn failed to curb irregularities and regulate public expenditure.
The Procurement Regulations require public institutions to use standard tender documents, operate within set thresholds, ensure that technical specifications meet international standards, and treat all bidders equally irrespective of race, religion or nationality (Government of Kenya, 2001). Additional hallmarks of the Procurement Regulations include the requirement that all tenders be advertised widely in the print media, professional qualification of bidders, effective record keeping, transparency in opening tenders, tender evaluation reporting, confidentiality of tender evaluation processes, as well as procurement planning and the regulation of emergency procurement, among others.
Regarding institutional structures, the Procurement Regulations established the Directorate of Public Procurement (DPP) within the Treasury, primarily to streamline procurement activities through policy formulation, implementation, and capacity development; the Public Procurement Complaints Review and Appeals Board (PPCRAB), since renamed PPOA, to oversee procurement activities and adjudicate over complaints; as well as institutional tender committees to manage procurement of goods, services and works within public institutions, including secondary schools (Kavula et al., 2014; KIPPRA, 2006; Aketch, 2005; Government of Kenya, 2001).
Section 10 (1) of the Procurement Act requires all procuring entities to establish tender committees, in the manner set out in the Second Schedule. Tender committees are obligated to perform the functions listed under sub-section 2 (a) to (o), which include reviewing, verifying and ascertaining that all procurement and disposal activities are in line with the provisions of the Act, the Procurement Regulations, and tender documents (Government of Kenya, 2010; 2006). In the context of public schools, Part 7 of the Second Schedule (the Procurement Regulations) requires school tender committees to have a membership of at least six heads of departments or members of the teaching staff, including the Matron or officer-in-charge of the boarding facilities where applicable, appointed by the Principal, who in this study is referred to as the accounting officer (Government of Kenya, 2006).
The Ministry of Education embarked on measures to entrench the provisions of the Procurement Regulations in academic institutions in 2002 to improve the efficiency of procurement practices, and thus enable schools to manage expenditure and utilise public resources judiciously (Embeli, Iravo, Biraori & Wamalwa, 2014). In this regard, circulars were sent to all public secondary schools, directing them to follow the new regulations to improve procurement practices and procedures. The new measures included the establishment of school tender committees and the training of members of such committees, as well as of principals, deputy principals, and staff directly involved in procurement activities. In 2002, initial training workshops targeting principals and deputy principals of national and the then provincial schools were organised by the DPP, with support from the Treasury (Kavula et al., 2014).
Between 2002 and 2008, the Procurement Regulations went through various amendments, which were considered necessary to initiate and sustain the reform agenda in all public sectors. For instance, in 2002, the Procurement Regulations were amended to align with the needs of various public sectors (Kavula et al., 2014). In 2003, the Public Procurement and Disposal Bill was drafted and debated, and it was enacted in 2005 to provide the requisite legislative framework. In 2006, the Procurement Regulations were revised further and came into operation in 2007, in tandem with the Public Procurement Act, through Legislative Notice No. 174 of January 2007 (Kavula et al., 2014). In the education sector, the Procurement Regulations and the Procurement Act are domiciled in the Public Procurement Manual for Schools and Colleges 2010 (Procurement Manual). The three instruments provide the primary legislative and policy frameworks for reforming procurement practices and managing procurement expenditure in public secondary schools. One aspect of procurement practice that the legislative and policy frameworks seek to reform in order to improve expenditure management is emergency procurement.
Emergency procurement is provided for in the legislative and policy frameworks under exceptional circumstances, within the context of "urgent need" as defined in Part I, Section 3 (1) of the Procurement Act. In this regard, "urgent need" refers to a circumstance of imminent or actual threat to public health, welfare, safety, or damage to property, such that engaging in tendering procedures or other procurement methods would not be practicable (Government of Kenya, 2010). The Procurement Manual prescribes various measures to manage the application of emergency procurement provisions, including using business continuity planning as a criterion for registration or pre-qualification of potential suppliers, and formulating procurement plans that include contingency planning for real emergency situations, as defined in Part I, Section 3 (1) of the Procurement Act (PPOA, 2009).
Furthermore, Section 26 (3) of the Procurement Act, read together with Sections 20 and 21 of the Procurement Regulations and Sections 6.1 and 6.2 of the Procurement Manual, makes procurement planning mandatory for procuring entities, to facilitate the identification of each requirement, its user(s), budget, procurement method, and the schedule and timelines of the various activities in the procurement process. Procurement plans must be integrated into the procuring entity's budget and approved by institutional decision-making organs before being operationalised (KACC & PPOA, 2009).
Even though the Government of Kenya has provided the necessary legislative and policy frameworks to guide procurement reforms in all public institutions, only a few studies have explored the extent to which public secondary schools across the country have complied with the provisions of the Procurement Act, Procurement Regulations and Procurement Manual. For instance, a study commissioned by the Ministry of Education in 2006 revealed that more than half of secondary schools did not adhere to the provisions of legislative and policy frameworks in their tendering processes. As a result, there was rampant corruption, particularly at the administration and board levels, with regard to procurement of school equipment, learning materials and supplies, and hiring of both teaching and non-teaching staff (Institute of Policy Analysis and Research [IPAR], 2007). The study underscored the inadequacy of the literature on governance and expenditure management in public secondary schools, as well as of documentation of success stories regarding implementation of procurement reforms in the same institutions. Kavula et al. (2014) identified factors determining the implementation of the public Procurement Regulations in selected secondary schools of Kitui County, including the lack of relevant procurement structures such as tender committees and sub-committees; the lack of induction courses to enhance awareness and knowledge of the Procurement Regulations; and the lack of in-service training for some school principals and their deputies. Other determinants included school financial standing, based on the level of indebtedness, and budgetary constraints, which affected school-supplier relationships.
In their study, Embeli et al. (2014) reported that implementation of procurement reforms in public secondary schools in Trans-Nzoia County was influenced by a lack of procurement skills, non-enforcement, a negative organisational procurement culture and low knowledge of the Procurement Regulations; while Angokho, Juma & Musienga (2014) found that the achievement of transparency and accountability in the procurement procedures of public secondary schools in Vihiga County was prevented by a general lack of information about the legislative and policy frameworks, principles, procedures and processes of procurement among school tender committee members.
Notably, none of the extant empirical studies has determined the influence of various aspects of procurement reforms on expenditure management in public secondary schools. Even though the wider study covered various aspects of procurement reforms, this article narrows down to emergency procurement, which is intrinsically related to procurement planning. Consequently, the purpose of this paper is to determine how public secondary schools complied with the provisions on emergency procurement, and how this influenced expenditure management. The latter concept was measured in terms of the variation in the amount of procurement expenditure between the periods 'before reforms (1999-2002)' and 'after reforms (2007-2010)'. The idea was to determine whether the introduction of procurement reforms in public secondary schools caused a reduction, an increase, or no change in the level of procurement expenditure.
Literature Review
Even though many studies have focused on public procurement reforms all over the world, specific literature on how various aspects of reforms influence expenditure management in public secondary schools remains scanty, in both developed and developing countries. Notably though, the challenge of procurement malpractice is universal; variation only exists in the level of manifestation and magnitude. Moreover, whereas procurement systems of developed countries are more advanced in terms of legislative frameworks, institutional structures and technology, procurement systems in developing nations are at nascent stages. Much of the reform taking place in developed economies involves transformation from paper-based to electronic procurement (e-procurement).
In the United Kingdom (UK), public procurement has undergone and continues to undergo various reforms aimed at enhancing efficiency and sustainable utilisation of public resources, albeit with varying results (Perry, 2011; Brammer & Walker, 2007; Evenett & Hoekman, 2005). In England, for instance, procurement reforms enabled public schools to have a great deal of autonomy in deciding how their budget is spent, what services to procure and how to procure them (Perry, 2011). The reforms led to the introduction of e-procurement, which has widened choices for quality goods and services, and enabled public schools to save up to £1 billion every financial year (Department for Education, 2011).
In Canada, the public procurement system has gone through various reforms over the past four decades (Strobo & Leschinsky, 2009). One aspect of reforms that significantly changed procurement practices in Canadian public schools is the introduction of e-procurement in 1990 (Swick & Tétrault, 2014; Fagan, 2005). The transition to e-procurement was motivated by the need to lower the cost of accessing vendors, advertising tenders and distributing bid documents; improve accessibility of public tender opportunities; improve competitiveness of quotations; and increase trade between vendors and public procuring entities, including schools (Fagan, 2005). About eight years later, more than 80% of public schools reported significant savings in their procurement budgets, as all tender procedures, including advertising, bid submission, evaluation and contracting, were done online (Financial Management Institute & Price Waterhouse Coopers, 2015).

The Australian procurement system has experienced various reforms since 1997, when the Financial Management and Accountability Act was enacted (Department of Treasury and Finance [DTF], 2012; DTF, 2006). Existing literature singles out reforms that were initiated in the first decade of the 21st Century, based on recommendations of a study conducted in 2003, which include institutional capacity strengthening and e-procurement. In 2012, the Schools Electronic Catalogue Ordering (SECO) system was implemented in all New South Wales (NSW) schools. Two years later, up to 1,500 NSW schools and 6,374 users were connected to the SECO system, and up to 128,389 purchase orders had been sent electronically to catalogued vendors since inception. The SECO system improved expenditure management by enabling NSW public schools to save up to AU$218 million in two years (Jones, 2014).
In South Korea, the reform process, which began in 1996, focused on transforming the paper-based procurement system into an e-procurement system in order to improve transparency and efficiency (Neupane, Soar, Vaidya, & Yong, 2012; Chang, 2011; Westcott, 2004). By 2000, most public schools were transacting their procurement business through the Government e-Procurement System (GEPS) (Westcott, 2004). The online facility enhanced efficiency in school procurement activities by eliminating paperwork, inflation of prices and collusion between some bidders and procurement staff, as all transactions were posted online for easy access by all stakeholders (Westcott, 2004).
The Chilean Government established a Communications and Information Technology Unit (UTIC) in 1998 to facilitate the transition from paper-based to technology-based public procurement. In view of this, public schools began initiating e-procurement systems with support of the government through the Ministry of Education as early as 1999 (Concha, 2004). By 2003, about 85% of public schools were practising e-procurement. A study on the benefits of e-procurement noted that the system had improved transparency and efficiency, as well as reduced corruption (Concha, 2004). In Brazil, during the first two years of e-procurement, public secondary schools saved up to US$1.5m. By 2005, more than half of public schools were registered in the e-procurement database. However, reforms in schools were delayed by a lack of skills among board members, a shortage of computers, and low internet connectivity (Ozorio de Almeida, 2005).
In Nigerian public secondary schools, procurement reforms tackled predominant malpractices such as single sourcing, tender splitting, induced emergency procurement, inflation of prices, and lack of transparency (Musa, Success & Nwaorgu, 2014). The reform process enhanced efficiency and fiscal discipline in public secondary schools, thus enabling the institutions to save up to ₦1.4 billion annually. Nonetheless, the effectiveness of the reform process was undermined by a shortage of competent technical skills among tender committees, limited training opportunities, political influence on the tender award process, as well as delayed auditing of school financial accounts. These challenges undermined the ability of school tender committees to sustain gains and deal with financial mismanagement accumulated within public schools over the years (Musa et al., 2014).
In Uganda, procurement reforms, which started in 1997, led to the introduction of Regulations that decentralised public procurement in 2001 and facilitated enactment of the Public Procurement and Disposal of Public Assets Act in 2003. In 2014, the Procurement Guidelines for Schools was developed to deepen reforms by informing and guiding school tender committees on the procedures to be followed and the documentation to be used in sourcing, selecting, and retaining providers of goods, services, and works (Komakech & Machyo, 2015). Even though the introduction of the Guidelines strengthened compliance with procurement laws, various issues remain outstanding, including poor management of records, use of direct procurement methods, as well as delayed award of contracts due to poor procurement planning (Komakech & Machyo, 2015).
Procurement in the Kenyan public sector has undergone several reforms: from a system with no regulations in the 1960s, to a system regulated by Treasury circulars between 1970 and 2000, and lastly, to a system with regulations at the turn of the 21st Century, including the Procurement Regulations of 2001, the Procurement Act of 2005, as well as the Procurement Regulations of 2006 (Embeli et al., 2014). Nonetheless, it is the Procurement Regulations of 2001 that built the momentum for procurement reforms in Kenya (Embeli et al., 2014). Although procurement reforms brought in a new order of doing business in schools, they were slow to improve expenditure management as expected (IPAR, 2007). Nonetheless, there is a dearth of literature on how various aspects of procurement reforms influence expenditure management in Kenyan public secondary schools. The few studies that have focused on topics akin to the subject of this study mainly identified factors determining or preventing implementation of procurement reforms in a few secondary schools of Trans-Nzoia, Vihiga and Kitui counties (Kavula et al., 2014; Embeli et al., 2014; Angokho et al., 2014). Even scantier is literature linking emergency procurement and expenditure management in public secondary schools of Nairobi City County.
The study was anchored on Fiscal Decentralisation Theory, advanced by Richard Musgrave in the mid-20th Century (Rondinelli, 1981). The theory holds that fiscal decentralisation is an indispensable process that forms part of public governance reforms. It entails the decentralisation of authority, responsibility, and accountability for the management of public revenues as well as expenditure to peripheral cost centres and communities. The theory further holds that decentralisation of expenditure management to peripheral cost centres and communities is inevitable within the framework of a bottom-up approach to development planning. The ultimate goal is to achieve efficiency, effectiveness, equity, and democracy, which may be constrained by a centralised system. The theory assumes that decentralising expenditure management is likely to stimulate equitable distribution of national resources and spur regional economic growth by injecting public funds into peripheral economies (Rondinelli, 1981).
In the education sector, decentralisation of expenditure management places authority, responsibility and accountability in the hands of institutional heads and management boards. The theory indicates that expenditure efficiency is likely to improve when communities surrounding cost centres are involved in partial financing, management and monitoring of expenditure patterns (Winkler, 1989). By assessing the effects of procurement reforms on expenditure management, this study was anchored on the postulates of the Fiscal Decentralisation Theory. In Kenya, although the authority, responsibility, and accountability for expenditure management were decentralised to educational institutions in the 1970s, lack of a comprehensive legislative framework hampered expenditure efficiency, leading to wastage of public resources. The development of a legislative framework at the turn of the century initiated a national reform process aimed at restoring efficiency in expenditure management at the peripheral, regional and national cost centres.
Methodology
The evaluation research model III guided the research process, including data sourcing, processing, and analysis. The model focuses on four key dimensions of programme evaluation: needs and problems (context analysis); resources and strategies needed to achieve objectives (input evaluation); analysis of the programme while it is operating (process evaluation); and the extent to which goals of a programme have been achieved (product evaluation) (Mugenda & Mugenda, 2003). The study examined contextual issues such as challenges to procurement reforms in public secondary schools; input aspects such as the number of tender committee members trained in procurement management; process evaluation, including the frequency of tender committee meetings, advertisements and emergency procurement; as well as product evaluation in terms of expenditure management.
The study targeted public secondary schools in Nairobi City County. At the time of the study, the County had 76 such schools, of which 7 were categorised as 'national', 49 belonged to the 'county schools' category, while 20 were 'sub-county schools'. In terms of gender, 19 were boys' only schools, 20 belonged to girls, while 37 were mixed schools. More specifically, the study focused on schools that had existed for at least 10 years prior to 2001, when the procurement reforms in question were initiated. The criterion was based on the assumption that such schools had established databases for procurement expenditure and student population for the period under study, namely, 1999 to 2010. Within the schools, the study targeted deputy principals as well as members of Boards of Management (BOM) and Parents-Teachers' Associations (PTA).
Stratified random and purposive sampling procedures were applied to select schools and respondents. In this regard, the 76 public secondary schools were collectively designated as the population (N_i) from which a sample (n_i) was drawn, using Fisher's formula for sample size determination from finite populations (Gliner, Morgan & Leech, 2009). The process yielded a sample size of 39 schools, which was stratified into three categories - national, county, and sub-county - as well as on the basis of gender - boys', girls' and mixed schools. The process ensured proportionate distribution, as indicated in Table 1. Deputy principals and members of BOMs and PTAs were sampled purposively, based on membership of school tender committees, management, and/or oversight of school procurement activities. Primary data were sourced through self-administered questionnaires for deputy principals and key informant interviews for BOM and PTA members, while secondary data were sourced through a review of annual financial reports and student enrolment data, among others. The split-half technique was used to estimate the reliability of data collection instruments, and the resultant correlation coefficient was adjusted using the Spearman-Brown prophecy formula (Gliner et al., 2009). Data collection instruments were pre-tested in six public secondary schools in Kiambu, Muranga and Nyeri Counties, which neighbour Nairobi County to the West and North. Primary data were sourced in May 2015. Even though the study targeted 39 schools, 35 questionnaires had been completed by the end of the data collection period, representing a response rate of 89.7%. In addition, 16 key informant interviews were successfully conducted.
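For readers who wish to retrace the sampling and reliability calculations, the sketch below implements Fisher's finite-population sample-size formula and the Spearman-Brown prophecy formula in Python. The parameter values (z = 1.96, p = 0.5, d = 0.05) are conventional defaults and are assumptions here, since the paper does not report them; with these defaults the function will not necessarily reproduce the sample of 39 schools exactly.

```python
import math

def fisher_sample_size(N, z=1.96, p=0.5, d=0.05):
    """Fisher's sample-size formula with finite-population correction.

    N: population size; z: standard normal deviate; p: assumed proportion;
    d: desired margin of error. These parameter values are conventional
    defaults, not values reported in the study.
    """
    n0 = (z ** 2) * p * (1 - p) / (d ** 2)     # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # finite-population correction

def spearman_brown(r_half):
    """Adjust a split-half correlation to full-instrument reliability."""
    return 2 * r_half / (1 + r_half)

print(fisher_sample_size(76))  # sample drawn from the 76-school population
print(spearman_brown(0.6))     # e.g. a split-half r of 0.6 adjusts to 0.75
```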
Quantitative analysis techniques included frequency distributions with percentages, analysis of variance (ANOVA), the Chi-square (χ²) statistic, Pearson's correlation coefficient and multiple regression (Gliner et al., 2009; Morgan, Leech, Gloeckner, & Barrett, 2007). Multiple regression models were applied to determine the effect of each aspect of procurement reforms (independent variables) on expenditure management (dependent variable).
In general form, the models are based on the premise that Y is a function of a set of k independent variables (X_1, X_2, ..., X_k) in a population (Morgan et al., 2007). To express the model in equation form, X_kj denotes the value of the j-th observation of variable X_k, as indicated:

Y_j = β_0 + β_1X_1j + β_2X_2j + ... + β_kX_kj + ε_j

where β_0 is the intercept; β_1 ... β_k are partial regression coefficients; ε_j is the error term; Y_j is the dependent variable; and X_1 ... X_k are independent variables (Morgan et al., 2007; Bryman & Cramer, 1998). In this study, the dependent variable (Y_j) was variation in procurement expenditure, while the independent variables (X_1 ... X_k) included the frequency of tender committee meetings in a quarter year, the number of tender committee members trained in procurement management, the frequency of tender advertisements, the frequency of emergency procurement, the frequency of applying open tender methods and the frequency of tender splitting. The models generated three outputs of interest to this study, namely standardised regression coefficients (beta weights), the adjusted coefficient of determination (R²) and the F statistic.
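A minimal Python sketch of this regression specification is given below, using standardised variables so that the fitted coefficients are beta weights. The variable names and data are illustrative placeholders, since the study's dataset is not public.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative placeholder data for 35 schools; not the study's dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "expenditure_variation": rng.normal(size=35),
    "meetings_per_quarter": rng.poisson(3, size=35),
    "members_trained": rng.integers(0, 9, size=35),
    "tender_adverts": rng.poisson(2, size=35),
    "emergency_procurement": rng.poisson(1, size=35),
    "open_tenders": rng.poisson(2, size=35),
    "tender_splitting": rng.poisson(1, size=35),
})

# Standardise Y and the X's so the fitted coefficients are beta weights.
z = (df - df.mean()) / df.std()
y = z["expenditure_variation"]
X = sm.add_constant(z.drop(columns="expenditure_variation"))

model = sm.OLS(y, X).fit()
print(model.params)                  # beta weights
print(model.rsquared_adj)            # adjusted R^2
print(model.fvalue, model.f_pvalue)  # F statistic and its significance
```

Standardising both the dependent and independent variables before fitting is what turns ordinary regression coefficients into the beta weights interpreted in the paragraphs that follow.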
The effect of independent variables was indicated by the partial regression coefficients associated with each variable. Whereas a negative regression coefficient showed a negative effect, a positive coefficient indicated a positive effect on variation in procurement expenditure. The regression coefficients were standardised to generate beta weights, which tell by how many standard deviation units the dependent variable was likely to change for a unit standard deviation change in an independent variable. The bigger the deviation from equilibrium, the stronger the effect of an independent variable (Morgan et al., 2007; Bryman & Cramer, 1998).
Goodness-of-fit shows how well a set of independent variables incorporated in regression models explains variation in the dependent variable, in this case, expenditure management. In multiple regression models, goodness-of-fit is explained by the coefficient of determination, designated as R². Nevertheless, the adjusted R² provides a more accurate estimate of the explanatory power of a regression model than R² by considering the number of independent variables incorporated in the model. The significance of variation in Y is indicated by the F statistic (Morgan et al., 2007; Bryman & Cramer, 1998). Furthermore, variation in Y between the periods before reforms (1999-2002) and after reforms (2007-2010) was computed using the arithmetic formula:

E_v = (e_1a/p_1a + ... + e_4a/p_4a)/n_a - (e_1b/p_1b + ... + e_4b/p_4b)/n_b

where E_v is the variation in procurement expenditure; e_1a ... e_4a are procurement expenditures for years one to four after reforms; e_1b ... e_4b are procurement expenditures for years one to four before reforms; p_1a ... p_4a are the student populations for years one to four after reforms; p_1b ... p_4b are the student populations for years one to four before reforms; n_a is the number of years under focus after reforms; and n_b is the number of years under focus before reforms. The analysis was based on the assumption that the level of procurement expenditure was a function of student population; that is, schools procure goods, services, and works to meet the needs of students. As student population increases, the level of procurement expenditure is also expected to increase proportionately. Whereas a reduction in procurement expenditure between the two periods signified that the reforms were effective in improving fiscal discipline, an increase or no change indicated lack of effectiveness.
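A short Python sketch of this per-capita computation, following the formula as reconstructed above; the expenditure and enrolment figures below are invented for illustration only.

```python
def expenditure_variation(e_after, p_after, e_before, p_before):
    """Mean per-capita procurement expenditure after reforms minus the
    mean per-capita expenditure before reforms. A negative result
    indicates improved fiscal discipline."""
    per_capita_after = [e / p for e, p in zip(e_after, p_after)]
    per_capita_before = [e / p for e, p in zip(e_before, p_before)]
    return (sum(per_capita_after) / len(per_capita_after)
            - sum(per_capita_before) / len(per_capita_before))

# Invented figures for one school (expenditure in KES, student head counts).
print(expenditure_variation(
    e_after=[9.1e6, 9.4e6, 9.8e6, 10.0e6], p_after=[310, 320, 330, 340],
    e_before=[1.1e7, 1.2e7, 1.2e7, 1.3e7], p_before=[250, 255, 260, 270],
))
```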
Qualitative data were transcribed and analysed using NVivo 10 to identify emerging themes and patterns.
Regarding ethical considerations, the investigator sought informed consent from potential respondents; the process involved briefing them about the study, voluntary participation, withdrawal of consent and confidentiality of information sourced. Ethical clearance was obtained from the University of Nairobi Ethics and Research Committee. Regarding authorisation, a research permit was obtained from the National Commission for Science, Technology, and Innovation (NACOSTI), while an introduction letter was obtained from the University of Nairobi.
Results
The analysis revealed two outstanding patterns of variations in annual per capita procurement expenditure.
Whereas the first pattern shows that procurement expenditure reduced consistently from the period before reforms, to the period during reforms, and further down to the period after reforms, the second pattern indicates that procurement expenditure reduced from the period before reforms to the period during reforms, but later increased during the period after reforms. On average, the analysis of variance (ANOVA) results in Table 2 show that before reforms, the schools recorded an annual per capita procurement expenditure of KES 47,768, which declined to KES 34,625 during reforms and dropped further to KES 30,977 after reforms. In this regard, the analysis obtained a computed F(2, 102) statistic of 4.621 and a ρ-value of 0.012, which indicates that variations in annual per capita procurement expenditure were statistically significant at the 95% confidence level; this in turn suggests that procurement reforms may have significantly influenced the management of expenditure in public secondary schools. The ANOVA results further show that annual per capita procurement expenditure reduced by circa 35%, from KES 47,768 before reforms to KES 30,977 after reforms. Based on this, a computed F(1, 68) statistic of 7.606 and a ρ-value of 0.007 were obtained, which indicates that the variation in per capita procurement expenditure between the two periods is statistically significant at the 99% confidence level. Furthermore, the variation in annual per capita procurement expenditure between the periods before and after reforms was clustered into three categories: <KES 10,000, designated as 'small variation'; KES 10,000 to 19,999, designated as 'average variation'; and KES 20,000+, designated as 'big variation'. Whereas 'small variation' signifies a weak level of fiscal discipline, 'big variation' suggests a strong level of fiscal discipline. Based on this, the results show that of the 35 schools, 24 (68.6%) recorded small variation in annual procurement expenditure, 7 (20.0%) achieved average variation, while 4 (11.4%) experienced big variation.
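The one-way ANOVA reported above can be reproduced on any comparable dataset with scipy; the sketch below uses invented placeholder samples (one per-capita value per school per period, 35 per group to match the F(2, 102) degrees of freedom), not the study's data.

```python
from scipy.stats import f_oneway

# Placeholder per-capita expenditure samples for the three periods
# (before, during, after reforms); 35 illustrative values each.
before = [47768 + d for d in range(-17, 18)]
during = [34625 + d for d in range(-17, 18)]
after = [30977 + d for d in range(-17, 18)]

f_stat, p_value = f_oneway(before, during, after)
# The study reported F(2, 102) = 4.621, p = 0.012 on its own data.
print(f_stat, p_value)
```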
Analysis of the Relationship between Background Profile and Expenditure Management
The study captured various attributes of the schools, including type, category, location, distribution by sub-counties, availability of tender committees and membership of such committees. The study further examined the relationship between such attributes and variation in procurement expenditure. The purpose of the analysis was to identify attributes that were likely to confound the relationship between the frequency of emergency procurement and expenditure management. The results in Table 3 show that among the schools that recorded small variation, 14 (58.3%) were boarding, 7 (29.2%) were day, while 3 (12.5%) provided both day and boarding services. Among those that achieved big variation, 3 (75.0%) were boarding schools, while 1 (25.0%) was a day school. However, the analysis revealed no significant association between type of school and variation in annual procurement expenditure (χ² = 2.263, df = 4 & ρ-value = 0.687). Furthermore, the analysis revealed no significant association between variation in annual procurement expenditure and: category of schools (χ² = 7.013, df = 4 & ρ-value = 0.135); location of schools (χ² = 0.029, df = 2 & ρ-value = 0.986); as well as distribution of schools (χ² = 7.711, df = 14 & ρ-value = 0.904). In this regard, the results suggest that changes in annual procurement expenditure were homogenous across national, county and sub-county schools, as well as across schools located in high and low income zones. In addition, all the sub-counties were homogenous in terms of such changes.
The results further show that all the 35 (100.0%) schools had complied with the requirement of the Procurement Regulations by establishing tender committees to manage procurement and disposal activities. The membership of school tender committees ranged between 6 and 12. In this regard, the results in Table 4 show that mean membership of school tender committees was 8.96 for schools that recorded small variation in expenditure, 8.57 for those that experienced average variation and 8.50 for those with big variation. However, the analysis obtained a computed F(2, 32) statistic of 0.326 and a ρ-value of 0.724, which is not statistically significant, thus suggesting lack of significant variation in membership of school tender committees across the three categories of schools. The results in Table 4 further suggest lack of a significant correlation between the membership of school tender committees and variation in annual procurement expenditure (Pearson correlation coefficient [r] = 0.084; ρ-value = 0.633). In addition, key informant interviews revealed that school administration was represented in tender committees by deputy principals, who, according to the Procurement Regulations, are obligated to chair the committees. In some schools, county and sub-county education officers were co-opted into tender committees as ex-officio members; yet in others, a few BOM members sat in tender committees, which, however, is contrary to provisions of the Procurement Regulations. The membership composition of school tender committees was further faulted for being skewed in favour of teaching staff. Even though non-teaching staff also sat on the committees, they lacked the numerical strength to regulate decisions that go against institutional interests. Moreover, the involvement of teaching staff in procurement activities distracted them from their core business of tending to the academic needs of their students.
Bivariate Analysis of the Frequency of Emergency Procurement and Expenditure Management
The results in Figure 1 show cross-tabulation results between the frequency of emergency procurement in public secondary schools and variation in procurement expenditure. Of the 35 respondents, 21 (60.0%) affirmed that emergency procurement was 'occasionally' practised in their schools. This group consisted of 18 (75.0%) respondents whose schools experienced small variation in procurement expenditure, and 3 (42.9%) whose schools recorded average variation. Contrastingly, 13 (37.1%) respondents indicated that their schools had complied with provisions of Part I, Section 3 (1) of the Procurement Act by not practising emergency procurement. Again, this included 5 (20.8%) respondents whose schools recorded small variation in procurement expenditure, 4 (57.1%) whose schools reported average variation and 4 (100.0%) whose schools achieved big variation. In the schools that had procurement plans, responsibility for developing the plans was attributed to various officers, including procurement officers, as mentioned by 2 (9.1%) respondents; as well as store-keepers, deputy principals and departmental heads, each cited by 1 (4.5%) respondent. Note that this was a multiple response variable. Still on the same aspect, key informants acclaimed procurement planning for improving efficiency in school procurement activities, which in turn enabled the institutions to utilise their budgets astutely. More particularly, procurement planning ensured timely execution of procurement activities, thus eliminating the need for rushed orders. Procurement planning also ensured continuity of supplies and smooth operations without challenges of stockouts. Nonetheless, more than a third of the schools, 13 (37.1%), did not have procurement plans, due to lack of awareness and of the skills to develop them.
Multivariate Analysis of the Frequency of Emergency Procurement and Expenditure Management
In this study, independent variables included the frequency of tender committee meetings in a quarter year, the number of tender committee members trained in procurement management, the frequency of tender advertisements, the frequency of emergency procurement, the frequency of open tenders and the frequency of tender splitting. The analysis generated two regression models. The first model incorporated independent variables only, while the second model incorporated both independent and intervening variables, including student population, school type, school category and income zone.
The models generated three important output indicators: standardised regression coefficients (beta weights), the adjusted coefficient of determination (R²) and the significance of the F statistic. Beta weights showed the effect of each aspect of procurement reforms on expenditure management in terms of direction (either positive or negative) as well as in terms of relative importance. Whereas a negative beta weight suggests a reduction in procurement expenditure, a positive beta weight indicates an increment in the same. In this study, a reduction or increment in procurement expenditure was considered a crucial indicator of how well or poorly procurement reforms had influenced the performance of public secondary schools in terms of expenditure management. Moreover, the adjusted R² shows how well the aspects of procurement reforms explained variation in procurement expenditure, while the significance of the F statistic indicates whether the effect of procurement reforms on expenditure management made statistical sense or not.
The results of the multivariate analysis are summarised in Table 5; they show beta weights, as well as the related t-statistics and ρ-values (Sig.), for each aspect of procurement reforms. Nonetheless, this article discusses the frequency of emergency procurement, being the aspect that influenced the biggest increment in procurement expenditure. In this regard, the analysis obtained a beta weight of 0.352 (t-statistic = 2.596 & ρ-value = 0.015) in model 1. However, this increased to 0.457 (t-statistic = 3.240 & ρ-value = 0.003) with the addition of intervening variables. In both models, the aspect caused an increment of procurement expenditure, which was statistically significant at the 95% confidence level in model 1 and at the 99% confidence level in the second model. This suggests that the addition of intervening variables to the regression model boosted the variable's effect on procurement expenditure. An increment of procurement expenditure signifies a negative effect on expenditure management. Consequently, the investigator failed to reject the null hypothesis (H04), stating that the frequency of emergency procurement negatively affects expenditure management in public secondary schools, for lack of sufficient empirical evidence to warrant such action. On the same note, key informants confirmed that emergency procurement was a common practice in public secondary schools, which enabled accounting officers to source goods and services to address situations at hand without following existing procurement rules and procedures. In this regard, some administrators conveniently failed to procure goods and services in time, in order to create emergencies, during which tenders were awarded to selected suppliers and service providers without proper measures to check against irregularities. Emergency procurement created a leeway for over-expenditure, which in turn contributed to the increment of procurement expenditure during the period under focus. Consequently, controlling the frequency of emergency procurement would be a key step towards effective management of procurement expenditure in public secondary schools.
The relative importance of independent variables, in terms of the effects caused on a dependent variable, is indicated by the magnitude of beta weights. Whereas a negative (-) sign before a beta weight shows a decrement effect on the dependent variable, a positive (+) sign suggests an increment effect. The effect of independent variables is nil at 0.0, but increases away from 0.0 in both directions (±). The bigger the deviation from the equilibrium, the stronger the effect associated with a particular independent variable. Based on this principle, the analysis showed that among the aspects that caused an increment in procurement expenditure, the frequency of emergency procurement (beta weight = 0.457) was more important than the frequency of tender splitting (beta weight = 0.406).
The results further show that model 1 generated an adjusted R² of 0.537, which suggests that the aspects of procurement reforms analysed by the study accounted for up to 53.7% of the variation in procurement expenditure over the reference period. When intervening variables were added to the model, the adjusted R² increased to 0.563, which suggests that model 2 accounted for 56.3% of the variation in procurement expenditure. The results also suggest that both models had a moderate strength in estimating the effect of procurement reforms on expenditure management. Besides, the strength of both models was statistically significant at the 99% confidence level (ρ-value < 0.001).
Discussions and Conclusions
The purpose of this study was to establish the extent to which public secondary schools in Nairobi City County had complied with legislative and policy provisions guiding public procurement reforms in Kenya, as well as the effect of selected aspects of reforms on expenditure management. This article focuses on the frequency of emergency procurement, being the aspect that caused the biggest increment in procurement expenditure. In the public sector, expenditure management is a fundamental aspect of sustainable delivery of quality services; its purpose is to achieve three interconnected objectives: improving fiscal discipline, optimising allocation of resources in line with budgetary policy priorities, and ensuring good operational management. Improving fiscal discipline involves controlling the amount of fiscal resources spent in the procurement of goods, services, and works, with a view to minimising loss through accidental wastage and/or premeditated malpractices.
The findings show that less than one-half of the public secondary schools (37.1%) had complied with the legislative requirement by not practising emergency procurement. Besides, the aspect caused a significant increment of procurement expenditure (beta weight = 0.457, t-statistic = 3.240 & ρ-value = 0.003), which signifies weakness (a negative effect) in the standards of fiscal discipline, and thus, expenditure management. Notably, the provision for emergency procurement is not only delicate, but also vulnerable to misuse by accounting officers through dilatory tactics, with the intention of creating artificial 'urgent need' situations in order to subvert procurement procedures. When used properly, the emergency procurement provision can enable public institutions to save public resources; but when misused, the provision can lead to massive loss of public resources through inflated prices, misguided priorities and bloated expenditure.
Induced emergency procurement is a key manifestation of corruption, which affects all organisations, including public institutions. Price differentials between induced emergency procurement and planned procurement can be as high as tenfold. In situations of induced emergency procurement, contracts are often awarded to the most successful bribers, friends or relatives, and not necessarily to bidders who offer the best price-quality combinations. Under such situations, procuring entities are highly likely to receive goods and services of poor quality, which logically denies them the best value for money. Corruption in induced emergency procurement can also lead to biased allocation of resources, as corrupt accounting officers exaggerate allocations for procurement projects that provide an easy avenue for personal benefit, at the expense of other, more important institutional needs. In view of this, limiting the frequency of emergency procurement is an important step towards effective management of procurement expenditure in public secondary schools, which can be achieved through comprehensive procurement plans and budgets.
Furthermore, about two-thirds of the schools (62.9%) had developed procurement plans, as required by the legislative and policy frameworks governing public procurement. The primary benefit of procurement planning is the elimination of induced emergency procurement, which in turn promotes fiscal discipline and improves expenditure management. Procurement planning also provides opportunities for stakeholders, including the requesting entity, end users, the procurement department, technical experts, and even vendors, to meet and discuss procurement requirements and objectives, as well as to assign timeframes to planned procurement activities, which in turn eliminates cases of emergency procurement. Supporting public schools to develop and implement procurement plans remains crucial for preventing loss of public resources, achieving financial sustainability and improving service delivery.
In view of the above, the Ministry of Education and the Directorate of Public Procurement should invest in sensitisation programmes targeting school tender committees on emergency procurement, particularly focusing on legislative provisions for managing the practice, the consequences of misapplication, and the importance of procurement planning. Again, the stakeholders should focus on training school tender committees on procurement planning, to enable members to acquire skills in how to develop, execute, and manage procurement plans. Equally important is the amendment of procurement laws and policies to grant BOM and PTA members more powers to monitor the activities of school tender committees and accounting officers, and thereby flag situations that may lead to emergency procurement. Procurement planning will ensure timely execution of tender activities, thus eliminating the need for rushed orders and loss of resources. Lastly, it is important for the relevant organs of the Government to disburse funds for Free Day Secondary Education (FDSE) in time to enable tender committees to implement procurement plans. Timely disbursement of funds to schools will help avoid situations where procurement activities are executed in a rush, as well as help identify financing gaps and measures to cope with them.
Table 1. Distribution of sample size based on school category and gender
Table 3. Background attributes of the schools
Table 4. Membership of school tender committees
Grand Gauge-Higgs Unification on $T^2/{\mathbb Z}_3$ via Diagonal Embedding Method
We study a novel six-dimensional gauge theory compactified on the $T^2/{\mathbb Z}_3$ orbifold utilizing the diagonal embedding method. The bulk gauge group is $G\times G\times G$, and the diagonal part $G^{\rm diag}$ remains manifest in the effective four-dimensional theory. Further spontaneous breaking of the gauge symmetry occurs through the dynamics of the zero modes of the extra-dimensional components of the gauge field. We apply this setup to the $SU(5)$ grand unified theory and examine the vacuum structure determined by the dynamics of the zero modes. The phenomenologically viable models are shown, in which the unified symmetry $G^{\rm diag}\cong SU(5)$ is spontaneously broken down to $SU(3)\times SU(2)\times U(1)$ at the global minima of the one-loop effective potential for the zero modes. This spontaneous breaking provides notable features such as a realization of the doublet-triplet splitting without fine tuning and a prediction of light adjoint fields.
Introduction
Higher-dimensional gauge theory has been studied extensively in the last two decades as one of the attractive possibilities for physics beyond the standard model (SM). It is worth noting that a higher-dimensional gauge theory can possess a dynamical mechanism for gauge symmetry breaking via continuous Wilson line phases, called the Hosotani mechanism [1]. It is one of the promising approaches to understanding the origin of the gauge symmetry breaking in the electroweak theory or in the grand unified theory (GUT) [2,3]. The former attempts are called gauge-Higgs unification [4,5]. Hence, various aspects of the higher-dimensional gauge theory with the Hosotani mechanism have been investigated.
The zero modes of the extra-dimensional components of the gauge field become dynamical degrees of freedom, which behave as scalar fields at low energy [6,7]. The zero modes are closely related to the Wilson line phases, and quantum corrections generate the effective potential for the phases. The zero modes can acquire vacuum expectation values (VEVs) at a minimum of the potential and thereby induce gauge symmetry breaking [1]. Interestingly, the gauge symmetry breaking patterns are definitely determined irrespective of the details of the dynamics in the ultraviolet region, thanks to the finiteness of the effective potential for the phases, once we fix the content of matter fields in the theory [8,9]. One thus understands the definite origin of the potential that induces the gauge symmetry breaking.
The zero modes originally belong to the adjoint representation under the gauge group. Thus, it looks attractive and natural to apply the Hosotani mechanism to the spontaneous breaking of a GUT gauge symmetry such as $SU(5)$ [2,3]. We immediately, however, encounter the difficulty that the existence of the scalar zero mode in the adjoint representation tends to be incompatible with chiral fermions, which are required in phenomenologically acceptable models. That is, if one tries to obtain chiral fermions, orbifold compactification with appropriate boundary conditions (BCs) is a possible framework, but in that case the scalar zero mode of the adjoint representation is projected out. Thus, the $SU(5)$ symmetry is broken by the BCs in many higher-dimensional GUT models [11]. An alternative direction is to consider GUT models with higher-rank gauge groups [12] that are spontaneously broken by VEVs of scalar zero modes belonging to non-adjoint representations [13].
The diagonal embedding method [14] makes the adjoint zero mode exist and overcomes the difficulty in applying the Hosotani mechanism to the breaking of the $SU(5)$ symmetry in the presence of chiral fermions. Though the method was originally invented in the context of the heterotic string theory, it is possible to apply it to higher-dimensional gauge theory. In fact, we have obtained five-dimensional GUT models compactified on the orbifold $S^1/\mathbb{Z}_2$, in which the $SU(5)$ gauge symmetry is broken down to that of the SM by the Hosotani mechanism without contradicting the existence of chiral fermions [15]. We call this theoretical framework the type A(djoint) grand gauge-Higgs unification. Phenomenologically notable aspects of the type A grand gauge-Higgs unification with $S^1/\mathbb{Z}_2$ compactification have been investigated [16,17,18,19]. For other types of the grand gauge-Higgs unification, referred to also as gauge-Higgs grand unification, see Ref. [5], where the Hosotani mechanism is utilized to break the electroweak symmetry.
What is striking is that the effective potential for the phases obtained in the diagonal embedding method maintains the desirable property of finiteness. Hence, the VEV of the zero mode can be determined by minimizing the effective potential for a fixed matter content, inducing the GUT gauge symmetry breaking without being affected by the physics in the ultraviolet region. Furthermore, the diagonal embedding method can straightforwardly be extended to the case of more complex orbifold compactifications such as $T^2/\mathbb{Z}_3$.
In this paper, we shall study the gauge symmetry breaking of the six-dimensional (6D) $SU(5)$ gauge theory compactified on $T^2/\mathbb{Z}_3$ in the type A grand gauge-Higgs unification. In the string-theory counterpart of the $\mathbb{Z}_3$ model, the gauge symmetry is realized at a level-3 affine Lie algebra or Kac-Moody algebra. We note that there is a conjecture that the generation number is a multiple of the level [20]. Though the generation number is just a free parameter set by hand within field theory, it is meaningful to construct field-theoretical models that can be considered as effective theories of string-theoretical models with three generations. The 6D model compactified on the $T^2/\mathbb{Z}_3$ orbifold is their simplest example. It is important and interesting to study the $T^2/\mathbb{Z}_3$ compactification from the side of quantum gauge field theory. One can study the breaking of the $SU(5)$ gauge symmetry by minimizing the one-loop effective potential for the phases. We shall determine the gauge symmetry breaking patterns through the Hosotani mechanism for various matter contents from the one-loop effective potential and find matter contents that result in the SM gauge symmetry. We also discuss phenomenological implications such as four-dimensional (4D) chiral fermions, fermion masses, proton decay, and so on.

This paper is organized as follows. In the next section, we introduce the basic aspects of the orbifold $T^2/\mathbb{Z}_3$. We discuss the field-theoretical realization of the diagonal embedding method, focusing on the gauge fields on the orbifold $T^2/\mathbb{Z}_3$, in Sec. 3. This section contains the fundamental ingredients for studying the gauge symmetry breaking in our model. The matter fields are introduced in Sec. 4, where the BCs and the mass spectrum are studied. We compute the effective potential for the Wilson line phases in the one-loop approximation and study the gauge symmetry breaking patterns, including the breaking down to the gauge symmetry of the SM, in Secs. 5 and 6. We also discuss the phenomenological aspects of our model in Sec. 6. The final section is devoted to conclusions and discussions. Some details of the calculations are given in the appendices.
2 The $T^2/\mathbb{Z}_3$ orbifold
We consider the orbifold $T^2/\mathbb{Z}_3$ as the compact extra dimensions. To deal with coordinate vectors in $T^2/\mathbb{Z}_3$, it is convenient to use the basis vectors $e_i$ and the metric $g_{ij}$, which satisfy

$$e_i \cdot e_j = g_{ij}, \qquad e_{i+2} = -e_i - e_{i+1}, \qquad e_{i+3} = e_i, \tag{2.1}$$

where $i \in \mathbb{Z}$. Among the $e_i$, we can choose $e_1$ and $e_2$ as a linearly independent set. A coordinate vector $y$ in $T^2/\mathbb{Z}_3$ is spanned by the basis vectors as

$$y = y^i e_i = y^1 e_1 + y^2 e_2, \qquad y^i \in \mathbb{R}, \tag{2.2}$$

and it satisfies the following identifications:

$$y = y^i e_i \sim y + 2\pi R\, e_j \quad (j = 1, 2), \tag{2.3}$$
$$y = y^i e_i \sim y^i e_{i+1} = y^1 e_2 + y^2 e_3 = -y^2 e_1 + (y^1 - y^2)\, e_2, \tag{2.4}$$

where $R$ parametrizes the size of the compact space. Contractions between upper and lower indices $i$ imply the summation over $i = 1, 2$ hereafter. By requiring that the metric $g_{ij}$ be invariant under the transformation $e_i \to e_{i+1}$, we can fix it as

$$g_{ij} = \begin{pmatrix} 1 & -1/2 \\ -1/2 & 1 \end{pmatrix}, \tag{2.5}$$

up to an overall constant, which can be absorbed into the definition of $R$.
The two-dimensional Cartesian coordinates, which we denote by $x^5$ and $x^6$, are related to the oblique coordinates $y^1$ and $y^2$. We take the basis such that $x^5 = y^1$ and $x^6 = 0$ hold for $y^2 = 0$:

$$x^5 = y^1 - \frac{1}{2}\, y^2, \tag{2.6}$$
$$x^6 = \frac{\sqrt{3}}{2}\, y^2. \tag{2.7}$$

In light of Eqs. (2.3) and (2.4), let us introduce the operators $\hat T_j$ ($j = 1, 2$) and $\hat S_0$ that act on the coordinates $y^i$ as

$$\hat T_j:\ y^i \to y^i + 2\pi R\, \delta^i_j, \tag{2.8}$$
$$\hat S_0:\ (y^1, y^2) \to (-y^2,\ y^1 - y^2). \tag{2.9}$$

The identifications in Eqs. (2.3) and (2.4) are rewritten as

$$y \sim \hat T_j\, y \sim \hat S_0\, y. \tag{2.10}$$

Figure 1: The oblique coordinate system on $T^2/\mathbb{Z}_3$. The gray shaded region is an independent domain of the torus $T^2$. The green shaded region is a fundamental domain of the orbifold $T^2/\mathbb{Z}_3$. The small circles correspond to the fixed points.
We can define an independent domain of the $T^2$ torus regarding the identifications given by $\hat T_1$ and $\hat T_2$, where one of the domains is shown as the gray shaded region in Fig. 1. The additional identification given by $\hat S_0$ defines the orbifold $T^2/\mathbb{Z}_3$, which has the fundamental domain shown in Fig. 1 by the green shaded region.
There exist fixed points on the orbifold that are invariant, up to the translations $\hat T_1$ and $\hat T_2$, under the discrete rotation $\hat S_0$. That is, the fixed points are given by the solutions to the following equation:

$$\hat S_0\, y = y + 2\pi R\, (n_1 e_1 + n_2 e_2), \qquad n_1, n_2 \in \mathbb{Z}. \tag{2.11}$$

We denote the three fixed points on the fundamental domain of $T^2/\mathbb{Z}_3$ by $y^i_{f(r)}$ ($r = 0, 1, 2$), which are given by

$$y^i_{f(0)} = (0,\, 0), \tag{2.12}$$
$$y^i_{f(1)} = \frac{2\pi R}{3}\, (2,\, 1), \tag{2.13}$$
$$y^i_{f(2)} = \frac{2\pi R}{3}\, (1,\, 2). \tag{2.14}$$

Any other fixed points are given by the translations of $y^i_{f(r)}$ generated by $\hat T_1$ and $\hat T_2$. In Fig. 1, the fixed points are shown by the small circles.
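As a quick numerical cross-check of Eqs. (2.11)-(2.14) — our illustration, not part of the original paper — the sketch below applies the rotation $\hat S_0$ in the oblique coordinates and verifies that each candidate fixed point returns to itself up to a lattice translation; coordinates are measured in units of $2\pi R$.

```python
import numpy as np

def s0(y):
    """Z3 rotation in oblique coordinates: (y1, y2) -> (-y2, y1 - y2)."""
    y1, y2 = y
    return np.array([-y2, y1 - y2])

# Candidate fixed points in units of 2*pi*R.
fixed_points = [np.array([0.0, 0.0]),
                np.array([2/3, 1/3]),
                np.array([1/3, 2/3])]

for y in fixed_points:
    shift = s0(y) - y  # must be an integer lattice vector in these units
    print(y, np.allclose(shift, np.round(shift)))
```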
It is useful to introduce the dual basis vectors $\tilde e^i$ satisfying

$$e_i \cdot \tilde e^j = \delta_i^j, \tag{2.15}$$

where $\delta_i^j$ is the Kronecker delta, and

$$g^{ij} \equiv \tilde e^i \cdot \tilde e^j = \frac{4}{3} \begin{pmatrix} 1 & 1/2 \\ 1/2 & 1 \end{pmatrix}. \tag{2.16}$$

Note that $g^{ij} e_j = \tilde e^i$ and $g_{ij}\, \tilde e^j = e_i$ hold. We can introduce a dual vector $\tilde k$ that is spanned by the dual basis vectors as

$$\tilde k = k_i\, \tilde e^i. \tag{2.17}$$

Then, one sees $\tilde k \cdot y = k_i y^i \in \mathbb{R}$. As discussed in Appendix A, $\tilde e^i$ is a natural basis for a Kaluza-Klein (KK) discretized momentum, which is mapped to a point on the lattice spanned by $\tilde e^i$ in a suitable normalization.
The identification in Eq. (2.4) is related to the basis change $e_i \to e_{i+1}$. Under the basis change, the dual basis vectors also change, $\tilde e^i \to \tilde e'^i$. Requiring $e_{i+1} \cdot \tilde e'^j = \delta_i^j$, we obtain $\tilde e'^1 = -\tilde e^1 + \tilde e^2$ and $\tilde e'^2 = -\tilde e^1$. Thus, corresponding to Eq. (2.4), we obtain the identification for the dual vector as

$$\tilde k = k_i\, \tilde e^i \sim k_1\, \tilde e'^1 + k_2\, \tilde e'^2 = -(k_1 + k_2)\, \tilde e^1 + k_1\, \tilde e^2. \tag{2.18}$$

Then, the action of the operator $\hat S_0$ on the coordinates of dual vectors is naturally defined by

$$\hat S_0:\ (k_1, k_2) \to (-k_1 - k_2,\ k_1), \tag{2.19}$$

where we have also defined

$$k_{i+2} = -k_i - k_{i+1}, \qquad k_{i+3} = k_i. \tag{2.20}$$

From the above, one sees

$$\hat S_0 \tilde k \cdot \hat S_0\, y = \tilde k \cdot y. \tag{2.21}$$

We use Eq. (2.21) for deriving the KK expansions of fields discussed in Appendix A.
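A small numerical check (ours, not the paper's) that the reconstructed metric and dual metric in Eqs. (2.5) and (2.16) are mutually inverse, and that the induced $\mathbb{Z}_3$ action on the dual coordinates in Eq. (2.19) preserves the pairing $\tilde k \cdot y = k_i y^i$, as stated in Eq. (2.21).

```python
import numpy as np

g = np.array([[1.0, -0.5], [-0.5, 1.0]])            # g_ij, up to a constant
g_dual = (4 / 3) * np.array([[1.0, 0.5], [0.5, 1.0]])  # g^ij
print(np.allclose(g @ g_dual, np.eye(2)))            # True: mutually inverse

# S0 on y^i and the induced action on k_i preserve the pairing k.y.
S_y = np.array([[0, -1], [1, -1]])   # (y1, y2) -> (-y2, y1 - y2)
S_k = np.array([[-1, -1], [1, 0]])   # (k1, k2) -> (-k1 - k2, k1)
y = np.array([0.3, 0.7])
k = np.array([2.0, -1.0])
print(np.isclose((S_k @ k) @ (S_y @ y), k @ y))      # True: pairing invariant
```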
3 The diagonal embedding method on $M^4 \times T^2/\mathbb{Z}_3$: gauge fields
Lagrangian for gauge fields
We start to discuss the gauge theory with the field-theoretical realization of the diagonal embedding method on $M^4 \times T^2/\mathbb{Z}_3$, where $M^4$ is the Minkowski spacetime. In the following, we denote the 6D orthogonal coordinates by $x^M = (x^\mu, x^5, x^6)$ ($\mu = 0, 1, 2, 3$). For the extra-dimensional coordinates, we also use the oblique coordinates $y^1$ and $y^2$ in Eqs. (2.6)-(2.7) instead of $x^5$ and $x^6$. The metric of $M^4$ is $\eta_{\mu\nu} = \mathrm{diag}(1, -1, -1, -1)$, and $g_{ij}$ is given in Eq. (2.5).
The action is given by the Lagrangians for the gauge fields $\mathcal{L}_{\rm YM}$ and the matter fields $\mathcal{L}_{\rm mat}$, which will be discussed in the next section, as

$$S = \int d^4x \int d^2y\ \sqrt{\det g_{ij}}\ \left( \mathcal{L}_{\rm YM} + \mathcal{L}_{\rm mat} \right), \tag{3.1}$$

where $\det g_{ij} = 3/4$. The diagonal embedding method on the orbifold requires that the theory respect three copies of the gauge symmetry $G$ and the global symmetry $\mathbb{Z}_3^{\rm (ex)}$ that permutes the three copies cyclically. Therefore, let us introduce the Lagrangian for the gauge fields as

$$\mathcal{L}_{\rm YM} = -\frac{1}{2} \sum_{k=1}^{3} \mathrm{tr}\, F^{(k)}_{MN} F^{(k)MN}, \qquad F^{(k)}_{MN} = \partial_M A^{(k)}_N - \partial_N A^{(k)}_M - i g\, [A^{(k)}_M, A^{(k)}_N], \tag{3.2}$$

where $g$ is the gauge coupling constant. The gauge fields are expanded as $A^{(k)}_M = A^{(k)a}_M t^{(k)}_a$, where the indices $a$ run from 1 to the dimension of the Lie algebra of $G$, and the summation over $a$ is implied. The operators $t^{(k)}_a$ ($k = 1, 2, 3$) are representation matrices of the generators. We adopt the convention that the matrices satisfy the following relations:

$$[\, t^{(k)}_a,\, t^{(l)}_b \,] = i\, \delta^{kl} f_{ab}{}^{c}\, t^{(k)}_c, \qquad \mathrm{tr}\left( t^{(k)}_a\, t^{(l)}_b \right) = \frac{1}{2}\, \delta^{kl}\, \delta_{ab}, \tag{3.3}$$

where $f_{ab}{}^{c}$ is the structure constant. In Eqs. (3.2) and (3.3), the trace is taken over the representation space.
The Lagrangian in Eq. (3.2) has the gauge symmetry $G \times G \times G$ and the global symmetry $\mathbb{Z}_3^{\rm (ex)}$. We define the gauge transformation of the gauge field as

$$\delta A^{(k)a}_M = \partial_M \alpha^{(k)a} + g f_{bc}{}^{a}\, A^{(k)b}_M\, \alpha^{(k)c}, \tag{3.4}$$

where $\alpha^{(k)a}(x)$ are gauge parameters. To define the global $\mathbb{Z}_3^{\rm (ex)}$ transformation of the gauge field, it is helpful to extend the range of the index $k \in \{1, 2, 3\}$ to $k \in \mathbb{Z}$ and to introduce the periodicity for the index $k$, e.g., $A^{(k+3)a}_M = A^{(k)a}_M$. Hereafter, we use this notation. Then, we can write the global $\mathbb{Z}_3^{\rm (ex)}$ transformation of the gauge field as follows:

$$\mathbb{Z}_3^{\rm (ex)}:\ A^{(k)a}_M \to A^{(k+1)a}_M. \tag{3.5}$$

Under the transformations in Eqs. (3.4) and (3.5), the Lagrangian in Eq. (3.2) is invariant.
Using the above notation, we can define $A^{[p]a}_M$, the eigenstates of the transformation in Eq. (3.5), as

$$A^{[p]a}_M = \frac{1}{\sqrt{3}} \sum_{k=1}^{3} \omega^{pk}\, A^{(k)a}_M, \qquad \omega = e^{2\pi i/3}, \tag{3.6}$$

which pick up the phase $\omega^{-p}$ under the transformation in Eq. (3.5).
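The discrete Fourier transform over the copy index $k$ in Eq. (3.6) can be illustrated numerically; the sketch below is ours, and the $1/\sqrt{3}$ normalization and sign conventions follow our reconstruction rather than being guaranteed to match the paper's exact conventions. It confirms that a cyclic shift of the three copies acts on each eigenstate as multiplication by a phase.

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)

def eigenstates(A):
    """A: array of the three copies A^(1..3); returns A^[p] for p = 0, 1, 2.
    Normalization and sign conventions are illustrative."""
    k = np.arange(1, 4)
    return np.array([(omega ** (p * k) * A).sum() / np.sqrt(3)
                     for p in range(3)])

A = np.array([1.0 + 0j, 2.0, -0.5])
shifted = np.roll(A, -1)  # cyclic permutation A^(k) -> A^(k+1)
for p, (a, b) in enumerate(zip(eigenstates(A), eigenstates(shifted))):
    print(p, np.isclose(b, omega ** (-p) * a))  # phase omega^(-p) per mode
```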
Orbifold boundary conditions and residual gauge symmetries
In theories on the orbifold, field values are constrained since the extra-dimensional coordinates obey the identifications discussed in the previous section. To clarify the constraints, we define the BCs [21] for the gauge fields. As discussed in Sec. 2, we treat $\hat T_1$ and $\hat S_0$ as the independent operators and define the BCs for the gauge fields $A^{(k)a}_M(x^\mu, y^i)$ as follows:

$$A^{(k)a}_M(x^\mu, \hat T_1\, y) = A^{(k)a}_M(x^\mu, y), \tag{3.7}$$
$$A^{(k)a}_M(x^\mu, \hat S_0\, y) = (\hat S_0)_M{}^{N}\, A^{(k+1)a}_N(x^\mu, y), \tag{3.8}$$

where $(\hat S_0)_M{}^{N}$ acts trivially on the 4D index $\mu$ and rotates the extra-dimensional indices according to Eq. (2.9). In general, BCs can act nontrivially on the representation space of not only the discrete group $\mathbb{Z}_3^{\rm (ex)}$ but also the gauge group $G$. Nevertheless, it should be emphasized that we can always take the trivial BCs for $G$, as in Eqs. (3.7) and (3.8), without loss of generality. This is understood as follows. Although one can introduce nontrivial transformations in the representation space of $G$ [21,22], which we here call the gauge twists, the nontrivial gauge twists do not affect the low-energy physics in the present case. The gauge twist with respect to $\hat S_0$ introduces just a difference among the bases in the representation space of the generators $t^{(1)}_a$, $t^{(2)}_a$, and $t^{(3)}_a$. Such a difference can always be absorbed into the redefinition of the generators $t^{(k)}_a$. The gauge twist with respect to $\hat T_1$ can be absorbed by the continuous Wilson line phases [22], which will be discussed in detail in the next section, through gauge transformations with gauge parameters depending on the extra-dimensional coordinates. Then, if the BCs are the same up to the gauge twist with respect to $\hat T_1$, these BCs are said to belong to the same equivalence class [23,24,25]. As seen below, the vacuum is determined by a nontrivial expectation value of the Wilson line phases. It is known that BCs in an equivalence class describe the same low-energy physics through the dynamics of the Wilson line phases determined by the effective potential generated by quantum corrections [24].⁵

⁵ We introduce the twist for $\mathbb{Z}_3^{\rm (ex)}$ only associated with $\hat S_0$ in Eqs. (3.7) and (3.8). One may consider a $\mathbb{Z}_3^{\rm (ex)}$ twist associated with $\hat T_1$, which cannot be absorbed by the Wilson line phases.
From the BCs and Eq. (3.6), it follows that

$$A^{[p]a}_M(x^\mu, \hat T_1\, y) = A^{[p]a}_M(x^\mu, y), \tag{3.9}$$
$$A^{[p]a}_\mu(x^\mu, \hat S_0\, y) = \omega^{-p}\, A^{[p]a}_\mu(x^\mu, y). \tag{3.10}$$

The $\mathbb{Z}_3$ transformation of $y^i$ generated by $\hat S_0$ is discussed in Sec. 2 and is contained in the $SO(2)$ rotations that are part of the 6D Lorentz transformation. Hence, the extra-dimensional components of the gauge field transform nontrivially under the $\mathbb{Z}_3$ transformation, and thus $A^{[p]a}_{y^i}$ are not eigenstates of the BC for $\hat S_0$ in Eq. (3.10). We refer to this $\mathbb{Z}_3$ subgroup of the $SO(2)$ as $\mathbb{Z}_3^{\rm (L)}$ and define the combinations $A^{(k)a}_{[q]}$ as

$$A^{(k)a}_{[q]}(x^\mu, y^i) = \frac{1}{\sqrt{3}} \sum_{\ell=1}^{3} \omega^{q\ell}\, A^{(k)a}_{y^\ell}(x^\mu, y^i), \tag{3.11}$$

where $q \in \mathbb{Z}$ and the superscript of $y$ takes $\ell = 1, 2, 3$, following the same cyclic conventions as $e_\ell$, whereas that of $y^i$ takes $i = 1, 2$, as explained in Sec. 2. The normalization in Eq. (3.11) is fixed by

$$A^{(k)a}_{y^i}(x^\mu, y^i) = \frac{1}{\sqrt{3}} \sum_{q} \omega^{-q i}\, A^{(k)a}_{[q]}(x^\mu, y^i). \tag{3.12}$$

Namely, $A^{(k)a}_{[q]}$ has the eigenvalue $\omega^{-q}$ under the $\mathbb{Z}_3^{\rm (L)}$ transformation.
From the above discussions, the eigenstates $A^{[p]a}_{[q]}$ of the BCs are naturally defined as

$$A^{[p]a}_{[q]} = \frac{1}{\sqrt{3}} \sum_{k=1}^{3} \omega^{pk}\, A^{(k)a}_{[q]}. \tag{3.13}$$

Inversely, it also follows that

$$A^{(k)a}_{[q]} = \frac{1}{\sqrt{3}} \sum_{p} \omega^{-pk}\, A^{[p]a}_{[q]}. \tag{3.14}$$

Under the combined $\mathbb{Z}_3^{(-)}$ and $\mathbb{Z}_3^{(+)}$ transformations, built from $\mathbb{Z}_3^{\rm (ex)}$ and $\mathbb{Z}_3^{\rm (L)}$, $A^{[p]a}_{[q]}$ has the charges $\omega^{p-q}$ and $\omega^{p+q}$, respectively. The BCs for $\hat S_0$ introduced in Eqs. (3.7) and (3.8) are regarded as the twist for $\mathbb{Z}_3^{(+)}$, and the zero mode is neutral under $\mathbb{Z}_3^{(+)}$. The BCs determine the zero modes of the gauge field. The low-energy gauge symmetry associated with the zero modes of the 4D components of the gauge field is referred to as the residual gauge symmetry. To clarify the residual gauge symmetry, we focus on the covariant derivative:

$$D_M = \partial_M - i g \sum_{k=1}^{3} A^{(k)a}_M\, t^{(k)}_a, \tag{3.15}$$

where we have introduced

$$t_a = \frac{1}{\sqrt{3}} \left( t^{(1)}_a + t^{(2)}_a + t^{(3)}_a \right), \qquad t^{[p]}_a = \frac{1}{\sqrt{3}} \sum_{k=1}^{3} \omega^{pk}\, t^{(k)}_a. \tag{3.16-3.17}$$

The generator $t_a$ has a proper normalization and satisfies

$$[\, t_a,\, t^{[p]}_b \,] = i f_{ab}{}^{c}\, t^{[p]}_c. \tag{3.18}$$

The gauge fields $A^{[\pm 1]a}_\mu$ are decoupled from the effective theory since they have no zero modes. Hence, the residual gauge symmetry is the diagonal part of $G \times G \times G$ generated by $t_a$. We denote this diagonal part by $G^{\rm diag}$. From the commutation relation in Eq. (3.18) for $p = 0$ and $p = \pm 1$, we see that $t^{[\pm 1]}_a$ transforms as the adjoint representation under the residual gauge symmetry $G^{\rm diag}$.
Wilson line phases and spontaneous symmetry breaking
Let us focus on the zero mode of the extra-dimensional component of the gauge field. As discussed in the previous subsection, only the components neutral under $\mathbb{Z}_3^{(+)}$ have zero modes. We also introduce the parametrization of the VEVs of the zero mode through a complex variable $a^a_z$ and its projections

$$\tilde a^a_\ell = \mathrm{Re}\left( \bar\omega^{\ell - 1}\, a^a_z \right), \tag{3.20}$$

where $\bar\omega = e^{-2\pi i/3}$.⁶ From Eq. (3.20), one sees that $\tilde a^a_\ell$ has the periodicity under the shift of its subscript, $\tilde a^a_{\ell+3} = \tilde a^a_\ell$.

Let us consider the Wilson line phases defined with closed paths on the orbifold $T^2/\mathbb{Z}_3$. We denote the three distinct noncontractible cycles by $C_\ell$ ($\ell = 1, 2, 3$). The cycle $C_1$ is defined by the path from $y^1 = 0$ to $2\pi R$, while keeping $y^2 = 0$. The cycle $C_2$ is defined by the path from $y^2 = 0$ to $2\pi R$, while keeping $y^1 = 0$. The cycle $C_3$ is defined by the path from $-y^1 - y^2 = 0$ to $2\pi R$, while keeping $y^1 = y^2$. By using them, we define the Wilson line phase factors $W_\ell$ as

$$W_\ell = P \exp\left( i g \int_{C_\ell} dy^i\, \langle A_{y^i} \rangle \right), \qquad \ell = 1, 2, 3, \tag{3.21-3.22}$$

and we also define the Wilson line phases $\Theta_\ell$ as the corresponding exponents, $W_\ell = \exp(2\pi i\, \Theta_\ell)$ (3.23). From the above, we find $\Theta_{\ell+k} = \omega^k \Theta_\ell$, which implies $\Theta_1 + \Theta_2 + \Theta_3 \propto 1 + \omega + \bar\omega = 0$. Let us note that the phase factors in Eq. (3.21) have physical consequences, rather than the phases in Eq. (3.23) [26]. Under $\mathbb{Z}_3^{(-)}$, $A^{[p]}$ has the eigenvalue $\omega^{2p}$, and one sees that the gauge fields and the phases transform in such a way that only the aligned configuration survives as a symmetric vacuum. Thus, the vacuum with the alignment $W_1 = W_2 = W_3$ is distinguished in view of the symmetry and is provided by $\tilde a^a_\ell - \tilde a^a_{\ell+1} = 0$ (mod 1).

The VEVs of the Wilson line phases are dynamically determined. Thus, we focus on the potential for the zero mode of $A^{(k)a}_{y^i}$. In the present case, $\mathcal{L}_{\rm YM}$ involves a nonvanishing potential for $A^{(k)a}_{y^i}$ at the classical level.⁷ From Eqs. (3.2) and (3.3), we obtain the tree-level potential in terms of the extra-dimensional field strength (3.24).

⁶ We have determined the normalization of $\tilde a^a_\ell$ in Eq. (3.20) so that the Wilson line phase factors defined in Eq. (3.21) are invariant under integer shifts of $\tilde a^a_\ell$ in the $G = SU(N)$ case, where the lengths of the root vectors are taken to be 1. Namely, the Cartan generator $H$ in the fundamental representation of the $SU(2)$ Lie algebra associated with a root vector is chosen as $H = \mathrm{diag}(1, -1)/2$.

⁷ In five-dimensional models compactified on the $S^1/\mathbb{Z}_2$ orbifold, there is no tree-level potential for the zero modes of the extra-dimensional components of the gauge field alone, although the zero modes can have tree-level potentials in supersymmetric models with the help of additional scalars belonging to vector multiplets [27].
The VEVs of the field strength tensors are written in terms of the Wilson line phases as

$$\langle F_{y^1 y^2} \rangle \propto [\, \Theta_1,\, \Theta_1^\dagger \,], \tag{3.25}$$

where we have used Eq. (3.23). Therefore, the tree-level potential for the Wilson line phases is given by

$$V_{\rm tree} \propto \mathrm{tr}\, [\, \Theta_1,\, \Theta_1^\dagger \,]\, [\, \Theta_1,\, \Theta_1^\dagger \,]^\dagger. \tag{3.26}$$

Note that the tree-level potential is positive definite and has flat directions. On the flat directions, $[\Theta_1, \Theta_1^\dagger] = 0$ is satisfied, and hence the potential is minimized as $V_{\rm tree} = 0$; in this case, $\Theta_1$ is a normal matrix. There are quantum corrections to the effective potential for the phases. As discussed above, the tree-level potential is minimized along the flat directions. Due to the loop factors, the quantum corrections are generally suppressed compared to the tree-level contribution when the latter is nonvanishing; for the quadratic terms, the tree-level contribution vanishes even along the non-flat directions.⁸ Thus, we approximate that the minimum resides in the flat directions, so that $[\Theta_1, \Theta_1^\dagger] = 0$ holds even when the quantum corrections are incorporated. In this case, we can diagonalize $\Theta_\ell$ by $G^{\rm diag}$ transformations without loss of generality.
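A toy check (our illustration; the overall normalization is arbitrary) that the commutator form of the tree-level potential in Eq. (3.26) vanishes exactly on configurations with $[\Theta_1, \Theta_1^\dagger] = 0$ and is positive otherwise.

```python
import numpy as np

def v_tree(theta):
    """Positive-definite commutator potential, up to normalization."""
    c = theta @ theta.conj().T - theta.conj().T @ theta  # [Theta, Theta^dag]
    return np.real(np.trace(c @ c.conj().T))

diag = np.diag([0.3 + 0.1j, -0.3 - 0.1j])      # normal matrix: flat direction
nondiag = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: non-flat direction
print(v_tree(diag), v_tree(nondiag) > 0)        # prints: 0.0 True
```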
The flat directions of the tree-level potential are no longer flat in the effective potential. If some nontrivial values of the phase degrees of freedom $\Theta_\ell + \Theta_\ell^\dagger$ are determined by the quantum corrections to the potential, the residual symmetry $G^{\rm diag}$ is spontaneously broken to $G_0$, whose elements and Lie algebra $\mathfrak{g}_0$ are given by

$$G_0 = \{\, g \in G^{\rm diag} \mid g\, W_j\, g^{-1} = W_j \ \ \text{for } j = 1, 2 \,\}, \qquad \mathfrak{g}_0 = \{\, \alpha^a t_a \mid [\alpha^a t_a,\, W_j] = 0 \ \ \text{for } j = 1, 2 \,\}. \tag{3.27}$$

In this case, the zero modes of the gauge fields $A^{[0]a}_\mu$ related to the broken generators corresponding to $G^{\rm diag}/G_0$ acquire masses at low energy. This is understood as follows. By using $y^i$-dependent gauge transformations, we can always choose a gauge in which the nontrivial VEVs are removed, whereby the zero modes of the broken generators are projected out by the BCs. In this way, the spontaneous symmetry breaking can generally be triggered by nontrivial VEVs of Wilson line phases.
Since Θ_ℓ + Θ†_ℓ is diagonal, we can expand them by the elements of the Cartan subalgebra h ⊂ g, where g is the Lie algebra of G. We denote the generators in h by H^â (â = 1, ..., r), where r is the rank of g. Hence, we obtain Eqs. (3.28) and (3.29). To determine the VEVs of the Wilson line phases, we should evaluate the effective potential for a^â. The quantum corrections to the effective potential depend on the matter content of the theory. Thus, we discuss bulk matter fields in the next section, and the one-loop corrections are studied in Sec. 5.

4 The diagonal embedding method on M⁴ × T²/Z₃: bulk matter fields

Let us start to discuss bulk matter fields. The invariance of the Lagrangian under the Z₃^(ex) transformation restricts the matter content of the theory. We denote the representation of a matter field under the bulk gauge symmetry G × G × G by (R₁, R₂, R₃). To keep the Z₃^(ex) symmetry of the theory, matter fields should be incorporated as a set of fields whose representations (R₁, R₂, R₃) are cyclically permuted into one another. We refer to this set of fields as a Z₃^(ex) threefold. However, there is an exception; if a field belongs to a representation with R₁ = R₂ = R₃, we can incorporate a single field keeping Z₃^(ex). We refer to a field of the type (R, R, R) as a Z₃^(ex) onefold.
Lagrangian for bulk scalar fields
As the simplest example, we first discuss a threefold scalar Φ^(k)_R (k = 1, 2, 3), which belongs to the representations in Eq. (4.1), where 1 means the singlet under G. Their components are denoted by (Φ^(k)_R)^α. Under the Z₃^(ex) transformation, the threefold scalar can be defined to transform with a phase factor ω^p. For a real field such as the gauge field, the integer p should be 0. For complex scalars, the phase factor ω^p can be absorbed by redefinitions of (Φ^(k)_R)^α. From the above definitions, one sees that the Lagrangian in Eq. (4.2) is Z₃^(ex) invariant, where the repeated sets of upper and lower indices are summed. The representation matrices of the generators of G on R are denoted by T^a_R. Next, let us discuss a more general case. Let Φ^(k)_{R₁₂₃} (k = 1, 2, 3) be a threefold scalar that belongs to the representations in Eq. (4.3). We denote the elements of the representation matrices by T^a_{R_i}. We introduce the convenient notation Φ^(k+3) = Φ^(k), where the latter represents the components in Eq. (4.4), so that the transformation laws can be summarized compactly. We note that the phase factor ω^p appearing in the above can be absorbed by field redefinitions in the case of complex scalars. The Z₃^(ex)-invariant Lagrangian is given with the covariant derivatives written as in Eqs. (4.10)–(4.12). Under the Z₃^(ex) transformation, we find that these covariant derivatives transform in the same way as the fields themselves, (D^(k)Φ^(k)) → (D^(k+1)Φ^(k+1)) up to the phase ω^p, where we use the same notation for the indices as for (Φ^(k))^α. This transformation law helps us to see the Z₃^(ex) invariance of the above Lagrangian.
Let us discuss the irreducible decomposition of Φ^(k)_{R₁₂₃}. It transforms under G_diag as the common reducible direct product representation R₁ ⊗ R₂ ⊗ R₃, which can be decomposed into the direct sum of irreducible representations R̃_i (i = 1, ..., n) as in Eq. (4.13). This ensures that, in the representation space, there exist linear transformations that realize Eq. (4.14), where α̃_i runs from 1 to dim(R̃_i).⁹ Thus, we can find a basis in the representation space such that each irreducible representation transforms covariantly under the Z₃^(ex) transformation. The above discussion means that a general threefold scalar transforms under G_diag as a set of threefold scalars of the type in Eq. (4.1); this is schematically written as Eq. (4.15). We should note that the above relation for matter fields is limited to the transformation properties under G_diag, while the couplings between the matter fields and the Wilson line phases, which belong to (G × G × G)/G_diag, are slightly modified from the above. We will discuss the modification with an explicit example in the next section.
Finally, let us discuss the onefold scalar Φ_{R³}, which belongs to the representation (R, R, R). The components of the onefold are denoted by (Φ_{R³})_{α₁α₂α₃}, with a definite transformation law under the Z₃^(ex) transformation. We note that the phase factor ω^p cannot be absorbed into field redefinitions in the onefold case. One may instead consider the threefold scalar of the representation R₁ = R₂ = R₃, whose Z₃^(ex) transformation law is given accordingly; the linear transformations that give Eq. (4.14) generally depend on k, and a suitable combination of Φ^(k)_{R₁₂₃} has the same transformation law as the onefold scalar. The kinetic term for the onefold scalar is given by a Lagrangian invariant under the Z₃^(ex) transformation.
Lagrangian for bulk fermion fields
Let us discuss bulk fermion fields. The notation for fermion fields in six dimensions is summarized in Appendix D. We denote the 6D Weyl fermions with positive and negative chirality by Ψ₊ and Ψ₋, respectively. Each of the 6D Weyl fermions involves a vector-like pair of 4D Weyl fermions, ψ_L and ψ_R.
Let us first consider a threefold 6D Weyl fermion that belongs to the representations as in Eq. (4.1). Its components are denoted by (Ψ^{±(k)}_R)^α, and its Lagrangian is given in Eq. (4.20), where Γ^M is the 6D gamma matrix given in Appendix D. The covariant derivative (D^{(k)}_M)^β_α has the same form as in Eq. (4.2). Using the 4D Weyl fermions ψ^{±(k)}_{R,L} and ψ^{±(k)}_{R,R} and Eq. (D.22), the Lagrangian can be rewritten as in Eq. (4.23), where we have defined the derivative operators in Eq. (4.24), and the indices in the representation space of R are suppressed.
For a general threefold fermion, denoted by Ψ^{±(k)}_{R₁₂₃}, whose representation is R₁ ⊗ R₂ ⊗ R₃, we can write the Lagrangian by using the covariant derivatives of the forms in Eqs. (4.10)–(4.12). The irreducible decomposition under G_diag is obtained as in the scalar case discussed in the previous subsection.
Let us turn to the onefold 6D Weyl fermions, denoted by Ψ^±_{R³}, with a definite Z₃^(ex) transformation law. The Lagrangian is written as Eq. (4.25), where the covariant derivative is the same as in Eq. (4.18). Using the 4D Weyl fermions ψ^±_{R³,L} and ψ^±_{R³,R}, we can rewrite the Lagrangian, where the operators D̂_y and its conjugate are defined as in Eq. (4.24) with the covariant derivative in Eq. (4.18), and the indices in the representation space are suppressed here.
In general, bulk gauge anomalies arise from 6D chiral fermions. The requirement of anomaly cancellation gives constraints on the matter contents of theories [21,22,29,30]. In our setup, bulk anomaly cancellation can be ensured by introducing vector-like sets of 6D Weyl fermions. There also appear 4D gauge anomalies on the boundaries, i.e., the fixed points of T²/Z₃. Such 4D anomalies depend on the BCs for fermions and will be discussed in the next subsection.
Orbifold boundary conditions and low-energy mass spectra
We here discuss the BCs for matter fields. First, let us see the transformation laws of the covariant derivatives under T̂₁ and Ŝ₀, which must be consistent with the BCs for gauge fields. From Eqs. (3.7) and (3.8), we find how the covariant derivatives in Eqs. (4.2) and (4.10)–(4.12) for threefolds and in Eq. (4.18) for onefolds transform, where we have used the shorthand notation that shows the boundary conditions for the covariant derivative along x^μ and y_i by the subscript {μ, y_i}.
The BCs for the matter fields are taken to be consistent with the above transformations and are written as in Eq. (4.32), where a pair of fields φ and φ_S represents the scalars Φ and the 6D Weyl fermions Ψ^±. The definition of S̃_Ψ is shown in Eq. (D.24), and p_t, p_s ∈ {0, ±1} are chosen by hand for each field. Since the 6D Weyl fermions are composed of 4D Weyl fermions, the last three pairs in Eq. (4.33) are rearranged into the six pairs of 4D Weyl fermions as in Eq. (4.34). Any BCs given above are formally written as in Eq. (4.35), where φ^(k+3) = φ^(k) (k ∈ Z) is a boson or a 4D Weyl fermion and is a component of an irreducible representation under G × G × G. The integer p̃_s is equal to p_s for a boson and equal to p_s ± 1 (p_s ∓ 1) for a left-handed (right-handed) fermion with 6D chirality ±.
In most cases, the components in a set {φ^(1), φ^(2), φ^(3)} are not identical and are mixed by the Z₃^(ex) transformation; in this case we call φ^(k) a Z₃^(ex) triplet. There is a special case where φ^(k+1) = φ^(k) holds; in this case the field φ^(k) is an eigenstate of the Z₃^(ex) transformation and is called a Z₃^(ex) singlet. We list the possible sets of the form {φ^(1), φ^(2), φ^(3)} accordingly. For a triplet, we can define three eigenstates of the BCs, denoted by φ^[p] (p = 0, ±1), as in Eq. (4.45). Then, φ^[p] obeys definite BCs. We note that φ^[p] is convenient for examining the KK expansions, which are summarized in Appendix A, while the couplings between matter fields and the Wilson line phases take a simpler form for φ^(k).
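The display equation defining the eigenstates, Eq. (4.45), is lost in this copy. For a Z₃ action permuting the members of a triplet, such eigenstates are conventionally built as a discrete Fourier transform; the following is a sketch of that standard construction (the normalization and the sign of the exponent are assumptions, not read off from the original):

```latex
\phi^{[p]} = \frac{1}{\sqrt{3}} \sum_{k=1}^{3} \omega^{-pk}\, \phi^{(k)}, \qquad
\phi^{(k)} = \frac{1}{\sqrt{3}} \sum_{p=0,\pm 1} \omega^{pk}\, \phi^{[p]}, \qquad \omega = e^{-2\pi i/3},
```

so that the cyclic shift φ^(k) → φ^(k+1) acts on φ^[p] as multiplication by ω^p.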
From the eigenvalues of the BCs, we can find zero modes, which are constant excitations over the extra-dimensional space. A zero mode can appear as a light degree of freedom in a low-energy effective 4D theory, where the gauge symmetry G × G × G is reduced to G_diag, as discussed in Sec. 3. For Z₃^(ex) singlets, fields with p_t = p̃_s = 0 have zero modes. Note that fields with p_t = ±1 do not have any zero modes.
We discuss the zero-mode spectra that arise from threefold and onefold fields in detail. Threefold fields do not involve any Z₃^(ex) singlets. For triplets in Φ_{R³} with p_t = 0, zero modes appear from the component φ^[−p_s]. For the fermion case, both ψ^±_{R³,L} and ψ^±_{R³,R} in a onefold Ψ^±_{R³} contain Z₃^(ex) singlets. For the case with p_t = 0, singlets in ψ^±_{R³,L} (ψ^±_{R³,R}) have zero modes only if p_s = −1 (p_s = 1). These zero modes of Z₃^(ex) singlets yield a chiral fermion mass spectrum. There also exist triplets in Ψ^±_{R³}. For p_t = 0, zero modes appear from the triplet components φ^[−p_s∓1] (φ^[−p_s±1]) constructed from ψ^±_{R³,L} (ψ^±_{R³,R}). The zero modes of Z₃^(ex) triplets always compose vector-like pairs of 4D fermions. We note that any zero modes belong to irreducible representations that are contained in the irreducible decomposition of R ⊗ R ⊗ R under G_diag.
As an illustrative example, we consider G = SU(N) and the N-dimensional fundamental representation as R. In this case, the irreducible decomposition of R ⊗ R ⊗ R is shown by the Young tableaux in Eq. (4.46). One sees that the Z₃^(ex) singlets in Φ_{R³} and Ψ^±_{R³} always belong to the first representation on the right-hand side of Eq. (4.46). These singlets carry N out of the N³ degrees of freedom, and the remaining N³ − N degrees of freedom form (N³ − N)/3 triplets. For the p_s = 0 case with p_t = 0, where the singlets have zero modes, there appear N + (N³ − N)/3 degrees of freedom as zero modes, whose representations correspond to the first and third terms on the right-hand side of Eq. (4.46). On the other hand, for the p_s = ±1 case with p_t = 0, there appear (N³ − N)/3 degrees of freedom as zero modes, which transform as the representation corresponding to the second term on the right-hand side of Eq. (4.46). Consistently with the above, one sees the relation in Eq. (4.47), which counts the degrees of freedom of each representation. We see that N in Eq. (4.47) corresponds to the degrees of freedom of the Z₃^(ex) singlet components.
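The counting above is easy to verify numerically. A minimal sketch (not from the paper) that checks the decomposition of N ⊗ N ⊗ N and the zero-mode counts, evaluated for SU(5):

```python
# Check the degree-of-freedom counting around Eq. (4.46)-(4.47) for R = fundamental of SU(N).
def check_counting(N):
    sym = N * (N + 1) * (N + 2) // 6       # fully symmetric piece of N x N x N
    mixed = N * (N * N - 1) // 3           # mixed-symmetry piece (it appears twice)
    antisym = N * (N - 1) * (N - 2) // 6   # fully antisymmetric piece
    assert sym + 2 * mixed + antisym == N ** 3

    # Z3^(ex) singlets: the N components with all three indices equal;
    # the remaining N^3 - N components mix into (N^3 - N)/3 triplets.
    singlets = N
    triplets = (N ** 3 - N) // 3
    assert singlets + 3 * triplets == N ** 3

    # p_s = 0 (with p_t = 0): singlets plus one state per triplet -> sym + antisym dof.
    assert singlets + triplets == sym + antisym
    # p_s = +/-1 (with p_t = 0): one state per triplet -> one mixed representation.
    assert triplets == mixed
    return sym, mixed, antisym, singlets, triplets

print(check_counting(5))  # SU(5): (35, 40, 10, 5, 40); zero modes 5 + 40 = 45, or 40
```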
Let us examine the fermion zero modes in the SU(N) case. For onefold fermions, the zero-mode spectrum can be chiral. For example, we consider the case of a onefold Ψ⁺_{R³} with p_t = 0. In this case, the representations of the zero modes depend on p_s, as summarized in Eqs. (4.49) and (4.50). Thus, the low-energy spectrum of 4D fermions is chiral for p_s = ±1, but vector-like for p_s = 0. A similar discussion holds for the case of Ψ⁻_{R³}. Finally, we give comments on 4D gauge anomalies. If there are fermion zero modes, they generally contribute to the anomalies. For threefold fermions, the zero modes are always vector-like and do not give 4D anomalies. On the other hand, onefold fermions can have chiral zero modes and thus generally generate 4D anomalies. Therefore, the requirement of the cancellation of 4D anomalies constrains the onefold fermion content. In addition to the zero-mode anomalies, localized anomalies induced at the fixed points y^i_{f(r)} (r = 0, 1, 2), defined in Eq. (2.12), should also be considered [22]. The localized contributions arise even if the fermion has p_t = ±1, in which case there are no zero modes. In our setup, contributions to the localized anomalies at y^i_{f(r)} can arise from a fermion ψ(x^μ, y^i) that satisfies the BC ψ(x^μ, Ŝ_r[y^i]) = ψ(x^μ, y^i). One can see that the contributions to the localized anomalies at each fixed point from threefold fermions always cancel out, since these contributions are always vector-like. For onefold fermions, localized anomalies generally exist; this gives constraints on the matter content of the theory. When the localized anomalies vanish, so do the 4D anomalies. Conversely, the 4D anomaly cancellation does not ensure vanishing localized anomalies.
5 One-loop effective potentials for Wilson line phases in SU(5) models

In this section, we study one-loop effective potentials for the classical background VEVs ã^a_{k+ℓ} in Eq. (3.20), which are related to the Wilson line phase degrees of freedom. As a concrete example, we focus on the case with G = SU(5). The discussion can be generalized to other gauge groups.

Contributions from Z₃^(ex) threefold fields

We first discuss the contribution from a Z₃^(ex) threefold scalar field to the effective potential. The simplest example is the threefold Φ^(k)_5 with Φ^(1)_5 ∼ (5, 1, 1), Φ^(2)_5 ∼ (1, 5, 1), Φ^(3)_5 ∼ (1, 1, 5), where 5 and 1 are the fundamental and the trivial representations of SU(5), respectively. Based on the discussion in the previous section, we define the BCs for their components (Φ^(k)_5)^α as in Eq. (5.2). In the following, the fundamental representation of the SU(5) generators is denoted by (T^a)^β_α. To obtain the effective potential for ã^a_{k+ℓ} in Eq. (3.20), let us expand the Lagrangian in Eq. (5.3) around the classical background VEVs and extract the terms quadratic in the quantum fluctuations. As discussed in Sec. 3.3, we always take a basis where the Wilson line phases are diagonal and have the form of Eq. (3.28). One-loop corrections to the effective potential for the phases can be derived through the path integral over the fluctuations (Φ^(k)_5)^α. The quadratic terms are written as in Eq. (5.4), where we have defined (D^(k)_{y_i})^β_α as a background covariant derivative and □ = ∂_μ∂^μ. The matrices H^â (â = 1, ..., 4) form the fundamental representation of the Cartan generators of SU(5), which we can take in the standard diagonal form of Eq. (5.6). Thus, the Wilson line phases in Eq. (5.5) are written as in Eq. (5.7). We now readily rewrite the quadratic Lagrangian in Eq. (5.4) as Eq. (5.8), where we have introduced the differential operators M̂²_{k,α} as in Eq. (5.9). We note that the above corresponds to the operator in Eq. (A.38).
Based on the discussion in Sec. 4.1 and the BCs in Eq. (5.2), we see that the components (Φ^(k)_5)^α form a Z₃^(ex) triplet. In Appendix A, we first show the KK expansion of Z₃^(ex) singlet fields and then that of triplet fields. Here we briefly provide an overview of the derivation. From the triplet φ^(k) that obeys the BCs in Eq. (4.37), we can define the φ^[p] that are eigenstates of the BCs, as in Eq. (4.45). The KK expansion of φ^[p] yields the corresponding KK modes. For details, please refer to Appendix A.
With the above result, the 4D effective Lagrangian in Eq. (5.4) is rewritten in terms of the KK modes of the triplet, and we can integrate them out to obtain the effective potential. The derivation of the potential is shown in Appendix B. Using the result shown in Eq. (B.14), we find that the effective potential contribution from a real degree of freedom in (Φ^(k)_5)^α is given by Eq. (5.11), where we have used ã^α_i (i = 1, 2) as the parameters of the potential, since they can be taken as the independent variables among the ã^α_ℓ (ℓ = 1, 2, 3). As discussed in Appendix B, the summation with respect to w₁ and w₂ is taken over all integers except for (w₁, w₂) = (0, 0), which is denoted by w₁, w₂ ∈ Z′. We note that the potential in Eq. (5.11) can also be naturally expressed in the vector notation of Eq. (5.12), where we have introduced the vector w and the lattice Λ_w = {w = w₁e₁ + w₂e₂ | w₁, w₂ ∈ Z} (5.13) and the dual vector ã^α = (p_t/3 + ã^α₁)ẽ¹ + (p_t/3 + ã^α₂)ẽ², similar to those in Appendix B. Let ∆V^(p_t)(Φ^(k)_5) be the contribution to the effective potential from Φ^(k)_5 with the p_t defined in Eq. (5.2). Then, we obtain Eq. (5.14), where the overall factor 2 on the right-hand side arises from the real degrees of freedom of a complex scalar. The potential in Eq. (5.11) is manifestly invariant under integer shifts of an arbitrarily chosen component of the Wilson line phases, ã^α_i → ã^α_i ± 1, which preserve the Wilson line phase factors W_ℓ in Eq. (3.29). Given this invariance, we relax the traceless condition imposed in Eq. (5.7) to Σ^5_{α=1} ã^α_{i+k} = 0 (mod 1) in the following discussions. We can generalize the above result to triplets belonging to other representations of SU(5), Φ^(1)_R ∼ (R, 1, 1), Φ^(2)_R ∼ (1, R, 1), Φ^(3)_R ∼ (1, 1, R). (5.15) We write the contributions to the effective potential generated by Φ^(k)_R as ∆V^(p_t)(Φ^(k)_R). We find that the contributions from, e.g., the R = 10, 15, 24 cases are given by Eqs. (5.16)–(5.18), respectively. Here, we have discarded irrelevant constants that are independent of ã^α_i. For general threefold scalars in Eq. (4.3), we can derive a differential operator as in Eq. (5.9). As an example, let us consider the (R₁, R₂, R₃) = (5, 5, 1) case. In this case, a component of Φ^(k)_{R₁₂₃} has two indices, which we denote by α₁ and α₂ (α₁, α₂ = 1, ..., 5). Corresponding to Eq. (5.9), we find the differential operator in Eq. (5.19). From the above, we obtain the one-loop correction to the potential from Φ^(k)_{R₁₂₃} in a similar way to the previous cases. The result is given in Eq. (5.20). We note that, except for the subscripts of the phases, the potential contribution coincides with the sum of those coming from two threefold scalars of the fundamental type. We now turn to the contributions to the effective potential from threefold fermions. The contributions mostly depend on the eigenvalues of differential operators as in Eq. (5.9). Since the covariant derivatives for bosons and fermions are the same if they belong to the same representation of SU(5), the eigenvalues are also common for bosons and fermions. Thus, the contributions from threefold fermions can be written in terms of the contributions from threefold bosons. We denote by Ψ^{±(k)}_R a 6D Weyl fermion whose representation is the same as in Eq. (5.15), and by ∆V^(p_t)(Ψ^(k)_R) its contribution to the effective potential. Then, we find that ∆V^(p_t)(Ψ^(k)_R) differs from ∆V^(p_t)(Φ^(k)_R) only by an overall negative factor counting the fermionic degrees of freedom.
Contributions from Z₃^(ex) onefold fields
We start by discussing the contributions from Z₃^(ex) onefolds. We first examine a bulk matter scalar Φ_{5³}, whose components are written as (Φ_{5³})_{α₁α₂α₃}. Here, the Greek indices run from 1 to 5. The BCs can be introduced as in Eq. (5.22). The extra-dimensional component of the covariant derivative acting on (Φ_{5³})_{α₁α₂α₃} is written as in Eq. (5.23), where the indices α_k (k = 1, 2, 3) are not summed on the right-hand side.
Generalizations to representations other than 5 are straightforward. For example, we find that the contribution from Φ_{10³} is given by Eq. (5.27), where the potential consists of the contributions from (10³ − 10)/3 = 330 triplets. As in the case of the threefold scalar, the difference between the contributions from onefold scalars and onefold fermions is just an overall factor. Let ∆V^(p_t)(Ψ_{R³}) be the contribution to the potential from a fermion Ψ^±_{R³}. Then, ∆V^(p_t)(Ψ_{R³}) is proportional to ∆V^(p_t)(Φ_{R³}), and the contribution does not depend on the 6D chirality of the fermion.
6 Gauge symmetry breaking patterns in SU (5) models
Thus, if a VEV at a minimum is determined, we can find a gauge symmetry breaking pattern through Eqs. (3.27) and (6.1).
Before starting to show results, let us mention that there are degenerate vacua in the potentials for the Wilson line phases. The degeneracy is related to the invariance of the potential under certain transformations of the Wilson line phases. As mentioned below Eq. (5.14), V^(p_t)(ã^α_i) in Eq. (5.11) is invariant under integer shifts ã^α_i → ã^α_i ± 1. Thus, effective potentials for the Wilson line phases generally have a degeneracy related to this integer-shift invariance, which reflects the phase nature of ã^α_i. In addition, from Eq. (5.11), we see that a simultaneous change of the overall sign of the VEVs, ã^α_i → −ã^α_i for i = 1, 2 and α = 1–5, does not change the potentials in the p_t = 0 case. This leads to a degeneracy in the potentials. On the other hand, the contributions to the potentials from fields with p_t = 1 and −1 are related to each other by the overall sign change of the phases, i.e., V^(−1)(ã^α_i) = V^(1)(−ã^α_i), as is shown from Eq. (5.11). The potentials are also invariant under permutations of the index α, which can be regarded as basis changes in the representation space. The exchange of ã^α_1 and ã^α_2 does not change the potentials either. Finally, the potential contributions from adjoint matter fields are invariant under the Z₅ transformation, which belongs to the center of SU(5), ã^α_i → ã^α_i + n_i/5 (i = 1, 2), where n_i ∈ Z.
Concerning the above degeneracy, in the following we show representatives of the VEVs at a degenerate global minimum. In Table 1, we show the values of ã^α_i at a global minimum of each contribution ∆V^(p_t)(Φ^(k)_R) and ∆V^(p_t)(Ψ^(k)_R) for p_t = 0, 1 and R = 5, 10, 15, 24. As noted below Eq. (5.14), the traceless condition holds modulo 1. We also show the unbroken gauge symmetry G₀ at the minimum. We do not give explicit results for the p_t = −1 cases, since they are obtained from those of the p_t = 1 cases through the relation V^(−1)(ã^α_i) = V^(1)(−ã^α_i). The gauge field also generates a contribution to the effective potential, which is equal, up to an overall factor, to that of a p_t = 0 adjoint field ∆V^(0)(Ψ^(k)_24), and whose minimum respects the SU(5) symmetry. Thus, we need bulk matter fields in the theory to obtain the SM gauge symmetry G_SM. At the one-loop level, the contributions from the gauge field and a p_t = 0 adjoint fermion cancel, and the contribution ∆V^(1)(Ψ^(k)_5) has degenerate global minima with G_SM and SU(5), as seen in Table 1. Thus, we easily find matter contents that ensure G_SM at a minimum and have no bulk and boundary anomalies. We show two examples in Table 2. We refer to the bulk matter contents shown in the left and right tables as case (i) and case (ii), respectively. Case (i) consists of a p_t = 0 adjoint threefold fermion with positive chirality and 10 sets of the p_t = 1 fundamental threefold fermion with negative chirality. Case (ii) consists of a p_t = 0 adjoint threefold fermion with positive chirality, 16 sets of the p_t = 1 fundamental threefold fermion with negative chirality, and 2 sets of the p_t = 0 antisymmetric (10-dimensional) representation threefold fermion with negative chirality.¹⁰ In both cases, one sees that there are no anomalies. In addition, the potential contributions from the gauge field and an adjoint fermion field cancel out. For case (i), the sum of the effective potential contributions is proportional to ∆V^(1)(Ψ^(k)_5), in which the SU(5) and G_SM vacua are degenerate. For case (ii), we numerically find that, at the global minima of the effective potential, the values of the Wilson line phases take ã^α_1 = ã^α_2 = (1, 1, 1, 0, 0)/3, (6.2) and the symmetry SU(5) is broken down to G_SM. We note that on this vacuum ã^α_3 = (−2, −2, −2, 0, 0)/3 and ã^α_ℓ − ã^α_{ℓ+1} = 0 (mod 1) are obtained. Thus, this vacuum respects the Z₃ symmetry, as discussed in Sec. 3.3.
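As a cross-check of Eq. (6.2) and the quoted breaking pattern, one can build the Wilson line factor for this vacuum and count the SU(5) generator directions that commute with it. A small sketch, assuming the simplified convention W = exp(2πi diag(ã)) for the phase factor (our stand-in for Eq. (3.21)):

```python
import numpy as np

# Vacuum of Eq. (6.2): a_1 = a_2 = (1,1,1,0,0)/3; a_3 = -a_1 - a_2.
a1 = np.array([1, 1, 1, 0, 0]) / 3
a2 = a1.copy()
a3 = -a1 - a2

# Z3-preserving alignment: a_l - a_{l+1} is a vector of integers for all cycles.
for x, y in [(a1, a2), (a2, a3), (a3, a1)]:
    d = x - y
    assert np.allclose(d, np.round(d))

# Wilson line factor for this vacuum (simplified convention).
W = np.diag(np.exp(2j * np.pi * a1))

# Count su(5) generator directions commuting with W: off-diagonal E_ij directions
# plus the 4 Cartan (diagonal, traceless) directions, which always commute here.
unbroken = 4
for i in range(5):
    for j in range(5):
        if i == j:
            continue
        E = np.zeros((5, 5), dtype=complex)
        E[i, j] = 1.0
        if np.allclose(W @ E, E @ W):
            unbroken += 1

print(unbroken)  # 12 = dim SU(3) x SU(2) x U(1), i.e., the SM gauge group
```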
Phenomenological implications
On the vacuum shown in Eq. (6.2), interestingly, the so-called doublet-triplet splitting among the Higgs fields in the 5 representation can be realized, similarly to the S¹/Z₂ case [16].
If we introduce a 5 threefold scalar with p_t = 0, its triplet component receives a contribution from the Wilson line phases and becomes massive, while its doublet component does not and thus contains a massless mode. We note that this vacuum respects the Z₃ symmetry. This means that the tadpole term of the zero mode of A_{y_i} is absent even in the higher-loop corrections, which allows the vacuum to be a (local) minimum without a fine-tuning. In addition, the effective theory around the TeV scale would have a Z₃ symmetry, though a soft-breaking term of the Z₃ symmetry may be introduced as in the S¹/Z₂ case [17].
Of course, there would be large radiative corrections to the scalar masses in nonsupersymmetric (non-SUSY) models, and thus we impose SUSY in the following.

[Table 2: Examples of bulk matter contents, listing for each case the bulk matter fields Ψ with their p_t values and flavor multiplicities. The matter contents of the left (right) table are referred to as case (i) (case (ii)).]

In the
SUSY limit, however, the contributions from the fermions and the bosons to the effective potential cancel out. Thus, the actual effective potential strongly depends on the SUSY breaking. In addition, when there is a hierarchy between the SUSY-breaking scale and the compactification scale, the effective potential suffers from large logarithms, and we need to treat the renormalization group equations. In this sense, the analysis in the previous subsection cannot be applied directly. Nevertheless, it provides hope that the vacuum is realized in a sizable parameter region, beyond being a proof of existence.
Concerning the vacuum selection, we have proposed an interesting scenario in Ref. [18], which may also be applied to the present case. In that reference, we calculated the effective potential at finite temperature and found that there are models where the desired vacuum (in the S¹/Z₂ case) is the global minimum at high temperature. Thus, if the universe started with a very high temperature of the order of the Planck scale, the vacuum would be selected around a temperature of the order of the compactification/GUT scale, before the inflation. Then, it is natural to expect that the vacuum does not move much until the reheating, and hence that the desired vacuum has been selected.
An outstanding prediction of the SUSY version is the existence of light adjoint chiral supermultiplets with masses around the SUSY-breaking scale, which would be the TeV scale. This is understood as follows. The zero modes of A_{y_i} are massless at the tree level and receive masses through radiative corrections that are suppressed by the SUSY-breaking scale. Since the mass differences among components in a single supermultiplet are at most of the order of the SUSY-breaking scale, the masses of their SUSY partners are also at most of that scale. Some collider phenomenology of these states in the S¹/Z₂ case was studied in Ref. [17]. In Ref. [19], another attractive possibility, namely regarding the adjoint chiral supermultiplets as those introduced in the Dirac gaugino scenario [31], was studied, and it was shown that the so-called goldstone gauginos [32] are naturally realized. Similar analyses in the present case are desirable.
An unfavorable point of this prediction is that the light adjoint chiral supermultiplets ruin the success of the gauge coupling unification in the minimal SUSY SU(5) model [3]. This is because the adjoint multiplets give contributions of ∆b^adj_i = (0, 2, 3) to the beta function coefficients of the minimal supersymmetric SM (MSSM), b^MSSM_i = (33/5, 1, −3). It is possible, however, to recover the gauge coupling unification, for example by introducing additional multiplets that give a further correction of ∆b^add_i = (3 + n, 1 + n, n) [16].
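One can check the arithmetic directly: one-loop unification depends only on the differences b_i − b_j, and the corrections quoted above leave those differences unchanged. A quick sketch:

```python
# One-loop beta coefficients: MSSM plus the adjoint multiplets and the extra matter.
b_mssm = (33/5, 1, -3)
db_adj = (0, 2, 3)

def total_b(n):
    db_add = (3 + n, 1 + n, n)
    return tuple(m + a + x for m, a, x in zip(b_mssm, db_adj, db_add))

for n in (0, 1):
    b = total_b(n)
    # One-loop unification is sensitive only to the differences b1-b2 and b2-b3.
    diffs = (b[0] - b[1], b[1] - b[2])
    mssm_diffs = (b_mssm[0] - b_mssm[1], b_mssm[1] - b_mssm[2])
    assert all(abs(d - m) < 1e-12 for d, m in zip(diffs, mssm_diffs))
    print(n, b)  # n = 0 gives b3 = 0: vanishing one-loop color beta coefficient
```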
It is notable that an example with n = 0 is naturally realized in the present case, for instance by adding one 5 and two 10 threefold hypermultiplets with p_t = 0. This is because the above 5 (10) hypermultiplet contains a zero-mode vector-like pair of the component with the SM charge (1, 2)_{−1/2} ((1, 1)₁). We note that this is in contrast to the S¹/Z₂ case, where the pair with (1, 1)₁ cannot be realized separately. This difference would bring significant effects on the phenomenology, as the quantum corrections to the colored particles are not as enhanced as in the n = 1 case, where the color SU(3) symmetry is asymptotically non-free (though still perturbative around the GUT scale) [17].
Next, we discuss the matter sector. As shown in Sec. 4.3, the zero modes of the threefold fermions are vector-like, and those of the onefold fermions may be chiral, but the possible representations are restricted. Then, the simplest way to realize the chiral fermions of the SM is to put them on the fixed points. Though there are several possibilities for distributing the fermions over the three fixed points, we consider here only the case where all the SM fermions are put on a common fixed point, for simplicity.
In contrast to the usual gauge-Higgs unification models, where the SM Higgs field is unified into a gauge field, the SM Higgs field in our scenario is introduced as a 5 field, and its Yukawa couplings can be set by hand on the fixed point. The flavor structure of the Yukawa couplings is similar to that of usual 4D models; it might be set by hand, or a flavor symmetry may be introduced. A difference from the usual 4D models is the SU(5) breaking effect, which is carried only by A_{y_i}; thus bulk fermions are necessary as messengers of the SU(5) breaking in order to resolve the wrong GUT relations among the Yukawa couplings.
Finally, we comment on the µ problem and the proton decay. If we put a 5 threefold hypermultiplet with negative chirality and p_t = 0, the zero modes are a vector-like pair of doublet chiral supermultiplets with the Z₃ charge +1. When these are identified with H_u and H_d of the MSSM, the matter chiral supermultiplets 10_i and 5̄_i, where the index i denotes the generation, should have the Z₃ charge +1 to allow the Yukawa couplings. These Z₃ charges forbid the dimension-5 proton decay operator 10_i 10_j 10_k 5̄_l and, at the same time, the µ term in the MSSM. We suppose the SUSY-breaking sector breaks the Z₃ symmetry softly to solve the µ problem. Though this Z₃ breaking may generate the dimension-5 proton decay operator, its contribution to the proton decay is quite suppressed. Then, proton decay via the dimension-6 operators mediated by the gauge field becomes dominant. In 6D spacetime, the sum of the contributions from the KK gauge bosons is (logarithmically) divergent [33] when all the fermion fields are put on a single fixed point. Though the summation should be cut off at some point, as the 6D theory is also an effective theory, this process is enhanced, in addition to the coupling of the KK gauge fields to the boundary fermions being enhanced by the factor √3 shown in Eq. (3.17). Meanwhile, it also has a suppression factor: the dominant component of an SM fermion may come from the "messenger field" instead of the boundary fields. In the case where the origins of the dominant modes of the components of each SU(5) multiplet are different, the gauge interactions do not connect them. These points should be studied in future work.
Conclusions and discussions
We have formulated a field theoretical realization of the diagonal embedding method in the gauge theory compactified on the T²/Z₃ orbifold. The original bulk gauge group of the theory is G × G × G, and a global Z₃^(ex) transformation permutes the three factors. Through the BCs, only the diagonal part of the gauge group, G_diag, which is isomorphic to G, remains manifest in the low-energy effective theory. The 4D effective theory contains the zero mode of the extra-dimensional component of the gauge field, which belongs to the adjoint representation of G_diag. The continuous Wilson line phase degrees of freedom, i.e., the zero modes along the flat directions of the tree-level potential for the extra-dimensional gauge fields, can acquire VEVs that further spontaneously break the gauge symmetry G_diag. Thus, the theory possesses a rich vacuum structure. We have shown a parametrization of the VEVs and the Wilson line phases, which is required to clarify the symmetry breaking patterns.
We have also discussed the bulk scalar and fermion fields in our setup. The representations of these bulk matter fields under the gauge group are restricted by the Z₃^(ex) invariance of the Lagrangian. We have examined the possible BCs for the matter fields and the KK mass spectrum. The onefold fermions can have 4D chiral fermions as their zero modes, although the threefold ones always have vector-like 4D fermion zero modes. A particular feature is that the representations of the chiral zero modes under the gauge group are restricted due to the diagonal embedding method, as shown in Eqs. (4.49) and (4.50).
We have studied the SU(5) type A grand gauge-Higgs unification model compactified on T²/Z₃ with the diagonal embedding method as an explicit application. We have derived the one-loop contributions to the effective potential for the zero modes of the extra-dimensional gauge fields. We have examined the vacuum structure of the effective potential and discussed the symmetry breaking patterns in relation to the bulk matter contents. Our analysis has shown that the SU(5) symmetry is broken down to SU(3) × SU(2) × U(1) at the global minima of the effective potential for specific bulk matter contents. Thus, the type A grand gauge-Higgs unification model on T²/Z₃ is viable for explaining the spontaneous GUT breaking.
In the present analysis, we have utilized the dual lattice technique, which is essentially a Fourier transformation. It is particularly useful for analyzing the KK expansion in the T²/Z₃ model, which is the minimal Z₃ orbifold model and may be regarded as an effective theory of the heterotic string theory with an adjoint scalar zero mode and with three generations. In addition, this technique can be applied straightforwardly to more general orbifold models, for instance in a ten-dimensional spacetime. It is also possible to treat more general gauge symmetries than the SU(5) considered in this article, such as SO(10), E₆, and E₈. These generalizations would be attractive future works.
Finally, we have discussed the phenomenological implications, focusing on the GUT-breaking vacuum. A notable feature of this spontaneous GUT breaking is that it provides a solution to the doublet-triplet splitting problem in GUT models. In addition, the vacuum is characterized by the enhancement of a Z₃ symmetry and is implied to be stable against higher-loop quantum corrections. With a SUSY extension, three light chiral supermultiplets, in the adjoint representations of SU(3), SU(2), and U(1), are predicted to appear around the SUSY-breaking scale. The unification of the three gauge couplings in the SM can be consistently explained, with a vanishing beta function coefficient of the color SU(3) at the one-loop order. We have also given discussions of the SM matter sector and proton decay, although detailed examinations are left for future studies.
A KK expansions of Z₃^(ex) singlet and triplet fields
We first discuss the KK expansion of Z₃^(ex) singlet fields. Let φ(x^μ, y^i) be a Z₃^(ex) singlet field that obeys the BCs in Eq. (A.1), where p_t, p_s ∈ {0, ±1}, which are consistent with Ŝ³_r = Î (r = 0, 1, 2). To examine the KK expansion, we introduce the orthonormalized eigenfunctions f(ȳ^i(n_i + α_i)) of the translations. One sees that the eigenfunction satisfies the required relations, where we have defined n₃ = −n₁ − n₂, α₃ = −α₁ − α₂, n_{i+3} = n_i, and α_{i+3} = α_i for i ∈ Z. Notice that f(ȳ^i(n_i + α_i)) is not an eigenfunction of the Z₃ transformation generated by Ŝ₀ defined in Eq. (2.9). Using Eq. (2.21), we see how the function transforms. The eigenfunction of both transformations T̂₁ and Ŝ₀ is given by f̃^[p](ȳ^i N_i) in Eq. (A.7). Conversely, we also obtain the inverse relation. From Eq. (A.7), one confirms that the eigenvalues are exactly the same as in Eq. (A.1). Thus, a Z₃^(ex) singlet field with the BCs in Eq. (A.1) is expanded by f̃^[p_s](ȳ^i N_i).¹¹ The eigenfunctions in the set {f̃^[p](ȳ^i N_i)} for n_i ∈ Z are neither completely independent nor orthonormalized.¹² From the right of Eq. (A.10) and Eq. (2.21), we find the linear dependencies among them. As seen below, there are no additional linear dependencies except for these. To handle the eigenfunctions, it is convenient to introduce the normalized momentum lattice, which corresponds to the possible momentum values on T² in a fixed normalization and is expanded by the dual basis vectors in Eq. (2.15). We hereafter refer to Λ^{p_t} as the dual lattice. In Fig. 2, the dual lattice with p_t = 0, ±1 is illustrated. Since we can relate N^(ℓ) and N^(ℓ+1) appearing in f̃^[p] on Λ^{p_t}, we regard that there is a corresponding eigenfunction at each point of the lattice. Note that N^(1), N^(2), and N^(3) are not identical points on Λ^{p_t}, except for the case of (N₁, N₂) = (0, 0). These points are related to each other by the Z₃ transformation generated by Ŝ₀, as found in Eq. (A.6), and are identified with the positions of the vertices of an equilateral triangle whose center is located at the origin. From the above observation, we can divide Λ^{p_t} into the sublattices¹³ Λ^{p_t}_(ℓ), with n₁, n₂ ∈ Z, N₂ ≥ 0, N₁ > −N₂, (A.14) for ℓ = 1, 2, 3, while Λ_(0) is the origin. If f̃^[p](ȳ^i N_i) corresponds to a point on Λ^{p_t}_(ℓ), then the dependent function f̃^[p](ȳ^i N_{i+1}) corresponds to a point on Λ^{p_t}_(ℓ+1). Thus, we see that the set of eigenfunctions {f̃^[p](ȳ^i N_i)} defined on a sublattice Λ^{p_t}_(ℓ) is linearly independent. With the help of δ^(2)_{n_{i+k} n'_{i+k}} = δ_{kk'} δ^(2)_{n_i n'_i}, one can derive the orthonormality relation for a fixed ℓ from the definition of f̃^[p](ȳ^i N_i) in Eq. (A.7) and the relation in Eq. (A.4).
Using the eigenfunction in Eq. (A.7), we define the KK expansion of the singlet field in Eq. (A.1) as in Eq. (A.18). This implies a constraint on the KK modes φ̃_{N₁,N₂}(x^μ). Let us derive the effective 4D Lagrangian for the singlet field in Eq. (A.1) from the KK expansion (A.18). As an example, we treat φ(x^μ, y^i) as a scalar and consider the 6D canonical kinetic term. From the definitions of the eigenfunctions in Eqs. (A.2) and (A.7), we find the necessary relations and obtain the effective 4D Lagrangian L^singlet_eff for φ̃_{N₁,N₂}(x^μ), where φ̃_{N₁,N₂}(x^μ) is an independent field for N_i ∈ Λ^{p_t}_(ℓ) with a fixed ℓ. Thus, the above KK mode is a canonically normalized 4D field with the corresponding KK mass squared. Note that, as in Eq. (A.21), the KK modes φ̃_{N₁,N₂} satisfy a constraint. In view of Eq. (A.28), it is natural to define the KK modes φ̃^(k)_{N₁,N₂}. As seen below, φ̃^(k)_{N₁,N₂} is a basis that diagonalizes the contributions from the Wilson line phases to the KK masses. Combining Eq. (A.31) and the second equation in (A.33), we can expand φ^(k)(y^i) by φ̃^(k)_{N₁,N₂}. To see this, we use the formula derived from Eq. (A.7). Using it, we obtain the expansion, and from Eq. (A.32) we find the constraint on φ̃^(k)_{N₁,N₂}. Let us derive the 4D effective Lagrangian for the triplet scalar defined in Eq. (A.27). We consider the 6D kinetic term, where M̂²_k is a differential operator including the Wilson line phases, as in Eq. (5.9), defined together with

(N + ã)² = (4/(3R²)) [(N₁ + ã₁)² + (N₁ + ã₁)(N₂ + ã₂) + (N₂ + ã₂)²]. (B.1)

In addition, we define ã = ã_i ẽ^i. Then, we can write M²(ã_i) = |N + ã|²/R². Since Eq. (A.37) is rewritten as Eq. (A.40), by performing the path integration over the KK modes φ̃^(0)_{N₁,N₂}, we obtain the contribution to the effective potential, where N_deg = 2 for the case of a complex scalar, and the square of a Euclidean four-momentum is denoted by p²_E. To deal with the divergent momentum integral, we use the zeta function regularization. Then, the contribution to the potential is rewritten in terms of the Gamma function Γ(s), with |s| ≪ 1 implied. Thus, we get Eq. (B.7), where the singularity associated with t → 0 corresponds to the ultraviolet divergence of the integral.
The O(s⁰) term in Eq. (B.7) is evaluated by using the Poisson resummation formula, which is derived in the next appendix. In Eq. (C.8), we set D = 2 and insert Eq. (B.8), where g_ij and g^ij are the metrics given in Sec. 2. Let us introduce the vector w and the lattice Λ_w expanded by e_i, which is associated with the metric g_ij, as Λ_w = {w = w₁e₁ + w₂e₂ | w₁, w₂ ∈ Z}. (B.9) Then, we obtain Eq. (B.10), where we have defined ã ≡ (p_t/3 + ã₁)ẽ¹ + (p_t/3 + ã₂)ẽ² and Eq. (B.11). Thus, we can replace the summation over Λ^{p_t} by the summation over Λ_w. The integers w_i are often called winding numbers. Let us consider a continuum path on the covering space of T²/Z₃, where the separation between the endpoints of the path corresponds to the vector 2πRw. In this case, such a continuum path represents a noncontractible cycle on T²/Z₃ whose winding number along the e_i direction is given by w_i, except for the case of w = 0. This implies that the summation over the possible momentum states, i.e., the KK modes, in the evaluation of the effective potential is replaced by a summation over the possible winding numbers. Notice that the term with w = 0 in the summation represents a local effect and is independent of the nonlocal Wilson line ã_i. To deal with it, we define Λ′_w = Λ_w\{w = 0} and write Σ_{w∈Λ_w} F(w) = F(0) + Σ_{w∈Λ′_w} F(w) for a function F(w). In the last equation, we have used the fact that |w|² is symmetric under w_i → −w_i. We have thus separated the irrelevant constant term with w = 0; in this paper, we discard this constant term in the effective potential. The summation over all integers w₁ and w₂ except for (w₁, w₂) = (0, 0) is denoted by w₁, w₂ ∈ Z′. Finally, we obtain Eq. (B.14).
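Since the closed form of Eq. (B.14) is not legible in this copy, the following is only a schematic numerical sketch of a winding-number sum of this type. It assumes the generic structure of a 6D one-loop potential on T², V(ã) ∝ −Σ′_w cos(2π w·ã)/|w|⁶, with the triangular-lattice norm |w|² = w₁² − w₁w₂ + w₂² (units R = 1); normalizations and the precise weight should be taken from Eq. (B.14) itself:

```python
import numpy as np
from itertools import product

def V(a1, a2, wmax=20):
    """Schematic winding sum: -sum' cos(2*pi*w.a) / |w|^6 on the triangular lattice."""
    total = 0.0
    for w1, w2 in product(range(-wmax, wmax + 1), repeat=2):
        if (w1, w2) == (0, 0):
            continue  # the w = 0 term is the Wilson-line-independent constant
        norm2 = w1 * w1 - w1 * w2 + w2 * w2  # |w|^2 with e1.e2 = -1/2 (120 degrees)
        total -= np.cos(2 * np.pi * (w1 * a1 + w2 * a2)) / norm2 ** 3
    return total

# Periodicity under integer shifts of the phases (the invariance noted below Eq. (5.14)):
assert abs(V(0.2, 0.7) - V(1.2, 0.7)) < 1e-9
# Compare the symmetric point with a Z3-symmetric nontrivial one:
print(V(0.0, 0.0), V(1/3, 1/3))
```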
C The Poisson resummation formula in D dimensions
Let us consider the summation C(w_i) including a matrix A⁻¹, which is the inverse of a symmetric D × D matrix A, as in Eq. (C.1). Introducing β_i = n_i + d_i, we rewrite C(w_i) as in Eq. (C.2), where β̃_j = β_j + i A_{jk} w^k.
Since A is symmetric, A⁻¹ is diagonalized by an orthogonal matrix O (OOᵀ = 1) and is written as in Eq. (C.4), where the diagonal matrix has the k-th eigenvalue (a⁻¹)_k. Defining z_i = O^i_j β̃_j, we obtain Eq. (C.5), where C_i denotes a path in the complex plane defined by Re(z_i) ∈ (−∞, ∞) with fixed Im(z_i) = O^i_j A_{jk} w^k. With the help of the Gaussian integral relation, the following relation holds:

Σ_{n∈Z^D} exp[−π (n + d)ᵀ A⁻¹ (n + d)] = (det A)^{1/2} Σ_{w∈Z^D} exp[−π wᵀ A w + 2πi w·d], (C.8)

which is the Poisson resummation formula used in Appendix B. The above is naturally rewritten in terms of the vectors and the metric defined by W = w_i E^i, n = n_i Ẽ^i, d = d_i Ẽ^i, E^i · Ẽ_j = δ^i_j, and E^i · E^j = A^{ij}.
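The resummation is easy to verify numerically. A sketch checking the standard D = 2 form quoted above (our reconstruction, since the display equation is lost in this copy; the matrix A and shift d below are arbitrary test values):

```python
import numpy as np
from itertools import product

A = np.array([[1.3, 0.4], [0.4, 0.9]])  # symmetric positive definite test matrix
Ainv = np.linalg.inv(A)
d = np.array([0.17, 0.62])

rng = range(-30, 31)
# Left-hand side: sum over the momentum-like integers n.
lhs = sum(np.exp(-np.pi * (n + d) @ Ainv @ (n + d))
          for n in (np.array(p) for p in product(rng, repeat=2)))
# Right-hand side: sum over the winding-like integers w, with the phase 2*pi*i*w.d.
rhs = np.sqrt(np.linalg.det(A)) * sum(
    np.exp(-np.pi * w @ A @ w + 2j * np.pi * w @ d)
    for w in (np.array(p) for p in product(rng, repeat=2)))

assert abs(lhs - rhs) < 1e-10
print(lhs, rhs.real)
```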
The 6D gamma matrices can be defined in terms of the 4D gamma matrices in Eq. (D.1) as in Eq. (D.4), so that they satisfy the 6D Clifford algebra of Eq. (D.5). To study the 6D chirality, it is useful to define the 6D chirality matrix, and we can write the chirality projections as in Eq. (D.8). A 6D Weyl fermion Ψ^± involves a vector-like pair of 4D Weyl fermions. By using the two-component spinor notation in Eq. (D.3), we can also write Eq. (D.9). Let us study fermion bilinears. We define Ψ̄ ≡ Ψ†Γ⁰ and find Eq. (D.10). The fermion bilinears without derivatives are given in Eq. (D.11), which are written in terms of the 4D Weyl fermions as in Eq. (D.12). To obtain the kinetic terms for fermion fields, we use the fermion bilinears with a derivative, Eq. (D.14). Thus, by using the 4D Weyl fermions, we can rewrite the above accordingly. The mixing terms between ψ^±_L and ψ^±_R include the derivatives with respect to the extra-dimensional coordinates.
To deal with the gamma matrices and study fermion fields on M⁴ × T²/Z₃, it is useful to introduce the oblique coordinates discussed in Sec. 2. With the oblique coordinates y₁ and y₂ in Eq. (2.2), we naturally define new gamma matrices from Γ⁵ and Γ⁶ as in Eq. (D.16). As expected, they satisfy the Clifford algebra with the metric g_ij in Eq. (2.16). It is also natural to define Γ^{y_i} ≡ −g^{ij} Γ_{y_j}, which are explicitly written as

$\Gamma^{y_1} = -i \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \otimes I_4, \qquad \Gamma^{y_2} = -i \begin{pmatrix} 0 & \bar\omega \\ \omega & 0 \end{pmatrix} \otimes I_4.$ (D.19)
We also introduce the useful notations in Eq. (D.20); then, Eq. (D.14) is rewritten as in Eq. (D.22). Let us discuss the BC on T²/Z₃ related to the Z₃ transformation y^i → Ŝ₀[y^i] for fermion fields. The Z₃ transformation generated by Ŝ₀ is an SO(2) ≅ U(1) rotation by the angle 2π/3 on a two-dimensional Euclidean space, under which the derivative ∂_{y_i} transforms into ∂_{y_{i−1}}, as in Eq. (2.19). It is convenient to define a matrix S̃_Ψ with the corresponding property. Using S̃_Ψ, we can define the transformation law of the 6D Dirac fermion Ψ under the SO(2) rotation by the angle 2π/3 as Ψ → S̃_Ψ Ψ, so that the 2π rotation gives Ψ → −Ψ. One sees that the 2π/3 rotation keeps the fermion bilinear in Eq. (D.13) invariant, as required by the 6D Lorentz invariance. We can define the BC for the 6D Dirac fermion Ψ(x^μ, y^i) as in Eq. (D.27), where p_s ∈ {0, ±1} can be chosen by hand, and the overall minus sign on the right-hand side originates from the fermion number operator. We note that the BC in Eq. (D.27) is consistent with Ŝ₀³ = Î, as required. Using the 4D Weyl fermions, we rewrite the BC as follows:

ψ^±_L(x^μ, Ŝ₀[y^i]) = ω^{p_s±1} ψ^±_L(x^μ, y^i), ψ^±_R(x^μ, Ŝ₀[y^i]) = ω^{p_s∓1} ψ^±_R(x^μ, y^i), (D.28)

which allows a 4D chiral fermion spectrum to remain as the zero mode of a 6D Weyl fermion Ψ^±.
Comparative Electron-Microscopic Study of Shape Memory Alloys of Systems Cu-Ni-Al and Ni-Mn-Al
The microstructure of Cu-Ni-Al and Ni-Mn-Al alloys has been studied over a wide range of chemical compositions by transmission and scanning electron microscopy, electron diffraction, and X-ray diffraction. The phase composition of all the investigated alloys and the mechanism of fracture under deformation have been determined.
Doping with third chemical elements allows one to control the critical temperatures M_s, M_f, A_s, and A_f and to design alloys with specified TMT parameters. At the same time, many aspects of the effect of doping on the features of TMT and on the mechanisms of fracture in such alloys remain unexplored.
The most important problem impeding the practical application of many polycrystalline alloys based on intermetallic compounds is their relatively low strength, plasticity, and fatigue characteristics, and their tendency to brittle fracture. Thus, polycrystalline Cu-Ni-Al and Ni-Mn-Al alloys experience brittle intercrystalline failure after deformation by 2-3%. The main reasons for this fracture include: the very large elastic anisotropy of their metastable austenite; large grain sizes; and the presence of grain-boundary segregations and precipitates of embrittling phases.
Materials and Methods
In this study we used eleven Cu-Al-3 wt% Ni ternary alloys with the aluminum content varied from 9 to 14 wt% (with an accuracy of ±0.1 wt%), taking into account the phase-equilibrium diagram of the vertical section of the Cu-Al-Ni three-component system near the triple point. The alloys were produced by electric-arc melting from high-purity Cu, Al, and Ni (99.99%) in a refined helium atmosphere. For homogenization, the alloys selected by their chemical composition were subjected to long annealing at (1173 ± 25) K in an inert argon atmosphere. The alloy ingots were cooled in air. The alloys, heated to 1223 K, were forged into 12×12 mm bars and then cooled in air. The bars were then quenched into water at room temperature after being held at 1223 K for 10 min.
The alloys based on the Ni-Mn-Al system were prepared by induction melting in a purified argon atmosphere. For homogenization, they were remelted (at least three times) and then vacuum annealed at 1173 K for up to 30 h. High-purity (99.99%) metals served as starting materials for the alloys. Ingots were spark-cut into plates, which were then again subjected to homogenizing annealing for 6 h in the state of the β (B2) phase, followed by water quenching or slow cooling at a rate of ∼100 K/h from 1073 or 1173 K.
The structure, phase composition, and martensitic transformations were investigated using X-ray diffraction in the θ/2θ geometry and electron microscopy. The X-ray diffraction analysis by the θ/2θ method was carried out using a DRON-3M diffractometer.
Results and Discussion
In the present work, a comparative study of alloys exhibiting TMT and the related shape memory effects (SME) is performed for the two doping systems Cu-Ni-Al (9-14 wt.% Al) and Ni-Mn-Al (0-25 at.% Al), created on the basis of the binary alloys Cu-Al and Ni-Mn.
By the method of temperature resistometry, it was found that doping with aluminum within the specified limits reduces the critical temperatures of TMT from high to cryogenic values. The phase composition of the alloys and the structural types of the martensitic phases were determined by X-ray diffraction, and complete phase diagrams of TMT were constructed. As the aluminum content increased, the structural type of martensite also changed: in Ni-Mn-based alloys in the sequence 2M (L1₀) - 14M - 10M, and in Cu-Al-Ni-based alloys from 18R to 4H. The presence of long-period martensitic phases (14M, 10M, 18R, 4H) is one of the main differences between these alloys.
According to the TEM data, a common feature of the studied alloys is the multi-packet morphology of pairwise-twinned martensitic phases (Fig. 1 a, c). In single crystals of low-modulus non-ferrous alloys with SME, this circumstance is responsible for their high structural-phase and physico-mechanical reversibility during TMT under the influence of temperature or external load. However, as a rule, the high brittleness of these alloys in the polycrystalline state prevents the practical application of the effects of thermomechanical memory and superelasticity in them. Therefore, establishing the causes of this brittleness and eliminating them is an important scientific and technical task.
A fractographic study of the alloys was performed using SEM in secondary electrons on samples tested to fracture. If a crack develops perpendicularly or at an angle to the habit plane of the martensitic plates of the packet, a brittle-ductile fracture pattern takes place (Fig. 2, d). At higher magnification, a number of areas characterized by lamellar relief can be observed on the fracture surface.
The nature of fracture under tension of samples of coarse-grained Cu-Ni-Al alloys, as a rule, was intergranular brittle, and in more fine-grained alloys it became ductile (cf. Fig. 3 a, b) or mixed ductile-brittle (Fig. 3 c, d).
At the same time, according to the mechanical test data, the ultimate strength, the yield strength, and the relative elongation changed at room temperature. The increase in the mechanical properties of the alloys was due to the refinement of the grain structure of the β-austenite and of the packet substructure.
Summary
Evidence Based Selection of Housekeeping Genes
For accurate and reliable gene expression analysis, normalization of gene expression data against housekeeping genes (reference or internal control genes) is required. It is known that commonly used housekeeping genes (e.g. ACTB, GAPDH, HPRT1, and B2M) vary considerably under different experimental conditions and therefore their use for normalization is limited. We performed a meta-analysis of 13,629 human gene array samples in order to identify the most stable expressed genes. Here we show novel candidate housekeeping genes (e.g. RPS13, RPL27, RPS20 and OAZ1) with enhanced stability among a multitude of different cell types and varying experimental conditions. None of the commonly used housekeeping genes were present in the top 50 of the most stable expressed genes. In addition, using 2,543 diverse mouse gene array samples we were able to confirm the enhanced stability of the candidate novel housekeeping genes in another mammalian species. Therefore, the identified novel candidate housekeeping genes seem to be the most appropriate choice for normalizing gene expression data.
INTRODUCTION
Measuring transcript abundance by real-time reverse transcription PCR (RT-PCR) has become the method of choice for high-throughput and accurate expression profiling of selected genes, owing to its high sensitivity, specificity, and broad quantification range. [1] RT-PCR is the most commonly used method for molecular diagnostics and for validating microarray data for a smaller set of genes, and it is especially useful when only a small number of cells is available. [2][3][4][5][6] Besides being a powerful technique, RT-PCR suffers from certain pitfalls, with inappropriate data normalization as the most important problem. Various strategies have been applied to control gene expression results. Standardization of the amount of cells is, for instance, a problem when tissue samples are used. Quantification of total RNA is difficult when only minimal RNA quantities are available. More importantly, it measures the total RNA fraction of a sample, which consists of only a relatively small percentage (<10%) of mRNA and predominantly of rRNA molecules. A drawback to the use of 18S or 28S rRNA molecules as control genes is the abovementioned imbalance between the mRNA and rRNA fractions. [7] In addition, it has been shown that certain biological factors and drugs may affect rRNA transcription. [8,9] Finally, those approaches still do not take a correction for the efficiency of the enzymatic reactions into account. At this moment, housekeeping genes are the gold standard for normalizing the mRNA fraction. However, the known considerable variation in the expression of commonly used housekeeping genes adds noise to an experiment and could ultimately lead to erroneous results. [10][11][12] This has even resulted in strategies to control for the instability by using sets of control genes and calculating normalization factors using statistical algorithms. [1,12,13] In order to identify the most stably expressed housekeeping genes, we used a large set of expression data from 13,629 published human gene arrays and investigated the abundance and stability of gene expression levels. We validated the human results in mice using a set of 2,543 published mouse gene arrays.
RESULTS AND DISCUSSION
A candidate housekeeping gene was defined as a gene with highly stable expression, i.e., a gene with a small coefficient of variation (CV) and a maximum fold change (MFC, the ratio of the maximum and minimum values observed within the dataset) below 2. In addition, a mean expression level lower than the maximum expression level minus 2 standard deviations (SD) was a prerequisite for a candidate housekeeping gene. The expression levels of 13,037 unique genes in the set of 13,629 diverse samples were used. Table 1 shows the identified top 15 candidate housekeeping genes (Table S1 shows the CVs of all 13,037 unique genes). All 15 genes had a CV below the 4% level and a standard deviation below 0.49. Moreover, the MFCs ranged from 1.41 (RPL27) to 1.99 (RPS12), reflecting the minor variation in expression of those candidate housekeeping genes within the large dataset. Thirteen of these top 15 genes encode ribosomal proteins involved in protein biosynthesis. The distribution of the expression levels is given in Figure 1A.
Next, we studied the expression levels of commonly used housekeeping genes (e.g. ACTB, GAPDH, HPRT1 and B2M). The expression levels of those commonly used housekeeping genes fluctuated dramatically (Table 2). The MFC ranged from 1.91 (ACTB) to 15.15 (ALDOA). Moreover, for only one of the 12 commonly used housekeeping genes (ACTB) was the CV below the 5% level, reflecting the highly variable levels of those commonly used housekeeping genes within our large dataset. Remarkably, none of the classical housekeeping genes ranked among the top 50 identified candidate housekeeping genes. The distribution of the expression levels of commonly used housekeeping genes is depicted in Figure 1B.
To demonstrate the feasibility of the use of these novel candidate housekeeping genes, we created primers for 5 of the top 15 candidate housekeeping genes (i.e. RPL27, RPL30, OAZ1, RPL22 and RPS29). We tested with PCR for the desired product length and specificity; no pseudogenes were amplified (Figure 2 shows the PCR results).
To validate the enhanced stability of the identified novel candidate housekeeping genes, we used another mammalian model system, i.e. the mouse. The expression levels of 21,377 unique genes in a set of 2,543 diverse mouse samples were used. The novel candidate housekeeping genes identified in the human data set also showed stable expression in the mouse arrays (Table 3). In the mouse expression arrays, too, genes encoding ribosomal proteins are the most stably expressed ones. Thus, the stability in expression of the identified candidate housekeeping genes was confirmed in another species.
Our results clearly reveal novel candidate housekeeping genes with a more stable expression across different cellular and experimental contexts in comparison to frequently used housekeeping genes (e.g. ACTB, GAPDH and HPRT). On the basis of a definition of ubiquitous and stable expression, our results indicate, however, that no single gene qualifies as a 'real' housekeeping gene. GAPDH and ACTB were used as single control genes in more than 90% of the cases in high-impact journals. [11] Commonly used control genes are historical carryovers and were considered good references for many years in techniques where a qualitative change was being measured, because these genes are expressed at relatively high levels in nearly all cells. However, the advent of RT-PCR placed the emphasis on quantitative change and asks for a re-evaluation of the use of these historical housekeeping genes. Here we show for the first time a genome-wide evaluation of candidate housekeeping genes by a meta-analysis of more than 13,000 samples. Interestingly, the identified novel candidate housekeeping genes do not vary much in terms of functionality; they are predominantly ribosomal proteins involved in protein biosynthesis. Therefore, experimenters who tinker with this specific cellular process would do better to use other candidate housekeeping genes from our analysis, for example OAZ1.
Using meta-analysis, we were able to find candidate housekeeping genes with a much lower variance in expression across tissue types and experimental conditions than commonly used housekeeping genes. The identified candidate housekeeping genes can be applied as reference genes in nearly all future RT-PCR experiments.
MATERIALS AND METHODS
Microarray expression data of 13,629 publicly available samples hybridized to Affymetrix HG-U133A and HG-U133 Plus 2.0 GeneChips (Affymetrix, Santa Clara, CA) were downloaded from the Gene Expression Omnibus. [14] This set of samples comprises gene expression data of a wide variety of different tissues (e.g. primary patient material, cell lines, diseased as well as normal tissues, stem cells, etc.) and varying experimental conditions (e.g. transfected/transduced cells, cytokine-stimulated cells, cells under hypoxic conditions, ultraviolet-treated cells, cells treated with chemotherapeutics or non-cytotoxic drugs, etc.). Probesets that were available on both platforms were converted to official gene symbols, averaging the expression values of multiple probesets targeting the same gene. Next, quantile normalization was applied to the log2-transformed expression values. [15] For each gene, the CV of the expression was calculated. The CV equals the standard deviation divided by the mean (expressed as a percentage). The CV is used as a statistic for comparing the degree of variation between genes, even if the mean expression levels differ drastically from each other. [16] The calculated CVs for all genes were ranked. In addition, the MFC was calculated to quantify the range of variation in expression within the large dataset. For validation, 2,543 publicly available mouse samples hybridized to Affymetrix Mouse Genome 430 2.0 GeneChips (Affymetrix) were downloaded from the Gene Expression Omnibus. [14] Again, this validation set comprises a wide variety of different mouse tissues and varying experimental conditions.
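For readers who want to reproduce the normalization step, the following is a minimal quantile-normalization sketch in Python. It is a generic implementation of the cited procedure [15], not the authors' actual pipeline, and the toy matrix is made up.

```python
import numpy as np

def quantile_normalize(x: np.ndarray) -> np.ndarray:
    """Quantile-normalize a genes x samples matrix so that every sample
    (column) shares the same empirical distribution."""
    order = np.argsort(x, axis=0)                      # per-column sort order
    ranks = np.argsort(order, axis=0)                  # rank of each entry
    mean_sorted = np.mean(np.sort(x, axis=0), axis=1)  # reference distribution
    return mean_sorted[ranks]

# toy example: 4 genes x 3 samples of log2 intensities
log2_expr = np.log2(np.array([[100., 80., 120.],
                              [ 50., 40.,  60.],
                              [ 10., 12.,   9.],
                              [  5.,  6.,   4.]]))
normalized = quantile_normalize(log2_expr)
cv = 100 * normalized.std(axis=1) / normalized.mean(axis=1)
print(cv)  # per-gene CV (%), as ranked in the study
```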
Total RNA was extracted with the Absolutely RNA Miniprep Kit (Stratagene, Amsterdam, The Netherlands) and reverse-transcribed to cDNA with random hexamers and RevertAid™ M-MuLV Reverse Transcriptase (Fermentas, Burlington, Ontario, Canada) according to the manufacturer's protocols. Table 4 shows the primer sequences for RPL27, RPL30, OAZ1, RPL22 and RPS29. The same annealing temperature (60 °C) and number of cycles (25) were used for all primers. The PCR products were analyzed by electrophoresis in a 1.0% agarose gel.
The complete chloroplast genome of Epimedium muhuangense (Berberidaceae)
Abstract: Epimedium muhuangense S. Z. He & Y. Y. Wang 2017, one of the rare unifoliolate species in the genus Epimedium (Berberidaceae), is distributed in the Guizhou province of China. In the present study, we sequenced the complete chloroplast genome of E. muhuangense with Illumina sequencing technology. The whole genome was 157,264 bp in length and consisted of a large single-copy region (LSC, 88,588 bp), a small single-copy region (SSC, 17,036 bp), and a pair of inverted repeat regions (IRa and IRb, 25,820 bp each). A total of 112 unique genes were successfully annotated, comprising 78 protein-coding genes, 30 tRNA genes, and four rRNA genes. Phylogenetic analysis demonstrated that E. muhuangense is closely related to E. elachyphyllum.
Epimedium L. is the largest herbaceous genus of Berberidaceae and contains about 62 species. China is the distribution and diversity center of Epimedium, with about 52 species and continuing diversification (Luo et al. 2021; Guo et al. 2022). As traditional Chinese medicines, Epimedium plants have been shown to possess many pharmacological activities, such as improving cardiovascular function and anti-cancer, anti-osteoporosis, and anti-aging effects (Zhou et al. 2021).
However, Epimedium is regarded as one of the most taxonomically and phylogenetically challenging plant taxa, since abundant morphological variation complicates the interspecific relationships (Zhang YJ et al. 2016; Zhang Y et al. 2020). The chloroplast genome has proven effective in plant phylogeny and species identification (Yang et al. 2013). In this study, we sequenced the complete chloroplast genome of Epimedium muhuangense, aiming to provide valuable information for taxonomic and phylogenetic studies of the genus Epimedium.
In this study, E. muhuangense samples were collected from the type locality, Muhuang Town, Yinjiang County, Guizhou, China (E108°40′, N28°5′). A specimen was deposited at the Herbarium of the Wuhan Botanical Garden, Chinese Academy of Sciences (http://www.whiob.ac.cn/, Yanjun Zhang, yanjunzhang@wbgcas.cn) under the voucher number Yanjun Zhang 568. The genomic DNA was extracted from fresh leaves using the modified CTAB method (Doyle and Doyle 1987). The chloroplast genome was sequenced using Illumina NovaSeq PE150. The clean reads were assembled using the program GetOrganelle v1.7.4.1 (Jin et al. 2020) with the E. acuminatum chloroplast genome (GenBank accession number: NC_029941) as a reference. The gene annotation was performed with the online programs GeSeq (Michael et al. 2017) and CPGAVAS2 (Shi et al. 2019), followed by manual correction. The chloroplast genome sequence of E. muhuangense was submitted to the NCBI database under accession number OK166811.
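Since the assembled genome is publicly available, its headline statistics can be re-derived from the deposited record. A minimal Biopython sketch is shown below; the e-mail address is a placeholder required by NCBI, and the feature tallies depend on the annotation of the record.

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

# fetch the deposited chloroplast genome in GenBank format
handle = Entrez.efetch(db="nucleotide", id="OK166811",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(f"{record.id}: {len(record.seq):,} bp")  # expected: 157,264 bp

# tally annotated feature types (CDS, tRNA, rRNA)
counts = {}
for feat in record.features:
    if feat.type in ("CDS", "tRNA", "rRNA"):
        counts[feat.type] = counts.get(feat.type, 0) + 1
print(counts)
```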
To explore the phylogenetic position of E. muhuangense, the complete chloroplast genome sequences of 15 plant species were downloaded from the NCBI GenBank database. Sequences were aligned using MAFFT v.7 (Katoh et al. 2019) and trimmed using MEGA v.11 (Tamura et al. 2021). A maximum-likelihood tree was constructed using raxmlGUI 2.0 (Edler et al. 2021) with Vancouveria hexandra (Hook.) C. Morren & Decne. as the outgroup (Figure 1). The phylogenetic analysis shows that E. muhuangense is closely related to E. elachyphyllum, both of which are distributed in northeastern Guizhou. Furthermore, E. muhuangense and E. elachyphyllum are the only two species in Epimedium with unifoliolate leaves (Zhang YJ et al. 2011; Wang et al. 2017). The chloroplast genome of E. muhuangense will contribute to research on the phylogeny and evolution of Berberidaceae. A command-line sketch of this alignment and tree-building workflow is given below.
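The sketch below drives the workflow from Python. The paper used MAFFT v.7 and raxmlGUI 2.0; here mafft is called directly and raxml-ng (the inference engine wrapped by raxmlGUI 2.0) is used at the command line. The file names, the GTR+G model, and the bootstrap count are our assumptions, not settings reported in the paper.

```python
import subprocess

# align the plastome sequences (15 downloaded + E. muhuangense);
# file names are hypothetical placeholders
subprocess.run("mafft --auto plastomes.fasta > aligned.fasta",
               shell=True, check=True)

# maximum-likelihood tree; raxmlGUI 2.0 wraps raxml-ng, so the
# equivalent command line is used here; GTR+G and 1000 bootstrap
# replicates are assumed settings
subprocess.run(["raxml-ng", "--all",
                "--msa", "aligned.fasta",
                "--model", "GTR+G",
                "--bs-trees", "1000",
                "--prefix", "epimedium_ml"],
               check=True)
```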
Authors' contributions
Jing Wang and Ruoqi Huang designed and performed the experiments; Jing Wang also analyzed the data and drafted the manuscript; Qiong Liang and Yanjun Zhang revised critically for intellectual content and approved the final version of the paper; and all authors agree to be accountable for all aspects of the work.
Ethical approval
The plant materials used in this study were transplanted into the Wuhan Botanical Garden for cultivation through legal collection channels. The study was approved by the Wuhan Botanical Garden of the Chinese Academy of Sciences.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The genome sequence data that support the findings of this study are openly available in GenBank of NCBI at (https://www.ncbi.nlm.nih.gov/) under the accession no. OK166811. The associated BioProject, SRA, and Bio-Sample numbers are PRJNA774127, SRR16571520, and SAMN22555212, respectively.
Blocking Studies to Evaluate Receptor-Specific Radioligand Binding in the CAM Model by PET and MR Imaging
Simple Summary: In the development of new targeted radiopharmaceuticals, it is mandatory to demonstrate their target-specific binding. Rodents are still primarily used for these experiments. With respect to the 3Rs principles, the demand for alternative methods to reduce the number of animal experiments is continuously increasing. In the present study, we investigated whether radiotracer uptake specificity can be evaluated by blocking studies in the CAM model. PET and MR imaging were used to visualize and quantify ligand accumulation. It was demonstrated that the CAM model can be used to evaluate the target-specific binding of a radiopharmaceutical. Due to intrinsic limitations of the CAM model, animal testing will still be required at more advanced stages of compound development. Still, the CAM model could significantly reduce the number of experiments through early compound pre-selection.

Abstract: Inhibition studies in small animals are the standard for evaluating the specificity of newly developed drugs, including radiopharmaceuticals. Recently, it has been reported that the tumor accumulation of radiotracers can be assessed in the chorioallantoic membrane (CAM) model with results similar to those of experiments in mice, thus contributing to the 3Rs principles (reduction, replacement, and refinement). However, inhibition studies to prove receptor-specific binding have not yet been performed in the CAM model. Thus, in the present work, we analyzed the feasibility of inhibition studies in ovo by PET and MRI using the PSMA-specific ligand [18F]siPSMA-14 and the corresponding inhibitor 2-PMPA. A dose-dependent blockade of [18F]siPSMA-14 uptake was successfully demonstrated by pre-dosing with different inhibitor concentrations. Based on these data, we conclude that the CAM model is suitable for performing inhibition studies to detect receptor-specific binding. While testing in rodents will still be necessary for biodistribution analysis in the later stages of development of novel radiopharmaceuticals, the CAM model is a promising alternative to mouse experiments in the early phases of compound evaluation. Thus, using the CAM model with PET and MR imaging for early pre-selection of promising radiolabeled compounds could significantly reduce the number of animal experiments.
Introduction
In vivo characterization methods are essential for developing new pharmaceuticals, including targeted radiolabeled compounds for diagnostics or therapy. In particular, it must be demonstrated that accumulation in the target tissue in vivo is indeed specific. Typically, these analyses are performed in small rodents. However, to reduce the number of test animals in the sense of the 3Rs principles (Refinement, Reduction, Replacement), the demand for alternative methods such as the CAM model is continuously increasing.

Fertilized chicken eggs were incubated at 65% relative humidity, starting on embryonic development day (EDD) 0. The eggshell was opened on EDD2. On EDD5, two silicone rings were placed on the CAM, and on EDD6, 0.5 × 10^6 PC-3 (PSMA−) or 1.5 × 10^6 LNCaP C4-2 (PSMA+) tumor cells mixed with growth matrix (30%, v/v) were applied in a total volume of 45 µL per ring. Daily monitoring of tumor growth and embryo health was performed by visual inspection. MR and PET imaging were performed on EDD15. Chick embryos were cooled at 4 °C for 120 min before MR measurement to avoid motion artifacts (according to the protocols of Bain et al. and Zuo et al. [6,34]).
For the blocking studies in the CAM model, including the associated controls, a catheter based on a 30G needle (B. Braun, Melsungen, Germany) was placed into a blood vessel of the chorioallantoic membrane in each case. Using a catheter allows two or more injections to be administered. Through the catheter, 100 µL of the PSMA-specific inhibitor 2-(phosphonomethyl)-pentanedioic acid (2-PMPA; Enzo Life Sciences Inc., Farmingdale, NY, USA, ALX-550-358) was injected at various concentrations (50 µM, n = 5; 0.5 µM, n = 4; 0.05 µM, n = 5; 0.005 µM, n = 5; each in 0.9% NaCl) 20 min ahead of the PET scan. Controls received either no additional application (n = 5) or an injection of 0.9% NaCl without inhibitor (n = 3). Each egg was positioned together with the catheter in the PET scanner, and 150 µL of [18F]siPSMA-14 ((11.2 ± 0.3) µg/mL stock concentration) diluted in 0.9% NaCl was applied immediately after the start of the measurement. Catheter injection resulted in a higher average activity of (4.9 ± 1.0) MBq (median dose 4.6 MBq) compared to previously published experiments [17], corresponding to an average ligand concentration in ovo of (1.9 ± 1.5) µg/mL. The whole chick embryo, catheter, and syringe were measured in an activity meter (CRC-12, Capintec, NJ, USA) to determine the successfully applied radioactivity (100% injected activity [%IA]) for further quantification. A total of 33 chick embryos with tumors were selected for measurements, of which six (18%) had to be excluded due to failed injection (n = 2), insufficient tumor growth, or large blood vessels too close to the tumor.
MRI and PET Measurements
For MRI, the precooled chicken eggs were placed in a custom 3D-printed holder. The holder allows MRI and PET measurements in different devices without changing the position of the egg. MR measurements were performed according to the protocols of Zuo et al. [6,35]. Data were obtained using a 60 mm quadrature volume T/R resonator on an 11.7 Tesla small-animal MRI system (Bruker BioSpec 117/16, Bruker Biospin, Ettlingen, Germany).
A T1-weighted 3D fast low-angle shot (FLASH) sequence covering the entire chicken egg was acquired as an anatomic reference for the subsequent PET ligand biodistribution measurements. The scan parameters were: TR/TE = 5/2 ms, matrix size = 400 × 400, in-plane resolution = 150 × 175 µm², slice thickness = 175 µm, no interlayer gap, and NSA = 1. With 400 slices, the whole egg was covered, resulting in an acquisition time of 3 min. Furthermore, a high-resolution T2-weighted multislice Rapid Acquisition with Relaxation Enhancement (RARE) sequence was used to accurately assess tumor volume, location, and structure. The scan parameters were: TR/TE = 4320/45 ms, matrix size = 650 × 650, in-plane resolution = 77 × 91 µm², slice thickness = 500 µm, no interlayer gap, RARE factor = 8, and NSA = 4. Thirty slices were required to cover the entire tumor region, resulting in an acquisition time of 20 min.
To evaluate the biodistribution of [18F]siPSMA-14 in chick embryos, a dynamic 60-min scan was performed using a small-animal PET scanner (Focus120, Siemens Medical Solutions, Inc., Erlangen, Germany). The Focus120 has a high spatial resolution (<1.3 mm) and high sensitivity (approximately 7%), with a 12 cm diameter bore and 7.6 cm axial length [36]. The obtained list-mode files were processed to generate histograms (sinograms) for a time series of 23 dynamic images in frames of 6 × 20 s, 7 × 60 s, and 10 × 300 s. Reconstructions were performed with OSEM3D/MAP using 4 OSEM2D, 2 OSEM3D, and 18 MAP iterations with a matrix of 256 × 256 and a zoom factor of 1.5. MRI and PET data from the chick embryos were fused by automatic rigid superposition using the PMOD software tool (PMOD Technologies, Zurich, Switzerland).
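As a quick sanity check of the dynamic framing, the short sketch below reconstructs the frame time edges from the scheme quoted above; it is a generic illustration, not part of the actual reconstruction pipeline.

```python
# dynamic PET framing: 6 x 20 s, 7 x 60 s, 10 x 300 s (23 frames)
durations = [20.0] * 6 + [60.0] * 7 + [300.0] * 10

edges = [0.0]
for d in durations:
    edges.append(edges[-1] + d)  # cumulative frame start/end times [s]

print(len(durations))       # 23 frames
print(edges[-1] / 60.0)     # 59.0 min covered of the 60-min scan
```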
Based on the MR images, tumor xenografts of LNCaP C4-2 and PC-3 were manually selected as volumes of interest (VOIs). The placement of the VOIs is illustrated by an example in Figure S2. As part of the analysis, decay correction to the injection time was applied. For comparison, time-activity curves (TACs) of the PET data (n = 26) were generated using GraphPad Prism ver. 9.4.0 (GraphPad Software, San Diego, CA, USA). The activity concentrations for the PSMA+ and PSMA− tumor xenografts were calculated, and the mean value and standard deviation (SD) were determined. As the two tumor types were studied in the same egg, their uptake values are paired data. Therefore, the ratio of the uptakes was first formed and then averaged: the activity concentration ratio for each tumor pair (PSMA+/PSMA−) was calculated, and the mean value and standard error of the mean (SEM) were determined.
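Because both xenografts sit on the same egg, the uptake ratio is formed per egg before averaging, as described above. A minimal sketch with made-up uptake values illustrates the calculation:

```python
import numpy as np

# hypothetical per-egg uptake values [%IA/mL] for paired tumors
psma_pos = np.array([2.1, 1.8, 2.6, 1.5, 2.3])
psma_neg = np.array([0.9, 1.0, 1.2, 0.8, 1.1])

ratios = psma_pos / psma_neg          # one ratio per egg (paired data)
mean = ratios.mean()
sem = ratios.std(ddof=1) / np.sqrt(ratios.size)
print(f"PSMA+/PSMA- ratio: {mean:.2f} +/- {sem:.2f} (mean +/- SEM)")
```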
Statistical Evaluation
A Mann-Whitney test and simple linear regression were performed using GraphPad Prism (ver. 9.4.0 for Windows, GraphPad Software, San Diego, CA, USA). Linear regression was conducted between 16 min p.i. and the end of the PET scan and checked for significant differences in the slopes. A p-value < 0.05 was considered statistically significant.
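An equivalent open-source version of this analysis can be written with SciPy; the sketch below is illustrative only, with hypothetical TAC values and group data standing in for the measured ones.

```python
import numpy as np
from scipy import stats

# hypothetical time-activity curve: frame mid-times [min], uptake [%IA/mL]
t = np.array([18, 23, 28, 33, 38, 43, 48, 53, 58], dtype=float)
tac_pos = 0.018 * t + 0.4 + np.random.default_rng(0).normal(0, 0.02, t.size)
tac_neg = 0.005 * t + 0.4 + np.random.default_rng(1).normal(0, 0.02, t.size)

# linear regression restricted to t >= 16 min p.i., as in the paper
window = t >= 16
fit_pos = stats.linregress(t[window], tac_pos[window])
fit_neg = stats.linregress(t[window], tac_neg[window])
print(fit_pos.slope, fit_neg.slope)   # slopes in %IA/mL/min

# Mann-Whitney U test on, e.g., per-egg PSMA+/PSMA- ratios of two groups
group_control = [2.9, 3.1, 2.4, 3.5, 2.8]
group_blocked = [1.1, 0.9, 1.2, 0.8]
print(stats.mannwhitneyu(group_control, group_blocked))
```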
Tumor Size Evaluation and Visual Inspection of PET and MR Imaging
MRI was used to evaluate tumor growth in the chick embryo model. Tumor volumes were determined based on the high-resolution RARE sequence data (Figure 1). Tumor volumes of (0.023 ± 0.011) mL for LNCaP C4-2 and (0.019 ± 0.009) mL for PC-3 were determined after eight days of tumor growth.

MRI and PET images were successfully obtained and coregistered using PMOD software, which allowed the direct correlation of the measured radioactivity to the volume of interest (VOI). These MR-based volumes correspond to an average voxel number of (4969 ± 2330) for LNCaP C4-2 and (4033 ± 1971) for PC-3. A mean activity concentration of (2.0 ± 0.9) %IA/mL was determined for the PSMA-positive tumors and (1.3 ± 0.8) %IA/mL for the PSMA-negative tumors.
For the PSMA+ tumor (LNCaP C4-2), a clear PET signal was detected without pre-dosing with the inhibitor 2-PMPA (control). Inhibitor dose-dependent differences were already apparent in the images of selected eggs (Figure 1), but quantitative analysis of all PET data was necessary for a precise evaluation. Comparing the signals of the PSMA+ and PSMA− tumors in each experiment generally revealed a lower uptake in the PSMA-negative tumors (PC-3) (Figure 1).
While there was a large difference in the PET signal between PSMA+ and PSMA− tumors at the lowest inhibitor concentration of 0.005 µM, there was almost no difference at the two highest inhibitor concentrations of 0.5 µM and 50 µM 2-PMPA (Figure 1). At the intermediate inhibitor concentration of 0.05 µM, the signals determined in the PSMA+ tumors start to converge, and the differences from the signals of the PSMA− tumors become less pronounced.
Quantitative Evaluation of PET Imaging Using Time-Activity Curves and Linear Regression
Based on dynamic PET data and VOIs drawn in the MRI, the activity concentration [%IA/mL] for each tumor VOI was determined for the complete scan and analyzed over time (Tables S1-S5). Time-activity curves (TACs) for each 2-PMPA concentration experiment were generated and depicted in Figure 2 as the mean and the respective SEM.

Catheter injection allowed for the observation of early time points of the measurements. Due to perfusion effects in the tumors and neighboring blood vessels, noisy signals can be detected for the first 10 min p.i. Evaluation of the curves was therefore more reasonable starting at later time points. Linear regressions were performed for the period between 16 min p.i. and the end of the measurement. The regression lines were added to Figure 2, including the respective 95% confidence intervals (dotted lines).
There was no observable difference between controls without additional injection and those injected with 0.9% NaCl. Therefore, the control data were combined (Figure S1).
As expected, a significantly faster increase of [ 18 F]siPSMA-14 accumulation over time was observed for the PSMA+ tumors of the control in comparison to the PSMA− xenografts, supported by a slope of (0.0182 ± 0.0005) %IA/mL/min in contrast to (0.0051 ± 0.0009) %IA/mL/min. Linear regression analyses revealed a highly significant (p < 0.0001) difference between the curves of the control measurements (Figure 2a).
Similar results were observed for the lowest inhibitor concentration (Figure 2b). Here, too, the radiotracer concentration increased steadily in the PSMA+ tumors (slope: (0.0112 ± 0.0012) %IA/mL/min), while only a minor increase was detected in the PSMA− tumors (slope: (0.0056 ± 0.0010) %IA/mL/min). The differences in the regression curves remained highly significant (p = 0.0027).
For the chicken eggs treated with a 2-PMPA concentration of 0.05 µM, the [18F]siPSMA-14 concentration still increased more in the PSMA+ tumors (slope: (0.0257 ± 0.0023) %IA/mL/min) than in the corresponding PSMA− tumors (slope: (0.0133 ± 0.0026) %IA/mL/min) (Figure 2c). Although the regression curves were also significantly different (p = 0.0029), the differences were less pronounced than in controls or at the lowest inhibitor concentration, indicating an intermediate level of blocking.
At the highest inhibitor concentration of 50 µM, no differences were detected between the two tumor types (Figure 2e). The increase in [18F]siPSMA-14 concentration in the PSMA+ tumor (slope: (0.0151 ± 0.0014) %IA/mL/min) was nearly identical to that in the PSMA− tumor (slope: (0.0152 ± 0.0008) %IA/mL/min). Both TACs showed a similar trend, and no significant difference was observed (p = 0.9916).
The clear difference in TACs between the PSMA+ and the PSMA− tumors of the controls already indicated a PSMA-specific accumulation of [18F]siPSMA-14 (Figure 2a). This observation was supported by the finding that radiotracer accumulation in the PSMA+ tumors could be blocked by administering increasing concentrations of the PSMA-specific inhibitor 2-PMPA (Figure 2b-e). Accordingly, these experiments also support the hypothesis that the CAM model is a suitable candidate for inhibition studies to assess receptor-specific accumulation.
Analysis of Ratios between the PSMA+ and the PSMA− Tumors
The activity concentration values of the last frame of each chicken egg measurement were used to determine the ratios between PSMA+ and PSMA− tumors, given as mean ± SEM (Table 1).

Figure 3. PSMA+/PSMA− activity concentration ratios per inhibitor concentration. Each circle represents a single experiment. Ratios between the control and the various concentrations were considered significantly different for p < 0.005 in the Mann-Whitney test. ns = not significant; ** p < 0.005.
Application of an inhibitor concentration of 0.05 µM slightly reduced the uptake in PSMA+ tumors, and the PSMA+/PSMA− activity ratio decreased to 1.64 ± 0.31 (Figure 3).
Beyond an inhibitor concentration of 0.5 µM, the measured activity concentrations were balanced, resulting in PSMA+/PSMA− ratios of 0.89 ± 0.10. Accordingly, at the highest inhibitor concentration used, 50 µM, a ratio of activity concentrations close to 1 was determined (1.21 ± 0.04), indicating identical and nonspecific activity concentrations in both tumor models.
A clear inhibitor-dependent trend in tumor accumulation was demonstrated, based on the ratio evaluations. While no inhibition occurred at a low concentration, as expected, partial inhibition was achieved by the administration of 0.05 µM 2-PMPA, and complete inhibition was achieved at high concentrations of 0.5 µM and 50 µM.
Discussion
We successfully demonstrated, exemplified by [18F]siPSMA-14, that target-specific tumor accumulation of a radiotracer can be assessed by inhibition studies in the CAM model with combined PET and MR imaging. In the model, various levels of inhibition could be detected, corresponding to the applied inhibitor concentration, demonstrating the potential of the method for quantifying receptor occupancy by a given target-specific radiopharmaceutical. Due to the limitations of the CAM model in terms of biodistribution and pharmacokinetics, additional animal studies will always be needed at more advanced stages of compound development; however, the model has high potential for the early stages of characterization of new compounds. This will enable the pre-selection of promising compounds, thus reducing the number of animal studies required, in concordance with the 3Rs principles.
Static Evaluation of PET-Data
We successfully demonstrated the specific blocking of radiotracer accumulation in the CAM model by PET and MR imaging. In the case of complete inhibition, 2-PMPA occupies all free PSMA binding sites, and accumulation in both tumors is governed by nonspecific mechanisms, e.g., general perfusion and the EPR effect due to possible co-transport with albumin. Thus, the PSMA+/PSMA− ratio should be at a value of 1, as the nonspecific accumulation should be equal in both the blocked PSMA+ and PSMA− tumors, as was demonstrated in our experiments. The results of this study indicate complete inhibition at both of the two highest concentrations used (50 µM and 0.5 µM). Both ratios are close to 1 and differ significantly from the PSMA+/PSMA− ratios of the control measurements, which reflects a specific accumulation of the tracer [18F]siPSMA-14. The almost identical ratios for the control and for the PSMA+ tumors blocked with the lowest used concentration of 0.005 µM 2-PMPA suggest no substantial inhibition of radiotracer accumulation at this low concentration. Furthermore, partial inhibition was observed at the intermediate concentration of 0.05 µM, with a PSMA+/PSMA− ratio at a level between the complete- and no-inhibition results, which suggests that not all binding sites were saturated at this intermediate concentration.
Our data concerning dose-dependent inhibition were in excellent agreement with expectations based on the Ki value of 0.275 nM reported for the inhibitor 2-PMPA in the literature [37,38]. The small molecule 2-PMPA is a common inhibitor of PSMA and is regularly used for inhibition studies [39-52], with no side effects described at the concentrations used in this study. The affinity of the compound is about two orders of magnitude higher than that of [18F]siPSMA-14, which has a half-maximal inhibitory concentration (IC50) of (13.0 ± 1.2) nM based on patent information [53]; 2-PMPA was thus a suitable inhibitor for the present study. Calculations revealed in ovo inhibitor concentrations of 0.01 nM, 0.1 nM, 1 nM, and 109 nM at an average embryo volume of (46.5 ± 3.2) mL. Thus, while the lowest concentration used in our experiments was below the Ki, the two highest concentrations used were one to four orders of magnitude higher than the published Ki, and the intermediate concentration used was of the same order of magnitude as the reported Ki.
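The quoted in ovo concentrations follow from simple dilution of the injected bolus into the embryo volume. The sketch below reproduces that estimate; the 100 µL injection volume is taken from the methods section, and small deviations from the reported values (e.g. 107.5 vs. 109 nM) presumably reflect per-egg volumes.

```python
# dilution of the injected 2-PMPA bolus in the chick embryo
injected_volume_ml = 0.100   # 100 uL injection (methods section)
embryo_volume_ml = 46.5      # average in ovo volume (+/- 3.2 mL)

for stock_um in (0.005, 0.05, 0.5, 50.0):
    in_ovo_nm = stock_um * 1e3 * injected_volume_ml / embryo_volume_ml
    print(f"{stock_um:>6} uM stock -> {in_ovo_nm:.2f} nM in ovo")
# ~0.01, 0.11, 1.08, 107.5 nM, close to the reported 0.01-109 nM values
```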
Due to the experimental approach, the concentration of the applied ligand was in some experiments higher than the concentration of the inhibitor. An excess of cold peptide could have caused a low PSMA+/PSMA− ratio, especially in the controls or at the lower inhibitor concentrations. Despite the higher ligand concentration, specificity was demonstrated by the inhibition studies, as the PSMA affinity of the inhibitor was significantly higher than that of the ligand. However, the ligand concentration should be considered in similar experiments to ensure optimal differences between ligand and inhibitor.
Calculation of individual ratios of tracer accumulation between PSMA+ and PSMA− tumors yielded robust values for the analyses. The activity concentrations for the same tumors could vary between individual experiments in one group. However, since these fluctuations affect both tumor xenografts equally in the particular egg, the ratios could be used to compensate for these differences.
Dynamic Evaluation of PET-Data
Evaluation of the time-activity curves using linear regression confirmed the results of the static ratio analyses. While the slopes of the accumulation kinetics in the controls for PSMA+ and PSMA− tumors were significantly different, no significant difference between the kinetics of tumor accumulation could be detected for blocking with 50 µM. Thus, the kinetics of radiotracer accumulation in the PSMA+ tumors were consistent with the kinetics for nonspecific uptake in the PSMA− tumors. Even at a concentration of 0.5 µM 2-PMPA, complete inhibition could be demonstrated based on the kinetics, as no significant difference between the tumors could be detected either.
Significant differences in accumulation kinetics were detected for both controls and the lowest concentration of 2-PMPA used (0.005 µM). Here, the linear regression analyses also confirmed the evaluation via the ratio calculations, and we could demonstrate conclusively that there was no substantial inhibition of radiotracer uptake at this concentration.
At the intermediate concentration, no complete inhibition was observed. The time-activity curves and the linear regression evaluations also implied a dose-dependent, incomplete inhibition. The results were not as distinct as for the ratio calculations, since the activity concentrations over time for both PSMA+ and PSMA− tumors were relatively high in these experiments.
While the time-activity curves already provided a good indication of the different accumulation kinetics in the tumor xenografts, the linear regression analysis facilitated a quantitative evaluation and an assessment of the statistical significance of the differences. As expected, due to perfusion effects, the data obtained during the first 10 min of the measurements are often noisier. These effects can be minimized by starting the linear regression analysis at a later time point, thus reducing signal variability.
The time-activity curves show the accumulation of the applied substance in a selected region over time. The use of relative activity concentration should normalize the data for comparability. However, kinetics also depend on the input function, including factors such as the concentration of the substance, the injection rate, or the specifics of the animal model [54][55][56][57][58]. It is therefore difficult to compare the curves of the different concentrations with each other. We suspect that such effects have a greater impact on the evaluation in the CAM model than in the murine model due to the smaller tumor structures and associated partial volume effect (PVE).
Limitations
The CAM model has intrinsic limitations, as we extensively discussed in a previously published study [17]. In the present study, the small size of the tumor xenografts was a particular challenge. Anatomical structures smaller than three times the full width at half maximum (FWHM) are affected by the PVE. Consequently, in the case of the Focus 120, structures of 3.39 mm and smaller are affected [16,17,36], which includes the smaller tumors in the CAM model. The selection of 18F as radionuclide avoids resolution degradation due to positron range, which is 0.6 mm for 18F, below the optimal spatial resolution (1.13 mm FWHM; tangential, filtered back projection) of the Focus 120 PET scanner used. While partial volume correction in chicken eggs still requires considerable effort, a PVE correction factor for small-animal imaging could be established with less effort. We are working on a method to either compensate for the resolution effects using PVE correction or to incorporate a PVE factor for small-animal imaging, and we plan to provide a solution to this problem in future studies.
Furthermore, γ-counter measurements are usually considered the gold standard for quantifying the accumulation of radiolabeled substances. However, accurate extraction of small structures such as CAM xenografts for γ-counter measurements, including the separation of undesired tissue and blood debris, can be difficult and lead to erroneous results. Therefore, we focused on PET evaluation in this study.
For inhibition studies, but also for binding studies, we recommend using different tumor models that are either positive or negative with respect to the expression of the corresponding target structure, as was done in our studies. Immunohistochemical analyses can then be used to detect the expression in the tumor entities and, if necessary, to determine the ratio of the expression levels between the different tumors. For PSMA, we have already demonstrated in a recent publication that no PSMA expression was observed for PC-3 [17].
Radiotracer injection is often challenging in the CAM model [17]. Due to the small and sometimes poorly accessible blood vessels of the CAM, injecting once, let alone twice, is difficult. Thus, the injection method was optimized, and two reliable consecutive injections were performed using a simple catheter. Administration of 0.9% NaCl 20 min prior to the PET study did not show any significant changes in tumor accumulation. Consequently, changes in tumor accumulation in the blocking studies were attributed to the inhibitor and not to the application method or solvent. The results obtained for the controls were also in good agreement with previous data [17].
First descriptions of successful catheterizations of blood vessels in the CAM, as well as first pharmacokinetic measurements, have already been published. The methods used for catheter placement, either by microsurgery or by injection through the shell membrane during candling, are very well described [21,59]. In addition to a comprehensive review of the CAM model [60], Chen et al. also recently published a paper in which relevant experiments based on catheterization were performed [61]. The different approaches to the same method provide an excellent basis for using the CAM model in conjunction with catheter application. For future studies, catheterization of blood vessels in the CAM model enables biodistribution analysis starting from the time of injection.
Perspective
In our previous publications, we demonstrated the accumulation of 68Ga- or 18F-labeled PSMA ligands in the respective xenografts in the CAM model. These included preliminary indications of the possibility to analyze pharmacokinetics, a general description of the methodology in the CAM model, and indications of PSMA expression based on histological and immunohistochemical analyses [16,17]. However, these studies lacked evidence on the specificity of this accumulation.
Specificity is usually demonstrated by blocking studies using either the compound itself in unlabeled form or a competing inhibitor. The blocking agent is administered in excess through a separate injection by pre-dosing 10 min to 30 min prior to compound application [48-52] or by co-injection with the substance to be analyzed [39-47]. Due to the lack of information on the pharmacokinetics of the inhibitor in the CAM model so far, and also to demonstrate the feasibility of multiple catheter applications in this study, we opted for pre-injection of the inhibitor 20 min prior to the radiolabeled compound. For future studies, co-injection of the inhibitor and compound can be tested to evaluate whether inhibition studies can be performed on a single-injection basis.
Another critical variant of inhibition studies is the analysis of specific binding both with and without an inhibitor in the same animal on consecutive days. A prerequisite for this measurement is an appropriately short-lived radionuclide, such as 68Ga or 18F. Furthermore, it should be ensured that the ligand from the first application, either due to the low concentration used or due to appropriate pharmacokinetics and excretion, has no influence on the binding in the second application. If these conditions are met, this form of measurement will allow the detection of binding and inhibition in the same animal and thus further reduce the number of animals required in terms of the 3Rs principles. In the CAM model, repeated measurements with imaging modalities may put too much stress on the embryo. In this case, the cooling or anesthesia necessary to immobilize the embryo may be a limiting factor for the embryo's survival.
Additionally, repeated application in the chicken egg on different days can be challenging. If the catheter remains in the egg until the second injection, the longer residence times in the chicken model can potentially result in increased lethality, e.g., due to injuries with the needle caused by movements of the embryo. Placement of a new catheter the next day may be problematic due to the availability of suited blood vessels. We are already conducting studies with multiple measurements in the context of the CAM model to evaluate the potential for follow-up studies over various days, but so far, only with single injections. The described evidence of binding and specificity on consecutive days of measurement in the same animal will be analyzed in future studies to further test the capabilities of the CAM model.
In the development of new radiopharmaceuticals, the demonstration of specific binding is an essential step towards clinical application. The analysis in the murine model for each new ligand creates a high demand for animal experiments, the corresponding animal test applications, and the associated effort and costs. The CAM model allows initial binding and specificity studies, thus narrowing down the candidates for the murine assays. Experiments can be performed rapidly, provided that no animal test application is required for the CAM model, as is currently the case in many countries. Even if an animal experiment application is required, the lower cost, reduced preparation time, and simpler housing conditions are advantages of the CAM model over the murine standard. While there is no doubt that the CAM model cannot completely replace murine models, the pre-selection of the appropriate candidate ligand from an often large number of potential structures can be accelerated by using the CAM model. It also helps to decrease the number of experiments required in mouse models.
Conclusions
The present results support our hypothesis that the CAM model offers great potential to reduce the required number of initial animal experiments during the development of new radiopharmaceuticals. Concerning the detection of specific drug accumulation by inhibition or pre-dosing studies, the CAM model can be considered an alternative to, e.g., animal experiments in mice. From our point of view, further studies are necessary to explore the potential of the CAM model for pharmacokinetic analyses. Since the chick embryo differs significantly from adult small animals, additional animal experiments for the final radiopharmaceutical characterization cannot be completely replaced.
Photo-processing of astro-PAHs
Polycyclic aromatic hydrocarbons (PAHs) are key species in astrophysical environments in which vacuum ultraviolet (VUV) photons are present, such as star-forming regions. The interaction with these VUV photons governs the physical and chemical evolution of PAHs. Models show that only large species can survive. However, the actual molecular properties of large PAHs are poorly characterized, and the ones included in models are only an extrapolation of the properties of small and medium-sized species. We discuss here experiments performed on trapped ions, including some at the SOLEIL VUV beamline DESIRS. We focus on the case of the large dicoronylene cation, C48H20+, and compare its behavior under VUV processing with that of smaller species. We suggest that C2H2 loss is not a relevant channel in the fragmentation of large PAHs. Ionization is found to largely dominate fragmentation. In addition, we report evidence for a hydrogen dissociation channel through excited electronic states. Although this channel is minor, it is already effective below 13.6 eV and can significantly influence the stability of astro-PAHs. We emphasize that the competition between ionization and dissociation in large PAHs should be further evaluated for their use in astrophysical models.
Introduction
Polycyclic aromatic hydrocarbons (PAHs) are present in photodissociation regions (PDRs) associated with massive star-forming regions, such as the prototypical Orion Bar region [1]. Their interaction with vacuum ultraviolet (VUV) photons can trigger various molecular processes: (i) ionization, resulting in gas heating by thermalisation of the emitted electrons [2]; (ii) photodissociation, limiting the survival of PAHs and producing molecules such as H2 and C2H2 in PDRs [3][4][5][6][7]; and (iii) radiative cooling, leading to the well-known aromatic infrared (IR) emission bands between 3 and 15 µm, which constitute the only direct diagnostic we have so far for the presence of these large molecules in astrophysical environments, as proposed in the initial PAH model [8,9].
The chemical evolution of PAHs in PDRs has been modelled by several authors [3][4][5][6][10]. These models determined a critical molecular size, typically of 50-60 carbon atoms, below which PAHs are not expected to survive. In some models, the critical size is estimated from the ability of the molecule to lose C2H2, since in PDRs the absorption of UV photons is faster than the chemistry that could rebuild the carbon skeleton. A PAH experiencing C2H2 loss will therefore ultimately be destroyed if its fragments also experience C2H2 loss. Other models focus on the more common hydrogen loss. In this case, the competition with rehydrogenation by reactivity with the abundant H and H2 species needs to be considered (see for instance [5]).
All these chemical models rely on photophysical and chemical rates that are still only partly known for small and medium-sized species of up to 24 carbon atoms. These species are easier to handle in the gas phase in the laboratory compared to large ones. However, their properties might differ, especially when one considers the interaction with VUV photons. In addition, one should keep in mind the extreme isolation conditions of astrophysical environments, in which radiative cooling can play a key role in the photophysics of PAHs. The challenge for laboratory experiments is therefore to address the multiscale aspects of the photophysics of PAHs, from very fast femtosecond processes such as ionization [11] to the very long timescale of IR cooling (see Fig. 1). This challenge has to be addressed for large PAH species of ∼50-100 carbon atoms, which are the best candidates to survive in PDRs. This article reports our methodology and first results towards the study of large PAH species and evidence for their specificity.

Figure 1. Timescale for the absorption of UV photons by a coronene cation C24H12+ in the NGC 7023 NW PDR studied in the chemical model of Montillaud et al. [5]. An example of an IR cooling cascade is also provided in the inset, zoomed in over a 5 s time interval. The calculations were performed using a dedicated Monte Carlo code to describe the photophysics of PAHs [12].
The photophysics of astro-PAHs
In PDRs, astro-PAHs are well isolated. Infrared emission, which is a slow process that can extend over seconds (cf. inset graph in Fig. 1), is therefore a major process in energy relaxation due to the lack of collisions that could compete with it. Considering a typical gas (mainly hydrogen) density of ∼10^4 cm^-3 in PDRs, the timescale for collisions is typically 28 hours, assuming an optimistic collision rate of 10^-9 cm^3 s^-1. The timescale for the absorption of a VUV photon depends on the photon flux. It is several hours in the NGC 7023 PDR, which was studied by Montillaud et al. [5] (cf. Fig. 1), and tens of minutes in the brighter Orion Bar. This leaves time to lose most of the internal energy by infrared cooling. The energy of the available VUV photons is usually below 13.6 eV due to the ionization of hydrogen atoms but can be higher (∼20 eV) in the ionized bubbles around massive stars [13].
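As a quick check of the quoted collision timescale (a worked estimate based on the numbers above, not taken verbatim from the source):

\[
\tau_{\mathrm{coll}} = \frac{1}{n\,k} = \frac{1}{(10^{4}\ \mathrm{cm^{-3}})\,(10^{-9}\ \mathrm{cm^{3}\,s^{-1}})} = 10^{5}\ \mathrm{s} \approx 28\ \mathrm{h}.
\]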
Since the initial description of the photophysics of an astro-PAH [14], we can now draw a more complete scheme of all the processes triggered by the interaction with a VUV photon and their molecular timescales (cf. Fig. 2). On the very short timescales is ionization: XUV fs experiments have shown that it occurs on a characteristic time of 40 fs for small PAHs [11], and this time was found to increase with size. Dissociation starts on longer timescales [11]. In the case of PAHs, it is found to be well described by statistical theories. In particular, the fitting of breakdown curves obtained by imaging photoelectron photoion coincidence (iPEPICO) spectroscopy at the Swiss Light Source synchrotron with a model based on the Rice-Ramsperger-Kassel-Marcus (RRKM) theory has been successful in quantifying activation energies and dissociation rates by studies in the 1-100 µs range [15]. In this time window, dissociation is not in competition with radiative cooling. It is only in the analysis of experiments in ion traps and storage rings (e.g. in the PIRENEA setup described below) that this competition has to be taken into account, considering the long timescales involved. An interesting result of recent years in the photophysics of PAHs has been the laboratory confirmation of recurrent fluorescence as a main radiative cooling mechanism. Recurrent fluorescence, also called Poincaré fluorescence, was predicted by Léger et al. [16]. Boissel et al. [17] reported evidence for its role in the cooling of trapped anthracene cations submitted to the radiation of a Xe lamp. The process was confirmed and quantified in the Mini-ring storage ring [18,19]. The use of storage rings has opened the possibility of dynamical studies in the ms window for the study of the fast radiative cooling of energized PAHs. Timescales longer than seconds are now becoming accessible with the new generation of cryogenic rings, such as DESIREE at Stockholm University or the electrostatic cryogenic storage ring CSR at the Max Planck Institute for Nuclear Physics in Heidelberg. These will be highly valuable to address the infrared cooling of PAHs [20].
Experimental methods
We have used ion trap experiments to investigate the effect of size on the dissociation of PAHs. More specifically, we discuss here the importance of the C 2 H 2 loss channel and the variation of the H dissociation rate with the PAH size. Two setups have been used: PIRENEA, a dedicated setup for astrochemistry, and the commercial linear ion trap available at the VUV DESIRS beamline. These setups are briefly discussed below.
• The PIRENEA setup for astrochemistry is a cryogenic Fourier transform ion cyclotron resonance mass spectrometer (FTICR-MS) that has been specifically designed to approach the conditions of the interstellar medium in terms of isolation of the trapped species. It is therefore best suited to study the photophysics of PAH ions on long timescales. It is unfortunately not interfaced with a tuneable VUV source; therefore, a multiple-photon absorption scheme with lower-energy photons is used to achieve fragmentation. In this scheme, the photons are absorbed sequentially. Due to fast internal conversion (typically on a sub-ps timescale [21]), the energy absorbed in an excited electronic state is rapidly converted into internal energy of the ground state before absorption of the next photon. Boissel et al. [17] have demonstrated that the use of a Xe lamp is convenient to control the heating of the ions and study their dissociation in competition with radiative cooling, which we call dissociation at threshold. Under these conditions, branching ratios between the different fragments can be obtained, as illustrated in this article. In these experiments, one can easily follow the relative abundances of the parent and the different fragments as a function of the irradiation time. Fitting these curves with a kinetic Monte Carlo model can be used to extract a dissociation rate close to threshold; a simplified illustration of such a fit is sketched after this list. The model of Montillaud et al. [5] was built on the analysis of data obtained for the coronene cation, C24H12+.
The model of Montillaud et al. [5] has been built on the analysis of data obtained for the coronene cation, C 24 H + 12 . • The VUV DESIRS beamline at the synchrotron SOLEIL is equipped with a Thermo Scientific LTQ XL TM linear ion trap [22]. The parent ions are produced from a neutral precursor in a solution by an atmospheric pressure photo-ionization (APPI) source. Inside the trap, the species of interest (given m/z) can be isolated by ejecting other species. The isolated singly charged cations are then irradiated by the tuneable VUV synchrotron radiation and a few hundreds of mass spectra are recorded for each photon energy. The photon rate is in the range of 10 12 −10 13 photons s −1 . Over the studied 8−20 eV energy range, the irradiation time (between typically 0.2 and 0.8 s) and the opening of the exit slit are tuned in order to maximize the signalto-noise ratio and to minimize multiple photon absorption events, which are therefore rare in our experimental conditions. The resulting photoproducts, i.e. fragments (H, H 2 and eventually C 2 H 2 loss) and doubly charged cations, are mass-analyzed and action spectra can be built as a function of photon energy. Two campaigns were performed, one on small/medium species up to 24 carbon atoms [23,24] and one on larger species up to 48 carbon atoms.
Numerical simulations
Molecular dynamics (MD) simulations are performed to study the dissociation of PAH radical cations in their ground electronic states. The electronic structure is described on-the-fly within the self-consistent-charge density-functional-based tight-binding (SCC-DFTB) scheme [25]. This methodology has been found to be promising to study isomerization effects during dissociation and the trend in the H/C2H2 branching ratio as a function of internal energy [26]. As the C2H2 channel was found to be overestimated relative to H loss, we adopt here a procedure similar to that of ref. [27], consisting in slightly modifying the C-H and H-H interaction potentials by scaling the initial values of the parametrized atomic integrals $\langle \phi_\mu^{C,H} \mid \hat{h}[\rho^0] \mid \phi_\nu^{H} \rangle$ and $\langle \phi_\mu^{C,H} \mid \phi_\nu^{H} \rangle$ by 0.95 (the $\phi_{\mu,\nu}$ are the atomic orbitals and $\hat{h}[\rho^0]$ refers to the monoelectronic Hamiltonian at the reference density). Using this modified potential, we performed hundreds of simulations at the lowest energy values to observe dissociation within a reasonable running time.

The fragmentation of PAHs can involve H and H2 loss but also carbonaceous fragments, in general C2H2 but in some cases also C4H2 or CH3 (in the case of hydrogenated PAHs) [15]. The mass spectra measured with the PIRENEA setup following irradiation with the Xe lamp of the isolated 12C isotopomers of three medium-sized molecules are shown in Fig. 3. The less compact structure tetracene, C18H12+, shows the most intense peak for the C2H2 fragment, followed by perylene, C20H12+. The compact ion coronene, C24H12+, does not exhibit C2H2 loss, neither in PIRENEA nor in SOLEIL experiments at any VUV photon energy up to 20 eV [23]. A similar result was obtained for the larger molecules dicoronylene, C48H20+, and the less compact ion dibenzo[fg,ij]phenanthro[9,10,1,2,3-pqrst]pentaphene, C36H18+, whose structures are shown in Fig. 4.

Figure 5. Fragmentation branching ratio (BR) for the perylene cation recorded at SOLEIL as a function of the VUV photon energy (first campaign, [23]). Also reported is the C2H2/(H+H2) BR derived from the PIRENEA experiments (cf. Fig. 3).
We report in Fig. 5 the evolution of the fragmentation branching ratio (BR) with the energy of the absorbed VUV photon for perylene, C20H12+, up to 12 eV. In this range, the C2H2/(H+H2) BR does not vary with energy, whereas the opening of the 2H/H2 channel(s) proceeds.
At higher energies, sequential fragmentation is observed and the results are more difficult to interpret. The value of BR recorded at SOLEIL is also consistent with the PIRENEA value, showing that it does not depend on the excitation scheme: multiple UV-visible photon absorption for PIRENEA compared to single VUV photon absorption at SOLEIL.
The above results appear in line with a statistical fragmentation of the hot ion. It is therefore of interest to compare them with the results of MD/SCC-DFTB calculations. A total energy of 24.8 eV is necessary to observe a reasonable number of fragmentation events for the perylene cation during the simulation time (1620 simulations of 500 ps). Averaging over all simulations, losses of H, H2 and C2H2 were observed with BRs of 3.7, 0.6 and 0.7%, respectively. In the case of the coronene cation at the same energy of 0.275 eV per mode (i.e. 28.1 eV of internal energy), the H, H2 and C2H2 losses were observed with BRs of 6.2, 0.6 and 0.5%, respectively. The values of 0.08 and 0.07 derived for the C2H2/H and C2H2/(H+H2) BRs are therefore significantly lower than those for the perylene cation, at 0.19 and 0.16, respectively. These simulations show the right trends compared to the experiments. Indeed, one should consider that, even with such a large number of simulations, statistical convergence is probably not reached, as the number of events at the dissociation energy threshold remains small. MD simulations can also be used to obtain structural information along the dynamics, which is not accessible in our experiments but can provide interesting insights into dissociation pathways and final products [28]. In particular, it is interesting to investigate the role of isomerization upon dissociation [29]. This is ongoing work which is not discussed in this article.

Dissociation rates

The results of the model by Montillaud et al. [5] rely on the dissociation rate for H loss from the coronene cation, which was obtained by fitting the experimental data of the PIRENEA experiment with a kinetic Monte Carlo model, assuming an activation energy of 4.8 eV based on DFT calculations. This rate is compared in Fig. 6 with the more recent rate derived from iPEPICO measurements [15]. J. R. Barker showed that it is possible to describe the RRKM rate with a simple analytical expression derived from the Laplace transform inversion of the Arrhenius thermal molecular rate [30]. The two adjustable parameters are the activation energy E0 and a pre-exponential factor A in the equation:

\[ k(U) = A\,\frac{\rho(U - E_0)}{\rho(U)}, \qquad (1) \]

where ρ is the density of vibrational states (DoS) of the parent ion and U its internal energy. We computed the DoS using the Beyer & Swinehart algorithm [31] and the list of harmonic vibrational modes in the theoretical spectral database of PAHs [32]. We then fitted the dissociation rate of the coronene cation from [15] using the mean value of 4.41 eV reported by the authors for E0 and adjusting the value of A to 3.05 × 10^15 s^-1.
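Both the Beyer & Swinehart direct count and the rate expression of Eq. 1 are straightforward to implement. The sketch below is a minimal illustration with an arbitrary toy list of harmonic modes, not the actual mode set from the spectral database [32]; energies are binned in cm^-1.

```python
import numpy as np

def beyer_swinehart(freqs_cm, e_max_cm, de_cm=10):
    """Direct count of the harmonic vibrational density of states
    (Beyer-Swinehart algorithm) on a grid of bin width de_cm."""
    n_bins = int(e_max_cm / de_cm) + 1
    dos = np.zeros(n_bins)
    dos[0] = 1.0                       # ground state
    for f in freqs_cm:
        step = max(1, int(round(f / de_cm)))
        for i in range(step, n_bins):  # fold each mode into the count
            dos[i] += dos[i - step]
    return dos / de_cm                 # states per cm^-1

EV_TO_CM = 8065.54                     # 1 eV in cm^-1

# toy harmonic mode list (cm^-1); a real PAH would use all 3N-6 modes
freqs = [400, 600, 750, 900, 1100, 1300, 1450, 1600, 3050, 3060]

E0 = 4.41 * EV_TO_CM                   # activation energy for H loss
A = 3.05e15                            # fitted pre-exponential factor [s^-1]
U = 10.0 * EV_TO_CM                    # internal energy of the hot cation

rho = beyer_swinehart(freqs, U + 1, de_cm=10)
k = A * rho[int((U - E0) / 10)] / rho[int(U / 10)]   # Eq. 1
print(f"k(U = 10 eV) ~ {k:.3e} s^-1")
```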
It is now of interest to test whether such rates can be extrapolated to large PAH sizes. West et al. derived that the value of E 0 = 4.41 eV is typical for the loss of the first H in the studied PAHs up to 24 carbon atoms [15]. Assuming that A is independent of size, we can then calculate the dissociation rate of the dicoronylene cation, C 48 H + 20 , using Eq. 1. The result, shown in Fig. 6, indicates that C 48 H + 20 is not expected to lose H below 17 − 18 eV. This prediction can be tested with our SOLEIL data.
Figure 7 (upper panel) compares the different photoproducts observed for coronene [23] and dicoronylene cations. For the latter, ionization leading to the dication largely dominates, whereas there is more competition with hydrogen loss in the case of coronene. Zooming on the dicoronylene-H fragment channel (lower panel in Fig. 7), one can notice that the signal, although weak, starts to increase at around 12 eV, which strongly disagrees with the predicted dissociation rate (cf. Fig. 6).
One could think of improper cooling of a small population of ions in the trap, possibly related to the sequential absorption of VUV photons over a short enough period so that these ions had not enough time to cool before the absorption of another VUV photon. However, a closer inspection of our data led us to notice that the fragmentation curve follows very closely the ionization curve until both curves split at ∼ 17 eV (lower panel in Fig. 7). This suggests that both channels are strictly in competition below 17 eV, which can be rationalized if dissociation is driven by electronic excited states. Thermal dissociation from the hot ground state of the cation would then start where both curves deviate at ∼ 17 eV, in reasonable agreement with the calculated dissociation rate. All this points to the occurrence of non-statistical dissociation through the same intermediate states that lead to ionization.
This non-statistical dissociation process is different from the one that has been reported for collisions with atoms at center-of-mass energies from a few tens to a few hundreds of eV [33]. In that case, specific non-statistical fragments are observed. On the contrary, in our experiments with VUV photons we observe a direct H loss at energies below those expected for statistical fragmentation following internal conversion. Direct dissociation in the excited states in a non-statistical process was reported earlier for large molecules, namely protonated peptides [34] and amino-acids [35], but never for PAHs.
This work, combined with a previous study on the hexa-peri-hexabenzocoronene cation, C 42 H + 18 [36], shows that ionization is by far the dominant process in the processing of large PAHs by VUV photons. In addition, our SOLEIL experiments suggest that excited electronic states could play a role in dissociation. So far, only statistical (thermal) dissociation has been considered in astronomical models, which requires energies over 13.6 eV for large PAHs and therefore rare multiple absorption events [5].
On the other hand, although of very low occurrence relative to ionization, dissociation from excited electronic states can proceed from the absorption of a single VUV photon of energy less than 13.6 eV. It can therefore be very competitive relative to a multiple absorption process.
Conclusion
The study of the photophysics and stability of PAHs in astrophysical environments has motivated a wealth of studies involving a large variety of experimental setups to address the multiscale dynamics of these systems. Major progress has been achieved, but quantitative studies should now be extended to larger species. We discussed here some results obtained in ion traps for species containing up to ∼ 50 carbon atoms and came to the following conclusions. The loss of C 2 H 2 is not expected to occur in the fragmentation of these large PAHs. Ionization largely dominates fragmentation, and this competition has to be better evaluated before being used in astrophysical models. In addition, the possibility of dissociation at energies below 13.6 eV, as suggested by our SOLEIL experiments on the dicoronylene cation, has to be further investigated, in particular by probing the relaxation of excited electronic states with ultra-fast diagnostics [11].
Potential Role of Domains Rearranged Methyltransferase7 in Starch and Chlorophyll Metabolism to Regulate Leaf Senescence in Tomato
Deoxyribonucleic acid (DNA) methylation is an important epigenetic mark involved in diverse biological processes. Here, we report the critical function of tomato (Solanum lycopersicum) Domains Rearranged Methyltransferase7 (SlDRM7) in plant growth and development, especially in leaf interveinal chlorosis and senescence. Using hairpin RNA-mediated RNA interference (RNAi), we generated SlDRM7-RNAi lines and observed pleiotropic developmental defects, including small leaves with interveinal chlorosis. Combined analyses of whole-genome bisulfite sequencing (WGBS) and RNA-seq revealed that silencing of SlDRM7 caused alterations in both the methylation levels and the transcript levels of 289 genes, which are involved in chlorophyll synthesis, photosynthesis, and starch degradation. Furthermore, the photosynthetic capacity decreased in SlDRM7-RNAi lines, consistent with the reduced chlorophyll content and the repression of genes involved in chlorophyll biosynthesis, the photosystems, and photosynthesis. In contrast, starch granules accumulated to high levels in chloroplasts of SlDRM7-RNAi lines, in association with lowered expression of genes in the starch degradation pathway. In addition, SlDRM7 was activated by aging- and dark-induced senescence. Collectively, these results demonstrate that SlDRM7 acts as an epi-regulator to modulate the expression of genes related to starch and chlorophyll metabolism, thereby affecting leaf chlorosis and senescence in tomatoes.
INTRODUCTION
Leaf senescence, the final stage of leaf development prior to its death, is a genetically programmed degenerative process, which is accompanied by massive macromolecular catabolism and nutrient recycling to young or storage tissues (Gan and Amasino, 1997;Guo and Gan, 2005). Characterized by leaf chlorosis due to chlorophyll loss, leaf senescence mainly results from age-dependent internal factors, and it can also be triggered by a range of other internal and external cues, including reproduction, phytohormone levels, nutritional signals, water status, light regimes, temperature change, mechanical damage and pathogen attack (Lim et al., 2007a). Although these senescence-influencing factors induce apparently similar phenotypes, the initiation and subsequent processes of senescence are controlled by different molecular modes at multiple regulatory levels (Schippers, 2015;Luo et al., 2018). At the transcriptional and post-transcriptional levels, well-established senescence markers include chlorophyll content (Rossi et al., 2015), photochemical efficiency (Guo and Gan, 2005), starch metabolism (Caspar et al., 1985;Zeeman et al., 1998;Zeeman and Rees, 1999), and the expression of senescence-associated genes (SAGs) (Lim et al., 2007a). Various transcription factors, such as those belonging to the NAC, WRKY, and MYB families (Robatzek and Somssich, 2002;Miao et al., 2004;Ay et al., 2009;Balazadeh et al., 2010;Yang et al., 2014;Lira et al., 2017;Ma et al., 2018;Ma X. et al., 2019;Woo et al., 2019;Jin et al., 2020), have been identified to modulate leaf senescence by activating the expression of downstream SAGs, including chlorophyll catabolic genes (CCGs) for chlorophyll degradation (Balazadeh et al., 2010;Woo et al., 2019). In addition, histone modification and chromatin remodeling have been found to regulate the expression of certain SAGs at the transcriptional level (Lim et al., 2007b;Ay et al., 2009;Chen et al., 2016;Liu et al., 2019), implying a critical role of epigenetic control over leaf senescence.
The main form of conserved epigenetic modification, DNA methylation, often occurs at the fifth carbon of the cytosine base ( m C) and plays roles in genome stability and gene expression (Finnegan and Kovac, 2000;Robertson, 2005). In plants, DNA methylation involves an RNA-directed DNA methylation (RdDM) pathway of m C establishment in CG, CHG, and CHH (where H is A, C, or T) contexts, and m C maintenance (Zhang et al., 2006;Henderson and Jacobsen, 2007;Law and Jacobsen, 2010). The dynamics of DNA methylation are regulated by DNA methyltransferases and DNA demethylases (Finnegan et al., 1996;Penterman et al., 2007;Lei et al., 2015;Zhang et al., 2018;Liu and Lang, 2020). Plants encode DOMAINS REARRANGED METHYLTRANSFERASE (DRM), METHYLTRANSFERASE (MET), and CHROMOMETHYLASE (CMT) to establish and maintain m C through distinct pathways (Finnegan and Dennis, 1993;Henikoff and Comai, 1998;Cao and Jacobsen, 2002a). In Arabidopsis (Arabidopsis thaliana), MET1 prefers symmetric CG sites, while CMT2 chooses asymmetric CHH sites; CMT3 contributes to maintaining m CHG and, to a lesser extent, m CHH; DRM2 establishes de novo RdDM and also participates in maintaining m CHH (Chan et al., 2005;Matzke and Mosher, 2014;Zhang et al., 2018). Despite the dynamic alterations of global m C during vegetative and reproductive growth, DNA methylation plays an important role in regulating plant development, reproduction, and responses to biotic and abiotic stresses (Ronemus et al., 1996;Chinnusamy and Zhu, 2009;Slotkin et al., 2009;Dowen et al., 2012;Ay et al., 2014). Defects in RdDM and m C maintenance lead to many phenotypic and developmental abnormalities, including reduced apical dominance, smaller plant size, altered leaf size with curly shape, decreased fertility, and varied flowering time (Finnegan et al., 1996;Kankel et al., 2003;Yang et al., 2019). Although neither the cmt3 mutants (Lindroth et al., 2001) nor the drm1 drm2 double mutants (Cao and Jacobsen, 2002b) show morphological differences from wild type (WT), drm1 drm2 cmt3 plants show pleiotropic phenotypes including developmental retardation, reduced plant size, and partial sterility (Cao and Jacobsen, 2002a). In Oryza sativa, targeted disruption of OsDRM2, which caused a 13.9% decrease in genome-wide m C, led to defects in both vegetative and reproductive development, including growth defects, semi-dwarfed stature, reduced tiller number, delayed or no heading, aberrant panicle and spikelet morphology, and complete sterility (Moritoh et al., 2012).
However, limited progress has been made toward elucidating the involvement of DNA methylation in plant aging and senescence. For instance, the transcript levels of MET1, CMT3, DRM1, and DRM2 have been shown to be repressed during leaf senescence (Cao and Jacobsen, 2002a;Jackson et al., 2002;Kankel et al., 2003;Law and Jacobsen, 2010). Additionally, the expression of 16 methylation-associated genes, including MET1, REPRESSOR OF SILENCING 1 (ROS1), and ARGONAUTE 10 (AGO10), was significantly downregulated in aging Arabidopsis leaves (Ay et al., 2014). Recently, an Arabidopsis dml3 (DEMETER-Like DNA demethylase3) knockout (KO) mutant was shown to exhibit genome-wide hypermethylation, especially in the promoters of many SAGs, whose expression is consequently suppressed, leading to a significant delay in leaf senescence (Yuan et al., 2020). In fact, global m C decreases dynamically during shoot aging (Ogneva et al., 2016). However, the precise relevance of such epi-modifications in controlling leaf senescence, and the underlying mechanisms, are still largely unknown. Here, we report that SlDRM7 (Solyc04g005250.2) impacts chloroplast development by modulating starch accumulation and senescence-related chlorophyll synthesis, and imposes epi-effects on leaf senescence that affect vegetative growth in tomatoes.
Plant Materials and Growth Conditions
Wild-type tomato (Solanum lycopersicum) cultivar Ailsa Craig (AC) and the SlDRM7-RNAi lines (AC background) generated in this study were used. Tomato seeds were either germinated directly in compost (Sunshine Mix 3, Sungro Horticulture Canada) or surface-sterilized and germinated on 1/2 Murashige and Skoog (MS) medium for 6 days before being transferred to 1/5 Hoagland solution (pH 5.5) for hydroponic growth. Five-week-old seedlings were then transferred to compost and grown in insect-free growth rooms or greenhouses at 25 °C under a 16-h-light/8-h-dark cycle with a humidity of 60 to 80% (Chen et al., 2018a).
RNA Interference Constructs, CRISPR/Cas9 Gene Editing, and Tomato Transformation
The SlDRM7 RNAi vector pRNAi-SlDRM7 was constructed as described (Chen et al., 2018b;Yao et al., 2020). A 230-bp SlDRM7 fragment was PCR-amplified using tomato cDNA as a template and cloned in the sense and antisense orientations into the pRNAi-LIC vector (Chen et al., 2018b). A pair of 20-bp sgRNA oligos targeting the exon of SlDRM7 was cloned into a plant CRISPR/Cas9 vector to produce the SlDRM7 gene-editing construct. Tomato transformation was performed as previously described (Yao et al., 2020). Briefly, the construct was transformed into tomato cotyledons by Agrobacterium tumefaciens strain GV3101 to induce shoots under kanamycin selection (Dobrev and Kaminek, 2002). Regenerated shoots 3 to 4 cm in length were cut from independent calli and transferred to rooting medium for root development (Kit et al., 2010). To confirm stable transformation events, putatively transformed plantlets with well-developed roots were subjected to molecular analyses by genomic PCR and RT-qPCR. Primers used for making these constructs are listed in Supplementary Table 1.
Statistical Analysis of Morphological Features
To differentiate WT and SlDRM7-RNAi tomato plants, at least 10 leaflets of each line were scanned for measurement of leaf area using ImageJ software. All images showing phenotypes were captured with a Canon digital camera.
Photosynthetic Pigment Quantification and Confocal Microscopy of Chlorophyll Auto-Fluorescence
About 0.02 g of fresh leaf tissue collected from the 2nd leaf of six-leaf-stage tomato seedlings was immersed in 10 ml of 80% (v/v) acetone in the dark for 24 h until the leaves were completely bleached. The absorbance of the supernatant was measured at 645, 663, and 470 nm, and the chlorophyll a/b and carotenoid contents were calculated as described (Arnon, 1949). Three biological replicates and three technical replicates were measured for each leaf sample. To determine chlorophyll auto-fluorescence, the 2nd leaf of six-leaf-stage tomato seedlings was examined with an LSM710nlo laser scanning confocal microscope (Zeiss, Germany).
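As a worked example of the pigment calculation, the sketch below applies the classical Arnon (1949) chlorophyll equations for 80% acetone extracts. The carotenoid expression follows a commonly used Lichtenthaler-type formula and is an assumption here, as are the absorbance values.

```python
def pigments_mg_per_g_fw(A645, A663, A470, vol_ml=10.0, fw_g=0.02):
    """Chlorophyll a/b via Arnon (1949) for 80% (v/v) acetone (mg/l);
    carotenoids via a Lichtenthaler-type formula (an assumption);
    results scaled to mg per g fresh weight."""
    chl_a = 12.7 * A663 - 2.69 * A645
    chl_b = 22.9 * A645 - 4.68 * A663
    car = (1000.0 * A470 - 3.27 * chl_a - 104.0 * chl_b) / 229.0
    scale = (vol_ml / 1000.0) / fw_g          # mg/l -> mg per g FW
    return chl_a * scale, chl_b * scale, car * scale

# hypothetical absorbances for one extract
print(pigments_mg_per_g_fw(A645=0.35, A663=0.95, A470=0.60))
```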
Photosynthetic Measurements
The Li-6400 portable photosynthesis system (LI-COR, Lincoln, NE, United States) was used to measure the photosynthetic physiological indexes of the 2nd leaf of six-leaf-stage tomato seedlings. The reference CO 2 concentration was held at 480 µmol mol −1 and the leaf temperature at 25 °C for all measurements. Air humidity inside the leaf chamber was equivalent to values measured inside the greenhouse (approx. 75%). The light and CO 2 response curves were measured; for the light response curve, the light intensity was varied from 0 to 2,500 µmol · m −2 · s −1 . Net photosynthesis rate (Pn), respiration rate, transpiration rate (Tr), stomatal conductance (Gs), and intercellular CO 2 concentration (Ci) were measured at 2,000 µmol · m −2 · s −1 . Five replicates were measured at each light intensity. The light response curves were fitted with a non-rectangular hyperbola model (Thornley, 1976):
An(I) = [αI + Pmax − √((αI + Pmax)² − 4θαI Pmax)] / (2θ) − Rd
where An is the net photosynthesis rate (Pn), I the light intensity, Pmax the light-saturated gross photosynthetic rate, α the initial photochemical efficiency (the initial slope of the curve), θ the curvature factor, and Rd the dark respiration rate.
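The light-response fit can be reproduced with a standard least-squares routine. The sketch below uses SciPy's curve_fit on hypothetical data points, so the fitted values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def nrh(I, alpha, Pmax, theta, Rd):
    """Non-rectangular hyperbola (Thornley, 1976): the gross rate is the
    lower root of theta*P^2 - (alpha*I + Pmax)*P + alpha*I*Pmax = 0."""
    s = alpha * I + Pmax
    P = (s - np.sqrt(s**2 - 4.0 * theta * alpha * I * Pmax)) / (2.0 * theta)
    return P - Rd                                  # net assimilation An

# hypothetical light-response data (umol m-2 s-1); replace with measurements
I_obs  = np.array([0, 50, 100, 250, 500, 1000, 1500, 2000, 2500], float)
An_obs = np.array([-1.5, 1.0, 3.5, 8.0, 13.0, 18.0, 20.0, 21.0, 21.3])

popt, _ = curve_fit(nrh, I_obs, An_obs, p0=[0.05, 25.0, 0.7, 1.5],
                    bounds=([0, 0, 0.01, 0], [1, 100, 1, 10]))
alpha, Pmax, theta, Rd = popt
print(f"alpha={alpha:.3f}, Pmax={Pmax:.1f}, theta={theta:.2f}, Rd={Rd:.1f}")
```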
Transmission Electron Microscopy
Leaf pieces of about 1 × 3 mm were cut from seedlings at the six-leaf stage. Natural senescence leaf samples were collected from WT. Leaf samples were first fixed with 2.5% glutaraldehyde in phosphate buffer (0.1 M, pH 7) for more than 4 h, washed three times in the phosphate buffer for 15 min each, then post-fixed with 1% OsO 4 in phosphate buffer for 1-2 h, and washed three times in the phosphate buffer. Samples were dehydrated in a graded ethanol series and then in pure acetone. Next, the specimens were placed in a 1:1 mixture of absolute acetone and the final Spurr resin mixture for 1 h at room temperature, transferred to a 1:3 mixture of absolute acetone and the final resin mixture for 3 h, and then transferred to the final Spurr resin mixture overnight. The specimens were placed in Eppendorf tubes containing Spurr resin and heated at 70 °C for 9 h. Specimens were sectioned on a LEICA EM UC7 ultramicrotome, and sections were stained with uranyl acetate and alkaline lead citrate for 5 to 10 min each and observed under a Hitachi Model H-7650 TEM (Hitachi, Japan).
Histological Detection of Starch and Measurements of Starch Content
Leaves were treated with 80% (v/v) ethanol to remove chlorophylls and stained with Lugol's iodine solution to detect starch distribution (Tsai et al., 2009). Images were captured with a Canon digital camera.
Starch content was measured following the method described by Clegg (1956). Briefly, 0.02 g of fresh leaves was ground in liquid nitrogen. The leaf powder was mixed with 80% ethanol and incubated at 60 °C for 20 min. After centrifugation at 4,000 rpm for 5 min, the supernatant was discarded. The pellet was then resuspended in 3 ml of ddH 2 O and 2 ml of 9.2 M perchloric acid and incubated at 100 °C for 10 min, followed by centrifugation at 4,000 rpm for 10 min. This step was repeated three times and the supernatants were pooled. Five milliliters of anthrone reagent were added to a 0.1 ml aliquot of the extract for glucose measurement. The intensity of the color formed was measured at 620 nm after heating in a boiling water bath for 10 min and rapid cooling. The glucose concentration was estimated using a standard curve prepared from different glucose concentrations. Since 0.9 g of starch yields approximately 1 g of glucose on hydrolysis, a conversion factor of 0.9 was used to convert glucose to starch.
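A minimal sketch of this calculation, assuming the glucose standards are read in the same aliquot volume as the samples so the curve maps A620 directly to mg glucose per ml of extract; all numbers are hypothetical.

```python
import numpy as np

# hypothetical glucose standard curve: A620 vs glucose (mg/ml)
std_conc = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
std_abs  = np.array([0.00, 0.11, 0.23, 0.34, 0.45, 0.57])
slope, intercept = np.polyfit(std_abs, std_conc, 1)   # invert the curve

def starch_mg_per_g_fw(abs_620, extract_vol_ml, fw_g):
    """Starch per g fresh weight from anthrone absorbance;
    0.9 g starch ~ 1 g glucose on hydrolysis -> factor 0.9."""
    glucose_mg_per_ml = slope * abs_620 + intercept
    return 0.9 * glucose_mg_per_ml * extract_vol_ml / fw_g

# e.g. A620 = 0.30 for a 15 ml pooled extract from 0.02 g of leaf tissue
print(starch_mg_per_g_fw(0.30, extract_vol_ml=15.0, fw_g=0.02))
```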
Total RNA Extraction and Quantitative RT-PCR Analyses
Total RNA was extracted from the 2nd leaf of six-leaf-stage tomato seedlings using the RNAprep Pure Plant Kit (Tiangen). Quantitative real-time PCR (RT-qPCR) was then carried out on a LightCycler480 machine (Roche Diagnostics, Switzerland) using SYBR Premix GoTaq with a CFX96 TM Real-Time System (Bio-Rad, United States). The relative expression level of genes was calculated using the 2 −ΔΔCt method and normalized to the amount of Actin mRNA detected in the same samples. At least three technical replicates for each of three biological replicates per sample were performed in this study. All primers used for real-time PCR analysis are listed in Supplementary Table 1.
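The 2^-ΔΔCt computation itself is a one-liner; a minimal sketch with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """2^-ddCt relative to a reference sample (e.g. WT), normalized to Actin."""
    dct_sample = ct_target - ct_actin
    dct_ref = ct_target_ref - ct_actin_ref
    return 2.0 ** -(dct_sample - dct_ref)

# hypothetical Ct values: SlDRM7 in an RNAi line vs WT
print(relative_expression(ct_target=26.4, ct_actin=18.1,
                          ct_target_ref=24.6, ct_actin_ref=18.0))
# ~0.31, i.e. roughly a 70% knockdown in this made-up example
```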
Deoxyribonucleic Acid Extraction and Whole-Genome Bisulfite Sequencing
Genomic DNA was isolated from the 2nd leaf of six-leaf-stage tomato seedlings harvested from WT or SlDRM7-RNAi lines using the DNeasy Plant Mini Kit (Qiagen). Two biological replicates were performed for each sample. About 100 ng of genomic DNA spiked with 0.5 ng of lambda DNA was fragmented by sonication to 200-300 bp with a Covaris S220. These DNA fragments were treated with bisulfite using the EZ DNA Methylation-Gold TM Kit (Zymo Research). The library was constructed by Novogene Corporation (Beijing, China) and sequenced on the Illumina Novaseq platform (United States). Image analysis and base calling were performed with the Illumina CASAVA pipeline, finally generating 150-bp paired-end reads. FastQC (v0.11.5) was used to perform basic statistics on the quality of the raw reads, which were pre-processed with fastp (v0.20.0). The remaining reads that passed all the filtering steps, counted as clean reads, were mapped to the reference tomato genome with BSMAP. The tomato genome FASTA was obtained from Ensembl Plants. The reference genome and clean reads were transformed into bisulfite-converted versions (C-to-T and G-to-A converted) and then indexed using bowtie2 (Langmead and Salzberg, 2012). Sequence reads producing a unique best alignment were then compared to the normal genomic sequence, and the methylation state of all cytosine positions was inferred. The sequencing depth and coverage were summarized using deduplicated reads.
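The in-silico conversion step is simple string replacement; the sketch below shows the two converted versions used to build the alignment index.

```python
def bisulfite_convert(seq, strand="CT"):
    """In-silico bisulfite conversion for index building:
    C->T for the original top strand, G->A for the bottom strand."""
    seq = seq.upper()
    return seq.replace("C", "T") if strand == "CT" else seq.replace("G", "A")

print(bisulfite_convert("ATCGGCTA"))          # ATTGGTTA
print(bisulfite_convert("ATCGGCTA", "GA"))    # ATCAACTA
```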
Methylation Analysis
Results of the methylation extractor were transformed into bigWig format for visualization in the IGV browser. The sodium bisulfite non-conversion rate was calculated as the percentage of cytosines sequenced at cytosine reference positions in the lambda genome. Methylated sites were identified with a binomial test using the methylated counts (mC), total counts (mC + unmC), and the non-conversion rate (r). Sites with an FDR-corrected P-value < 0.05 were considered methylated sites. The methylation level (ML) of a sequence was defined as ML(C) = reads(mC) / (reads(mC) + reads(C)).
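The per-site call can be sketched as a one-sided binomial test against the lambda-derived non-conversion rate, followed by Benjamini-Hochberg FDR correction. The read counts below are hypothetical, and the helper assumes SciPy and statsmodels are available.

```python
import numpy as np
from scipy.stats import binom
from statsmodels.stats.multitest import multipletests

def call_methylated(mC, total, nonconv_rate, alpha=0.05):
    """Is the methylated read count higher than expected from
    bisulfite non-conversion alone? BH-FDR corrected per site."""
    pvals = binom.sf(mC - 1, total, nonconv_rate)     # P(X >= mC)
    reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    ml = mC / total                                    # methylation level
    return reject, qvals, ml

mC    = np.array([9, 1, 14, 0])
total = np.array([10, 12, 20, 15])
print(call_methylated(mC, total, nonconv_rate=0.005))
```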
Differentially methylated regions (DMRs) were identified using DSS, which applies a dispersion shrinkage method for estimating the dispersion parameter. Based on the distribution of DMRs across the genome, we defined DMR-related genes as genes whose gene body region (from TSS to TES) or promoter region (2 kb upstream of the TSS) overlaps a DMR. P-values less than 0.05 were considered significant in the enrichment analysis of DMR-related genes.
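A naive sketch of the DMR-to-gene assignment (gene body TSS-TES, or a strand-aware 2-kb promoter window). Coordinates and identifiers are hypothetical, and a real analysis would use an interval tree or bedtools rather than this double loop.

```python
def dmr_related_genes(dmrs, genes, promoter_bp=2000):
    """dmrs: (chrom, start, end); genes: (id, chrom, tss, tes, strand).
    A gene is DMR-related if its body or promoter overlaps any DMR."""
    hits = set()
    for chrom, d_start, d_end in dmrs:
        for gid, g_chrom, tss, tes, strand in genes:
            if g_chrom != chrom:
                continue
            body = (min(tss, tes), max(tss, tes))
            prom = (tss - promoter_bp, tss) if strand == "+" else (tss, tss + promoter_bp)
            for lo, hi in (body, prom):
                if d_start <= hi and d_end >= lo:
                    hits.add(gid)
                    break
    return hits

dmrs  = [("ch04", 1200, 1450)]
genes = [("Solyc04g005250.2", "ch04", 3000, 8000, "+")]   # promoter spans 1000-3000
print(dmr_related_genes(dmrs, genes))                     # overlaps the promoter
```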
RNA-Seq and Data Analysis
Total RNA was isolated from the same leaf materials as used for the DNA extraction. Three biological replicates were performed for each sample. Five micrograms of pooled RNA were used for the RNA-seq library on the Illumina Genome Analyzer (Solexa, United States). The sequencing data were filtered with SOAPnuke (v1.5.2). Low-quality reads were removed from the raw data, and high-quality reads were aligned to the tomato genome (version SL2.50). Differential expression analysis was performed using DESeq2 (Love et al., 2014) with a corrected P-value (q value) < 0.05. To gain insight into the phenotypic changes, GO and KEGG enrichment analyses of annotated differentially expressed genes were performed with Phyper based on the hypergeometric test. The significance levels of terms and pathways were corrected by q value with a rigorous threshold (q value < 0.05) using the Bonferroni method.
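The hypergeometric (phyper-style) test behind the enrichment analysis can be written directly with SciPy; the counts below are illustrative only.

```python
from scipy.stats import hypergeom

def enrichment_p(k, n, K, N):
    """P(overlap >= k) when drawing n DEGs from a genome of N genes,
    of which K are annotated to the term (one-sided hypergeometric,
    i.e. the 'phyper' upper tail)."""
    return hypergeom.sf(k - 1, N, K, n)

# e.g. 26 of 289 meth-DEGs fall in a pathway with 400 members,
# out of ~34,000 annotated tomato genes (illustrative numbers)
print(enrichment_p(k=26, n=289, K=400, N=34000))
```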
Statistical Analysis
All data in this study were presented as mean ± standard deviation (SD). Student's t-test or One-Way ANOVA followed by multiple comparison (Tukey's HSD, P ≤ 0.05) was performed to analyze the significant difference between genotypes.
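For reference, the Tukey HSD comparison can be run with statsmodels; the values below are hypothetical chlorophyll measurements, not data from this study.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical chlorophyll a values (mg/g FW), three replicates per genotype
values = np.array([1.8, 1.9, 1.7,    # WT
                   1.8, 1.9, 1.8,    # drm7i_ns-1
                   0.9, 1.0, 0.8,    # drm7i-1
                   1.0, 0.9, 1.1])   # drm7i-2
groups = ["WT"]*3 + ["drm7i_ns-1"]*3 + ["drm7i-1"]*3 + ["drm7i-2"]*3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```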
SlDRM7 Is Essential for Tomato Leaf Development and Vegetative Growth
Through a gene-specific RNAi strategy (Figure 1A), we generated two independent tomato SlDRM7-RNAi lines, drm7i-1 and drm7i-2. Compared with WT, SlDRM7-RNAi resulted in leaf interveinal chlorosis and senescence in T0 and T1 to T8 seedlings of the two independent drm7i lines (Figures 1B-I). The chlorotic phenotype, along with antibiotic resistance, exhibited Mendelian segregation, displaying an approx. 2:1 ratio of chlorotic to normal green leaves among viable seedlings in T1 and continuing to segregate in T2 to T8 generations of both lines, while seedlings without the transgene after segregation in T1 to T8 reverted to green and were sensitive to antibiotic selection (Figure 1B). Intriguingly, no homozygous transgenic plants were obtained for either SlDRM7-RNAi line. These data indicate that the dominant leaf chlorotic phenotype is genetically linked to the presence of a single copy of the pRNAi-SlDRM7 transgene, presumably resulting from RNAi-mediated specific suppression of SlDRM7 expression rather than a non-specific off-target effect in heterozygous plants of the two SlDRM7-RNAi lines. Homozygous SlDRM7-RNAi plants may not survive (due to extreme senescence). Considering the critical role of SlDRM7 in de novo RdDM, it is possible that SlDRM7-RNAi imposes some epigenetic remodeling of its target genes required for proper leaf development, and that such epigenetic remodeling controlling leaf chlorosis is not transgenerationally heritable but relies on constant RNAi of SlDRM7.
To investigate the genetic bridge between genotype and phenotype in drm7i lines, we used drm7i_ns-1 and drm7i_ns-2, which had reverted to WT non-segregating (ns) green leaves after segregation, as negative controls. Compared to WT and the drm7i_ns lines, the endogenous SlDRM7 expression level was dramatically reduced, by approximately 30-60%, in drm7i lines (Figure 1J). As SlDRM7-RNAi affected vegetative leaf development, slight chlorosis started at the margin of the newly formed leaves of drm7i-1 during the 1st week after transfer into hydroponic culture, became more pronounced by the end of the 2nd week, and gradually spread toward the interveinal regions, resulting in the characteristic phenotype of leaf interveinal chlorosis (Supplementary Figure 1A). We also generated SlDRM7-knockout (KO) lines in which CRISPR/Cas9-induced gene editing resulted in nucleotide deletions in SlDRM7 (Supplementary Figure 1B). To our surprise, no defect in compound leaf development with a chlorotic phenotype was observed in either KO line, in contrast to drm7i-1 (Supplementary Figures 1C-H). Such striking phenotypic differences between SlDRM7-RNAi and SlDRM7-KO lines are not unprecedented for genes essential for development and growth and are often associated with "transcriptional adaptation" or a "genetic compensation response" (El-Brolosy et al., 2019;Ma Z.P. et al., 2019;Wang et al., 2020), which may offset the effects of a completely dysfunctional SlDRM7 gene that has lost its capacity to express SlDRM7 protein in KO lines. To test this possibility, we analyzed the expression of SlDRM5, SlDRM6, and SlDRM8 in KO lines. Compared to SlDRM6 and SlDRM8, SlDRM5 was barely expressed in tomato leaves. While the expression of SlDRM8 was not affected by the knockout of SlDRM7, that of SlDRM6 was significantly induced (Supplementary Figure 2).
In addition, SlDRM7-RNAi also affected leaf size and plant growth at the vegetative stage (Supplementary Figure 3). At the six-leaf stage, we measured the leaf area of the first three compound leaves and found that the average leaflet area was largest in WT, followed by the drm7i_ns lines, and smallest in the drm7i lines (Supplementary Figures 3A-C). When checking the shoot height of seedlings after 4 weeks of hydroponic culture, we found that the average shoot height was about 16.84 (±2.18) cm in WT, 10.85 (±1.81) cm and 10.10 (±1.44) cm in drm7i_ns-1 and drm7i_ns-2, but only 7.72 (±0.99) cm and 8.63 (±0.99) cm in drm7i-1 and drm7i-2, respectively (Supplementary Figures 3D-F), consistent with the stunted growth and dwarf, bushy phenotype of drm7i lines (Figures 1H,I). Phenotypically, the distinguishable differences in leaf size and plant height between WT and drm7i_ns, as well as between drm7i_ns and drm7i, suggest that these phenotypic changes may result from SlDRM7-RNAi-mediated epigenetic modification(s) that can be maintained in the absence of the pRNAi-SlDRM7 trigger.
Taken together, these results demonstrate that the SlDRM7-directed mechanism plays an essential role in tomato vegetative development, especially leaf development and senescence. In this work, using the SlDRM7-RNAi lines drm7i-1 and drm7i-2 and the segregated lines with WT green leaves, drm7i_ns-1 and drm7i_ns-2, we focus on understanding how SlDRM7 governs leaf chlorosis/senescence during vegetative growth.
SlDRM7 Modulates Leaf Senescence and the Expression of Senescence-Associated Genes
The leaf chlorotic phenotype prompted us to investigate the physiological function of SlDRM7 in photosynthetic capacity. At the six-leaf stage, we measured chlorophyll auto-fluorescence, photosynthetic pigment content, and photosynthetic efficiency in the 2nd compound leaves. In yellowing leaf mesophyll (YLM), but not greening leaf mesophyll (GLM), of the two drm7i lines, a significant decrease in chlorophyll auto-fluorescence intensity was observed (Figure 2A). Correspondingly, the contents of chlorophyll a, chlorophyll b, and carotenoid were significantly lower in these two drm7i lines vs. WT or drm7i_ns lines (Figure 2B). Since there was no obvious difference in chlorophyll auto-fluorescence or photosynthetic pigment content among WT and the two drm7i_ns lines (Figures 2A,B), the influence of SlDRM7-RNAi on photosynthetic capacity was studied by comparing the drm7i lines with drm7i_ns-1. Based on the light and CO 2 response curves (Figure 2C), the fitted maximum photosynthetic rate (Pn) of drm7i_ns-1 was 35.53 µmol · m −2 · s −1 , whereas those of drm7i-1 and drm7i-2 were reduced to only 14.34 and 8.86 µmol · m −2 · s −1 , respectively (Figure 2D). Similarly, a decreasing tendency of the respiration rate, stomatal conductance (Gs), and transpiration rate (Tr) was further observed in both drm7i lines (Figure 2D). These data suggest that photosynthetic capacity was inhibited in drm7i vs. drm7i_ns.
The decreases in both chlorophyll content and photosynthetic efficiency led us to ask whether the chlorosis of drm7i lines is associated with premature senescence. Therefore, we examined the expression of SAGs, including SlSAG12, SlSAG13, SlSAG15, SENESCENCE-RELATED GENE1 (SlSRG1), senescence-related transcription factor (TF) genes such as SlORE1S03, SlORE1S06, and SlNAP2, and GOLDEN2-like (SlGLK1), which is related to chlorophyll biosynthesis (Lira et al., 2017;Ma et al., 2018). Except for SlGLK1, the expression of all these genes was upregulated in drm7i lines compared with WT or drm7i_ns lines (Figure 3). Furthermore, we tested SlDRM7 expression during age-dependent and dark-induced senescence in WT plants and found that the transcript level of SlDRM7 was significantly upregulated in senescent leaves induced either naturally or by dark treatment (Supplementary Figure 4). This finding seems contradictory to the genome-wide demethylation observed during plant senescence (Ogneva et al., 2016), suggesting that a putative self-feedback pathway may be involved in regulating senescence by increasing SlDRM7 expression and a subsequently enhanced epi-control. Taken together, these findings reveal that SlDRM7 is required for proper vegetative growth, where SlDRM7 may work as a negative epi-regulator to repress the transcriptional expression of senescence-associated genes, sustaining photosynthetic capacity and consequently inhibiting the initiation of leaf senescence in tomatoes.
[Figure 1 caption fragment: (B) Segregation in T1 to T8 generations of SlDRM7-RNAi lines; the numerator gives the number of progenies with interveinal chlorosis/senescence leaves and kanamycin resistance, the denominator the number with normal green leaves and kanamycin sensitivity. (C-I) Segregation of leaf chlorosis in SlDRM7-RNAi lines: WT seedlings display normal green leaves (C); progenies of SlDRM7-RNAi lines display either normal WT green leaves, called drm7i_ns-1 (D) and drm7i_ns-2 (E), or maintain interveinal chlorosis/senescence leaves, called drm7i-1 (F,H) and drm7i-2 (G,I). Seeds were spread and germinated directly in compost; seedlings were photographed 10 days after germination in (C-G), bars = 1 cm, and 6 weeks after germination in (H,I), bars = 5 cm. (J) RT-qPCR analysis of relative SlDRM7 expression in leaves of WT and SlDRM7-RNAi lines at the six-leaf stage; data are means ± SD of five biological replicates; asterisks indicate significant differences vs. WT (*P ≤ 0.05, ***P ≤ 0.001, one-way ANOVA, Tukey's HSD); no statistically significant difference was found for WT vs. drm7i_ns-1 or WT vs. drm7i_ns-2.]
Effect of SlDRM7 on Genome-Wide Methylome and Transcriptional Profiling
To elucidate how SlDRM7-RNAi regulates the leaf interveinal chlorosis/senescence, whole-genome bisulfite sequencing (WGBS) was performed on the 2nd leaf collected from WT, drm7i_ns-1, and drm7i-1 seedlings at the six-leaf stage. We observed that SlDRM7-RNAi did not cause any significant alterations in genome-wide m C (Figure 4A; Supplementary Tables 2, 3). However, upon evaluating methylation levels within gene body or promoter regions, we found genome-wide hypermethylation at CHG, CG, and CHH sites in drm7i-1 compared to WT and drm7i_ns-1 (Figure 4B), resulting in a number of differentially methylated regions (DMRs) and differentially methylated genes (DMGs) in drm7i-1. A similar scenario, in which whole-genome methylation was increased by SlMET1 RNAi, has been reported previously (Yao et al., 2020). Generally, most hypermethylated DMRs (hyper-DMRs) occurred at CHH sites, with m CHH-type hyper-DMRs/DMGs accounting for more than half (Figure 4C). Although m CHH-type DMRs were much more abundant than m CG-type or m CHG-type DMRs, their methylation levels were lowest and least influenced by SlDRM7 silencing (Figures 4D,E). In addition, when assessing the DMRs associated with different gene features of DMGs, there were 12 DMGs possessing DMRs in all three contexts located in the gene body, and 4 DMGs in promoters, respectively (Figure 4F).
[Figure 2 caption fragment: (A) Strong chlorophyll auto-fluorescence intensity, with almost no differences, was observed in mesophyll cells from the 2nd compound leaves of wild-type AC (WT), the two drm7i_ns lines, and the greening leaf mesophyll (GLM) cells of the two drm7i lines at the six-leaf stage, but much weaker fluorescence in the yellowing leaf mesophyll (YLM) cells of the two drm7i lines; bars = 20 µm. (B) Content of three photosynthetic pigments in leaf tissues of SlDRM7-RNAi lines: chlorophyll a (chla) and chlorophyll b (chlb) (left panel) and carotenoid (right panel) measured in the 2nd compound leaves of six-leaf-stage seedlings of WT, two drm7i_ns lines, and two drm7i lines. (C,D) Effect of SlDRM7-RNAi on photosynthesis: light and CO 2 response curves (C), and net photosynthetic rate (Pn), respiration rate, stomatal conductance (Gs), and transpiration rate (Tr) (D) of the 2nd compound leaves of six-leaf-stage seedlings of drm7i_ns-1 and the two drm7i lines. Data are means ± SD (n = 3 in panel B, n = 5 in panel D); asterisks indicate significant differences vs. WT or drm7i_ns lines (*P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001, one-way ANOVA, Tukey's HSD).]
Previous studies have suggested that hypermethylation of transposable elements (TEs) is probably responsible for the suppression of active transposons (Nuthikattu et al., 2013;Yang et al., 2019). We therefore further analyzed methylation levels within TEs and their flanking regions. In general, m CG, m CHG, and m CHH all showed higher levels in TEs than in the 2-kb upstream and downstream regions in the tomato genome (Figure 4G), consistent with the methylation patterns in the Arabidopsis and rice genomes (Le et al., 2014;Zhang et al., 2015). However, in contrast to WT and drm7i_ns-1, m CHG and m CG decreased while m CHH increased within TEs of drm7i-1 (Figure 4G). Moreover, 118,720 differentially methylated positions (DMPs) were identified, the majority of which were preferentially hypermethylated in the CHH context, although greater differences in methylation levels were observed in m CG and m CHG than in m CHH (Figures 4H,I).
To understand the biological functions of the DMGs, Gene Ontology (GO) enrichment and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed. The GO enrichment showed that the hyper-DMGs were significantly enriched in the terms "extracellular region" and "catalytic activity," which contained 37 and 565 DMGs, respectively (Supplementary Figure 5A). The KEGG pathway enrichment revealed that abundant hyper-DMGs were mainly assigned to pathways related to "metabolic pathways," "biosynthesis of secondary metabolites," "starch and sucrose metabolism," "linoleic acid metabolism," and "phenylpropanoid biosynthesis," containing 166, 94, 26, 22, 6, and 22 DMGs, respectively (Supplementary Figure 5B). For the hypo-DMGs, neither GO nor KEGG analysis yielded any significantly enriched terms (Supplementary Figures 5C,D).
To further investigate whether SlDRM7-mediated DMRs affect gene expression, RNA-seq was performed using the same leaf samples as for WGBS. From the comparative "drm7i-1 vs. drm7i_ns-1" transcriptomes, 709 upregulated and 968 downregulated differentially expressed genes (DEGs) were identified (Supplementary Figure 6A). However, there were only 289 unique DEGs with either m CG-, m CHG-, or m CHH-type DMRs (Figure 5A), hereafter designated meth-DEGs. Among the meth-DEGs possessing both hyper- and hypo-DMRs, 22 were upregulated, while the other 3 were downregulated (Figure 5B). Most DMRs of meth-DEGs occurred in the CHH context, except that hypo-DMRs of upregulated meth-DEGs dominated in the CG context (Figure 5C). Indeed, hyper-DMRs of upregulated meth-DEGs preferentially occurred in promoters, while the others were predominantly in the gene body (Figure 5D). Apart from m CHH, which showed a moderate increase in drm7i-1, the methylation levels of upregulated meth-DEGs in the CG and CHG contexts were reduced by 22 and 31%, respectively (Figure 5E).
These results establish that different meth-DEGs may be involved in different epi-regulatory modes mediated by the silencing of SlDRM7. It is worth noting that our comparative transcriptional profiling does not reveal any off-target effect on the expression of genes that may be related to SlDRM7 in these RNAi lines.
SlDRM7 Epi-Controls Gene Expression Related to Photosynthesis and Chlorophyll Metabolism
To explore which biological processes the above meth-DEGs participate in, GO enrichment analysis was performed; the 289 meth-DEGs were implicated in "beta-glucosidase activity," "photosystem II," "glucosidase activity," and "oxidoreductase activity" (Supplementary Figure 6B; Supplementary Table 4). Strikingly, 128 downregulated meth-DEGs were significantly enriched in 20 putative pathways, most of which were linked to chloroplast-related cellular components or photosynthesis (Figure 6A; Supplementary Table 5), and 16 of these candidates were further screened by RT-qPCR analysis. Consistent with the RNA-seq data (Supplementary Table 5), the transcript levels of these genes were significantly depressed in drm7i-1 compared with drm7i_ns-1 or WT (Figure 6B). Apart from Solyc03g096850, which encodes a mitochondrial outer membrane transporter protein showing hypomethylation at CHH sites within its promoter, the remaining 15 meth-DEGs possessed hypermethylation at either CG, CHG, or CHH sites within promoter or gene body regions (Figure 6C).
It is known that SlPsaK, SlLHCB4, SlPsbP, SlPsbP-2, SlPsbP-3, and Solyc10g047410.1 encode photosystem family proteins that form the core complexes as well as the light-harvesting complexes (LHCI and LHCII). The light reactions are catalyzed by two photosystems, photosystem I (PS I) and photosystem II (PS II), where light energy is harvested to drive the transfer of electrons from water, via a series of electron donors and acceptors, to the final acceptor NADP + , which is finally reduced to NADPH (Nelson and Yocum, 2006;Jarvi et al., 2015). By multiple sequence alignment with the Arabidopsis genes encoding the core complexes of the light reactions, a total of 88 tomato orthologs were identified, of which 42 genes were expressed in leaves (WT FPKM > 1) (Supplementary Table 6). Compared with drm7i_ns-1, the transcript levels of 19 genes were significantly inhibited in drm7i-1 (Figure 6D; Supplementary Table 6). Since a set of LHCI and LHCII subunits binds to the core complex to capture light and bind chlorophylls (Li et al., 2000;Scheller et al., 2001;Jarvi et al., 2015), we also identified 32 genes encoding LHC proteins in tomato. Compared with WT, the expression of these LHC genes was significantly repressed in drm7i-1, except for Solyc05g056080.2 and Solyc05g056060.2 (Supplementary Figure 6C). Moreover, 3 of the 32 genes, i.e., Solyc08g067320.1, Solyc07g063600.2, and Solyc09g014520.2, were meth-DEGs with m CHH-type hyper-DMRs located in promoter or intron regions (Supplementary Figure 6D).
Considering that chlorophyll content decreased in drm7i leaves (Figure 2B, left panel), we further investigated whether candidates involved in chlorophyll biosynthesis and/or degradation could be identified as meth-DEGs. By screening tomato orthologs linked to chlorophyll metabolism as previously described in Arabidopsis (Kobayashi and Masuda, 2016;Lin et al., 2016) (Supplementary Table 7), the expression levels of most genes involved in chlorophyll biosynthesis were found to be significantly decreased in drm7i-1 (Figure 6E; Supplementary Figure 6E), among which SlPPOX2 and SlPORB were meth-DEGs (Figure 6F). On the other hand, another 6 meth-DEGs were related to chlorophyll degradation (Figure 6F). Nevertheless, unlike the DEGs associated with chlorophyll biosynthesis, no obvious trend was observed in the expression of DEGs linked to chlorophyll degradation, among which some were unchanged, some downregulated, and some upregulated (Figure 6E; Supplementary Table 7). These findings imply that inhibition of chlorophyll biosynthesis, rather than acceleration of chlorophyll degradation, might be the main driver of chlorophyll loss, leading to the subsequent formation of chlorosis in drm7i leaves.
Silencing of SlDRM7 Blocked Starch Degradation and Caused Chloroplast Dysfunction
The comparative transcriptomes revealed that 59 unique DEGs in drm7i lines were linked to "starch and sucrose metabolism," the most significant category in the KEGG enrichment (Supplementary Figure 7A). Starch is the transient storage photosynthate in many higher plants. During rapid growth, starch accumulates in the chloroplast in the daytime and is degraded at night, thereby providing a steady supply of carbohydrates for organs to sustain metabolism and growth (Zeeman et al., 2002). Defects in starch turnover have been shown to impact plant growth in various species (Eimert et al., 1995;Corbesier et al., 1998;Harrison et al., 2000;Yu et al., 2001;Nashilevitz et al., 2009;Vriet et al., 2010). Therefore, we tracked the diurnal variation of starch in leaves by Lugol's staining. Notably, after 12 h in daylight, WT, drm7i_ns-1, and drm7i-1 all exhibited a bright blue-purple color (Figure 7A, upper panel); however, after 12 h in the dark, the starch staining almost completely disappeared in WT and drm7i_ns-1, while it remained substantial in drm7i-1, especially in the leaf lamina where chlorosis developed (Figure 7A, bottom panel). This finding suggests that, in drm7i-1 leaves, starch degradation occurred in the green regions near the veins but not in the interveinal yellowing regions. Consistently, after the 12-h daylight or dark treatment, the starch content in drm7i was significantly higher than that in WT or drm7i_ns, indicating that starch degradation was inhibited in drm7i leaves (Figure 7B).
In severe starch excess (sex) mutants, whole-chloroplast degradation results from the high accumulation of starch caused by an imbalance between excessive starch synthesis in the daytime and limited breakdown at night (Stettler et al., 2009), which prompted us to examine the cellular ultrastructure of mesophyll cells in drm7i-1 leaves using transmission electron microscopy (TEM). As shown in Figure 7C, compared with the chloroplasts in WT leaves, chloroplasts in the GLM of drm7i lines resembled those of WT but often contained grana lamellae with a slightly disorganized appearance. In the YLM of drm7i lines, however, highly misshapen chloroplasts were observed, in which the grana and stroma thylakoids were completely disrupted, leaving an aberrant accretion of starch granules that appeared swollen. Interestingly, the chloroplast degradation occurring in the YLM of drm7i lines was clearly different from that in naturally senescent WT leaves, where the thylakoid membrane system and starch granules were degraded normally (Supplementary Figure 7B), suggesting a linkage between starch accumulation and chloroplast homeostasis in SlDRM7-mediated leaf chlorosis/senescence. Taken together, our findings demonstrate that silencing of SlDRM7 led to an excessive accumulation of starch in chloroplasts that causes chloroplast dysfunction, resulting in tomato leaf chlorosis/senescence, a phenotype similar to that of sex mutants.
To elucidate the possible cause(s) underlying the sex-like phenotype of drm7i lines, genes related to starch metabolism and sugar transport were selected from the tomato genome (Supplementary Table 8). RT-qPCR analysis showed that, compared with WT and drm7i_ns-1, some starch biosynthesis-associated genes were downregulated, but none was upregulated in drm7i-1 (Supplementary Figure 7C), even though 4 of them were hyper-DMGs at CHH sites (Figure 7E), indicating that the sex-like phenotype of drm7i lines did not result from accelerated starch biosynthesis. In addition, the expression of most starch degradation-associated genes was repressed in drm7i-1 (Figure 7D), including 4 meth-DEGs, i.e., SlGWD, SlLSF1, SlBAM3, and SlPHS1, which encode glucan water dikinase, a phosphoglucan phosphatase, beta-amylase 3, and an alpha-1,4-glucan phosphorylase isozyme, respectively. The methylation levels of SlGWD and SlBAM3 were decreased by 42.6% in the CHG context in intron regions and by 20.4% in the CG context in promoter regions, respectively (Figure 7F). Both SlLSF1 and SlPHS1 showed hypermethylation at CHH sites (approximately 20%) within promoter regions (Figure 7E). It is worth noting that SlGWD is a key enzyme controlling the phosphate content of starch and that the Arabidopsis gwd mutant exhibits the most severe sex phenotype (Yu et al., 2001). These data suggest that the suppression of starch degradation is a key factor for starch accumulation and is closely linked with the leaf chlorosis mediated by SlDRM7-silencing-induced epi-regulation.
SlDRM7 Impacts Tomato Methylome
Plant DRMs are not only required for de novo methylation but also act with CMT3 to maintain non-CG methylation (Cao and Jacobsen, 2002a;Cao et al., 2003). For instance, rice OsDRM2, Arabidopsis DRM2, and tobacco (Nicotiana tabacum) NtDRM1 have de novo methylation activity and a preference for non-CG methylation (Cao et al., 2003;Wada et al., 2003;Moritoh et al., 2012;Gouil and Baulcombe, 2016). Surprisingly, although no obvious changes in the overall genome-wide methylation level were caused by SlDRM7-RNAi (Figure 4A), the methylation levels of different gene features in drm7i lines, especially promoter and exon regions, differed from those in WT and drm7i_ns lines (Figure 4B). In addition, the hypermethylation and hypomethylation levels of DMRs were equivalent (Figures 4E,I), which might account for the absence of significant alterations in whole-genome m C. While the majority of DMRs were preferentially hypermethylated in the CHH context, the magnitude of differential methylation was much greater in the CG and CHG contexts (Figure 4D). Similarly, increased CHH methylation was observed in the Arabidopsis met1 mutant (Mathieu et al., 2007). One possible explanation is that the CHH methylation might result from changes in the expression of DMLs affected by SlDRM7 (Supplementary Figure 8). It has been reported that CHH hypermethylation occurs when DNA demethylases (DMLs) are repressed (Liu and Lang, 2020).
[Figure 8 caption: Proposed model. SlDRM7-mediated DMRs affect gene expression (meth-DEGs). Silencing of SlDRM7 influences DNA methylation in promoter and gene body regions and leads, directly or indirectly, to transcriptional inhibition of genes, as exemplified by SlLFNR1, SlPORB, SlPsaK, and SlGWD. We hypothesize that promoter hypermethylation of SlLFNR1, intron hypermethylation of SlPORB and SlPsaK, and intron hypomethylation of SlGWD repress their expression, inhibiting photosynthesis and starch degradation and eventually leading to leaf chlorosis and senescence. Conversely, leaf senescence can induce SlDRM7, forming a feedback regulatory loop that balances vegetative growth and senescence.]
SlDRM7 Affects Plant Growth and Development
We reveal that SlDRM7 is required for proper plant growth and development and, more importantly, for preventing leaf chlorosis and premature senescence (Figure 1; Supplementary Figures 1, 3). Although the functions of genes involved in de novo DNA methylation and its maintenance in plant growth and development have been reported previously, the interveinal chlorosis exhibited by tomato drm7i lines has never been observed in other plant species. For example, rice osdrm2 disruption displayed pleiotropic developmental defects at both vegetative and reproductive stages, including semi-dwarfed stature, reduced tiller number, and abnormal panicle and spikelet morphology (Moritoh et al., 2012). Loss-of-function lines of the maize ortholog of DRM2, dmt103, had developmental defects at the reproductive stage but no morphological phenotypes (Garcia-Aguilar et al., 2010). However, the Arabidopsis drm1 drm2 mutant had WT phenotypes (Cao and Jacobsen, 2002b). The different effects of DRMs on plant development in different species may be ascribed to differences in the distribution and abundance of repeats and TEs and in genome size (He et al., 2011). We observed a higher degree of methylation within TE-enriched regions than within gene bodies in the tomato genome (Figure 4G). Compared to Arabidopsis, crop genomes such as those of tomato, rice, and maize have not only more abundant TEs but also more genes with highly methylated TEs (Lang et al., 2017). In addition, given the more complex genome of tomato (∼900 Mb) in comparison with the Arabidopsis (∼125 Mb) or rice (∼370 Mb) genome, it is possible that the tomato SlDRMs play more intricate roles in growth and development.
SlDRM7-Mediated Epi-Control Influences Starch Metabolic Pathways
First, RNA-seq showed that a considerable proportion of genes related to starch degradation were downregulated in drm7i lines compared to drm7i_ns and WT plants (Figure 7D). Second, several genes, including SlGWD, were identified as meth-DEGs, suggesting that these genes could be regulated through SlDRM7-induced epi-control. In Arabidopsis, GWD was reported to be involved in the initiation of starch degradation, since gwd mutants have more severe sex phenotypes than mutants affected in steps downstream of GWD (Critchley et al., 2001;Lu and Sharkey, 2004). In the absence of GWD activity, starch degradation is impaired, leading to a severe sex phenotype not only in Arabidopsis but also in potato (Solanum tuberosum) and Lotus japonicus (Lorberth et al., 1998;Yu et al., 2001;Nashilevitz et al., 2009;Garcia-Aguilar et al., 2010). Here, TEM analysis demonstrated that silencing of SlDRM7 resulted in excessive accumulation of starch granules in chloroplasts, which triggered a severe sex phenotype in tomato leaves as well (Figure 7C). Additionally, pollen of a tomato gwd mutant with a sex phenotype showed reduced germination, causing male gametophytic lethality (Nashilevitz et al., 2009), indicating that the deficiency of SlGWD affects both vegetative and reproductive growth in tomatoes. Moreover, SlPWD and other genes acting downstream of SlGWD in starch degradation were also suppressed in drm7i (Figure 7D), reflecting the key position of SlGWD in initiating the breakdown of starch.
We further found that starch accumulation was accompanied by chloroplast dysfunction in drm7i lines, indicating that starch metabolism is critical for chloroplast homeostasis (Figure 7C). In agreement with this, the Arabidopsis maltose excess 1 (mex1) mutant accumulates high levels of starch in chloroplasts and displays autophagy-like chloroplast degradation (Stettler et al., 2009). However, leaf senescence and senescence-related chlorophyll catabolism are not induced in mex1 (Stettler et al., 2009), whereas our results show that silencing of SlDRM7 reduced chlorophyll content, repressed the expression of genes involved in chlorophyll biosynthesis, and subsequently induced leaf senescence (Figures 2, 7B). These findings indicate that, beyond starch metabolism, SlDRM7-induced epi-control has a wide range of effects on plant growth and development in tomatoes.
SlDRM7 Modulates Leaf Chlorosis and Senescence
Leaf chlorosis and senescence involve complex genetic programming and a less understood epigenetic re-programming (Gan, 2003;Guo and Gan, 2005;Ay et al., 2014;Woo et al., 2019). In comparison to drm7i_ns-1, we found that 289 genes across the whole genome, such as SlGWD, were meth-DEGs in drm7i-1 (Figure 5A). Whether SlDRM7 directly affects methylation levels at these specific sites or acts indirectly through other processes remains unknown. It is also unclear whether the DNA methylation changes are directly related to the regulation of gene expression. In some cases, DNA methylation changes appear to be closely associated with the transcriptional control of specific loci in plants (Kinoshita and Seki, 2014;Yong-Villalobos et al., 2015;Feng et al., 2016). Here, we found that the intron hypomethylation of SlGWD is related to its repressed expression in SlDRM7-RNAi lines (Figure 7E). Gene body methylation can be related to transcriptional upregulation and has been suggested to protect genes from aberrant transcription caused by cryptic promoters (Zhang et al., 2006;Feng et al., 2016). On the other hand, a poor correlation between DNA methylation and gene expression has also been reported in Arabidopsis (Meng et al., 2016;Yen et al., 2017;Chen et al., 2018c;Fan et al., 2020). Indeed, there are a large number of DEGs, including SAGs, senescence-related TFs, and genes involved in the photosynthetic system, chlorophyll metabolism, and the Calvin cycle, whose methylation levels remained unchanged in SlDRM7-RNAi lines (Figure 3; Supplementary Tables 6-9).
NAC family proteins act as regulators of leaf senescence (Woo et al., 2019), and the orthologs of Arabidopsis ORESARA1 (AtORE1), namely SlORE1S02, SlORE1S03, and SlORE1S06, have received special attention as master regulators of senescence initiation; they can induce senescence-related genes such as SlSAG12 and physically interact with and inactivate the chloroplast maintenance-related TF SlGLK1 (Garapati et al., 2015;Lira et al., 2017). SlNAP2 has a complex role in establishing ABA homeostasis during leaf senescence (Ma et al., 2018;Ma X. et al., 2019). A decrease of Calvin cycle activity in chloroplasts leads to an increased generation of reactive oxygen species (ROS) in this organelle, which act as signaling molecules involved in the regulation of senescence (Navabpour et al., 2003). Therefore, the pleiotropic phenotype of drm7i lines may be affected by multiple pathways, and there might be indirect effects of SlDRM7-mediated epi-control on the expression of many development-related genes.
It has been found that global DNA demethylation occurs during Arabidopsis leaf senescence (Ogneva et al., 2016). However, SlDRM7 expression was induced both during natural senescence and during dark-induced aging (Supplementary Figure 4). Therefore, tomatoes might have a feedback regulation mechanism that inhibits leaf senescence by increasing SlDRM7 expression and consequently adjusting genome-wide DNA methylation levels, and SlDRM7 appears to be a necessary anti-senescence factor during growth and development in tomatoes. Based on our results, we propose a regulatory model to illustrate the relationship between SlDRM7-mediated epi-control and leaf senescence (Figure 8). SlDRM7 functions as an epi-regulator that modulates the expression of meth-DEGs to affect leaf development and senescence. Silencing of SlDRM7 leads to hypermethylation or hypomethylation within promoter or intron regions, which directly or indirectly suppresses a set of meth-DEGs involved in photosynthesis, chlorophyll biosynthesis, the photosystems, and starch degradation. The inhibition of these genes reduces chlorophyll content and photosynthetic capacity and triggers chloroplast breakdown, resulting in leaf chlorosis and senescence. Meanwhile, a yet unknown self-feedback regulatory pathway is established by activating SlDRM7 expression to balance vegetative growth and senescence.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. All tomato materials, including wild-type Solanum lycopersicum cv. Ailsa Craig (AC), the SlDRM7-RNAi lines and the SlDRM7-KO lines (AC background), are maintained in our laboratory. Sequence data can be found in NCBI (https://www.ncbi.nlm.nih.gov/) and Phytozome (https://phytozome-next.jgi.doe.gov/), two comparative platforms for green plant genomics. Gene IDs are listed in Supplementary Tables 1, 4-8. The WGBS and RNA-seq data sets have been deposited in NCBI under BioProject IDs PRJNA773102 and PRJNA772527, respectively.
AUTHOR CONTRIBUTIONS
YW, WC, and JY designed and performed the experiments, analyzed the data, and wrote the manuscript. YW and LH performed the bioinformatics analyses. HZ, JW, GH, and RH cultured the plants and performed the experiments. SZ was involved in data analysis and helped write the manuscript. YH, JY, and WC initiated the project, conceived the experiments, analyzed the data, and wrote the manuscript. All authors edited and finalized the manuscript.
Can cheniers protect mangroves along eroding coastlines? – The effect of contrasting foreshore types on mangrove stability
eroding coastlines. In addition, we investigated local and short-term foreshore effects by measuring wave propagation across two cross-shore transects, one with a mudflat and chenier and one with a deeper tidal flat foreshore. The satellite images (Sentinel-2) revealed that mangrove dynamics over multiple years and seasons were related to chenier presence and stability. Without a chenier, a mudflat width of 110 m (95%CI: 76 – 183 m) was required to make mangrove expansion more likely than mangrove retreat. When a stable chenier was present offshore for two years or more, a mudflat width of only 16 m (95%CI: 0 – 43 m) was enough to flip chances in favor of mangrove expansion. However, mangrove expansion remained heavily influenced by seasonal changes, and was highly event driven, succeeding only once in several years. Finally, although mudflat width was a direct driver of mangrove expansion, and could be targeted as such in coastal management, our field measurements demonstrated that cheniers also have an indirect effect on mangrove expansion. These sand banks significantly reduce wave height offshore, thereby likely creating favorable conditions for mudflat accretion landward, and thus mangrove habitat expansion. This makes stabilization - and possibly also the temporary creation - of cheniers an interesting target for mangrove conservation and restoration.
Introduction
Mangrove ecosystems have been increasingly valued for their ecosystem services in the past few decades (Barbier et al., 2011). Besides traditionally valued services such as viable fisheries, nurseries and water-filtering capacity, the use of mangroves for coastal protection has also received attention (Mazda et al., 1997; Temmerman et al., 2013). Mangroves attenuate waves with their dense tree tissues, such as extensive aerial root networks and canopy (Bao, 2011; Horstman et al., 2014; Quartel et al., 2007). Their complex root and branch structures reduce the wave velocity and can decrease wave height by 50% with every additional 100 m of forest (Mazda et al., 2006). The use of mangrove greenbelts for wave impact reduction is therefore often discussed in the literature and implemented in coastal zone management (Duarte et al., 2013; Narayan et al., 2016; Othman, 1994; Spalding et al., 2014). However, mangrove vegetation itself is also vulnerable to high wave energy and does not typically occur along exposed coastlines (Chapman, 1976). This "vulnerable protectors" paradox is easily overlooked in management discussions debating the ideal width needed to obtain the desired amount of wave reduction at the landward edge of the mangrove forest. As a result, there may be clear specifications on the width of a protective mangrove greenbelt in various countries, but it is not always clear if and how such a mangrove width can be sustained. For instance, Indonesia prescribes a mangrove greenbelt for coastal protection of 130 m times the annual average tidal range (Presidential Decree (Kepres) No. 32/1990), whereas the Philippines uses a minimum width between 50 m and 100 m as a guideline for coastal mangroves (R.A. 8550, P.D. 705, P.D. 953). Having such clear specifications on the required wave-attenuating width of a protective greenbelt requires tools to manage the mangrove forest width, especially under physically hostile conditions. Only with an in-depth understanding of the environmental conditions that spark mangrove forest retreat and forest regeneration can we develop the means to achieve sustainable and effective forest widths.
Mangroves need an episodically occurring period of calm conditions to establish, a so-called Window of Opportunity (Balke et al., 2011). On a small scale, the favorable conditions for natural mangrove establishment are nowadays well understood: mangrove seedlings need a window of opportunity in the form of an inundation-, wave- and erosion-free period to strand, root and anchor themselves in order to survive the first life stages (Balke et al., 2011). On a larger scale, such calm and wave-free conditions can frequently be found in relatively sheltered areas such as lagoons and estuaries. At seaward-facing sites, however, such calm conditions occur only rarely. That is, the presence of such episodic calm conditions may be expected to be driven by the seasons in combination with the foreshore morphology. As such, dynamic foreshore structures, such as mud banks and intertidal mudflats with or without chenier sand banks, may be expected to play an important role in creating windows of opportunity for mangrove establishment. This is exemplified along the coastline of the Guianas, where the mangrove dynamics are dominated by fluid mud banks that originate from the Amazon river and migrate westward along the coast (Augustinus, 1978). Mangroves extend seaward when sheltered by wave-dissipating intertidal mud banks and erode during exposed interbank stages (Anthony et al., 2010). Along the mangrove-mud coast of north Java, similar patterns of mangrove recruitment and mangrove dieback can be found in a relatively patchy and young mangrove forest. Here, however, mud banks are absent and the foreshore seems to be dominated by intertidal mudflats with, in some cases, cheniers.
Cheniers, bodies of sand sitting on top of intertidal mudflats, can potentially create shelter for mangrove recruitment along coastlines. However, cheniers are typically described as features of eroding mangrove coasts (Anthony et al., 2010). Sand is supplied in small amounts by rivers (Prost, 1989), but is only formed into cheniers when enough wave energy is present to rework the sediment (Augustinus, 1978). Chenier formation has been observed at locations such as the Red River Delta, Vietnam (Van Maren, 2005), the Mississippi Delta (McBride et al., 2007; Russell and Howe, 1935), China (Liu Cangzi and Walker, 1989), West Africa (Anthony, 1989), north Java (van Bijsterveldt, 2015), Australia (Woodroffe and Grime, 1999), and the Suriname-Guyana coastline (Anthony et al., 2019; Augustinus, 1989). Along the Suriname coast, the wave conditions driving chenier formation are typically found during the erosive stages of mud-bank migration, when mangroves are also eroded by the waves (Anthony et al., 2010). Satellite images of the mangrove-mud coasts of Java suggest that cheniers might also be present during periods in which parts of the mangroves expand, suggesting that cheniers may take on a sheltering role, enabling mangrove expansion. However, intertidal mudflats also reduce waves at the coastline, and if they are of sufficient width, they may also provide the physical requirements for mangrove establishment. In this study, we aim to investigate in depth how foreshore characteristics such as intertidal mudflat width and the presence of cheniers relate to mangrove dynamics. We investigate this along the coastline of Demak, Central Java, Indonesia on two temporal and spatial scales (Fig. 1): (1) at the scale of the coastal system (i.e. in the order of tens of kilometers) and on yearly timescales, we used satellite-derived data in a Geographical Information System (GIS); (2) on a local and short-term scale (i.e. in the order of hundreds of meters and a period of days), we used field-derived data.
Methods
2.1. Field study: short-term cross-shore wave transformation with and without chenier

2.1.1. Site description

The coastline of Demak, North Java (Indonesia), is delimited by the city of Semarang in the south and the Wulan River delta in the north (Fig. 1). Demak experiences a microtidal range of 1 m and a mixed, mainly diurnal, tide. The local wave conditions are mild during most of the year, except during the NW monsoon between November and March, when significant offshore wave heights reach up to 2 m (Van Domburg, 2018). The coastal area is mostly formed by fine muddy sediment, except for the presence of cheniers along the coast. To investigate the short-term sheltering effect of cheniers on a local scale, we measured wave transformation and erosion along two cross-shore transects installed in Demak, Java, Indonesia: one transect with a chenier, and one transect without a chenier (Fig. 2a). The location of the transects was selected in such a way that the hydrodynamic boundary conditions of the two locations were as similar as possible, except for the presence of a chenier. Therefore, both transects started at a water depth of approximately 1 m with respect to MSL and were spaced 400 m apart along the coastline. The first transect started 260 m offshore from a chenier (chenier transect) and the second transect started at a similar depth and distance from the shoreline, but without a chenier (exposed transect) (Fig. 2b).
The chenier transect featured two cheniers: one bare sand lens that consisted mostly of fine sand, ranging from 63.5 to 500 μm in grain size (as measured from sediment samples of the top 3 cm, freeze-dried and analyzed using a Malvern Mastersizer 2000), and one vegetated chenier that consisted of a thin layer of sand on top of mud (Fig. 3). In the field, both cheniers were easy to walk on, although the layer of sand appeared thin. Jumping on top of the sediment caused the sand body to quiver, and when walking towards the landward side of each chenier, the sand became so thin that one could sink through the sand into the underlying mud. Along the seaward edge of the chenier (A2a in Fig. 2a), a more consolidated mud layer was visible where the chenier sand had been eroded away by waves (Fig. A 1 a & b). A series of transparent cores, taken on the seaward side of the chenier transect for a different project in the dry season preceding this study, revealed that the subtidal foreshore of the chenier transect (roughly between the later-placed stations A1 and A2a) consisted of alternating layers of mud and sand (Fig. A 1 c), which seems to support the hypothesis presented in Tas et al. (2022) that cheniers are formed through sediment sorting. The exposed transect did not have an emerged chenier. However, the foreshore stations of the exposed transect (E1-E2 in Fig. 2a) showed grain-size distributions with much more sand mixed through the sediment than the stations seaward of the chenier in the chenier transect (A1-A2a in Fig. 2a). This could indicate that the sediment at these stations was the remnant of an old chenier or the start of a new chenier forming in the exposed transect.
The most landward station of each transect was situated inside the mangrove forest. The sediment composition inside the mangrove stands of the two transects was very similar, with a high silt content (>88%) at all sites (Fig. 3), although the forest stations in the exposed transect also contained fine sand (125-250 μm, 2 ± 0.2%) and very fine sand (62.5-125 μm, 7.6 ± 1.2%), indicating that the mangroves at the fringe of the exposed transect were subjected to more wave energy than the mangroves in the chenier transect.

Fig. 1. The coastline of Demak district (panel on the right), on the Indonesian island of Java. The two focus areas of this study are indicated by (1) the white box, indicating the focus area of the GIS study in which we studied large-scale and long-term chenier effects, and (2) the black rectangle, pinpointing the location of the two cross-shore transects in which we studied the small-scale and short-term effects of cheniers. The picture in the lower left panel features a chenier in one of the transects, where small waves arrive on the seaward side (left) and distant mangroves are visible on the landward side (right).
Fig. 2. Two cross-shore transects in the field with and without a chenier. a. Wave-logger deployment locations, indicated with a white dashed outline on a drone image of the transect area in November 2017 (a Sentinel-2 image from two weeks later is used in the background). The exposed transect did not feature a chenier (E1-E4) and showed mangrove die-back at E5-E6. The chenier transect contained a chenier (A2a-A2c), a mangrove stand on an old chenier (A3a-A3b) and a mudflat with seaward-expanding mangroves (A4-A6). b. Schematized bathymetry and instrument deployment along the exposed and chenier transects.
2.1.2. Data collection
Wave loggers (OSSI wave loggers and NIOZ MARK III SED pressure sensors) were deployed along the two transects, perpendicular to the coast in a north-west direction, at equal distances from the mangrove border in both transects (Fig. 2b). Additional wave loggers were placed across both the bare and the vegetated chenier in the chenier transect, in order to measure wave propagation across these sheltering landscape features as well. Wave data were collected continuously at a frequency of 10 Hz over 8 days (November 26th to December 4th) of the 2017-2018 wet season, the most turbulent season of the year in terms of onshore waves, storms (MMAF, 2012) and coastal erosion. While the wet season typically lasts from November until March, the 8-day measurement campaign captured representative wet-season conditions (Fig. A 6) and included a storm event that caused extensive flooding of the whole area (Afifah and Hizbaron, 2020). The average significant wave height was therefore determined both for the entire 8-day period and for the storm on the 1st of December, between 00:00 and 05:00, giving insight into the impact of cheniers on average wet-season waves and on extreme storm waves.
Forest parameters were recorded at the landward edge of each transect to characterize the forest. We counted the number of seedlings (height < 0.5 m) per species and recorded the diameter at breast height (DBH) of individuals that were taller than 1 m. Individuals between 0.5 m and 1 m in height were recorded as saplings. These species counts and DBH measurements were conducted in circular plots at the most landward station of the exposed and chenier transects after the wet season of 2017-2018. Plot size differed between the exposed and chenier transects (78.5 m^2 and 38.5 m^2, respectively) due to the difficulty of moving around in the muddy sections of the chenier transect and the risk of trampling seedlings. Forest parameters were therefore converted to counts per hectare (ha) to compare the two transects.
2.1.3. Data analysis
2.1.3.1. Processing hydrodynamic data.

The pressure measurements from the wave loggers were corrected for atmospheric pressure, using the air-pressure data collected by a wave logger installed in a nearby tree. The offset of each instrument was determined by in-situ calibration: instruments were placed at one location, and water depth was measured manually at different moments of the tidal cycle for validation. After offset correction, the pressure measurements were transformed into water depth assuming a water density of ρ = 1024 kg m^-3 and a gravitational acceleration of g = 9.8 m s^-2. The mean water levels were derived from the pressure signal, and the detrended pressure signal was then used to calculate the wave density spectra over 19.5-min intervals. The significant wave height (H_m0) and peak period (T_p) were derived from the spectra of each interval. To compare the same wave conditions over the different stations, only those intervals were selected during which all sensors (including the sensor on top of the chenier) were fully submerged at the same time. The wave heights during these submergence periods were then averaged over the duration of the storm (1st of December) and over the full 8-day measurement period.
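For illustration, the burst processing described above can be sketched in a few lines of Python. This is a minimal, hypothetical reconstruction, not the authors' code: it uses the stated values of ρ and g, a simple hydrostatic pressure-to-depth conversion (omitting the depth-attenuation correction of linear wave theory), and Welch spectra to obtain H_m0 and T_p per 19.5-min burst.

```python
# Minimal sketch of one-burst wave statistics, assuming a 10 Hz pressure
# record already corrected for atmospheric pressure and sensor offset.
import numpy as np
from scipy.signal import welch, detrend

RHO, G, FS = 1024.0, 9.8, 10.0  # density (kg m^-3), gravity (m s^-2), Hz

def burst_stats(pressure_pa, fs=FS):
    """Return (mean depth, H_m0, T_p) for one 19.5-min pressure burst."""
    depth = pressure_pa / (RHO * G)        # hydrostatic conversion to depth (m)
    eta = detrend(depth)                   # surface-elevation fluctuations
    f, spec = welch(eta, fs=fs, nperseg=int(fs * 60))  # wave density spectrum
    m0 = np.trapz(spec, f)                 # zeroth spectral moment
    hm0 = 4.0 * np.sqrt(m0)                # significant wave height H_m0
    peak = np.argmax(spec[1:]) + 1         # skip the f = 0 bin
    return depth.mean(), hm0, 1.0 / f[peak]

# Synthetic test: 19.5 min of a 0.3 m, 5 s wave on 1.5 m of water.
t = np.arange(0, 19.5 * 60, 1.0 / FS)
p = RHO * G * (1.5 + 0.15 * np.sin(2 * np.pi * t / 5.0))
print(burst_stats(p))  # roughly (1.5, 0.42, 5.0)
```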
2.2. GIS study: the relation between intertidal foreshore features and mangrove dynamics

2.2.1. Data collection
To study the effects of cheniers and mudflats on mangroves over multiple years, we performed a GIS study of the coastline of Demak. Sentinel-2 satellite images were selected to study the effect of cheniers because their 10 m resolution is the highest among freely available satellite images and is sufficient for the detection of cheniers and of changes in mangrove cover. All available Sentinel-2 satellite images over a 4-year period were therefore assessed for cloud cover in the research area and for tidal level. Ultimately, only eight images could be selected based on cloud cover (<10%), low-tide conditions and season (one post-dry-season and one post-wet-season image for each year). The exact tidal level at the moment of satellite image acquisition was obtained from a tidal harmonic analysis of the tide station of Semarang. To detect seasonal changes in mangrove and mudflat dynamics, images were selected with acquisition dates before the stormy wet season (Dec-Feb) and before the relatively calm part of the year: the dry season (Jun-Aug) and the transitional seasons (Mar-May and Sep-Nov).
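The selection step amounts to a simple filter over an image catalogue. The sketch below illustrates the idea under stated assumptions; the catalogue structure, field names and season labels are invented for illustration and are not from the original study.

```python
# Hypothetical Sentinel-2 catalogue: (acquisition date, % cloud cover over the
# study area, tide level in m relative to MSL at acquisition time).
from datetime import datetime

catalogue = [
    ("2015-10-12", 4.0, -0.3),
    ("2016-01-20", 35.0, -0.1),   # rejected: too cloudy
    ("2016-05-03", 8.0, -0.4),
]

def season_label(month):
    if month in (9, 10, 11):
        return "post-dry-season"  # acquired just before the stormy wet season
    if month in (3, 4, 5):
        return "post-wet-season"  # acquired just before the calm seasons
    return None                   # other months not used for change detection

selected = []
for date, cloud, tide in catalogue:
    label = season_label(datetime.strptime(date, "%Y-%m-%d").month)
    if cloud < 10.0 and tide < 0.0 and label:  # <10% cloud, low tide
        selected.append((date, label))
print(selected)
```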
2.2.2. Image classification
Satellite images were atmospherically corrected using Sen2cor software. Clouds and cloud shadows were removed from the images by masking QSC values produced by the Sen2Cor software. Then a normalized difference vegetation index (NDVI) band was computed for all selected Sentinel-2 images, and pixels for all bands outside the study area and the zone of interest were masked. The study area was restricted to the region of the tidal flat beginning from the coastal mangrove forest as it appeared in October 2015, reaching 2 km out to sea in a northwest direction from the mangrove edge (Fig. 4a). Satellite images were then subjected to two steps of unsupervised classification, to cluster cells into four relevant classes in the study area: water, mud, sand and vegetation.
These four classes are easy to distinguish manually from the satellite imagery (Fig. 4b), but differences in background reflection between the different dates made it difficult to use fixed thresholds on a given band to distinguish these classes consistently between dates. For instance, one spot in the middle of a mangrove stand can have an NDVI of 0.3 in the satellite image of one date and 0.7 on a different date, with mudflats having an NDVI of up to 0.5. Using a threshold of NDVI = 0.3 would therefore overestimate the mangrove cover on the second date, misclassifying sections of the mudflat as mangroves. To avoid this problem, we used an unsupervised classification tool (ArcGIS Pro, Iso Cluster Unsupervised Classification Tool), which made it possible to automate the classification process for multiple images.
The tool uses a combination of an iterative self-organizing (iso) algorithm (migrating-means clustering) and a multivariate analysis of the input satellite bands to classify the raster cells based on their statistical similarity (maximum likelihood classification) (ESRI, 2020). When only one band was fed into the tool, the statistical method clustered the image in a way that was very similar to a clustering based on natural breaks (Jenks) in the frequency distribution. These tool properties were used to classify the images in two steps (Fig. A 2). In the first step, the masked Sentinel-2 images were clustered into 5 groups using only the NDVI band as input for the unsupervised classification tool. One cluster was then classified as vegetation, two clusters were classified as exposed sediment (either wet or dry bed) and two clusters as water (either with moderate or high levels of suspended sediment) (Fig. A 3). The exposed-sediment group was subsequently used in the second step of the classification. The exposed-sediment layer per Sentinel-2 image was used as a mask for all 10 m and 20 m resolution bands per satellite image. These masked bands were then fed into the unsupervised classification tool and clustered into a predefined number of sediment classes. Trial-and-error runs with various numbers of classes revealed that clustering the masked images into 8 groups consistently grouped "sand" into one class for all Sentinel-2 images (Fig. A 4). The signal of the "sand" cluster consistently showed a relatively high surface reflectance in the shortwave infrared (SWIR) bands in combination with a low to medium reflectance in the visible and near-infrared bands, whereas the 7 mud clusters all showed a strong drop in surface reflectance between the near-infrared and SWIR bands. This difference in SWIR reflectance between sand and mud is probably caused by the efficient drainage of sand in comparison to mud (Small et al., 2009). However, because we could not validate the different drainage levels of mud in the field, we added only the classes "sand" and "mud" to the first image classification, resulting in a raster with the classes water, mud, sand and vegetation for every Sentinel-2 image of interest (Fig. A 5).
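A rough open-source analogue of this two-step procedure is sketched below, with scikit-learn's KMeans standing in for the ArcGIS Iso Cluster tool. The cluster-to-class assignment by NDVI ranking is an illustrative assumption (in the study it was done manually per image), as are the array shapes.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_scene(ndvi, bands):
    """ndvi: (H, W) array; bands: (H, W, B) stack of 10/20 m Sentinel-2 bands."""
    h, w = ndvi.shape
    # Step 1: cluster the NDVI band alone into 5 groups.
    km1 = KMeans(n_clusters=5, n_init=10, random_state=0)
    labels = km1.fit_predict(ndvi.reshape(-1, 1)).reshape(h, w)
    # Assumed assignment by mean NDVI: lowest two clusters -> water,
    # middle two -> exposed sediment, highest -> vegetation.
    order = np.argsort(km1.cluster_centers_.ravel())
    water = np.isin(labels, order[:2])
    sediment = np.isin(labels, order[2:4])
    vegetation = labels == order[4]
    # Step 2: re-cluster only the exposed-sediment pixels on all bands into
    # 8 groups; the high-SWIR cluster would then be labeled "sand".
    km2 = KMeans(n_clusters=8, n_init=10, random_state=0)
    sediment_clusters = km2.fit_predict(bands[sediment])
    return water, sediment, vegetation, sediment_clusters

# Toy example on random data, just to show the shapes involved.
rng = np.random.default_rng(0)
out = classify_scene(rng.uniform(-0.2, 0.8, (20, 20)),
                     rng.uniform(0.0, 1.0, (20, 20, 4)))
```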
2.2.3. Validation of GIS classification
To validate the unsupervised classification, we visited 8 sites within the study area in October and November 2018, collecting a total of 171 ground control points with a dGPS together with high-resolution drone imagery of the cheniers, mudflats and vegetation at low tide (Fig. 4d). Ground control points were classified as mud or sand in the field. These points were then used to validate the sediment type as classified in the Sentinel-2 image of November 2018. We determined the percentage of field stations that were classified correctly as sand and mud in GIS (the producer's accuracy), and we determined the percentage of test pixels from Sentinel-2 that were classified correctly based on the sediment type in the field (the user's accuracy; Table 1). The user's accuracy showed that 94% of the ground control points that were classified as mud from the Sentinel-2 image were indeed muddy in the field, and that 83% of the locations that were classified as sand were indeed sandy in the field. Similarly, 94% of the sandy field locations were also classified as sandy based on the Sentinel-2 image and 85% of the muddy sites were classified as mud (producer's accuracy). Overall, the accuracy of the classification was 91% (kappa = 0.78, lower 95% CI = 0.67, upper 95% CI = 0.88).

Table 1. Error matrix resulting from sediment classification of exposed intertidal sediment in a Sentinel-2 image (November 15, 2018) and the sediment type observed in the field at 171 ground control points.
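These accuracy metrics follow directly from the error matrix. The sketch below shows the standard computation; the counts are placeholders, not the study's actual matrix.

```python
import numpy as np

# 2x2 error matrix: rows = classified (mud, sand), columns = field observation.
cm = np.array([[100, 6],    # classified mud:  observed mud, observed sand
               [ 10, 55]])  # classified sand: observed mud, observed sand

total = cm.sum()
users_acc = np.diag(cm) / cm.sum(axis=1)      # correct per classified (map) class
producers_acc = np.diag(cm) / cm.sum(axis=0)  # correct per observed (field) class
overall = np.diag(cm).sum() / total
expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2  # chance agreement
kappa = (overall - expected) / (1 - expected)                  # Cohen's kappa
print(users_acc, producers_acc, round(overall, 2), round(kappa, 2))
```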
2.2.4. Definition of explanatory and response variables from GIS
The classified Sentinel-2 images were used to quantify the effect of the presence or absence of cheniers on changes in mudflat cover and mangrove border along the dominant wind direction during the monsoon season, which is north-west (MMAF, 2012). In order to obtain information along this direction, a total of 3255 north-west-bearing lines were drawn from every cell that contained mangroves at baseline in October 2015 (the first selected Sentinel-2 image available) in the whole project area from Semarang to the Wedung Delta (Fig. 4a). Each bearing line contained sampling points every 14.14 m, based on the diagonal width of the Sentinel-2 raster cells. The feature classification was subsequently extracted at each sampling point from each date's classified raster with mangrove, mudflat, sand and water pixels. Bearing lines that contained clouds seaward of the mangrove border were excluded from further analysis.
Mangrove cover change between the acquisition dates was extracted from the classified images and used as the response variable. Mangrove cover change between two consecutive points in time was categorized into one of three relevant response classes: "expanding", "stable", or "retreating", depending on whether the change in the number of vegetation cells at the mangrove-sea border between t_n and t_n+1 was larger than, equal to, or less than zero, respectively.
To obtain chenier presence-absence data and mudflat data, the classified images were subjected to a smoothing algorithm according to van Bijsterveldt et al. (2020), which excluded small patches (10-20 m wide) of a given category, such as ships (classified as sediment) in the water, or puddles of water on the mudflat. The smoothed classified Sentinel-2 images were then used to extract shelter-related variables per bearing line for each of the selected acquisition dates, such as chenier presence and mudflat width. These characteristics were obtained by counting the number of cells of that class from the mangrove border in the seaward direction along each bearing line and multiplying that number by the cell length of 14.14 m.
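To make the variable extraction concrete, the sketch below processes one bearing line encoded as a seaward-ordered list of class labels sampled every 14.14 m. The class codes and example lines are invented for illustration.

```python
STEP = 14.14  # diagonal width of a 10 m Sentinel-2 cell (m)

line_t0 = ["mangrove", "mangrove", "mud", "mud", "mud", "sand", "water"]
line_t1 = ["mangrove", "mangrove", "mangrove", "mud", "mud", "sand", "water"]

def mangrove_extent(line):
    """Number of contiguous mangrove cells from the landward end."""
    n = 0
    for cell in line:
        if cell != "mangrove":
            break
        n += 1
    return n

def mudflat_width(line):
    """Width (m) of the mud band directly seaward of the mangrove border."""
    i = mangrove_extent(line)
    n = 0
    while i < len(line) and line[i] == "mud":
        n, i = n + 1, i + 1
    return n * STEP

delta = mangrove_extent(line_t1) - mangrove_extent(line_t0)
response = "expanding" if delta > 0 else "retreating" if delta < 0 else "stable"
print(mudflat_width(line_t0), "sand" in line_t0, response)
# 42.42 True expanding
```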
2.2.5. Hypothesis testing for the effect of cheniers and mudflats on mangroves
To test the hypothesis that the presence of cheniers and mudflats drives mangrove border dynamics, we performed a linear regression separately for each of the three possible mangrove states (retreat, stable or expanding). For these models, we decided to include only wet-season data, because the largest changes in mangrove cover were expected during this season: the propagule dispersal peak inducing mangrove expansion occurs at the start of the wet season, and the most impactful storms that could induce mangrove retreat also occur during this season.
The response variable for each of the three models (mangrove expansion, stability and retreat) was the proportion of transects showing that mangrove response (e.g. mangrove retreat) for each unique combination of chenier stability and mean mudflat width. For example, mangrove retreat occurred in 33 out of 47 bearing lines without a chenier (0 years) and with a mudflat width of 40 m. Mean mudflat width and chenier stability during the study period were thus added to each of the three linear regressions as explanatory variables. Chenier stability was defined as the number of years that a chenier had been present in a bearing line (0, 1 or ≥ 2 years). Mean mudflat width over the 4 wet seasons was log-transformed and binned to obtain groups of transects of similar size (a similar number of transects per unique mudflat width), to account for the log-normal distribution of mudflat widths (there were many transects with a small mudflat width and fewer transects with large mudflats).
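One of the three models could be set up as below. This is a hedged sketch with invented data values; the actual design matrix, binning and coding of chenier stability in the study may differ.

```python
import numpy as np
import statsmodels.api as sm

# Per group: binned ln(mudflat width), chenier stability (years), and the
# proportion of bearing lines in that group showing mangrove expansion.
log_mudflat_bin = np.array([2.0, 3.0, 4.0, 5.0, 2.0, 3.0, 4.0, 5.0])
chenier_years = np.array([0, 0, 0, 0, 2, 2, 2, 2])
prop_expanding = np.array([0.15, 0.25, 0.45, 0.60, 0.50, 0.60, 0.70, 0.80])

X = sm.add_constant(np.column_stack([log_mudflat_bin, chenier_years]))
model = sm.OLS(prop_expanding, X).fit()
print(model.params, model.rsquared)
```

Fitting the retreat model in the same way, the tipping-point mudflat width reported in the Results is then the width at which the predicted expansion probability first exceeds the predicted retreat probability.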
Results

Wave attenuation by cheniers
An offshore wave buoy (12 km offshore in NW direction) revealed that waves during the field campaign arrived from a north-west direction (Fig. A 6), in line with each transect. The significant wave height (Hs) at the most seaward station of both transects was 0.5 ± 0.2 m on average during the field campaign, indicating that the boundary conditions of the two transects during average wet-season conditions were comparable. The 8-day-averaged significant wave height (dashed line in Fig. 5) then dropped below 0.26 ± 0.06 m in both transects between the first two stations, where the foreshore of both transects was still comparable in terms of sediment composition (Fig. 3) and profile (Fig. 5). From there on, the wave height remained stable in the exposed transect, only showing a strong drop at the mangrove edge between station E4 (Hs: 0.24 ± 0.06 m) and station E5 (Hs: 0.15 ± 0.08 m), indicating that the waves break on the edge of the mangrove forest. This is further supported by Fig. A 8 (a), which shows a linear relationship between wave height and water depth at E5, characteristic of depth-limited wave breaking. In contrast, the waves in the chenier transect already showed a strong drop at the chenier stations A2a, A2b, and A2c (Hs: 0.26 ± 0.06 m, 0.26 ± 0.06 m, and 0.14 ± 0.06 m, respectively), indicating that the waves break on the chenier. All stations landward from A2a thus display a linear relation between water levels and wave heights (Fig. A 8 b). This resulted in a significant wave height of 0.13 ± 0.06 m at the mangrove edge of the vegetated chenier (A3a) and waves of 0.11 ± 0.04 m at the edge of the main mangrove forest (A4) of the chenier transect under average wet-season conditions. The significant wave height at both of these mangrove stations was significantly lower (F = 229.9, df = 2, p < 0.0001) than the waves at the mangrove border in the exposed transect (E4, Hs: 0.24 ± 0.06 m). The full time series of wave heights and water levels at all stations can be seen in Fig. A 6 and Fig. A 7.

Fig. 5. Average significant wave height at each of the stations across the exposed transect and the chenier transect during storm conditions on the 1st of December (solid lines) and on average during the 8 days measured (dashed lines). NB: This graph only displays the wave transformation over the chenier when all stations were fully submerged simultaneously, thus when the chenier was also submerged. The location of each of the stations is indicated with colors relative to the colors of the schematized bathymetry profiles on the right. Depth was only measured at the stations and is displayed relative to mean water level during the campaign. The lines in between the stations are estimates of the profile contour.
Wave attenuation by cheniers under storm conditions
In addition to the average wet-season conditions, we plotted the wave conditions measured on the 1st of December separately in Fig. 5. On this day, the instruments detected a significant increase in water level and significant wave height. This signal was caused by cyclone Dahlia passing nearby, along the south coast of Java. The cyclone raised the water level to 60 cm above mean sea level (Alferink, 2022), and the flooding that followed was reported by villagers to be the worst in the last 30 years. During this storm, the chenier had a similar effect on the waves, decreasing the significant wave height by >10 cm (from 0.39 ± 0.07 m to 0.28 ± 0.06 m). As a consequence, the significant wave heights at the mangrove edge of the vegetated chenier (Hs A3a: 0.26 ± 0.07 m) and at the edge of the main forest of the chenier transect (Hs A4: 0.21 ± 0.04 m) were both significantly lower (F = 12.94, df = 2, p < 0.0001) than at the edge of the mangrove forest of the exposed transect (Hs E4: 0.32 ± 0.05 m). Further landward, the wave height also decreased over the vegetated chenier (from 0.26 ± 0.07 m to 0.18 ± 0.06 m) and between the two stations inside the mangrove forest (from 0.23 ± 0.07 m to 0.11 ± 0.03 m). This illustrates how the canopies of the young, shrub-like trees cause further wave attenuation during the high water levels of a storm.
Forest characteristics behind cheniers
The forest characteristics at the most landward stations were very different for both transects (Table 2); the chenier transect had a high seedling density of the two common Avicennia species in the area (Avicennia alba and Avicennia marina), whereas no seedlings of these species were found in the exposed transect. In the exposed transect, 20% of the mature trees were dead, mostly occurring at the edge, indicating mangrove retreat as a result of erosion. Saplings were completely absent from both forest plots, indicating that seedlings that had established before the wet season of 2017-2018 had not survived.
The multi-year net effect of cheniers and mudflats on mangrove dynamics
The probability of mangrove expansion on a larger scale and over multiple years, in relation to the presence of cheniers and the width of mudflats, was investigated using linear regression. Both mean mudflat width and chenier stability proved to have a significantly positive effect on mangrove expansion (F = 33.1, df = 2 & 58, R^2 = 0.52, p < 0.001). The probability of mangrove retreat decreased significantly with larger mudflat widths and more stable cheniers (F = 77.6, df = 2 & 58, R^2 = 0.72, p < 0.001). The proportion of stable mangrove fringes was small under all foreshore conditions, indicating that mangroves tend to be dynamic, switching between states of expansion and retreat, although mudflat width did have a positive effect on mangrove stability (F = 22.56, df = 2 & 58, R^2 = 0.42, p < 0.001). Plotting the observed probabilities of each mangrove state shows that without a chenier and with the smallest observed mudflat, the probability that mangroves retreated was much higher (70%) than the chance that they were stable (13%) or expanding (17%) (Fig. 6). However, mangrove forest retreat clearly decreased with an increase in mudflat width, even in the absence of a chenier (chenier stability = 0 years). Without a chenier, mangroves were more likely to expand than retreat from a mudflat width of 110 m onward (95% CI: 76-183 m). This tipping point between mangrove retreat and mangrove expansion occurred at smaller mudflat widths when a chenier was present offshore. When a chenier was stable for one year in front of the mangrove fringe, a mudflat width of only 70 m (95% CI: 35-145 m) was needed to flip the odds in favor of mangrove expansion. When a chenier had been present for two years or longer, this tipping point occurred at a mudflat width of 16 m (95% CI: 0-43 m). Therefore, the more stable a chenier, the larger the chances of mangrove expansion (Fig. 6).

Table 2. Forest parameters at the most landward stations of the exposed and chenier transects after the wet season in 2017-2018.
Discussion
In this study, we investigated the effect of both i) cheniers and ii) intertidal mudflat width on mangrove dynamics, using i) wave measurements at cross-shore transects in the field and ii) multi-year satellite data on mangrove dynamics. Our field data show that existing cheniers reduce the height of the waves arriving at the mangrove fringe, thereby creating a shelter for mangroves as long as the chenier is present. Our GIS data confirm that the temporary shelter created by cheniers increases the chances of net mangrove expansion and reduces the occurrence of mangrove retreat. In the absence of cheniers, a much wider intertidal mudflat is required to facilitate mangrove expansion.
Local chenier effects: wave reduction and habitat creation
The fact that offshore cheniers reduce the wave height at the mangrove fringe is in itself not unexpected. In sandy systems, sand banks and barrier islands are well known to cause wave-height reduction at the shoreline (Short, 2001). However, the mechanism of wave-height reduction over sandy offshore features differs from that over muddy foreshores. Muddy foreshores attenuate waves through bottom friction caused by sediment resuspension and the absorbing effect of the liquid-mud top layer (Sheremet and Stone, 2003), whereas the relatively steep and hard surface of sandy foreshores causes waves to break (Short, 2001; Wolf et al., 2011). These two wave-reducing processes are seemingly combined in the case of sandy cheniers atop a muddy foreshore, where wave height is reduced over the muddy foreshore (from A1 to A2a and from E1 to E2 in Fig. 5) before the waves break on top of the sandy chenier (A2a-A2c, Fig. 5). The difference between the storm conditions and the average wet season furthermore shows how the effect of the chenier is influenced by the water depth. When a chenier is fully emerged during low tide it acts as a barrier, and the water surface on the landward side of the chenier is completely still (e.g. the field-site picture in Fig. 1). When a chenier is submerged, for instance during the measured wet-season conditions (dashed lines in Fig. 5), the chenier reduces the wave height (in this case by 10 cm). The absolute amount of wave reduction by the studied chenier remained the same (±10 cm) when the water level peaked during the storm of the 1st of December. However, the wave-height reduction over the chenier was proportionally smaller during the storm, as the incoming waves were larger, ultimately allowing significantly larger waves to reach the mangrove edge. The chenier that was measured during this field campaign was relatively low in elevation. Cheniers can be dynamic both in position and in height, as was demonstrated for a different chenier in our study area by Tas et al. (2020). A larger and higher chenier would emerge from the water for longer periods of time during the day, and thus provide more effective shelter from waves for the mangroves behind it than a smaller and lower chenier. Nevertheless, the field data show that even submerged cheniers have a clear sheltering effect on existing mangroves.
The relatively calm backwater that is created by cheniers affects both mudflats and mangroves. The low wave height that was measured directly landward of the chenier in this field study is favorable for the deposition of small sediment particles. The high silt content (Fig. 3) and the soft quality of the mud observed landward of the chenier (Table 2 picture) indeed suggest that cheniers facilitate mudflat formation in the area that they shelter. Mudflats in their own right are known to have a protective (Bouma et al., 2016; van Bijsterveldt et al., 2020) and nursery (Swales et al., 2007) role towards mangroves. Unfortunately, the size of intertidal areas has been declining on a global scale over the last 30 years as a result of, among other factors, coastal development, decreased sediment input, and increased drainage and compaction (Murray et al., 2019). The few tropical sites that show an expansion of intertidal area also display a seaward migration of mangroves (Murray et al., 2019), illustrating the importance of a sizable intertidal area for mangrove development. Our GIS results showed that the likelihood of mangrove expansion indeed increased significantly with increasing size of intertidal mudflats, with mangrove expansion becoming more likely than mangrove die-back from an intertidal mudflat width of 110 m (95% CI: 76-183 m) onward in this microtidal system. In macrotidal systems the intertidal mudflat width necessary to support mangrove expansion might be larger, as deeper water at high tide allows higher waves to reach the shoreline, though intertidal areas tend to be wider in such systems as well (Murray et al., 2019). Nevertheless, one third of the world's tropical mangroves can be categorized as micro-tidal and sedimentary (Balke and Friess, 2016), like the coastline of Demak. Therefore, the tipping points in mangrove expansion in relation to mudflat width found in this study could potentially be helpful in the management of other micro-tidal mangrove forests around the globe as well.

Fig. 6. Probability and 95% confidence intervals of mangrove response (retreat, stable or expansion) in relation to mean mudflat width (m, in bins) for various degrees of chenier stability (the number of years a chenier was present per transect) during the 4-year time frame of the study. The mean mudflat width required to make mangrove expansion more likely than mangrove retreat is indicated with a black vertical line in each panel.
Large-scale and long-term chenier effects: the importance of a calm wet season
The data in this study revealed that the presence of a wide mudflat or of a chenier can support net mangrove expansion over multiple years. However, this does not mean that the presence of a chenier or mudflat during a wet season necessarily results in mangrove expansion during that fruiting season. The latter also requires a Window of Opportunity: an inundation-, wave- and erosion-free period in which seedlings can strand, root and anchor themselves, and thereby survive the first life stages (Balke et al., 2011). In the field, the forest characteristics at the most landward edge of the chenier transect showed that there had been no window of opportunity for mangrove growth during the previous season, because saplings were completely absent from the site. The absence of saplings while seedlings were abundant suggests that the seedlings that had established at the start of the wet season (and would have grown into saplings before this field campaign) did not survive this particular storm season. This observation indicates that, while cheniers and mudflats reduce wave height and promote mudflat accretion, successful mangrove establishment in the seaward direction remains an event-driven process.
Satellite analyses support the view that the colonization of mangrove habitat occurs episodically and is therefore non-linear through time. Net mangrove expansion occurred primarily during only one wet season (2016-2017; Fig. A 5). During this wet season, a Window of Opportunity probably occurred due to the remarkably low maximum daily wind speeds of ±10 m/s (as retrieved from the Ahmad Yani airport station in Semarang; http://dataonline.bmkg.go.id/data_iklim). These dry-season-like wind speeds, though onshore in direction, coincided with the fruiting season of the common Avicennia species in the area, blowing the propagules towards the shore. The combination of available propagules and the presence of wide mudflats for establishment, followed by months of calm conditions, was likely the cause of the positive mangrove cover change during that season. This combination of favorable conditions resulted in a staircase-like appearance of the forest canopy (e.g. Fig. 3d in van Bijsterveldt et al., 2020), caused by separate events of mangrove expansion interspersed with years of non-expansion. Non-linearity in seaward mangrove expansion is not uncommon, and has also been described in the coastal system of the Guianas in South America, where migrating mud banks offer temporary shelter and habitat for mangrove expansion, interspersed with decades of non-shelter and non-expansion (e.g. Fig. 11 in Anthony et al., 2010). Similar periodic mangrove expansion, though by a different mechanism, has been observed in the Firth of Thames, New Zealand, where reduced wind and wave energy during the El Niño events of 1978-1981 and 1991-1995 resulted in two major seaward forest-expansion events (Lovelock et al., 2010). Our findings therefore illustrate how seaward mangrove expansion can be induced by a combination of temporary shelter and temporarily calm conditions.
Management implications
The observation that cheniers create temporary shelter from waves for mangroves, especially when the cheniers are stable over longer periods of time, has implications for mangrove conservation and restoration. For example, despite their role in mangrove establishment, cheniers have been mined for construction sand in Demak, which deprives the coastline of their erosion-mitigation function. Sand mining should thus be strictly regulated to maximize mangrove colonization and mitigate retreat. Conversely, mangrove persistence and expansion could be favored if existing cheniers are stabilized or supplied with sand from a sustainable source. Although artificial sand nourishments have been used as wave breakers before (e.g. Hwung et al., 2010), little is known about sand nourishments on muddy substrate, presumably because sand is relatively rare along muddy coasts. The tipping points for mangrove retreat at specific mudflat widths also have useful, and perhaps more feasible, implications for management. For instance, satellite imagery of low-tide conditions, or the use of a tidal-flat change map as presented in Murray et al. (2019) (http://intertidal.app), could help coastal managers to assess the width of the existing intertidal foreshore along the coastline and identify locations where the mudflat width is low or decreasing rapidly. Those locations could then be targeted with foreshore-modification methods, such as nourishments (Baptist et al., 2019) or the erection of brushwood dams or fences, which are placed parallel to the coastline to trap sediment. The latter method has proven to be particularly effective along muddy shorelines (Winterwerp et al., 2020). So far, these structures have been intended to elevate mudflats high enough (> MSL) to restore mangrove habitat (Mancheño et al., 2022), but the insights gained in this study show that restoration of lower-elevation mudflats could already be worthwhile to reduce the chances of mangrove retreat. Foreshore modifications that create wide intertidal foreshores may thus be useful measures to ensure that wave-attenuating ecosystems such as salt marshes and mangroves become stable enough to be utilized in coastal-protection schemes.
Author contributions
CvB: Methodology (field and remote sensing), Investigation, Formal analysis, Visualization, Writing-original draft.
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Celine van Bijsterveldt reports financial support was provided by Boskalis Dredging and Marine experts. Celine van Bijsterveldt reports financial support was provided by Van Oord Dredging and Marine Contractors bv. Celine van Bijsterveldt reports financial support was provided by Deltares. Celine van Bijsterveldt reports financial support was provided by Witteveen en Bos. Celine van Bijsterveldt reports financial support was provided by Wetlands International.
Data availability
Data in support of this manuscript are available at https://doi.org/10.4121/21667685.

Wesenbeeck and Han Winterwerp for the many fruitful discussions on chenier dynamics and chenier-mangrove interactions.
Appendix A. Appendices
A 1 A. Consolidated mud with cliff formation (±10 cm) around the breaker zone at the seaward edge of the chenier before placement of station A2a. B. Sand-to-consolidated-mud transition. Note how the footsteps get deeper towards the seaward edge of the chenier, where the sand layer on top of the mud becomes thinner. C. Picture directions of A and B, and a series of transparent cores taken on the seaward side of the chenier transect in the dry season prior to this study. Note how all cores contain a layer of sand (white lines) and, below that, a muddy layer with intermixed layers of sand (dashed lines). The most seaward core and the core closest to the chenier also contain a thick layer of mud (black) on top of the mixed layers.
A 2 Flow chart of the steps for unsupervised classification of all Sentinel-2 images of dates of interest between 2015 and 2019 in 4 relevant classes (water, mud, sand and mangroves).
A 3 Histograms showing the number of pixels that were clustered into one of the 5 classes by the isocluster unsupervised classification tool based on maximum likelihood clustering of the NDVI rasters computed from each of the Sentinel-2 satellite images.
A 4 Spectral signals of each of the 8 clusters produced by the unsupervised classification step. The sand cluster distinguishes itself from the "mud" clusters by relatively high surface reflectance values in the SWIR bands (wavelength > 900 nm) and low to average reflectance in the visible (wavelength < 700 nm) and the near infrared spectrum (wavelength: 700-900 nm). The tidal stage at the time and date of image acquisition is indicated with a red dot within the tidal cycle of that date in the upper right corner of each panel.
A 5 NW bearing lines along which the information of the classified Sentinel-2 images is displayed, with mangroves (green), water (blue), mud (beige) and sand (yellow). These classified images are zoomed in on the site where the two field transects were deployed in November 2017.
A 6 Time series of significant wave height (a), peak wave period (b), and wave direction (c) from an offshore buoy (Wave Droid) during the measurement campaign (shown with a black rectangle). Source: (Van Domburg, 2018).
A 7 Time series of the water depth in (a) the exposed transect, and (c) the chenier transect. Time series of the wave height in (b) the exposed transect, and (d) the chenier transect.
A 8 Ratio of significant wave height to water depth at (a) exposed transect and (b) chenier transect.
Influence of Levantine Artificial Reefs on the fish assemblage of the surrounding seabed
Four Artificial Reef (AR) units were deployed at 20 m depth on a flat hard substrate 3 km west of Haifa, Israel, and then surveyed for fish for 12 months. AR units supported 20 times the biomass of control quadrats, and their enrichment impact was still significant at a radius of 13 m away from the units. The 13 m values were also significantly higher than those of quadrats adjacent to units, suggesting the existence of a halo of relative depletion within the outer enrichment halo. The main species contributing to this pattern was the migrant herbivore Siganus rivulatus. A decrease in grazing resources is thus suggested as an explanation for the creation of this halo. The most consistent AR residents were also Lessepsian migrants: Sargocentron rubrum, nocturnal predators which displayed high microhabitat fidelity and a steady increase in density. The 6 species of migrants recorded accounted for 65.3% of the commercially exploitable biomass and 25.2% of the specimens in the AR site. Other constant AR residents were the groupers Epinephelus costae and Epinephelus marginatus, which are rare and commercially important species. Site protection from fishing and storms was found to be of utmost importance, and design and deployment considerations are discussed.
Introduction
The magnitude of the world fishery yield makes the practice of deploying, monitoring and harvesting Artificial Reefs (ARs) a subject of active interest globally (BOHNSACK & SUTHERLAND, 1985). Today ARs are used for diverse applications, even though the principal one remains the enhancement of fishing yields. This enhancement, however, is not to be taken for granted, as ARs are assumed to function through a combination of two mechanisms: aggregation of scattered specimens and secondary biomass production through increased survival and growth of juveniles (e.g., BOHNSACK & SUTHERLAND, 1985; PRATT, 1994; BOHNSACK et al., 1997; SEAMAN, 2000; JENSEN et al., 2000; JENSEN, 2002; OSENBERG et al., 2002). An AR may even entirely deplete stocks by merely concentrating the fishing effort (POLOVINA, 1989). This conflict has been dubbed the 'attraction-production debate'. Yet there are great variations among ARs, and the behaviour of different species of fish may vary depending on the locations, occasions and conditions of the ARs, so the predominating mechanism (attraction or production) varies accordingly (e.g., SPANIER, 1996). One of the main goals of ongoing AR research is to spatially and temporally chart these differences in order to gain a deeper understanding of the mechanisms through which ARs attract and facilitate the production of fish. Recruitment is a key factor which has to be quantified in order to study an AR's ability to produce new individuals. Recruitment is the addition of new individuals to populations or to successive life-cycle stages within populations (CALEY et al., 1996). As ARs depend greatly on import (both juvenile recruitment and addition of adults) from nearby existing stocks, a study of their inter-relations with the ecotone is indispensable.
Fish and invertebrates use both natural and artificial surfaces for shelter, feeding, spawning, energy economy and orientation (BOHNSACK et al., 1994; CARR & HIXON, 1997). Their accumulation around ARs is a stupendous outcome of behavioural ecology. Nevertheless, a great portion of the enhanced biomass comes from materials consumed in forage areas outside the AR complex. Depending on each species' association with the AR and its foraging range and behavioural patterns, feeding halos are formed around the AR (BOHNSACK, 1989; CARR & HIXON, 1997; BORTONE et al., 2000; SHENG, 2000). These halos are critical to sustaining the AR biomass. Their radii indisputably vary with AR size, design, material, location, depth and distance from natural relief, which is both the supply source of adult settlers and the potential gene pool (CARR & HIXON, 1997). TURNER et al. (1969) suggested leaving 15-18 m diameter open spaces between AR units. STONE et al. (1979) noted that an AR placed within 25 m of a natural habitat recruited juveniles and did not reduce the population of the existing natural reef. OGAWA (1982) concluded that for benthic species influence radii range from 1-100 m and recommended 'a few meters' as a good choice for the distance between AR and natural habitat. For pelagic fishes he determined that this radius stretches up to 800 m. For hard-substrate habitats in the southeastern Mediterranean, SPANIER (2000a) suggested 3 m³ of AR for every 1000 m² of seabed as the optimal AR density. Assuming three separate 1 m³ cubic units per 1000 m², a 10.3 m influence radius is to be expected. The present study focuses on these close-range inter-relations between fish and ARs in the eastern Mediterranean. As this basin is comparatively poor in both nutrients and fishing yields (e.g., SAURNIA, 1973; BERMAN et al., 1984; AZOV, 1986; HERUT et al., 2000), ARs are a subject of great interest in its waters. The relative scarcity of fish is also presumed to result in vacant ecological niches, which allow species of Indo-Pacific origin that migrate into the Levantine basin from the Red Sea through the Suez Canal (GOLANI, 1998) to establish and develop permanent and considerable populations. This phenomenon, called Lessepsian migration, has intensified in recent years, and fish of Red Sea origin have been observed as far away as Sardinia (PAIS et al., 2007). Species of tropical origin are considered more competitive than autochthonous Atlanto-Mediterranean species (GOLANI, 1998). Thus, the appearance of a Lessepsian migrant in the Levant basin may result in the competitive exclusion of indigenous species or their displacement to another habitat or depth range (e.g., SPANIER & GALIL, 1991). The present study also demonstrates how small AR units can provide an in situ look at Lessepsian migrants in their vicinity.
Previous studies at nearby sites showed that AR deployment concentrates fish in larger numbers and at higher diversity than surrounding natural reefs (SPANIER, 2000a; 2000b; SPANIER et al., 1983; 1985a; 1985b; 1989; 1990). The present study set out to examine how the deployment of small AR units affects the ichthyofauna of the surrounding seabed.
Artificial Reef - Location and Structure
Four 1.2 m sided cubical steel-reinforced concrete structures, weighing 1500 kg in water, were used as ground units. Sixteen sections of 25 cm diameter polyethylene pipe were fitted into each structure. Floating units, or FADs (Fish Attracting Devices), were tethered to the ground units. These were 1 m steel-profile cubes, into which 16 polyethylene pipes were similarly fitted. Without the concrete ballast, they supplied 160 kg of buoyancy. The AR field was composed of four such double units, each consisting of a ground unit and a FAD suspended 10 m above the seafloor (Fig. 1). The typical seafloor sought was the flattest ground with as little complexity as possible. Distances between units were 25-40 m, which enabled the whole 4-unit field to be monitored within one dive and were presumed far enough apart to avoid overlapping influence radii. A site with the suitable properties was located 3 km NW of the Carmel promontory at 32°51'02''N, 034°56'33''E. Twelve 5 × 5 m rope frames were laid out in the AR field as survey quadrats: four were set one around each ground unit (unit quadrats), the next four were set alongside them (adjacent quadrats), and the last four were set 13 m (center-to-center) away from the units (detached quadrats). Two more frames were set as controls on similar hard substrate, 500 m to the south of the AR field on a rocky substrate at the same depth as the AR units (Fig. 1), at 32°50'46''N, 034°56'27''E. These control quadrats varied morphologically: one represented complex control features with dense porous outcrops of 0.4-0.8 m height, much like those of a complex natural reef; the second had plain control features, with a few small 0.1 m outcrops, representing bathymetry similar to that on which the AR units and survey quadrats were deployed.
Data collection
Fourteen surveys were conducted by two divers as point count surveys, adapted from SHENG (2000), during 7 sub-seasons: summer 2004 - summer 2005. Each count lasted 2 minutes per quadrate. One diver concentrated on quantifying the bigger schools of fish and the more abundant species, the other on identification of rare and cryptic species. After the completion of the visual census, a third diver recorded the fish and macro-invertebrates on video and still photography for later aid in taxonomy, comparisons and study. Survey results were written on pre-designed PVC slates in order to save time underwater and then transferred to Excel sheets for processing. Underwater visibility was measured using both horizontal (at the seafloor) and vertical Secchi disk depth. Water temperatures were measured by a Nitrox Suunto Solution gauge. Currents were measured with an Interocean S4 current meter. Video and still photo records were examined by ichthyologists and compared with data from LYTHGOE and LYTHGOE (1971) and GOLANI and DAROM (1999), to identify cryptic species and back up in situ counts. The fourteen census sorties were executed in a back-to-back day format. Data was pooled from each such pair of consecutive surveys and then into 7 seasonal data sets. All surveys were executed at the same time of day (between 0900 and 1200). Biomass estimates have been shown to facilitate approximation of the magnitude of AR fauna (BORTONE et al., 2000) and were therefore employed in the present study.
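The pooling of paired back-to-back sorties into seasonal data sets is a simple aggregation; a minimal sketch, assuming records are kept as one row per (survey, quadrate, species) count (the column names and values are illustrative, not the study's actual data sheets):

```python
import pandas as pd

# Hypothetical survey records: one row per (survey, quadrate, species) count.
df = pd.DataFrame({
    "survey":   [1, 2, 3, 4],
    "quadrate": ["unit_1"] * 4,
    "species":  ["C. chromis", "C. chromis", "C. julis", "C. julis"],
    "count":    [40, 35, 6, 8],
})

# Back-to-back sorties (1, 2), (3, 4), ... form pairs; each pair is one sub-season.
df["sub_season"] = (df["survey"] - 1) // 2 + 1

# Pool each pair of consecutive surveys into a seasonal data set.
seasonal = (df.groupby(["sub_season", "quadrate", "species"])["count"]
              .sum()
              .reset_index())
print(seasonal)
```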
Statistical analysis
Biomass estimates of fish in the AR site were based on diver records of total length (LT) in cm, taken in situ, and subsequent calculation via the length-weight tables in FROESE and PAULY (2006) from the nearest sighting of the species to Haifa. Shannon's species diversity index (H') was calculated for the quadrates according to SHANNON (1948): H' = -Σ (Pi · ln Pi), where Pi represents the proportion of the i'th species.
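Both estimates are straightforward to compute; a minimal sketch, where the length-weight coefficients a and b are illustrative placeholders rather than the values taken from FROESE and PAULY (2006):

```python
import math

# FishBase-style length-weight relation: W = a * L**b (a, b species-specific;
# the defaults below are placeholders, not the study's values).
def biomass_g(length_cm: float, a: float = 0.012, b: float = 3.05) -> float:
    return a * length_cm ** b

# Shannon's diversity index H' = -sum(p_i * ln p_i) over species proportions.
def shannon_index(counts: list) -> float:
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

print(round(shannon_index([40, 6, 2]), 2))   # ~0.54 for three species
print(round(biomass_g(12.0), 1))             # estimated grams for a 12 cm fish
```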
For inter-quadrate comparisons, data was pooled from all surveys for every quadrate type. A non-parametric Wilcoxon signed-rank test (WILCOXON, 1945) was employed in order to determine whether differences in abundance, species richness, biomass and diversity between quadrate pairs were significant, and a level of P<0.05 was taken as significant.
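A minimal sketch of such a paired comparison with SciPy (the abundance values are illustrative, not the study's data):

```python
from scipy.stats import wilcoxon

# Paired per-survey abundances: unit quadrates vs. the complex control.
unit_abundance  = [90, 120, 60, 75, 110, 40, 95]
complex_control = [35, 50, 20, 30, 45, 15, 40]

# Wilcoxon signed-rank test on the paired differences (WILCOXON, 1945).
stat, p = wilcoxon(unit_abundance, complex_control)
print(f"W = {stat}, P = {p:.3f}")  # a difference is accepted if P < 0.05
```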
Results
Thirty species belonging to 18 families were observed during surveys (Table 1). Twenty-seven of the species were recorded in AR unit quadrates, as opposed to only 11 and 18 species in the plain and complex control quadrates respectively (representing similar seabed to the AR deployment site with no AR unit, and a high-relief natural reef). Unit quadrates also supported a mean of 85.7 specimens and a mean biomass of 237g/m² per survey, whereas plain and complex control quadrates held a mean of only 15 and 36.4 specimens and a mean biomass of 18 and 68.1g/m² respectively. The abundance, species richness, Shannon's diversity index and estimated biomass of fish in the site during the 7 seasons are presented in Figures 2a-d. Values peaked during the first summer and autumn in the post-deployment phase, then declined during winter and increased again the following summer. Unit quadrates generally displayed higher abundance, richness and biomass values than both control quadrates and adjacent and detached quadrates, with Shannon's diversity index values showing greater variability (Fig. 2c). The AR ecotone (adjacent and detached quadrates) normally displayed slightly higher values than the plain control, although complex control values exceeded those of the ecotone (Fig. 2).

Table 1: List of species recorded in the AR and FAD site and their total abundance, including origin, trophic level, estimated biomass and feeding habits.
Thirteen species were of commercial importance (according to SNOVSKY and SHAPIRO, 2003), thirteen were piscivores and only two were obligatory herbivores - Siganus rivulatus Forsskål and S. luridus (Rüppell), both Lessepsian migrants. Seven of the 30 species observed were of Red Sea origin. The Lessepsian migrant percentage, calculated for the 6 benthic species recorded in surveys, is presented in Figure 3. A separate "inner unit" data series is presented, to underline the massive presence of the Red squirrelfish Sargocentron rubrum (Forsskål) in the inner AR assemblage. Along with another migrant - the Filefish, Stephanolepis diaspros Fraser-Brunner - Red Sea species accounted for the majority of individuals observed inside AR units (Fig. 3). S. diaspros was, much like S. rubrum, closely associated with the AR units; however, only single specimens or couples were recorded and their numbers did not increase with time. Fistularia commersonii was recorded hovering in close proximity and parallel to quadrate lines, presumably mimicking them for camouflage. The detached quadrates assemblage, also showing a large proportion of migrants (Fig. 3), was comprised mostly of S. rivulatus specimens, usually observed in motion, displaying foraging behaviour. S. rivulatus was also the only migrant species observed in control quadrates.
AR unit quadrates were compared with the complex control, in order to determine whether the AR provides a superior habitat to that of a natural, fully developed reef (Table 2). They were indeed found to have significantly higher diversity (Wilcoxon, P=0.047) and biomass (Wilcoxon, P=0.006) than the complex control. Unit quadrates also carried almost 3 times the abundance and 50% more species than complex control quadrates; however, the differences in abundance and species richness were not statistically significant. In order to determine the attraction radius, detached quadrates were compared to the plain control, and then to adjacent quadrates (Table 2). Richness, diversity and biomass values were significantly higher in detached quadrates than in the plain control (Wilcoxon, P=0.03, 0.023 and 0.001 respectively); but surprisingly, abundance, diversity and biomass were also significantly higher in detached quadrates than in adjacent ones (Wilcoxon, P=0.005, 0.042 and 0.001 respectively) (Table 2). During summer, the great number of fish shoaling around units occasionally 'spilled' into adjacent quadrates, and yet in 19 out of a total of 25 observations in which both quadrates were surveyed, more specimens were recorded in detached quadrates than in adjacent ones.
The most common fish in surveys was the Mediterranean damselfish, Chromis chromis (Linnaeus) (Table 1), most of which were observed shoaling in great proximity to AR units. The Rainbow wrasse Coris julis (Linnaeus) and the Ornate wrasse Thalassoma pavo (Linnaeus) were also very common in surveys; however, they were not as tightly grouped around AR units as C. chromis were. Other common fish (Table 1) included the Two-banded sea bream, Diplodus vulgaris (G. Saint-Hilaire), the Blue-spotted sea bream, Pagrus coeruleostictus (Valenciennes), and the Painted comber, Serranus scriba (Linnaeus). All sparids and serranids were common around AR units in summer and autumn but disappeared completely in winter and spring.
The larger predators found in unit crevices were the Brown moray, Gymnothorax unicolor (Delaroche), the Mediterranean moray, Muraena helena Linnaeus, the Gold blotch grouper, Epinephelus costae Steindachner, and the Dusky grouper, Epinephelus marginatus Bloch and Schneider. Figure 4 displays the patterns exhibited by the most common dominant large predators, i.e. the groupers and squirrelfish, in the AR unit quadrates throughout the study period, contrasted with water temperature. Groupers lurked mostly in the lower rows of pipes inside units or in the crevices formed under AR units before scouring closed them. Like squirrelfish, none were viewed over flat substrate and they were altogether absent from non-AR quadrates. Groupers reached a maximum of 12 fairly large individuals (30-50cm LT) inhabiting AR units in the early winter of 2005. By this stage, specimens could be individually identified by their size, colour patterns and choice of microhabitat within units. This changed when, in the spring of 2005, the number of groupers in the site dropped within 48 hours, from nine specimens on 28.2.05 to none on 2.3.05 (Fig. 4). E. marginatus was never sighted again after this date, and all E. costae observed thereafter were relatively small (max. 25cm LT). In contrast to groupers, the number of S. rubrum (302 observations - Table 1) grew steadily throughout the study period (Fig. 4).
Damselfish, wrasse and rabbitfish juveniles were recorded almost exclusively in the warm seasons (Fig. 5), following their spawning period in spring and summer. Juveniles of C. chromis were observed exclusively within 1m of the structures.
The four FADs showed little resilience to winter storms. The FADs caused the ground units to scour, flip or altogether break into pieces. They were therefore removed after only five months at sea.
The species composition of the FADs in these five months is displayed in Figure 6. After the first winter storm the FADs were lowered, as a protective measure, to only 5m above the seafloor, and this apparently enabled species more closely associated with the benthos, such as S. rivulatus and C. julis, to ascend to the FADs (Fig. 6). The enlarged surface area of FADs facilitated settlement of a thick epibiota, mostly of the Pearl oyster, Pinctada radiata (Leach). No fish were observed feeding on it and none were viewed under or inside the FADs.
Discussion
Artificial reef unit deployment was found to affect the fish assemblage at different intensities and radii for different species. The AR units themselves provided habitats for several species that were rare or absent from other quadrates. These were mostly cryptic species and/or nocturnal carnivores, but also juveniles of reef-associated species. Units were shown to significantly raise the ecotone carrying capacity for fishes. Although this capacity exceeded that of the flat control site, it did not match that of the complex control (Fig. 2 and Table 2). This means absolute enhancement occurred at <3m, except during summer, when the increase in abundance and richness caused spillover into adjacent quadrates. Detached quadrates, located 13m away from the units, showed significantly greater richness, diversity and biomass than the flat, plain control site (Table 2). Thus, AR-induced enrichment was still discernible at this distance. Nevertheless, these detached quadrates also displayed significantly higher values than those of quadrates adjacent to AR units. This finding suggests the existence of a halo of relative depletion within the outer enrichment halo. The prominent species exhibiting behaviour which fitted this pattern was the Lessepsian herbivore S. rivulatus. Unlike its congener S. luridus, more frequently observed in AR unit quadrates, S. rivulatus was observed in schools of 5-20 specimens, grazing farther away from units. It is therefore suggested that upon sensing unit presence directly (by sight, lateral line and/or smell) these fish elect to approach to within 1-2m and benefit from shoaling advantages (e.g. PITCHER et al., 1982). It is further speculated that they used AR units as navigational benchmarks (e.g. BRAITHWAITE, 1998) in their grazing excursions into the ecotone, when near-AR resources were exhausted. The attractive properties of AR units towards the 20m-depth eastern Mediterranean fauna were demonstrated at first by a post-deployment overshoot in the summer of 2004. This type of overshoot has been described in previous AR studies (BOHNSACK and SUTHERLAND, 1985; MORENO et al., 1994). It is thought to represent an early curiosity of fish towards the newly established habitat, prior to reaching a seasonal, dynamic equilibrium.
In the ensuing winter some species declined in numbers while others disappeared altogether. The following summer saw most of them return, although in slightly lower numbers. BOMBACE (1989) and BOMBACE et al. (1994) suggested that in the Adriatic Sea the decrease in winter species richness is due to migration into deeper water. This phenomenon is documented for groupers along the Israeli coast as well (DIAMANT et al., 1986; GOLANI and DAROM, 1999; ARONOV and GOREN, 2003). It may account for the winter decline in grouper numbers (Fig. 4), as well as for the lower abundances of other species. Most notable among these was the sparid population, which vanished in the cold season and recuperated the following summer.
Nonetheless, had vertical migration been the reason for the grouper decline, why did their population fail to recover the following summer as the sparids' did? No climatic changes were noted during this period: the current was weak (0.1 kn), the water temperature, albeit low at 16°C, remained steady (Fig. 4) and visibility was excellent (horizontal Secchi depth >30m). Water temperatures had dropped 2 months prior to this survey (Fig. 4), and with them AR biomass and the abundance of many of the groupers' prey items (Fig. 2a). Why then had vertical migration not occurred earlier? Enquiries among local fishermen suggested an alternative explanation. The site was familiar to SCUBA divers and was visited quite often by spear fishermen. It is highly probable, then, that the disappearance of its larger inhabitants was indeed fishing related. Grouper absence from control quadrates stressed these overexploited species' demand for additional relief and complexity. Nevertheless, unless protected from fishing, such habitat erection is futile. As recommended by PITCHER and SEAMAN (2000), AR deployment in no-take zones can and should play a positive role in future restoration and fishery management programmes. This is also the case in the Levant region (SPANIER, 2000a), where site protection must be given high priority, so that ARs can produce, rather than merely attract, fish.
In contrast to the groupers, the population of S. rubrum continued to grow throughout the study period (Fig. 4). Squirrelfish are not as highly prized as groupers, due to their smaller size, hard scales and sharp dorsal spines (GOLANI and DAROM, 1999). They are thus not targeted by spearfishermen. Consequently, they have been able to establish themselves as dominant Levantine cave and AR dwellers in recent decades (GOLANI and BEN-TUVIA, 1985; DIAMANT et al., 1986; SPANIER et al., 1989; SPANIER and GALIL, 1991; SPANIER, 2000a,b; BARICHE et al., 2004; GOREN and GALIL, 2005). So far as diurnal species are concerned, however, the most frequent protagonists of the assemblage were wrasses and damselfish. C. julis, T. pavo and C. chromis (Table 1) are similar to the most abundant species found by AZZURRO et al. (2007) in an AR in the Straits of Sicily. CHARBONNEL et al. (2002) deduced that the increased density and biomass of predators in a north Mediterranean AR did not result solely from the increased food availability offered by the surfaces of the AR, but from sheltering in the interstices as well. The present AR data concur with this model, as the inner and under-unit crevices of AR units were indeed densely and rapidly inhabited by squirrelfish (Figs. 2 and 4) and other larger predators. SEAMAN (2000) noted that in the south-east Mediterranean, with no sea-grass meadows, a filter-feeder dominated AR is likely to develop. The importance of herbivores will thus diminish, whereas predation intensity will increase. Despite an active presence in the vicinity of AR units, herbivores were not observed feeding directly off units. Grazing activity, most notably for S. rivulatus, was indeed observed mostly in the ecotone. Nevertheless, these Lessepsian herbivores' wide-range diets and their ability to feed off various indigenous biogenic resources (STERGIOU, 1988) do contribute significantly to herbivory in the ARs of the Levantine basin.
When the assemblages of two natural rocky habitats and a small Mediterranean AR were compared in the mid 1980s (DIAMANT et al., 1986), Red Sea migrant species constituted only 7.4% of the fish, but contributed >20% of the standing crop. SPANIER (2000a) found that migrants constituted 16.7% of the species composition in tire ARs deployed in 1985 on a similar substrate at the same depth, and 18.9% in the same ARs in 1995 (SPANIER, 2000b). GOLANI et al. (2007), using rotenone ichthyocide in the rocky coastal littoral, found a 12.16% migrant species ratio. In the present AR study, the percentage of migrant species was 22.6%. A newcomer to the assemblage was the Bluespotted cornetfish, Fistularia commersonii Rüppell, recently detected in the Mediterranean (GOLANI et al., 2002) and by now common throughout the Levantine basin (KARACHLE et al., 2004; PAIS et al., 2007). A single winter observation was made of the Brownband goatfish, Upeneus pori Ben-Tuvia and Golani. Although it habitually prefers soft bottoms (LYTHGOE and LYTHGOE, 1971; GOLANI and DAROM, 1999), it was recorded over the sandstone ridge, in the vicinity of the ARs, and further contributed to the increase in Lessepsian species richness. Migrants accounted for 25.2% of total specimens and 65.3% of the commercially exploitable biomass in AR quadrates.
The higher figures of the present study may be explained by differences in the sampling methods and/or habitat depth. An additional explanation may lie in the higher efficacy of ichthyocide in exposing cryptic species (mainly members of the families Blenniidae and Gobiidae). Since most of these species are indigenous, they decrease the relative proportion of Lessepsian species. However, the large migrant proportions may also point to two trends: a spatial one, which reflects a competitive edge migrants have over indigenous species in AR sites, similar to the one demonstrated by TYRRELL and BYERS (2007) for fouling species, and a temporal trend - an increase over time in the rate of Lessepsian colonization.
Other than S. rubrum, the foremost migrants to benefit from AR presence were rabbitfish. In their original Indo-Pacific habitat, rabbitfish are found in small schools in shallow water close to the bottom (FROESE and PAULY, 2006). They feed on a wide range of benthic algae (trophic level 2 - Table 1), and their success as migrants is attributed to the scarcity of indigenous herbivores in the Levant (LUNDBERG and GOLANI, 1995; BARICHE et al., 2004). Their high feeding intensity and high competitive potential (STERGIOU, 1988) have enabled them to become dominant in the Levantine herbivore niche. The absence of the indigenous herbivore Saupe, Sarpa salpa (Linnaeus), which was not sighted in this study, as well as its absence from Israeli fishing yield reports in recent years (SNOVSKY and SHAPIRO, 2003), provides some support for the hypothesis raised by BARICHE et al. (2004) regarding its exclusion by rabbitfish.
Large numbers of juveniles, mostly damselfish, were very closely associated with the structures in summer (Fig. 5). This close-range interaction stressed the advantage of AR units over natural reef control sites in their role as nurseries. Mediterranean reefs are non-living rocky outcrops and thus resemble ARs in temperate or less stable environments (SEAMAN, 2000). Whereas tropical reef recruitment is chiefly governed by juvenile fish, reefs in temperate seas gravitate towards adult colonization (SEAMAN and SPRAGUE, 1991). Our findings, however, detected large numbers of damselfish and wrasse juveniles during the warm season. Therefore, since the seasonal temperature gradient in the Levantine basin is so acute (from 15°C in winter to 30°C in summer) and since the appearance of juveniles was witnessed only in the warm season, it is suggested that a seasonally alternating recruitment mechanism took place in the AR field: a limited, temperate-style adult recruitment to the assemblage every winter-spring and a more sub-tropical-like juvenile recruitment every summer-autumn. This is possibly a magnification of a similar mechanism occurring in natural reefs, as the only record of C. chromis juveniles (n=11) other than in unit quadrates was in great proximity (<1m) to a large rocky outcrop in the complex control in the summer of 2004.
The short duration of the FAD study did not allow a complete colonization pattern to be described, and only partial conclusions can be drawn. During summer, FADs attracted schools as big as 110 specimens of the Lessepsian Shrimp scad, Alepes djedaba (Fig. 6). Its dominance, as well as the utter lack of indigenous pelagic fishes near FADs, may be evidence of the susceptibility of the Mediterranean Sea to the Lessepsian invasion. It was joined in autumn by the transient Round sardinella, Sardinella aurita, as well as the demersal damselfish C. chromis. FADs were then lowered to 15m depth in an attempt to prevent unit destruction after the first winter storm. The assemblage then became more heterogeneous when, at this height, rabbitfish and wrasses joined the damselfish in ascending from ground AR units to the FADs. Since the natural depth distribution of all three necto-benthic species does not limit them to 20m (GOLANI and DAROM, 1999; FROESE and PAULY, 2006), an isobath must therefore exist, between 4 and 10m above the seafloor, below which fish relate to FADs as close enough to the bottom to function as a single structure. This ascent also supports the hypothesis that vertical relief plays a key role in an AR's success as habitat (RILOV and BENAYAHU, 2000). In contrast to bottom units, the lack of fish inside FADs suggests it was not shelter from predation that fishes sought, but current lee, as suggested for fish larvae by LINDQUIST et al. (2005). This dominance in biomass of species not trophically dependent on the bio-fouling accumulated on FADs is also in accordance with findings by DEUDERO et al. (1999).
The present study was carried out over only 12 months of sampling. MONTEIRO and SANTOS (2000) found that the cumulative species richness in a Portuguese AR took 5 years to reach the succession plateau. CHARBONNEL et al. (2000) found that an AR assemblage still evolved, and density and biomass continued to grow, after 7 years. RELINI et al. (2002) reported that species diversity and richness were still steadily increasing even after 10 years. However, GOLANI et al. (2007), as well as DIAMANT et al. (1986), demonstrated that after a complete defaunation it took ARs along the Israeli coast only 1 year to return to pre-defaunation values. The data gathered for the Haifa AR project, although providing some good basic and comparative figures, were insufficient for long-term predictions of ecological processes. For example, the decline in both adult and juvenile numbers over the study period can only be fully understood in a multi-annual study. A longer duration of data collection is therefore highly advised.
Twenty years ago, BOHNSACK and SUTHERLAND (1985) termed AR construction 'more of an art than a science' and stressed the need for inexpensive, effective, long-lasting, easily handled, easily transported structures. Marine structures are by and large planned to withstand storms of a certain repetition probability, based on the probability of a worse storm occurring during the intended service period. Structure resilience, cost-benefit and potential damage considerations must be taken into account at these planning stages. The concrete structures used as ground units in the Haifa AR project were originally prefabricated sewer ponds. They were thus cheap and relatively small, and were easily fitted with the pipes for enhanced complexity.
Nevertheless, when coupled with the FAD units, they proved inadequate for use as ARs, as their life span was only 3-5 months. The relative longevity (over 3 years so far) of the two remaining bottom units, once severed from their FADs, advises against such tethering of bottom to midwater structures in the future.
Fig. 1: Haifa AR location map and illustration of the deployment scheme (map adapted from SPANIER et al., 1989, with permission).
Fig. 2a-d: Mean fish abundance, species richness, Shannon's diversity index and biomass ± S.E. during 12 months of census (divided into 7 sub-seasons) in the Haifa AR site quadrates: Unit quadrate, Adjacent quadrate, Detached quadrate, Complex control, Plain control.
Fig. 3: Mean Lessepsian migrant percentage during 12 months of census in the Haifa AR site in the study quadrates: Inner unit, Unit quadrate, Adjacent quadrate, Detached quadrate, Plain control, Complex control.
Fig. 5: Juveniles/females recorded in censuses. Siganus rivulatus and Chromis chromis figures refer to juveniles (easily distinguishable by size and colour), whereas females of the wrasses Thalassoma pavo and Coris julis may highly resemble juveniles in both size and morphology.
Concurrent BMP7 and FGF9 signalling governs AP-1 function to promote self-renewal of nephron progenitor cells
Self-renewal of nephron progenitor cells (NPCs) is governed by BMP, FGF and WNT signalling. Mechanisms underlying cross-talk between these pathways at the molecular level are largely unknown. Here we delineate the pathway through which the proliferative BMP7 signal is transduced in NPCs in the mouse. BMP7 activates the MAPKs TAK1 and JNK to phosphorylate the transcription factor JUN, which in turn governs transcription of AP-1-element containing G1-phase cell cycle regulators such as Myc and Ccnd1 to promote NPC proliferation. Conditional inactivation of Tak1 or Jun in cap mesenchyme causes identical phenotypes characterized by premature depletion of NPCs. While JUN is regulated by BMP7, we find that its partner FOS is regulated by FGF9. We demonstrate that BMP7 and FGF9 coordinately regulate AP-1 transcription to promote G1-S cell cycle progression and NPC proliferation. Our findings identify a molecular mechanism explaining the important cooperation between two major NPC self-renewal pathways.
Iterative nephron induction is based on a program of reciprocal interactions between the ureteric bud and the surrounding nephron progenitor cells (NPCs) of the cap mesenchyme. The cap mesenchyme is divided into compartments expressing distinct transcription factors. The highest order compartment expresses Cbp/p300-interacting transactivator 1 (CITED1) and Six homeobox 2 (SIX2). This compartment transitions to the CITED1−/SIX2+ compartment that subsequently differentiates into the pre-tubular aggregate, the precursor of the epithelial renal vesicle 1,2. A fine balance between NPC self-renewal and differentiation is critical in determining nephron number in the adult kidney. Three major growth factor pathways are essential for NPC maintenance and self-renewal: Wingless-type MMTV integration site family member 9B (WNT9B)/β-catenin, fibroblast growth factors (FGF) 9/20, and bone morphogenetic protein-7 (BMP7)/MAPK (refs 3-7). Previous reports suggest important signalling interactions in NPCs. For example, BMP and FGF synergistically promote progenitor cell maintenance in organ culture 8. Understanding the mechanistic bases for these interactions is important to advance our understanding of renal organogenesis and for attempts at de novo nephrogenesis from stem cells.
To define the circuitry connecting signalling pathways and thus build an integrated model for regulation of NPC self-renewal, we first need to map signal transduction mechanisms used by each growth factor. In this study, we define the MAPK signalling cascade that transduces the proliferative response to BMP7 using complementary primary cell culture and conditional gene inactivation approaches. We show that the BMP7 signal is transduced through TAK1 and JNK to activate the transcription factor JUN in NPCs. JUN is required for proliferation of these cells and directly governs G1-phase cell cycle regulatory genes including Myc and Ccnd1. JUN is a component of the dimeric AP-1 transcription factor that also includes FOS (ref. 9). AP-1 components are differentially regulated by growth factor stimuli, and changes in dimer composition strongly influence AP-1 function 10 . Using both genetic and primary cell models we show that BMP7 and FGF9 have distinct effects on JUN and FOS. While FGF9 controls FOS activation, BMP7 activates JUN, which is the DNA-binding partner in the JUN-FOS heterodimer. JUN-FOS heterodimers are strong transactivators with enhanced DNA-binding ability compared with JUN homodimers 10 . Compared with either BMP7 or FGF9 treatment, combined BMP7 and FGF9 stimulation of NPCs amplifies activation of an AP-1 transcriptional reporter, increases transcription of the G1-S cell cycle regulator Ccnd1, and promotes G1-S transition and proliferation of cells. On the basis of these findings we propose that BMP7 and FGF9 cooperatively control AP-1 function to promote NPC proliferation, providing an explanation for the important synergy between these growth factors in NPC maintenance.
Results
BMP7 promotes NPC proliferation through TAK1 and JNK. Previous work from our laboratory indicated that BMP7 promotes proliferation of cells in the nephrogenic zone through MAPK signalling 7. To determine the kinetics of pathway activation specifically in NPCs, we measured the phosphorylation states of each of the MAPK components TAK1, JNK and JUN in response to BMP7 in NPCs purified by immunomagnetic separation (Fig. 1a) 2,11. Following BMP7 treatment, we detected a step-wise sequence of phosphorylation events, with peak activation of pTAK1 at 10 min, pJNK at 15 min and pJUN at 20 min (Fig. 1b,c and Supplementary Fig. 1a). Activated JUN binds to AP-1 elements in target genes, including itself and Myc (refs 12,13). In NPCs, Jun and Myc were upregulated 2 h after BMP7 stimulation, and pre-treatment with TAK1 and JNK inhibitors significantly reduced this response, indicating that they are early transcriptional targets of the pathway in these cells (Fig. 1d). Tak1- and Jun-deficient NPCs showed a significant reduction in BMP7 stimulation of Jun and Myc transcription, corroborating the finding that BMP7 controls transcription of Jun and Myc through TAK1-JNK signalling (Fig. 1e,f). Myc is required for renewal of NPCs in vivo, and our findings outline one signalling mechanism for the control of Myc expression in these cells 14.
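Transcript responses of this kind are typically quantified by relative RT-qPCR; a minimal sketch of the standard 2^(−ΔΔCt) fold-change calculation (the Ct values and housekeeping gene are illustrative, not the study's raw data):

```python
# Standard 2^(-ddCt) fold change for RT-qPCR; all numbers are illustrative.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: Jun 2 h after BMP7, normalized to a housekeeping gene such as Gapdh.
print(round(fold_change(22.0, 18.0, 24.0, 18.5), 2))  # ~2.8-fold induction
```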
To evaluate the role of the BMP7-TAK1-JNK-JUN pathway in cellular proliferation, we assessed the growth curves of BMP7-stimulated NPCs treated with inhibitors of TAK1 or JNK. As expected, BMP7-stimulated proliferation was reversed by TAK1 or JNK inhibition (Fig. 1g). To confirm that NPCs retained their phenotype in the experimental conditions, we measured expression of CITED1, SIX2 and LEF1, as well as evaluating a panel of markers for cap mesenchyme and cortical interstitium (Supplementary Fig. 1b,c). To confirm that BMP7-stimulated proliferation depends on the kinase activity of pathway components, wild type (WT) and kinase-dead versions of TAK1 and JUN were expressed in NPCs, which were stimulated with BMP7. Transfection efficiency was analysed by expressing a GFP construct, and by measuring the expression of Tak1 and Jun transcripts in transfected NPCs (Supplementary Fig. 1d,e). Wild type TAK1 and JUN expression augmented the BMP7-induced proliferative response, whereas kinase-dead variants reduced it, confirming that phosphorylation of pathway components is essential for proliferation of NPCs (Fig. 1h and Supplementary Fig. 1f). On the basis of our primary cell analysis we conclude that BMP7 promotes NPC proliferation through activation of the TAK1-JNK-JUN signalling cascade.
Bmp7 and Tak1 interact to control NPC renewal. To confirm the BMP7-TAK1 relationship in vivo, we used the Bmp7+/cre strain to inactivate a single copy of Tak1. Bmp7+/cre is an inactivating mutation and heterozygous animals express only one copy of the gene 15. We reasoned that limiting the availability of TAK1 would exacerbate the effect of reduced BMP7 ligand availability if these molecules operate in the same pathway. Although the body weight of Bmp7+/cre embryos appeared slightly lower than WT, no difference could be detected between Bmp7+/cre and Bmp7+/cre;Tak1+/c mice (Supplementary Fig. 2a). Morphometric analyses at E14.5 and P0 revealed significant reductions in size (*P<0.05) and weight (**P<0.001, Student's t-test, n = 6) of Bmp7+/cre;Tak1+/c kidneys compared with Bmp7+/cre, supporting the notion that BMP7 indeed does signal through TAK1 in vivo (Fig. 2a,b and Supplementary Fig. 2b,c). To verify loss of Tak1 and Jun in mutant kidneys, we measured expression of Tak1, Jun, and their downstream target Myc in NPCs isolated from P0 WT, Tak1+/c, Bmp7+/cre and Bmp7+/cre;Tak1+/c mice. Tak1, Jun and Myc were reduced by ~60% in Bmp7+/cre;Tak1+/c compared with WT (Fig. 2c). Activated pJUN levels were decreased in the cap mesenchyme, but not in the collecting duct tips, of Bmp7+/cre;Tak1+/c kidneys compared with Bmp7+/cre and WT, confirming that compound heterozygosity for Bmp7 and Tak1 results in reduced activation of JNK-JUN signalling specifically in NPCs (Supplementary Fig. 2d,e).
Cell death in the nephrogenic zone and premature loss of NPCs are hallmarks of the Bmp7 null mutants 16-18. We therefore measured proliferation and cell death in NPCs of single and compound mutants. Using SIX2 with the proliferation marker Ki67, we observed a marked reduction in Ki67+/SIX2+ cells at E14.5 (15%) and P0 (10%), and a concomitant decrease in the number of SIX2+ cells per kidney in the Bmp7+/cre;Tak1+/c kidneys relative to the Bmp7+/cre mutant (Fig. 2d-g and Supplementary Fig. 2f-i). Apoptosis analysis showed no evidence of increased cell death in E14.5 Bmp7+/cre;Tak1+/c or Bmp7+/cre kidneys, suggesting that Tak1 is involved only in the proliferative response of NPCs to BMP7 (Supplementary Fig. 2j). Growth and branching of the collecting duct is controlled by factors secreted by NPCs, and to determine if branching was secondarily affected in Bmp7+/cre;Tak1+/c kidneys, we quantified the number of collecting duct tips. Bmp7+/cre kidneys show reduced branching relative to the WT, which strongly suggests an effect of diminished NPC numbers in this mutant, considering that the reduction of Bmp7 caused by heterozygosity is predicted to promote ureteric bud outgrowth and branching 19,20. Compared with Bmp7+/cre, Bmp7+/cre;Tak1+/c kidneys showed a further reduction of collecting duct branching, proportional to the reduction in NPC number (Fig. 2h and Supplementary Fig. 2k). Overall, our genetic interaction study supports control of NPC renewal by BMP7 signalling through TAK1 in the developing kidney.
Deletion of Tak1 and Jun in NPCs reduces their renewal. To stringently determine the requirement for components of the BMP7-TAK1-JNK-JUN pathway in NPCs in vivo, we inactivated Tak1 and Jun using Six2-cre. Both Tak1NPC (Six2-cre;Tak1c/c) and JunNPC (Six2-cre;Junc/c) P0 kidneys showed significant reductions in kidney weight (50%) and size (30-40%) compared with Tak1het (Six2-cre;Tak1+/c) and Junhet (Six2-cre;Jun+/c) kidneys (**P<0.005, Student's t-test), confirming that these genes are essential in the Six2 lineage, which is limited to NPCs and their derivatives (Fig. 3a-c and Supplementary Fig. 3a) 21. Body weights of these different strains did not show significant differences (P>0.05, Student's t-test; Supplementary Fig. 3b). To verify loss of Tak1 and Jun, we measured expression of Tak1, Jun and their target Myc in NPCs isolated at E17.5 (Tak1NPC and Tak1het) or E14.5 (JunNPC and Junhet). Tak1 and Jun were reduced by 80% and Myc by ~50-60% in Tak1NPC and JunNPC NPCs, respectively (Fig. 3d,e). As expected, Tak1 was unchanged in JunNPC NPCs (Fig. 3e). pJUN and MYC were reduced in mutant kidneys, confirming that inactivation of Tak1 and Jun results in reduced activation of JNK-JUN signalling and downstream targets in NPCs (Fig. 3f and Supplementary Fig. 3c).
Morphologically, Tak1NPC and JunNPC kidneys revealed several atypically organized cap mesenchymes carrying few NPCs (Fig. 3g and Supplementary Fig. 3d,e). SIX2+ cells were reduced by ~50% in mutant kidneys, suggesting that Tak1 and Jun inactivation results in premature loss of NPCs (Fig. 3h and Supplementary Fig. 3e). To understand if this was due to reduced proliferation, we measured co-expression of Ki67 or pHH3 and SIX2 in Tak1NPC (P0) and JunNPC (E14.5 and P0) kidneys. We observed a 50% reduction of Ki67+/SIX2+ cells and pHH3+/SIX2+ cells in Tak1NPC (P0) and JunNPC (E14.5, P0) kidneys (Fig. 3i,j and Supplementary Fig. 3e). TUNEL and caspase-3 staining showed no evidence of cell death in the Tak1NPC kidneys, suggesting that the loss of NPCs was strictly due to reduced proliferation (Supplementary Fig. 3f).
To rule out the possibility that Tak1 and Jun mutant NPCs may take on a cortical interstitial fate, we analysed Tak1NPC (E17.5) and JunNPC (E14.5) NPCs for markers of cap mesenchyme (Cited1, Six2, Dpf3 and Meox1) and cortical interstitium (Foxd1 and Sfrp1). Cap mesenchyme markers either remained unchanged or showed a slight increase in Tak1NPC (E17.5) and JunNPC (E14.5). However, neither Foxd1 nor Sfrp1 was elevated in Tak1NPC or JunNPC, indicating that the cellular identity of NPCs is unaltered (Fig. 3d,e and Supplementary Fig. 3g). To confirm that Tak1 and Jun mutant NPCs retain their cellular identity in vivo, we performed lineage tracing analysis by crossing the Six2+/cre;Tak1+/c and Six2+/cre;Jun+/c mice with the R26RLacZ reporter. β-galactosidase and SIX2 immunostaining revealed that tagged cells were confined to the SIX2+ cap mesenchyme and its derivatives in both Tak1NPC and JunNPC kidneys (Fig. 3k and Supplementary Fig. 3h). Thus, inactivation of Tak1 and Jun in NPCs partially phenocopies the Bmp7 null phenotype, suggesting that they operate in the same pathway to regulate NPC self-renewal.
The few Ki67+/SIX2+ NPCs we observed in Tak1NPC kidneys localized predominantly to the distal cap mesenchyme under the collecting duct tips, in which CITED1 expression is normally lost (see insets in Fig. 3i). To understand if the Tak1NPC phenotype results from gene inactivation specifically in the CITED1+ compartment, we used the Cited1-creERT2 strain. Cited1-creERT2;Tak1c/c (Tak1C-NPC) and Tak1c/c (Tak1C-WT, littermate control) mice were tamoxifen injected at either E11.5 or E14.5 and collected after 72 h. Tak1C-NPC kidneys were significantly smaller than Tak1C-WT at both time points (**P<0.005, Student's t-test), and their cap mesenchymes were depleted (Fig. 4a and Supplementary Fig. 4a,b). Tak1 transcript was reduced by 80%, and both Jun and Myc were reduced in mutant NPCs (Fig. 4b). Immunoblotting confirmed the reduction of TAK1 in mutant NPCs (Fig. 4c and Supplementary Fig. 4c). pJUN and MYC were also reduced, indicating that Tak1 inactivation results in reduced JNK-JUN signalling in NPCs (Fig. 4d and Supplementary Fig. 4d,e). To understand if Tak1 is required to maintain NPCs in the CITED1+/SIX2+ state, we analysed cap mesenchyme markers in Tak1C-NPC NPCs. Tak1-inactivated NPCs maintained expression of Cited1 and Six2 at levels similar to WT, indicating that they retain the appropriate cellular identity. However, the number of CITED1+ NPCs was reduced by 25-30% at both E14.5 and E17.5 (Fig. 4e-g and Supplementary Fig. 4f). Co-immunostaining for CITED1, SIX2 and pHH3 confirmed decreased pHH3 staining in CITED1+/SIX2+ cells of Tak1C-NPC kidneys relative to Tak1C-WT, but no difference in the CITED1−/SIX2+ compartment at either E14.5 or E17.5, validating our findings from Tak1NPC mutants (Fig. 4h,i and Supplementary Fig. 4g). Collectively, our genetic analysis suggests that the BMP7-TAK1-JNK-JUN pathway is required for proliferation of the early CITED1+/SIX2+ compartment in vivo.
BMP7 promotes G1 to S cell cycle progression in NPCs. Having defined the requirements for components of the BMP7-TAK1-JNK-JUN signalling cascade in NPCs, we wanted to understand how this pathway interfaces with cellular proliferation control mechanisms. We have shown that the BMP7-TAK1-JNK-JUN pathway activates Jun and Myc transcription in NPCs (Figs 3 and 4). JUN and MYC are key transcriptional regulators of the cell cycle that modulate expression of genes involved in the G1-phase 22,23. To test how the BMP7-TAK1-JNK-JUN pathway controls NPC proliferation, we investigated the effects of inhibiting pathway components on the G1-S transition in BMP7-stimulated NPCs. Immunostaining for specific markers of G1 (CCNE1) and S (PCNA) was performed to calculate the percentages of G1 and S-phase cells in each experimental condition. After 24 h stimulation, BMP7 robustly promoted the G1 to S transition in both E14.5 and E17.5 NPCs (Fig. 5a,b), and this effect was reversed by inhibition of pathway components, indicating that BMP7 promotes NPC proliferation by controlling the G1 to S transition through TAK1-JNK-JUN signalling (Fig. 5a,b and Supplementary Fig. 5a,b).
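A minimal sketch of how phase fractions are derived from such marker counts (the counts, and the scoring of double-positive cells as G1-S, are illustrative assumptions rather than the study's scoring scheme):

```python
# Classify counted cells into cell cycle phases from marker staining:
# CCNE1 marks G1, PCNA marks S; double-positive cells are scored here as G1-S.
def phase_fractions(n_ccne1_only: int, n_pcna_only: int, n_double: int) -> dict:
    total = n_ccne1_only + n_pcna_only + n_double
    return {"G1": n_ccne1_only / total,
            "S": n_pcna_only / total,
            "G1-S": n_double / total}

vehicle = phase_fractions(n_ccne1_only=60, n_pcna_only=25, n_double=15)
bmp7    = phase_fractions(n_ccne1_only=35, n_pcna_only=50, n_double=15)
print(vehicle)
print(bmp7)  # BMP7 shifts cells out of G1 and into S
```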
To understand the mechanism underlying the effect of BMP7-TAK1-JNK-JUN signalling on the G1-S transition, we set out to define the repertoire of G1-phase cell cycle regulatory genes modulated by the pathway in NPCs. Several cell cycle regulators containing AP-1-binding sites are JUN targets, including Ccnd1, Ccnd3, p21, p16, Jun and Myc (refs 24,25). Like JUN, MYC regulates the cell cycle by controlling G1-phase genes. Although a number of targets are shared, MYC has a unique repertoire including Ccne1, Cdc25a, p27 and Ccna2 (ref. 22). BMP7 may therefore control G1-S cell cycle regulators not only through JUN but also through MYC. Conditional gene inactivation shows that Myc is required for NPC renewal at E15.5-E18.5 but not earlier in nephrogenesis, suggesting that the contribution of MYC to cell cycle control by BMP7 might be limited to later stages of nephrogenesis 14. To understand if this is the case, we compared the responsiveness of MYC and JUN targets to BMP7 in NPCs at E14.5 and E17.5. The JUN targets Ccnd1, Ccnd3 and p21 were regulated by BMP7 in a TAK1- and JNK-dependent manner in both E14.5 and E17.5 NPCs (Fig. 5c,d). However, the MYC targets Ccne1, Cdc25a and p27 were regulated by BMP7 in a TAK1- and JNK-dependent manner only in E17.5 NPCs (Fig. 5c,d and Supplementary Fig. 5c-f). Our analysis indicates that the BMP7-TAK1-JNK-JUN pathway regulates JUN cell cycle targets, including Myc, throughout nephrogenesis, but that the contribution of MYC itself to control of G1 targets is limited to later stages of nephrogenesis.
To confirm these observations in vivo, we first measured target gene activation in NPCs isolated from E14.5 JunNPC and E17.5 Tak1NPC kidneys. As expected, JUN targets were misregulated in mutant NPCs at both E14.5 and E17.5, whereas MYC targets were misregulated only at E17.5 (Fig. 5e,f). Next, we immunostained Bmp7 null, JunNPC and Tak1C-NPC kidneys for the JUN-activated target CCND1 and the MYC-activated target CCNE1 at E14.5 and E17.5. CCND1 has been used as a marker of the distal tubule; therefore we first verified its expression in cap mesenchyme using two different antibodies 26. CCND1 was expressed in a salt-and-pepper distribution in WT cap mesenchyme, as expected considering that its expression is limited to the G1-phase of the cell cycle (Fig. 5g and Supplementary Fig. 5g). CCND1 expression was reduced in the cap mesenchymes of all mutants, suggesting that the BMP7-TAK1-JNK-JUN pathway indeed controls CCND1 in vivo and regulates JUN targets both early and late in nephrogenesis (Fig. 5g and Supplementary Fig. 5i). To understand if this could represent a general reduction in expression of G1 cell cycle genes in NPCs of mutant kidneys, we also measured expression of Ccnd3, which is expressed in a temporally overlapping manner with Ccnd1. Although RNA expression was reduced by 20%, protein expression was not significantly altered in mutants, supporting the notion that CCND1 is specifically misregulated in BMP7-TAK1-JNK-JUN pathway mutants in vivo (Supplementary Fig. 6h). Expression of the MYC-activated target CCNE1 was reduced in the E17.5 mutant kidneys but not in the E14.5 mutants, confirming our previous observation that MYC targets are regulated by the BMP7-TAK1-JNK-JUN pathway preferentially at later stages of nephrogenesis (Fig. 5h and Supplementary Fig. 6j). From these analyses, we conclude that the BMP7-TAK1-JNK-JUN pathway controls cellular proliferation of NPCs by regulating different G1-phase cell cycle regulators in early and later phases of nephrogenesis (Fig. 5i).

Fig. 3 (caption, continued): Ki67 (green) and SIX2 (red) co-immunostaining of P0 Tak1het, Tak1NPC, Junhet, JunNPC kidneys; insets show magnifications of cap mesenchymes with arrows pointing to Ki67+ cells; scale bars, 100 μm. (j) Number of SIX2+/Ki67+ cells per kidney section; error bars represent s.d.; **P<0.005, Student's t-test. (k) β-galactosidase (red) and SIX2 (green) co-immunostaining of E17.5 Tak1het (Six2+/cre;Tak1+/c;R26RLacZ) and Tak1NPC (Six2+/cre;Tak1c/c;R26RLacZ) kidneys. Three mice were analysed per genotype at E14.5 and E17.5 (n = 3). CD, collecting duct; CM, cap mesenchyme; PTA, pre-tubular aggregate.
BMP7 and FGF9 cooperatively control AP-1 transcription. FGF9 has been reported to synergize with BMP7 to promote maintenance of isolated metanephric mesenchyme in vitro; however, the molecular mechanism underlying this cross-talk remains unknown 5,8. Metanephric mesenchyme consists of a mixture of cell types, and we first analysed proliferation in BMP7- and FGF9-treated purified E17.5 NPCs to understand if the pathways intersect in this cell type. Using 5′-ethynyl-2′-deoxyuridine (EdU) to label the S-phase and pHH3 to mark cells undergoing mitosis (M), we measured the overall proliferation of NPCs stimulated with BMP7, FGF9 or BMP7+FGF9. BMP7 or FGF9 stimulation showed a significant increase in EdU+ and pHH3+ nuclei compared with vehicle, and this effect was further augmented in BMP7+FGF9 stimulated cultures (Fig. 6a). Immunostaining and transcriptional analysis of cap mesenchyme markers showed that BMP7+FGF9-treated cultures remained in the CITED1+ state following treatments (Supplementary Fig. 6a,b). Quantitation of the number of EdU+ and pHH3+ nuclei revealed a significant increase in S and M-phase cells in BMP7+FGF9-treated cultures relative to either BMP7 or FGF9 stimulation, suggesting that these growth factors indeed collaboratively promote NPC proliferation (Fig. 6a). To understand if FGF9 interfaces with BMP7 to regulate G1-S progression, we labelled NPCs treated with BMP7, FGF9, or BMP7+FGF9 with CCNE1 and PCNA to distinguish cells in G1 and S phases. BMP7+FGF9 stimulation resulted in ~50% fewer G1 and G1-S cells and 30% more S-phase cells compared with BMP7 or FGF9 treatment, suggesting that BMP7 and FGF9 promote NPC proliferation by accelerating G1 to S cell cycle progression (Fig. 6b). To determine how FGF9 and BMP7 control the G1-S transition, we measured stimulation of the BMP7-TAK1-JNK-JUN-controlled G1 regulatory genes Ccnd1, Ccnd3, Myc, Ccne1 and Cdc25a by either factor or both factors together. Expression of all five transcripts was upregulated by BMP7 and, interestingly, FGF9 stimulation also increased their transcription, indicating that FGF9 contributes to regulation of AP-1 targets. BMP7+FGF9 combinatorial stimulation showed an additive effect on these targets, indicating that BMP7 and FGF9 coordinately control transcription of G1-phase cell cycle regulators (Fig. 6c and Supplementary Fig. 6c). AP-1 function is regulated by dimer composition as well as the phosphorylation status of its constituents, and JUN-FOS heterodimers activate targets more efficiently than JUN homodimers 9,10. Given that BMP7 and FGF9 combinatorial stimulation increased transcription of G1-phase cell cycle regulators containing AP-1-binding elements, we speculated that FGF9 may modulate transcription and phosphorylation of JUN or its dimeric partner FOS concomitantly with BMP7 to regulate AP-1 function. We first measured the effects of FGF9 and BMP7 stimulation on Jun and Fos transcription. As expected, Jun transcription was upregulated by BMP7, but surprisingly it was unaffected by FGF9 (Fig. 6d). FGF9 stimulation of cells was verified by measuring expression of the FGF-target gene Spry1 (Supplementary Fig. 6d). Fos transcription, on the other hand, was strongly induced by FGF9 compared to BMP7, and this effect was further enhanced by combined stimulation with BMP7. Thus, while FGF9 and BMP7 cooperatively promote Fos transcription, the obligate DNA-binding partner Jun is controlled by BMP7 alone (Fig. 6d).
Examination of JUN and FOS phosphorylation showed that BMP7 robustly activates JUN (3.15-fold), whereas FGF9 activates FOS (2.85-fold) (Fig. 6e and Supplementary Fig. 6e). BMP7+FGF9 stimulation resulted in simultaneous phosphorylation of FOS and JUN, suggesting that AP-1 transcriptional activation may be potentiated (Fig. 6f). To test this, we transfected NPCs with an AP-1 luciferase reporter (3×AP1-Luc) and measured reporter activity in response to BMP7 and FGF9 stimulation 27. BMP7 or FGF9 treatment resulted in less than a twofold luciferase response, but combined treatment caused more than a 2.5-fold increase, indicating that simultaneous JUN and FOS activation promotes AP-1 transcriptional activity (Fig. 6g). To test this in a gene that directly influences proliferation in NPCs, we compared activation of a Ccnd1 luciferase reporter (CCND1-Luc) with a variant in which the AP-1-binding site has been mutated (CCND1ΔAP-1-Luc) 28. BMP7 or FGF9 treatment alone showed less than a twofold luciferase response, whereas BMP7+FGF9 stimulation resulted in a threefold increase, demonstrating that concurrent BMP7 and FGF9 signalling strongly promotes AP-1 function compared with either factor alone (Fig. 6h). Dependence on the AP-1 element for this transcriptional activity was confirmed by the finding that the Ccnd1 promoter with a mutated AP-1-binding site was unresponsive to BMP7 and/or FGF9 stimulation (Fig. 6i). We previously reported that transgenic expression of the FGF feedback regulator Spry1 in NPCs results in increased apoptosis in cap mesenchyme 6,29. To confirm the contribution of FGF signalling to FOS regulation in vivo, we generated Six2-cre;Spry1-Tg mice. Kidneys were severely hypoplastic at P0, and body weight was reduced compared with WT littermate controls (Fig. 7a,b and Supplementary Fig. 7a-c). Spry1-Tg kidneys revealed a thin nephrogenic zone with depleted cap mesenchymes and distended tubules (insets in Fig. 7a and Supplementary Fig. 7d). SIX2 immunostaining confirmed premature loss of NPCs in Spry1-Tg kidneys, with a ~65% reduction in NPC number (Fig. 7d and Supplementary Fig. 7e). Spry1 expression increased sevenfold in Spry1-Tg NPCs, whereas the FGF-target gene Pea3 was reduced by 55%, confirming inhibition of FGF signalling. Fos transcript diminished by 70%, whereas Jun and its upstream regulator Tak1 remained unchanged (Fig. 7e). To determine if Spry1-mediated attenuation of FGF signalling strictly results in reduced activation of FOS, we compared pFOS and pJUN expression in cap mesenchymes of P0 WT and Spry1-Tg versus Junhet and JunNPC kidneys. Expression of pFOS was reduced, whereas pJUN levels remained intact, in cap mesenchymes of Spry1-Tg kidneys. Reciprocally, pFOS levels were unperturbed while pJUN was strongly reduced in cap mesenchymes of JunNPC kidneys (Fig. 7f and Supplementary Fig. 7f). We next asked if reduced activation of JUN and FOS in JunNPC and Spry1-Tg NPCs, respectively, decreases AP-1 transcriptional activity in response to BMP7 or FGF9 stimulation, and whether this effect can be rescued by expressing JUN or FOS in mutant NPCs. We treated 3×AP-1-Luc-transfected E17.5 NPCs isolated from Junhet and JunNPC, WT and Spry1-Tg kidneys with BMP7 and/or FGF9. Interestingly, Jun-deficient NPCs failed to activate the AP-1 reporter in response to both BMP7 and FGF9 treatment, whereas Spry1-Tg NPCs showed only a slight reduction in AP-1 reporter activity in response to BMP7, but were completely unresponsive to FGF9 (Fig. 7g,h).
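Reporter responses of this kind are usually expressed as fold activation over vehicle; a minimal sketch, assuming a dual-luciferase setup in which firefly signal is normalized to Renilla for transfection efficiency (the normalization scheme and all numbers are illustrative, not taken from the study):

```python
# Fold activation of a luciferase reporter over vehicle control, with
# firefly counts normalized to a co-transfected Renilla control.
def fold_activation(firefly, renilla, firefly_vehicle, renilla_vehicle):
    return (firefly / renilla) / (firefly_vehicle / renilla_vehicle)

vehicle = (1000.0, 500.0)
treatments = {"BMP7": (1700.0, 480.0),
              "FGF9": (1850.0, 520.0),
              "BMP7+FGF9": (2700.0, 400.0)}
for label, (ff, rn) in treatments.items():
    print(label, round(fold_activation(ff, rn, *vehicle), 2))
# Single treatments give <2-fold; the combination exceeds 2.5-fold.
```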
Expression of a wild type JUN construct (pCMV-JUN) rescued AP-1 reporter activation in both BMP7- and FGF9-stimulated Jun-deficient NPCs, and expression of a FOS phosphorylation-mimic construct (pcDNA-FOSDD) rescued AP-1 reporter activity in FGF9-stimulated Spry1-Tg NPCs (Fig. 7g,h) 30,31. This suggests that JUN is essential for AP-1 activation by both growth factors, and that the availability of FOS determines the amplitude of AP-1 activity. Cellular identity and transfection efficiency of NPCs were verified by examining cap mesenchyme and pre-tubular aggregate markers, expression of a GFP construct, JUN and pJUN immunostaining and RT-qPCR analysis of Jun transcript levels in transfected cells (Supplementary Fig. 7g-l). To determine whether JUN and FOS are required for the proliferative response of NPCs to BMP7 and FGF9, we performed EdU labelling of NPCs isolated from E17.5 Junhet and JunNPC, WT and Spry1-Tg kidneys and stimulated with BMP7 and/or FGF9. Jun-deficient NPCs failed to respond to both BMP7 and FGF9 stimulation. However, proliferation of Spry1-Tg NPCs in response to FGF9 was severely attenuated, and only slightly reduced in response to BMP7, suggesting that Jun is essential for the proliferative response of NPCs to both BMP7 and FGF9 (Fig. 7i,j). On the basis of these studies, we propose that BMP7 and FGF9 cooperatively control the composition of AP-1 dimers in NPCs, and that AP-1 composition influences the strength of activation of cell cycle regulators such as Ccnd1 (Fig. 7k).
Discussion
BMP7 is required for NPC maintenance in the developing kidney 15-17. We show that the signalling cascade is initiated by activation of the MAPKKK TAK1 in response to BMP7 stimulation. TAK1 can be activated by numerous stimuli, but the finding that Bmp7 and Tak1 interact to regulate NPC renewal in vivo indicates an essential role for TAK1 specifically in the BMP7 pathway 32. TAK1 activates JNK, which phosphorylates the transcription factor JUN, and the kinase activity of each of these components is essential for proliferation of NPCs. Kidneys lacking Tak1 or Jun in cap mesenchyme display identical phenotypes characterized by premature depletion of NPCs, indicating that JUN may be the sole essential mediator downstream of TAK1 in this signalling process. We show that Myc is a transcriptional target of the BMP7-TAK1-JNK-JUN pathway in NPCs. MYC is essential for proliferation of cap mesenchyme in vivo and, together with JUN, activates G1-phase cell cycle regulators, explaining the proliferative effect of BMP7 stimulation 14. NPCs display different average cell cycle lengths during early (E13.5) and later (E17.5) stages of nephrogenesis. Proliferation profiles of cap mesenchymes at these stages suggest that they are heterogeneous, containing both slowly and rapidly dividing cells 33. An interpretation consistent with models of other stem/progenitor cell populations is that the slowly dividing subset may represent the self-renewing CITED1+/SIX2+ population, whereas the rapidly dividing subset represents the CITED1−/SIX2+ population that is differentiating 34,35. Our finding that the CITED1+/SIX2+ population is reduced following conditional inactivation of Tak1 using Six2-cre and Cited1-creERT2 drivers, without any notable effect on proliferation of the CITED1−/SIX2+ population, indicates that the BMP7-TAK1-JNK-JUN pathway is used primarily by the slowly dividing, self-renewing cells in the cap mesenchyme. The cycle length of NPCs increases as the embryo ages, indicating that control mechanisms are added as development progresses. Interestingly, MYC targets are primarily regulated late in nephrogenesis, which is consistent with the observation that conditional Myc inactivation slows cap mesenchyme proliferation only after E15.5. Whether cell cycle control through MYC late in nephrogenesis represents the addition of a control mechanism, or simply redundancy with N-MYC, whose expression is lost late in nephrogenesis, will need to be answered by comparison of conditional inactivation of both factors 36.
In addition to reduced proliferation in the cap mesenchyme, loss of Bmp7 causes ectopic cell death within the nephrogenic zone 16-18. We do not see any effects on survival in kidneys in which Tak1 or Jun have been inactivated in the cap mesenchyme. However, inactivation of Smad4 using Bmp7-cre results in ectopic cell death within the nephrogenic zone of mutant kidneys, indicating that cell survival may be regulated through the SMAD pathway 15. Mechanisms that control the balance between SMAD versus TAK1 signalling downstream of BMP7 in the cap mesenchyme are not currently understood. However, recent work indicates that FGF signalling may negatively regulate SMAD signalling, providing an explanation for the lack of SMAD signalling seen in the CITED1+ compartment 37.
Composition of the AP-1 dimer is a critical factor in determining cellular fates such as proliferation, apoptosis and differentiation10. We find that BMP7 robustly controls transcription and activation of JUN, whereas FGF9 strongly induces transcription and phosphorylation of FOS. Analysis of the Six2-cre;Spry1-Tg mouse strain, in which the FGF feedback inhibitor SPRY1 is expressed in NPCs, and primary cell transcriptional reporter assays indicate that activation of JUN and FOS by simultaneous BMP7 and FGF9 signalling potentiates AP-1 transcription, and we propose that co-regulation of the AP-1 transcription factor is one basis for the cooperative effect of BMP7 and FGF9 in kidney development. While we have defined the signalling cascade between BMP7 and Jun, the pathway between FGF9 and Fos is less clear. The inhibitory effect of Spry1 indicates that RAS is essential, and a recent report showing that NPC self-renewal is dependent on PI3K suggests that FGF9-RAS-PI3K may be the pathway governing Fos expression38.
Although JUN can form homodimers to activate target transcription, these bind AP-1 elements less tightly than JUN-FOS heterodimers and have weaker transcriptional activity. In the case of BMP7 stimulation alone, the ratio would be skewed towards homodimer formation, whereas concurrent FGF9 signalling would skew the ratio towards JUN-FOS heterodimers, thus amplifying target transcription. Combinatorial BMP7 and FGF9 stimulation promotes robust transcription of the G1 regulator Ccnd1 in an AP-1-dependent manner, and we propose that control of AP-1 targets by combinatorial BMP7 and FGF9 signalling promotes the G1-S transition, explaining a mechanism for the cooperative effects of these growth factors on NPC proliferation. Our current findings identify AP-1 as a specific point of interaction of the BMP and FGF pathways in NPCs. We have previously shown that WNT and FGF activate common targets in NPCs6. It therefore seems possible that WNT9B-β-catenin signalling could converge with BMP7 and FGF9 on the regulation of AP-1. Signalling cross-talk between BMP, FGF and WNT pathways is a recurring theme in organogenesis, and WNT/β-catenin signalling can regulate transcription of AP-1 targets such as Myc, Ccnd1 and Ccnd2 (refs 39,40). Understanding this point of intersection further could explain the molecular basis for the combinatorial effects of these three distinct pathways on NPC proliferation.
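To make the dimer-ratio argument concrete, the following toy calculation sketches how raising FOS availability would shift output from weak JUN homodimers to strong JUN-FOS heterodimers. It is a minimal illustration in Python, assuming random mass-action pairing and arbitrary activity weights; none of the numbers are measured values.

```python
# Toy mass-action model of AP-1 dimer composition (illustrative only).
# Pairing probabilities assume equal dimerization affinities; the
# activity weights are arbitrary placeholders reflecting the weaker
# activity of JUN homodimers. FOS-FOS pairs, which do not form stable
# dimers biologically, are given zero weight.

def ap1_output(jun, fos, w_homo=0.2, w_hetero=1.0):
    """Relative AP-1 transcriptional output for given JUN/FOS levels."""
    p_jun = jun / (jun + fos)
    p_fos = fos / (jun + fos)
    homo = p_jun * p_jun            # JUN-JUN homodimers
    hetero = 2 * p_jun * p_fos      # JUN-FOS heterodimers (two orderings)
    return w_homo * homo + w_hetero * hetero

# BMP7 alone: JUN induced, little FOS -> homodimer-skewed, weak output
print(ap1_output(jun=10, fos=1))    # ~0.33
# BMP7 + FGF9: both induced -> heterodimer-skewed, amplified output
print(ap1_output(jun=10, fos=10))   # ~0.55
```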
Methods
Mouse strains. Animal care was in accordance with the National Research Council Guide for the Care and Use of Laboratory Animals and protocols were approved by the Institutional Animal Care and Use Committee of Maine Medical Center. Cited1-creERT2 mice, R26RlacZ mice and Spry1-Tg mice were maintained on an FVB/NJ background29,41,42. Bmp7+/cre mice were maintained on an ICR background. Six2-TGCtg (Tg(Six2-EGFP/cre)1Amc/J), Tak1c/c (Map3k7tm1.1Mds), Tak1+/c, and Junc/c (Juntm4wag) mice were maintained on the C57BL/6 background21,43,44. For tamoxifen-inducible cre mice, pregnant dams were injected at the indicated times with 3 mg tamoxifen in corn oil per 40 g mouse.
Immunohistochemistry. Dissected kidneys were fixed in 4% paraformaldehyde (PFA) for 30 min (E14.5, E17.5 kidneys) to 1 h (P0 kidneys) at room temperature. Paraffin-embedded sections were incubated with blocking solution containing 1X phosphate buffered saline (PBS), 1% bovine serum albumin, 5% serum of secondary antibody species (Jackson ImmunoResearch) and 0.05% hydrogen peroxide (Sigma) for 1 h at room temperature. Primary antibodies were diluted in 1X PBS and incubated at 4°C overnight: anti-SIX2 ( antibodies were used at 1:250 for detection of labelled cells. Nuclei were stained using DAPI (Molecular Probes) for immunofluorescence and hematoxylin for immunohistochemistry. Sections were mounted using Vectashield mounting medium (Vector Laboratories). TUNEL staining was performed using the ApopTag-Plus peroxidase in situ apoptosis detection kit (EMD Millipore) according to the manufacturer's instructions.
Purification of NPCs by magnetic bead depletion. Total NZCs were isolated from E14.5 and E17.5 ICR mice by enzymatic digestion as previously described7. For isolation of NZCs from E17.5 and P0 conditional mutants, control and mutant kidneys were sorted based on size and GFP expression (Six2-cre-EGFP and Cited1-creERT2-EGFP), and confirmed by genotyping. Enrichment for CITED1+ cells and purification (referred to as NPCs) was performed by negative depletion with magnetic activated cell sorting using phycoerythrin-conjugated antibodies and anti-phycoerythrin MicroBeads according to the manufacturer's protocol (Miltenyi Biotec)2,11. Purified NPCs were cultured in monolayer in keratinocyte serum-free media (KSFM, Thermo Fisher Scientific) supplemented with rh-FGF2 (50 ng/ml, R&D Systems) and 100 U/ml penicillin-streptomycin in plates coated with human plasma fibronectin (100 µg/ml, EMD Millipore). The identity of purified NPCs from ICR and conditional mutant mice was verified by immunostaining using anti-CITED1 (1:200, Cell Signaling), anti-SIX2 (1:200, Proteintech) and anti-LEF1 (1:100, Cell Signaling) antibodies, and RT-qPCR analysis of cap mesenchyme and cortical interstitium markers before and after growth factor/inhibitor treatments.
Transfections and dual-luciferase reporter assays in NPCs. E17.5 NPCs cultured in KSFM (Thermo Fisher Scientific) media with rh-FGF2 (50 ng/ml) were transfected for 24 h using Lipofectamine 2000 (Life Technologies)24,25. Briefly, 1-2 µg of plasmid DNA and 1-2 µl of lipid were mixed in a 1:1 ratio in Opti-MEM (Life Technologies), added to NPCs in KSFM without antibiotics and incubated for 1 h. Medium was replaced with fresh KSFM supplemented with rh-FGF2 1 h after transfection to minimize cytotoxicity. Transfection efficiency was estimated using a pCX-EGFP construct at 24 and 48 h after transfection and by RT-qPCR for overexpressed genes. For luciferase reporter assays, transfected cells were stimulated with vehicle or the indicated growth factors for 24 h. Cells were lysed and luciferase activity was measured using the dual-luciferase reporter assay kit (Promega). Relative luciferase activity was normalized to Renilla-luciferase, and average fold changes relative to vehicle treatment from four biological replicates and two independent experiments (n = 2) are represented in the graphs.
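The normalization just described reduces to two divisions: reporter (firefly) signal over the Renilla control per well, then treated over vehicle averages. A minimal sketch with hypothetical luminescence values:

```python
# Dual-luciferase normalization sketch; luminescence values are made up.

def relative_activity(firefly, renilla):
    """Normalize reporter (firefly) signal to the Renilla control."""
    return firefly / renilla

def fold_change(treated, vehicle):
    """Average fold change of treated wells relative to vehicle wells."""
    return (sum(treated) / len(treated)) / (sum(vehicle) / len(vehicle))

vehicle = [relative_activity(f, r) for f, r in [(1200, 800), (1100, 790)]]
bmp7 = [relative_activity(f, r) for f, r in [(3600, 810), (3400, 805)]]
print(fold_change(bmp7, vehicle))  # ~3-fold AP-1 reporter activation
```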
Quantitative RT-PCR. RNA extraction from E14.5 and E17.5 NPCs was performed using the RNeasy Micro kit (Qiagen). Concentration of RNA was measured using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific), and a final concentration of 100-250 ng/µl of RNA was used for cDNA synthesis by iScript Reverse Transcription Super Mix (BioRad). Quantitative RT-PCR was performed using iQ-SYBR Green Super mix (BioRad). Primer sequences of genes are listed in Supplementary Table 1. Fold changes were normalized to the housekeeping gene β-actin, and average values (mean±s.d.) of three technical replicates and from two to three independent experiments (n = 2 or 3) are shown in the figures. P values were calculated using a two-tailed Student's t-test, and P<0.05 was considered significant.
Whole mount immunostaining. Dissected kidneys were fixed in 4% PFA for 10 min at room temperature and washed with 1X PBS for 5 min at 4°C. Kidneys were permeabilised with 1X PBS containing 0.1% Triton-X for 10 min at 4°C, followed by a wash in 1X PBS containing 0.01% Tween. Kidneys were incubated with blocking solution containing 1X PBS with 0.01% Tween and serum of secondary antibody species for 8 h. Primary antibodies to anti-SIX2 (1:200, Proteintech) and anti-cytokeratin8/TROMA-1 (1:100, DSHB) were diluted in blocking solution, added to the wells containing the kidneys and incubated for 24 h at 4°C. Alexa-Fluor 488/568 secondary antibodies (Molecular Probes) were used at 1:250 and incubated for 24 h to detect staining in the kidneys.
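The fold-change normalization to β-actin in the quantitative RT-PCR paragraph above is consistent with the standard Livak 2^-ΔΔCt method; the sketch below assumes that method and uses hypothetical Ct values.

```python
# Livak (2^-ddCt) fold-change sketch; Ct values are hypothetical.

def fold_change(ct_gene_treated, ct_actb_treated, ct_gene_ctrl, ct_actb_ctrl):
    """Fold change of a target gene normalized to beta-actin."""
    d_ct_treated = ct_gene_treated - ct_actb_treated
    d_ct_ctrl = ct_gene_ctrl - ct_actb_ctrl
    return 2 ** (-(d_ct_treated - d_ct_ctrl))

# e.g. a target such as Myc after growth factor stimulation vs vehicle
print(fold_change(24.0, 18.0, 26.0, 18.1))  # ~3.7-fold induction
```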
Cell cycle marker analysis. NPCs were cultured in monolayer with rh-BMP7 and/or rh-FGF9 (50 and 100 ng/ml, R&D Systems) in KSFM (Thermo Fisher Scientific) for 24 h. Cells were fixed in 4% PFA and blocked in 1X PBS containing serum of the secondary antibody species, following which they were incubated in primary antibodies to CCNE1 and PCNA (1:200, Santa Cruz). Alexa-Fluor-488 (CCNE1) and Alexa-Fluor-568 (PCNA) secondary antibodies were used to visualize the staining. Images (5-8) were taken per well for each condition with a minimum of three biological replicates and three independent experiments (n = 3). Pooled images were analysed with ImageJ, and the numbers of cells positive for G1 (CCNE1+), G1-S (CCNE1+/PCNA+) and S (PCNA+) phases were counted and divided by the total number of DAPI+ nuclei to determine the percentage of cells representing G1, G1-S or S-phase. Data are represented as the percentage of G1 or S-phase cells in each condition.
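The phase scoring reduces to proportions of marker-positive nuclei over total DAPI+ nuclei; a minimal sketch with hypothetical counts (the same counting logic applies to the EdU assay below):

```python
# Phase percentages from pooled image counts; numbers are hypothetical.

def phase_percentages(ccne1_only, double_pos, pcna_only, dapi_total):
    """Percentage of cells in G1 (CCNE1+), G1-S (CCNE1+/PCNA+), S (PCNA+)."""
    return {
        "G1": 100 * ccne1_only / dapi_total,
        "G1-S": 100 * double_pos / dapi_total,
        "S": 100 * pcna_only / dapi_total,
    }

print(phase_percentages(ccne1_only=120, double_pos=45, pcna_only=210,
                        dapi_total=900))
```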
EdU labelling of NPCs. E17.5 NPCs were cultured in monolayer with rh-BMP7 and rh-FGF9 (50 and 100 ng/ml, R&D Systems) in KSFM. Cultures were incubated with 20 µM EdU (Click-iT EdU Alexa-Fluor 488 Imaging Kit, Life Technologies) 4 h after growth factor stimulation and pulse-chased for 20 h. Fixation, permeabilization and the Click-iT reaction were performed according to the manufacturer's instructions. Cultures were incubated with anti-pHH3 (1:100, Cell Signaling) antibody for 1 h, and an Alexa-Fluor-568 secondary antibody was used to visualize the staining. Nuclei were stained with Hoechst 33342 (Life Technologies). Images (5-10) were taken per well for each condition with a minimum of three biological replicates from two independent experiments (n = 2). Pooled images were analysed with ImageJ, and the numbers of EdU+ (S-phase) and/or pHH3+ (mitosis or M-phase) nuclei were counted and divided by the total number of nuclei to determine the percentage of cells in S- and M-phases. Data are represented as the percentage of S- or M-phase cells in each condition.
Morphometrics and statistical analyses. (i) Kidney weight measurements were normalized to body weights to account for any differences in overall body size. Kidney size measurements were determined by calculating the cross-sectional area of the pole-to-pole distance on dissected whole kidneys using SPOT 5.1 imaging software. Measured kidney weights and sizes of individual animals per group are represented in scatter plots. (ii) Quantitation of NPC number and proliferation was performed manually for a minimum of five serial sections 100 µm apart per kidney per genotype, and the total number of individual or tamoxifen-treated mice (n) analysed per time point is indicated in the figures. Error bars represent mean±s.d. for each animal per experimental group. (iii) Growth curve analysis was conducted by counting cells using a hemocytometer for a minimum of three to five biological replicates per condition from three independent experiments. A two-tailed Student's t-test was performed for all statistical analyses and the resulting P values are noted in the figures.
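A minimal sketch of the normalization in (i) and the two-tailed t-test, assuming hypothetical kidney and body weights and the availability of SciPy:

```python
# Body-weight normalization and two-tailed t-test; values are made up.
from scipy import stats

def normalized(kidney_mg, body_g):
    return [k / b for k, b in zip(kidney_mg, body_g)]

control = normalized([12.1, 11.8, 12.6, 12.3], [1.31, 1.28, 1.35, 1.30])
mutant = normalized([7.9, 8.4, 7.5, 8.1], [1.29, 1.33, 1.27, 1.31])

t, p = stats.ttest_ind(control, mutant)  # two-tailed by default
print(p < 0.05)  # significance threshold used above
```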
A systematic review to assess the impact of the Elderly Health Care Voucher Scheme (EHCVS) and the feasibility of fully adopting it in Hong Kong elder care services
Background: Finding a solution to tackle the overcrowding and over-reliance on public health care services has been a policy agenda of the Hong Kong Government throughout the past decade. The purpose of this review is to provide valuable insight for policymakers to understand whether the Elderly Health Care Voucher Scheme (EHCVS) is a realistic policy tool to shift service demand from the public to the private sector, and its applicability to other similar publicly funded settings. Methods: Included records in this review were selected through CINAHL, PubMed, and Google Scholar peer-reviewed article databases and nine targeted government websites. All potential records were assessed based on the prespecified inclusion and exclusion criteria. Thematic synthesis was used to combine the extracted data and to construct key themes of the impact of the EHCVS. Results: The findings highlight some of the successes of the policy that focus on strengthening the connection between government, elders and private health care providers, and improving the quality of acute care. However, less than successful elements that require revision include designing the purpose of the voucher for preventive care and disease management and shifting elders from the public to the private health sector through financial incentives. Overall, the analysis suggested the financial subsidies have not motivated elders to utilise private health care services; rather, the scheme demonstrates an effort by the Hong Kong Government to begin addressing public health care waiting lists while prioritising quality care for senior citizens throughout the last 10 years. Conclusion: Better consideration of the subsidy amount to remove the financial burden of the older population, along with greater information disclosure and promotion, may increase elders' willingness to utilise private elder care services, potentially improve the quality of life for seniors, and ultimately reduce the burden on the public elder care sector in the future.
incentivising and hence shifting the care services for older persons to the private sector to improve public sector responsiveness and efficiency [50]. Further, it was also projected the scheme might reduce the number of patients waiting for treatment and increase the quality of care and patient satisfaction with public sector providers [29]. The idea of implementing a voucher mechanism to support the older population of Hong Kong was also seen as a tactic of the Chief Executive of Hong Kong back in 2008, which was believed to be politically favourable for an upcoming election [48].
How the intervention might work for elder care services
Implication of the EHCVS
Successful implementation of the EHCVS would likely induce behavioural change among older Hong Kong citizens [17], with most of the senior population beginning to utilise private health care services rather than remaining dependent on public health care services [24,26]. The shift can ultimately lessen the burden, shorten the waiting list of the Emergency Department, and allow limited resources, such as health finance, hospital beds, and health workers, to be allocated to emergency patients in the public health care sector [24,26].
The feasibility of fully applying the voucher system in elder care
The existing EHCVS only allows elders to apply vouchers to aged-care services to a certain extent. Elders are not allowed to use their voucher to purchase private residential home services, respite services, elderly support services (a carer who aids elders in various activities of daily living), and medical supplies and incontinence aids [20,38]. The factors that contribute to the above restriction are simply that other types of financial subsidies/allowances provided by the government under the Social Welfare Department (SWD) are available to support elders with financial hardships [32]. However, older adults living with their family often failed to pass the means test or were denied subsidies or allowances after considering household income [7].
Community care and residential care services
At the beginning of 2019, 12,300 applicants were on the waitlist for government-funded community care services and 40,434 applicants were waiting for government-funded residential home care services, with an average wait time of 1.5 years and 3 years, respectively [7,32,33,34].
Some elders die while waiting for placement in aged-care facilities. Although private aged-care homes also require applicants to wait about 9 months, the private elderly care sector has a higher capacity (51,299 beds) to provide services to elders compared with the 23,422 beds subsidised by the government [31]. In addition, elders with greater financial ability can purchase private community care services at HK$9,000 per month in approximately 80 day-care centres/social care centres with a shorter waiting time [25]. Nonetheless, most low-income older adults view private aged-care services as unaffordable [32]. Without a subsidy, elders are less willing or unable to pay for private aged-care services, which indirectly forces them to stay in the public system.
Elderly support services
Apart from residential and community care services, different types of assistance services are also favoured by most senior citizens who live alone. In Hong Kong, more than 13% of the 1.1 million senior adults choose to live independently in community dwellings [36]. Elders with functional limitations often require someone to assist with their daily life activities, such as meal delivery, house cleaning, assisted bathing, and accompaniment to medical appointments. Similar to other aged-care services subsidised by the government, government-funded support services not only require elders to wait at least 3 months to more than 1 year after application, but also require elders to pay for the tools and travel expenses of the support teams or volunteers [35]. In other words, elders did not have a full subsidy for support services. For these reasons, many elders would rather seek support services from private and self-financed organisations [47]. However, the service charge of HK$62 to HK$160 per hour from private organisations creates a significant burden for elders purchasing the services over the long term. Often, elders need to spend approximately HK$248 to HK$640 on support services for a 4-hour consultation in a public hospital [10].
Therefore, adding the choice of applying the voucher to support services would allow elders to purchase services at least 18 times a year, at an average of HK$111, under the current grant of HK$2,000 per annum.
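The arithmetic behind this estimate, using the figures quoted above:

```python
# Figures from the text; HK$ amounts.
annual_voucher = 2000          # current grant per annum
avg_hourly_fee = 111           # average private support-service fee
print(annual_voucher // avg_hourly_fee)  # -> 18 purchases per year
```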
Medical supplies and equipment
Under the current EHCVS, elders are also prohibited from purchasing any medical equipment and incontinence aids with the voucher [20]. Members of the Legislative Council proposed expanding the service provision to include purchasing medical supplies and equipment during the first phase of the EHCVS [41]. Due to concerns about double subsidies provided to elders and preventing self-prescribing, the Chief Secretary for Administration rejected the members' suggestion [17,39,41]. However, the fact is that institutionalised elders hardly use their voucher outside of the aged-care home, and therefore allowing this group of elders to purchase incontinence supplies with the voucher, such as ostomy bags or diapers, could be more beneficial than saving the financial subsidy. This adjustment would not only benefit elders in residential homes but also elders who are living independently or with their partner and require continence and incontinence aids to manage their daily life. Although the existing disability allowance has covered some of the medical supply expenses for elders, applicants are required to obtain proof of their disability or medical condition [9]. The strict eligibility rules of the disability allowance, along with the low monthly payment of HK$1,770 to HK$3,540, were shown to be insufficient for elders with a stoma to cover the costs of essential medical products, such as stoma glue and powder, as well as to afford better-quality ostomy bags [9]. Providing the choice of purchasing health care necessities with the voucher will allow greater flexibility of voucher usage tailored to elders' actual needs, as well as offer a solution to a long-standing policy issue: endless waiting times in public health and elder care services.
The importance of conducting a systematic review
Although the EHCVS has been implemented in Hong Kong for 10 years (2009-2019), no systematic review has been conducted to obtain a complete picture of the impacts and effectiveness of the EHCVS over this period to allow health policymakers to assess and reflect on policy implementation. This systematic review will enable policymakers to recognise the strengths and weaknesses of the EHCVS, support adjustments to increase the potential success of the policy, and allow for policy diffusion into elements of aged-care reform in Hong Kong. Although the discussion in this paper is restricted generally to elder care, many of these lessons may also apply to other settings.
Identification of studies
Before undertaking the literature search, a Population, Intervention, Comparisons, and Outcomes (PICO) model was adopted to ensure clear definition of the research question and subsequent inclusion and exclusion parameters for identified papers [19]. The search was divided into two categories: peer-reviewed journals and searches of targeted sites. Both searches were conducted and completed at the beginning of March 2019. For peer-reviewed journals, three search engines were used: CINAHL, PubMed, and Google Scholar. Medical Subject Headings (MeSH) were utilised to identify alternate phrasing and synonyms for each component of the PICO [20]. All search terms were built on the key components of the PICO, which include the older population of Hong Kong, voucher*, financial incentive*, and impact* on elders. A detailed search strategy can be found in Supplementary file 1. BOOLEAN operators were employed to combine the keywords of the search, ensuring a comprehensive yet focused search [19]. In the initial search process, 888 articles were identified in CINAHL and PubMed, and 30 articles were discovered in the Google Scholar engine.
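As an illustration of how the PICO keywords combine under BOOLEAN operators, the sketch below assembles a candidate query string; the exact terms of the published strategy (Supplementary file 1) may differ.

```python
# Illustrative Boolean query assembly from PICO components.
population = ["elder*", "older adult*", "senior*", "Hong Kong"]
intervention = ["voucher*", "financial incentive*"]
outcome = ["impact*"]

def or_block(terms):
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(t) for t in (population, intervention, outcome))
print(query)
```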
Screening
Peer-reviewed journals
After performing the initial search process in the three search engines, all potential studies were exported to reference management software (EndNote) to store the potential records and perform the screening task [19]. The built-in function of EndNote first removed 46 duplicate records. Each title was screened based on the search plan shown in Table 1 to identify potentially relevant studies for the review. In this stage, 3 of 842 records in the two academic databases were identified as related to the review, and 19 records in the Google Scholar engine met the eligibility criteria for further screening.
Searches of targeted sites
The Health Care Voucher website, established by the Health Care Voucher Unit under the Department of Health, was the primary targeted website for grey literature, as it is a central information platform for the scheme [20]. A scan through the Health Care Voucher website and related links under the resources corner was performed to identify potential publications meeting the search criteria shown in Table 1. In one of the related links, Elderly Health Service, all websites of elderly service departments and agencies were discovered. Seven of 23 sites on this web page, believed to have a linkage with the development and implementation of the EHCVS, were chosen for screening. These included the Department of Health (DH), Central Health Education Unit, Healthy HK, Primary Care Directory, Elderly Commission, eElderly, and Hospital Authority. Scanning of the page list and publication corner was performed on the above websites with reference to the search plan. The Legislative Council website was later selected for inclusion in the study after examining the press releases announced on the DH website. Instead of using the press release records on the DH website, a keyword search combining 'Elderly Health Care Voucher', 'English', and 'Paper' was performed to identify inclusive or relevant articles. 'Paper' was then replaced by 'Documents' in the search engine to discover any publications pertinent to the review. A total of 24 records were considered eligible for the second stage of the screening process.
In the second screening stage, all eligible records were screened using the table of contents, summary, or abstract to confirm their relevance to this study. If it was unclear whether a study met the eligibility criteria, a full-text review was conducted to determine final inclusion. At the end of the screening process, the reference lists of all selected studies were searched for potentially relevant articles meeting the inclusion criteria. The quality of included records was appraised using tools matched to each study type [12,14,16,21,43]. An additional finding box was added at the end of each appraisal tool to summarise major findings. The detailed quality appraisal tables regarding the study types are presented in Supplementary file 2.
In general, most answers for the assessment items in the quality appraisal tools were 'yes' or 'excellent', which suggested the included records were credible and relevant to the research question. In particular, the two pieces of grey literature drafted in 2015 provided a comprehensive picture of the take-up rate of the EHCVS for both elders and private health service providers throughout the past 7 years. These unpublished papers not only filled the gap of missing data in the 10 included academic studies, but also minimised the conclusion bias that would arise from considering only published articles to evaluate the impact of the EHCVS on the older population of Hong Kong [19].
A 'no' was found in the three cross-sectional studies and qualitative research in the criterion of justifying of the selected population for study. This loophole may lead to bias in generalizing the effect of the intervention to the entire study population, resulting in inaccurately representing the knowledge, understanding, and perception of the EHCVS among the older adults of Hong Kong [19,24]. However, this bias was believed to have no signi cant in uence on the systematic review, as this issue would likely to be overcome when combining the studies in the synthesis process [19]. The samples of included studies came from different locations across Hong Kong not only offered a comprehensive picture of the impact of the EHCVS among the older population, but also enriched the content of the review. Therefore, the quality assessment process was aimed to ensure high qualities of articles were used to inform the analysis rather than to exclude unmet studies [19].
Data extraction
For all included studies, relevant data were imported into an Excel spreadsheet for information management. Relevant data were populated into a pre-set table, including the first author name and year of publication, study location or district, study design, number of participants, and major factors related to the impact of the EHCVS. Classifying data into different categories provided a clear indication of what kinds of impacts were likely to be discovered across studies, which also simplified the code-building process later in the thematic synthesis.
Data synthesis
Thematic synthesis was adopted in the data synthesis process. The review process identified 15 relevant studies which examined the impact of the EHCVS on the older population of Hong Kong. Eligible articles were entered into NVivo qualitative data analysis software to allow secure storage, comparison, and line-by-line coding [19,44].
Stages 1 & 2: line-by-line coding and developing descriptive themes
Line-by-line synthesis was performed twice in this study. The first pass explored the common descriptions of the impact of the EHCVS across studies to ensure consistency of interpretation and allow most of the data to fit into each code. This process generated 31 initial codes. Based on the primary line-by-line synthesis, a coding frame (see Supplementary file 3) was developed to identify the differences between codes and provide a clear guide for determining themes [19]. The second pass categorised the broadly defined themes into specific codes according to the coding frame. A total of 50 specific codes were developed. Ten descriptive themes soon became apparent after classifying relevant data into corresponding codes: Attitudes, Awareness, Application, Enrolment, Behaviour, Achievement, Utilisation, Redundancy, Government Agencies Efforts, and Information.
Stage 3: generating analytical themes
In the last step of the thematic synthesis, descriptive themes were grouped together based on their characteristics, 'going beyond' the content to consider how these themes answer the research question: the impact of the EHCVS among the older population of Hong Kong [19,44].
The grouping of descriptive themes utilised the prevalence of data extracted from included studies (also known as the percentage of coverage) in each code to determine the major characteristic of the descriptive theme, allowing the grouping of themes with similar attributes [19,44]. For example, 'Attitudes-negative-Elders' and 'Attitudes-negative-Providers' both had a relatively higher percentage of coverage than other subthemes under the descriptive theme of Attitudes, which means the descriptive theme 'Attitudes' generally had a negative attribute. Therefore, 'Attitudes' was then grouped with other descriptive themes that also had negative traits, such as 'Behaviour-unchanged-Elders'. Each descriptive theme was analysed and clustered using the same approach. Three analytical themes emerged after going beyond the primary data to explain and narrate findings within and across studies [19].
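A minimal sketch of this grouping logic, with hypothetical coverage percentages standing in for the real NVivo output:

```python
# Group descriptive themes by the attribute of their highest-coverage
# subtheme; coverage values are hypothetical.
coverage = {
    "Attitudes": {"negative-Elders": 6.2, "negative-Providers": 4.8,
                  "positive-Elders": 1.9},
    "Behaviour": {"unchanged-Elders": 5.5, "changed-Elders": 1.2},
}

def dominant_attribute(subthemes):
    name = max(subthemes, key=subthemes.get)
    return "negative" if ("negative" in name or "unchanged" in name) else "positive"

for theme, subs in coverage.items():
    print(theme, "->", dominant_attribute(subs))
```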
Results of the search
In the initial search process, 888 papers were identified in the two academic databases, 30 papers were found in the Google Scholar engine, and 24 documents were discovered in the nine targeted government websites. Forty-three studies entered the first stage of the screening process after the removal of duplicates and primary screening of headings and abstracts. During primary screening, 27 studies were excluded for failing to meet the selection criteria, including records that highlighted or summarised the same studies (n = 5), provided only a brief description or introduction of the EHCVS (n = 20), or were published as letters or press releases (n = 2). One more record was excluded after full-text screening because it only provided a brief summary of the EHCVS. In the final stage of the screening process, 15 studies were chosen for qualitative analysis (see Figure 1). The characteristics of the 15 studies are illustrated in Table 2.
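A quick consistency check of the screening counts reported above:

```python
# PRISMA flow arithmetic from the figures in the text.
identified = 888 + 30 + 24        # databases + Google Scholar + targeted sites
entered_screening = 43            # records entering the first screening stage
excluded_primary = 5 + 20 + 2     # duplicated studies, brief descriptions, letters/press
excluded_fulltext = 1
included = entered_screening - excluded_primary - excluded_fulltext
print(identified, included)       # -> 942 15
```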
Included studies
Among the 15 selected records, 5 studies specifically aimed to discover the impact and achievement of the EHCVS and address the need for changing policy direction in the future. Most of these studies used surveys, interviews, and focus group discussions to gain a better understanding of the impact and effectiveness of the EHCVS from both the elders' and private health service providers' perspectives, including their attitudes, awareness, knowledge, actual take-up, and application of the voucher. Five studies mentioned the demands and attitudes towards the scheme by examining different types of health care service usage and health intervention programs in Hong Kong. One study included stakeholders' views towards the EHCVS, primarily intending to investigate the economic impacts of changing demographics. Since these six studies did not aim to discover the impact and effectiveness of the EHCVS, only a few sentences related to the EHCVS were included in these publications. However, these studies were published in different years and could therefore provide useful clues for determining whether the attitudes and take-up rate among elders and service providers had changed over time, which helps draw a better conclusion on the impact of the EHCVS over 10 years [19]. The remaining four included articles were based on discussions held between the health agencies and members of the Legislative Council, which aimed to allow members to understand the efficacy of the EHCVS and seek members' advice to improve the effectiveness and efficiency of the scheme. These four articles did not involve any data collection methods, since all were plain summaries and descriptions of meeting minutes.
The sample size and response rate of the 15 included papers varied. However, these two components did not have a significant impact on the data and information used in this study, since four of the eight studies that provided a sample size and response rate were not directly examining the impact of the EHCVS, and the other four studies that did examine the impact of the EHCVS all presented a high response rate. The seven remaining studies did not require or involve discussion of sample size and response rate given their study nature. Among the 15 included papers, 3 studies explored the reasons for using or not using the voucher, 5 studies contained the types of services used by elders and whether the financial subsidy encouraged elders towards private health care services, 10 studies discussed elders' likes or dislikes about the EHCVS, and 11 studies mentioned the take-up rate of the voucher. These five categories were set before the data extraction, as the target areas that help to answer the research question. Other features related to the impact of the EHCVS, such as providers participating in the scheme as well as scheme adjustment and enhancement, were discovered after undertaking the review. These elements are also believed to have a significant influence on the EHCVS.
Synthesis
The process of thematic synthesis generated 50 subthemes associated with the influence of the EHCVS implementation as described in each study. After grouping the subthemes into a tree data structure based on their similarities and differences, 10 overarching descriptive themes were developed [19,44]. Three analytical themes emerged after going beyond the content to discover the linkage between each descriptive theme and considering how these descriptive themes answer the research question: the impact of the EHCVS [19,44]. These analytical themes are (1) strengthening government relationships with elders and private health care providers, (2) improving the quality of acute care instead of preventive care and disease management, and (3) unsuccessfully shifting elders from the public to the private health care sector (see Figure 2).
The 'going beyond' process has utilised the percentage of coverage (the amount of data extracted from the studies in each code) to determine and construct unique interpretations of the impact of the EHCVS (see Supplementary le 4) [27]. Some of the descriptive themes interpreted in the third stage of the synthesis process are interrelated and presented in more than one analytical themes. A grid shown in Supplementary le 5 was designed to identify the contribution of each study, ensure the synthesis was closely related to the primary ndings, and minimise bias related to selective reporting of outcomes [19].
Strengthening government relationships with elders and private health care providers
Over the 10 years of policy implementation, the government, members of the Legislative Council, and the DH continuously provided recommendations and modified the scheme to attract and motivate elders to utilise private primary health services. Since the number of account creations was low during the first phase (2009-2011) of the EHCVS, four of six articles discussing this area demonstrated that only 57% of eligible elders had opened an eHealth account and only 45% of them had made use of the voucher [5,17,39,48,50]. In response, the DH began to map out strategies to promote the usage of the voucher through mass media, distribution of leaflets in the public health sector, and display of posters in malls and on metro billboards [5,40,42]. Apart from increasing the publicity of the EHCVS to encourage eligible elder enrolment, the government has shown an ongoing effort to improve the effectiveness and efficiency of the voucher utilisation process for elders. This includes simplified registration and consent processes as well as enabling elders to use the voucher more conveniently by presenting their Hong Kong Identity Card [5,6,17,39,40,41]. To enhance the uptake of the voucher, the government also adjusted the eligible age from 70 to 65 years to expand the population of the scheme [5,8,17,18,24,51]. Notably, widening the service areas from 9 to 14 types of allied health services (particularly the inclusion of optometry), enabling elders to use the voucher in preventive, curative, and rehabilitation services, applying the voucher to Shenzhen outpatient clinics, increasing the subsidy amount from HK$250 to HK$2,000, and permitting the unspent voucher amount to be carried forward to the next year all had a positive impact in enhancing the enrolment rate of the EHCVS among elders [6,17,24,39,40,41,42,50]. Under the sub-theme of Joining-Elders, there was a 20% growth in both voucher account creation and voucher utilisation rate by the end of May 2015 [41,48]. The increasing number of elders admitted to the scheme implied positive experiences, and the scheme began to take root in the community. With the government subsidy, elders have access to a broader choice of health care services, receive health care services closer to home, and receive a higher quality of care with regard to reduced waiting times within the private system [17,24,39,40].
Targeting the low enrolment rate (32.4%) of private health service providers, the government and the DH stepped forward to address the technical and supply issues that were constraining private health service providers' willingness to participate in the EHCVS [17,24,41,48]. Four articles suggested the contributing factors to the low participation rate among private health service providers during the first phase of the EHCVS were the complicated voucher claiming procedure, the absence of computers to access the eHealth system, pre-existing discounts to elders, and the lack of guarantee of elders utilising private health services [17,24,41,48]. Considering the abovementioned reasons, the DH procured Smart Identity Card Readers and distributed these free of charge to enrolled providers in the second year of the pilot scheme to reduce manual input errors and simplify the registration process. This adjustment is also likely to have had flow-on effects insofar as mitigating the chance of voucher reimbursement being refused and reducing the previous delays associated with providers receiving their monthly reimbursed payment [17]. In addition, the DH implemented several mechanisms, including the requirement for private health service providers to record the co-payment made by elders, and performed inspection visits to monitor and strengthen the voucher claiming process [17,39,40,41]. On one hand, this aimed to generate statistical reports for the Health Voucher Unit and the DH to identify common transaction errors among providers, allowing the DH to modify the eHealth system as well as provide timely feedback and assistance to private health service providers [17,39,42]. On the other hand, it also ensures public money is being used in the correct manner [17,40]. These adjustments and improvements had a substantial effect on stimulating private health service providers' enrolment throughout the 10 years [3,17,39,40]. By the end of October 2015, 5,235 private health care providers were enrolled in the EHCVS, which accounted for approximately 206% growth in participation since 2009 [41,42]. The significant rise in the participation rate among private health service providers suggested the collaboration between the government and the private health care sector was strengthened by these reform adjustments [39,48]. The number of private health service providers joining the EHCVS is a testament to a PPP of this nature and the ability to iterate and adapt the scheme to changing demands and challenges [24,39,48].
Improving the quality of acute care instead of preventive care and disease management
Significant awareness of the EHCVS among elders was noted across four studies. Approximately 70% of the respondents in the included studies acknowledged the existence of the scheme and were able to correctly identify the scheme logo and articulate what types of health services the scheme supported [5,17,39,50]. However, despite elders having significant awareness of the EHCVS, insufficient awareness of how to apply for the voucher and of which health service providers in their community participated in the scheme prevented elders from utilising it effectively [5,17,24,39,50]. Most elders involved in the studies expressed a preference to spend their allocation on acute care instead of chronic disease management in the private health care sector, as the restricted subsidy makes it financially unrealistic to continually manage chronic diseases in the private sector [2,5,17,24,26,48,50]. This corresponds with findings across three of seven studies under the sub-theme of Utilisation-Voucher, with 70% of elders expressing a desire to spend the voucher on acute care services in the private sector rather than on health checks, dental care, and chronic disease management [17,24,39,41,50]. Consequently, 66% of elders prefer to stay in the public health care system despite their eligibility for the scheme [17,24,39,50].
An insufficient subsidy amount to cover the large proportion of the service fee further caused elders to rank dental check-ups as the least preferred service [5,18,26]. Six studies reported that elders are particularly reluctant to spend their voucher on dental care, as each episode of dental care is equivalent to 90% of the voucher value, far more than the 50% attributed to both Western and Chinese medicine services [5,17,39]. Elders also identified multiple concerns about receiving dental treatments in the private sector, given health services in that sector are renowned for being expensive and the cost is unpredictable [5,17,24,26,39,50]. In fact, the subsidy amount provided under the EHCVS only enables elders to receive dental treatment fewer than twice a year [5,24]. The low utilisation rate of dental services may also be attributed to elders' perceptions regarding the unnecessary nature of dental care, with many believing self-performed oral hygiene alone is sufficient to maintain adequate oral health [5,26]. No perceived need and the absence of regular health check-ups were also presented in five studies [5,17,26,39,50].
The above findings suggest elders placed preventive care, dental care, and chronic condition management in a non-essential position [39]. Yet, the government's lack of recognition of this voiced need has restrained the scheme from achieving its intended objectives and desired outcomes: to promote preventive care and disease management among elders, reduce elders' dependence on public health care services, and encourage greater connection between elders and their private doctors [24,39,41,50].
Unsuccessfully shifting elders from the public to private health care sector
Five of the 15 studies reported that the financial subsidy did not shift demand from the public to the private health care sector due to a lack of clear information delivered to elders about the purpose of the EHCVS [24,26,39,41,50]. Elders' discussion concerning acute care and the lack of desire to spend their voucher on preventive care in the private health care sector indicate they did not have a complete understanding of the policy intentions, which aim to support them in the detection of diseases, illnesses, and other health-related problems and to reduce their reliance on public health care services [17,24,26,39,50]. Most elders believed the EHCVS provided full health care coverage and were insufficiently informed about the need to co-pay [24]. Ten of the 15 studies hence reported elders generally perceived that the subsidy amount provided under the EHCVS did not ease their financial burden in purchasing health care services in the private sector [5,8,17,18,24,26,39,48,50,51]. This subsequently reduced their desire to seek health care services in the private sector, simply because the public health care sector provides the same treatment at a lower cost and consequently allows for continuous follow-up treatment despite the lengthy wait times [24,26,50]. Elders' disinterest in the EHCVS also had flow-on effects on private health care providers' perceptions of the scheme [50,51]. Physicians in FHB & DH [17] mentioned the scheme did not boost their patient numbers, nor did it enhance the health service usage of their existing clients. In other words, the EHCVS did not have a significant impact on changing health-seeking behaviour among elders and failed to reallocate health service demand from the public to the private health care sector over the 10-year period [24,26,39,50].
Discussion
This systematic review provides a comprehensive picture of the impact of the EHCVS across the last 10 years. The findings of the review highlight that the successes of the policy lie in strengthening the connection between government, elders, and private health care providers, as well as in improving the quality of acute care. However, less successful elements that require revision include designing the purpose of the voucher for preventive care and disease management and shifting elders from the public to the private health care sector through financial incentives. It is evident that the Hong Kong Government has implemented a number of quality improvement processes across the life of the scheme to ensure it sufficiently addresses the needs of private service providers and, by extension, the health needs of older persons [17,39]. The adjustments made to the EHCVS were identified to have had a positive impact insofar as utilisation by eligible elders ultimately reduces reliance on public health care services [17,24,39,40,41,42,50]. However, limited consideration of elders' health-seeking behaviour and the health care needs of this demographic constrained the achievement of the full potential of the scheme [24,39,41,50]. Elders generally applied the voucher to short-term treatment, such as acute episodes of illness. Elders were reluctant to spend their voucher on disease prevention and management, as there appeared to be no perceived need and the subsidy amount was too low to cover follow-up treatment in the private health care sector. As a result, eligible elders preferred the public health care system despite long waiting times [17,24,39,50]. Overall, the analysis suggests the financial subsidies did not motivate older adults to utilise private health care services; rather, the scheme demonstrates an effort by the Hong Kong Government to address public health care waiting lists while prioritising quality care for older citizens.
Review implications and applicability of evidence to elder care
Although the findings of this study suggest the use of the voucher has steadily increased over the last 10 years, no significant impact in shifting consumer demand from the public to the private health care sector was evident [24,26,39,41,50]. From the results of this review, it is likely elders will continue to prefer public elder care even if they are given a choice to apply the voucher to private elder care services, in particular for community care and residential home care services. Since the government incentive only offers an annual amount of HK$2,000 for eligible elders, elders are required to pay a monthly amount of HK$9,000 to HK$13,000 (the price adjusted for inflation in 2018) in out-of-pocket expenses for the remaining months while utilising private community care or residential home care services [23,25]. Similar to the findings related to health checks, oral health care, and chronic disease management, elders may consider the subsidy financially inadequate to motivate a shift in care provision [17,24,39,41,50].
Nevertheless, allowing elders to purchase aged-care services, medical supplies, and equipment with the voucher is expected to have a favourable impact on the quality of life and well-being of elders due to the self-determination afforded by this model [7]. Despite the fact that the policy has so far failed to achieve the government's intended goal of shifting the majority of the older population from the public to the private health care sector, the intervention has proven to permit elders expedited access to private health care services during instances of ill health [24,39]. In a sense, the application of the EHCVS is hence likely to reduce some degree of disease complication due to delayed treatment, ultimately influencing the health status and quality of life of this demographic [7,24,39,40]. Chou et al.'s [7] paper surrounding the use of voucher mechanisms, such as the voucher for long-term care, identifies that this modality not only empowers elders but also creates a competitive market, which is likely to increase the respective quality and safety standards of the industry. Many scholars have highlighted that merely removing the financial barriers to care is insufficient to improve an individual's quality of life and health outcomes [7,28,46]. Instead, delivering care balanced between fulfilling individual needs and optimising the care delivery processes genuinely enhances clinical quality and outcomes and further reduces the health care expenditures of the health care system [45]. Therefore, the implementation of a voucher mechanism in elder care may aid in addressing the current issues associated with poor living conditions, an over-crowded public aged-care sector, and substandard quality of care in private aged-care facilities [7].
Furthermore, expanding the service area to elder care services would enhance the purchasing power of elders in selecting service providers or medical products that benefit their overall quality of life. Recent evidence suggests that the Hong Kong Government has begun to appreciate the entrenchment of value-based care afforded by a voucher mechanism of this nature [7]. Instead of relegating elders to the Central Waiting List for different types of elder care services, the Social Welfare Department launched a pilot voucher scheme in 2013 to offer alternatives for elders awaiting allocation to aged-care facilities. The scheme provided a monthly value of HK$6,250 for elders with moderate impairment and experiencing financial hardship to choose the community care services tailored to their individual needs in the private sector [7,30,38]. With financial assistance, elders can have a broader choice and greater purchasing power in acquiring elder care services that may subsequently mitigate the impact of their age-related deterioration. This may eventually reduce the number of residential home care applicants on the Central Waiting List, as elders may identify more suitable or timely options for managing their conditions, which ultimately reduces the burden on government-funded aged-care homes. Further research is needed to confirm whether more substantial financial subsidies may motivate elders to choose private elder care services over government-funded elder care services.
Concerns about the application of a voucher mechanism in elder care
Concerns surrounding the application of a voucher mechanism to elder care largely centre on the prevention of double subsidies and avoidance of the inappropriate use of medication by elders. The government also appears to be concerned about dishonest providers who may exploit this opportunity to upsell unnecessary and expensive medical products [17,39,41]. These concerns are not without evidence, as cases of improper usage of the voucher have been identified in recent years [11,15]. For example, one Chinese medicine practitioner claimed that he had assisted elders in exchanging dried food for the voucher, particularly at the end of each year, to prevent exceeding the accumulation limit, which consequently leads to a waste of the financial subsidy [15]. The Consumer Council also reported receiving almost double the number of complaints each year from 2014 to 2018 from voucher holders who were lured into buying non-efficacious and expensive glasses and Chinese herbal medicines [11,37]. To overcome the adverse effects resulting from the use of the voucher in elder care, greater regulation of the supply side, as well as increased consumer awareness, are likely to allow more informed and efficacious use of the scheme [7]. A suggestion for imposing greater restrictions on the supply side would be to develop and impose mandatory reporting processes, similar to those used in the Australian health care system, where a provider has a duty and ethical obligation to report colleagues suspected of engaging in malpractice or deception [1]. Elders can also file a complaint against deceptive business practices or in instances where they may have been victims of fraud.
This strategy may aid in closing the loophole of the existing voucher monitoring and auditing mechanism and ensure public money is spent on the right care, administered by the right provider, at an appropriate cost [7,17,41].
Strengths and limitations of the review
To the author's knowledge, this review is the first systematic review conducted to assess the impact of the EHCVS throughout its 10-year duration. Despite the lack of previous research examining the impact, effectiveness, and change of the scheme's implementation, this review combined eligible studies to generate a higher level of insight into the impact of the EHCVS from the elders', private service providers', and government perspectives over the past 10 years across different settings in Hong Kong. The study systematically sought to include all published and unpublished studies that met the predetermined criteria to provide a comprehensive picture of the impact of the EHCVS, and further examined the feasibility of expanding the voucher system in elder care. Although it is evident that demand for public elder care services would remain the same even if the EHCVS expanded its service area, it is uncertain whether a higher monthly subsidy amount would motivate elders towards private elder care services. Hence, further research in this area should seek to understand factors that may induce consumer change among elders when a greater financial subsidy is provided. The findings offer policymakers a broad understanding of the advantages and disadvantages of expanding the EHCVS to elder care services, as well as of the possibility of shifting demand from the public to the private elder care sector with the current amount of financial subsidy. However, the synthesis of qualitative findings in this systematic review may overgeneralise the impact of the EHCVS due to the unavailability of data throughout the 10 years. The missing data, such as the attitudes, awareness, and achievement of the EHCVS in some of the years, mean the impact of the EHCVS cannot be truly reflected in a particular period. Broadening the inclusion criteria to include press releases or newspaper articles may overcome the issue of missing data, which would further minimise the chance of over-generalisation [13]. However, media bias and reliability may be a concern when employing these two media sources in a study [28]. Further, the assessment of the impact of the EHCVS was limited to studies published in English. Future research should include published and unpublished Chinese materials to reduce the chance of over- or underestimating the effectiveness of the EHCVS due to language bias, which may enable policymakers to map out appropriate strategies to shift demand from the public to the private sector as well as enhance elders' quality of life.
Conclusions
This review explored whether the expansion of the service area to elder care would motivate elders to utilise elder care services in the private sector, through examining the impact of the EHCVS on elders in health care. The thematic synthesis provided a clear conceptual framework to inform the potential response to, and effectiveness of, voucher use when applied to elder care. Findings indicate the expansion of service areas may strengthen the relations between elders, private elder care providers, and the government by demonstrating the government's commitment towards the situation in the public aged-care sector and elders' quality of life. Allowing elders to purchase elderly support services, medical supplies, and equipment with the voucher may partially relieve the growing financial burden on elders and further permit greater choice and quality of medical products and services suited to their individual care needs. However, elders may still consider the subsidy amount insufficient to engage in long-term private sector elder care services. Consequently, they may not be willing to apply the voucher to elder care services and would rather stay on the public waiting lists for elder care services.
For the future implementation of such a voucher mechanism for elder care, it is crucial for policymakers to consider what subsidy amount is needed to relieve the financial burden on the older population by employing cost-benefit analysis, which helps to set an incentive attractive enough to motivate elders to utilise private elder care services. Ideally, the government should conduct a survey similar to the one conducted by Liu and colleagues [26] to examine the willingness to pay for various types of health care services in the private sector when the same services are available publicly. This approach would assist policymakers in achieving two outcomes. First, it would inform the design of optimal subsidy limits, which would enable elders to see the value of the financial aid. Second, it would permit policymakers to understand what amount elders would be willing to pay for elder care services, which further provides evidence on whether a voucher mechanism should be applied partially or fully to elder care services. Nevertheless, it is essential for policymakers to first assess the ability of the private elder care sector to supply services when a higher subsidy is provided to elders, as resource shortages are also present in the private sector [49]. This foresight will ensure public money is spent appropriately and that the intervention is likely to achieve its intended results: successfully shifting elders from the public to the private elder care sector, reducing the burden on the public elder care sector, and potentially improving the quality of life of seniors [7]. Finally, greater information disclosure and promotion of private elder care services will deepen elders' understanding and is likely to change their perceptions of the private elder care sector, ultimately enabling elders to make informed decisions when applying vouchers to elder care services that best suit their care needs.

Figure 1. PRISMA flow diagram. The diagram shows the selection process of the included studies/records through each stage of the systematic review (identification, screening, eligibility, and inclusion).
Continuous intravenous vitamin C in cancer treatment: re-evaluation of a Phase I clinical study
Background: Intravenous high-dose vitamin C (IVC) therapy is widely used in naturopathic and integrative oncology. A number of Phase I and Phase II clinical trials have been launched to prove the benefits of IVC therapy, and many case studies have demonstrated the effectiveness of IVC with varying degrees of success. Clinical trials using IVC to treat cancer have, to date, demonstrated its safety without conclusively proving its efficacy. One difficulty in administering IVC is determining the optimal treatment schedule. To this end, data from a previous Phase I clinical trial conducted in 1998 using continuous vitamin C infusions were analyzed to examine the effects of this regimen on key prognostic parameters. Method: Twenty-four subjects were given continuous IVC at doses between 150 and 710 mg/kg/day. Most of the patients had colon cancer with liver and lung metastasis, and three patients had pancreatic or liver cancer. All patients had undergone several chemotherapy/radiation treatments before entering the study. Patients were treated with pharmaceutical-grade sodium ascorbate diluted in lactated Ringer's solution, administered by continuous infusion at a rate of 20 mL/hr (10 mL/hr for the lower doses). Results: Prior to treatment, serum lymphocyte counts and ascorbate concentrations tended to be low, while serum levels of lactate dehydrogenase (LDH), neutrophils, and glucose tended to be high. Improvements were seen during IVC therapy. In patients with initially elevated neutrophil levels, numbers tended to decrease. In contrast, increased absolute neutrophil and lymphocyte numbers were seen in patients with initially low counts. The neutrophil-to-lymphocyte ratio (NLR) proved to be a good indicator of cancer patients' survival times (high NLR, low survival). This was also true of LDH, creatinine, and glucose concentrations. In patients with the highest pretreatment NLR, the rate of growth of this ratio decreased significantly during therapy. IVC treatments were also associated with decreases in glucose concentrations, restoration of vitamin C levels, and, in about 40% of cases, reductions in LDH levels. Conclusions: We found that continuous IVC infusions improved several parameters associated with poor cancer prognosis. The data suggest a strategic benefit to using lower IVC doses in continuous infusions: raising the dose above 300 mg/kg/day (20 grams in a 70 kg human) increased the frequency of side effects without noticeably increasing plasma ascorbate levels. Moreover, improvements in lymphocyte counts at low IVC doses tended to diminish at the higher doses. In conclusion, continuous infusions had benefits for cancer patients, and further research in this area is warranted.
INTRODUCTION
Intravenous vitamin C therapy has garnered increased interest as a potential treatment for cancer [1][2][3] and is being used widely in naturopathic and integrative oncology [4]. IVC was first proposed for cancer treatment in the 1970s [5].
Studies of the biological activities of ascorbate have led to a number of hypotheses for mechanisms of anti-cancer activity, such as the generation of significant quantities of hydrogen peroxide by the autoxidation of pharmacological concentrations of ascorbate [7,8], changes in metabolic activity [18], and stimulation of the 2-oxoglutarate-dependent dioxygenase family of enzymes (2-OGDDs), which have a cofactor requirement for ascorbate [14,17]. The 2-OGDDs include the hydroxylases that regulate the hypoxic response, a major driver of tumor survival, angiogenesis, and metastasis, as well as the epigenetic histone and DNA demethylases [13,14,17].
High-dose IVC treatment has not yet been proven to be a cancer cure. As part of a comprehensive cancer treatment program, however, vitamin C has been shown to be a powerful tool for supporting immune function and improving quality of life. IVC treatment reduces levels of inflammation markers, ameliorates symptoms in cancer patients, and increases their chances of surviving the disease [34][35][36][37].
Several Phase I and Phase II clinical trials have been conducted in the last ten years to test safety and efficacy when IVC is used as an adjuvant with chemotherapy [38][39][40][41][42]. The results of these trials confirm that IVC can be administered safely. However, there are mixed results concerning efficacy.
Thus, many researchers consider the greatest potential for IVC to be as an adjuvant to conventional treatment. Some support for this notion includes the observation that cancer patients tend to be deficient in vitamin C. This ascorbate deficiency correlates with inflammation, infection, and disease. Moreover, it is exacerbated by chemotherapy and radiation [43][44][45][46][47].
The effectiveness of chemotherapeutic regimens is often dependent upon the treatment schedule used, with large intermittent doses often being more toxic and less effective than smaller repeated doses [48]. The response to treatment may not be proportional to the cumulative drug dose or the area under the disposition curve [49][50][51]. Instead, the time during which the drug concentration is maintained close to the target concentration may be a more important determinant of antitumor activity and treatment efficacy [48].
In most clinical applications, IVC is administered through a one-hour infusion two or three times a week. In a Phase I clinical trial conducted by Riordan et al., however, patients were given continuous infusions using an infusion pump [37]. This may represent a more favorable treatment schedule. Therefore, we decided to study this trial in more detail.
The published report on this Phase I trial focused on plasma ascorbate levels attained, effect on blood chemistry parameters associated with renal function, and adverse events [37]. The report concluded that all doses tested were safe provided the subject did not have a history of kidney stones.
In the present manuscript, previously unpublished parameters from the Riordan clinical study, including blood chemistry and blood count parameters that are reportedly related to patient prognosis and degree of inflammation, are examined further. These include absolute neutrophil and lymphocyte counts and the neutrophil-to-lymphocyte ratio (NLR); NLR in particular has been shown to be useful in predicting survival in cancer patients [52][53][54]. Blood chemistry parameters analyzed include: lactate dehydrogenase, an enzyme involved in tumor initiation, metastasis, and recurrence that serves as a prognostic marker for poor cancer outcome [55][56][57]; creatinine, the depletion of which is associated with cachexia [58][59][60]; and glucose, as hyperglycemia is common in cancer patients and there is evidence that vitamin C reduces hyperglycemia [61,62]. We report that IVC had positive effects on several of these parameters during the Phase I study of continuous IVC infusion that we analyzed.
Moreover, our analysis suggests that the higher doses within the study were associated with more side effects but were not associated with dramatically increasing benefits.
MATERIALS AND METHODS
A detailed description of how the Phase I IVC continuous infusion clinical trial was conducted was given previously [37]. Briefly, patients were divided into groups and treated by continuous infusion. A total of 24 patients with late-stage cancer were included in our study. The characteristics of each patient, along with the ascorbate dose each patient was given and prior therapies, are listed in Table 1. Fifty percent of the patients were male and fifty percent were female (in the ID# in Table 1, "M" refers to male and "F" refers to female). All patients had had several rounds of chemotherapy or radiation prior to entering the study. Seventy-nine percent of the patients had a metastatic tumor: 71% (17 patients) had colon cancer with liver and lung metastasis, three patients had pancreatic or liver cancer, and the rest had esophageal or rectal cancer.
The investigation was carried out following the rules of the Declaration of Helsinki of 1975 (https://www.wma.net/what-we-do/medical-ethics/declaration-of-helsinki/), revised in 2008. Written informed consent was provided by all patients. The study was approved by the ethics committee of the Eppley Institute for Research in Cancer and Allied Diseases at the University of Nebraska Medical Center (Omaha, NE) and the Institutional Review Board of the Riordan Clinic (Wichita, KS).
Patients were divided into five groups and treated by continuous infusion at 150 mg/kg/day (three patients), 290 mg/kg/day (seven patients), 430 mg/kg/day (six patients), 570 mg/kg/day (three patients), or 710 mg/kg/day (five patients). Pharmaceutical-grade sodium ascorbate was diluted in lactated Ringer's solution and infused at a rate of 20 mL/hr (10 mL/hr for the lower doses). The diluted solution was administered by continuous infusion using a Travenol Infuser (Pharmacia Deltac, St. Paul, MN) with a Cad-5400 or Sabrateck 6060 infusion pump. The ascorbate solution was changed daily. The infusion system was flushed with 100 mL of normal saline daily to prevent buildup of crystals in the access line and then "reloaded" with fresh ascorbate. The duration of the continuous infusion was at least 20-22 hours per day.
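As a rough arithmetic check of what these regimens imply, the sketch below converts a daily dose into the total grams delivered and the sodium ascorbate concentration the infusate must contain at a 20 mL/hr pump rate; the 70 kg body weight and the 22-hour infusion day are illustrative assumptions drawn from the text, not fixed protocol parameters.

```python
# Back-of-the-envelope check: daily IVC dose -> total grams and the required
# infusate concentration at the stated pump rate. The 70 kg body weight and
# ~22 h/day infusion time are illustrative assumptions taken from the text.

def infusate_requirements(dose_mg_per_kg_day, weight_kg=70.0,
                          pump_rate_ml_per_hr=20.0, hours_per_day=22.0):
    """Return (daily dose in grams, required concentration in mg/mL)."""
    daily_dose_mg = dose_mg_per_kg_day * weight_kg
    infused_volume_ml = pump_rate_ml_per_hr * hours_per_day
    return daily_dose_mg / 1000.0, daily_dose_mg / infused_volume_ml

for dose in (150, 290, 430, 570, 710):
    grams, conc = infusate_requirements(dose)
    print(f"{dose} mg/kg/day -> {grams:4.1f} g/day, ~{conc:3.0f} mg/mL at 20 mL/hr")
```

For the lowest dose this gives roughly 10 g/day, consistent with the "10 grams - 20 grams per day for a 70 kg patient" range quoted later in the paper.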
Patients' health, adverse events, and tumor progression were monitored during treatment. Samples for routine blood chemistry were collected one week prior to therapy and at roughly weekly intervals during treatment. White blood cell counts, hemoglobin, hematocrit, red blood cell counts, and standard blood chemistry parameters were determined using standard procedures at the Eppley Institute for Research in Cancer at the University of Nebraska Medical Center (Omaha, NE). Plasma vitamin C concentrations were measured as a function of time for twenty-two of the twenty-four patients. Serum was collected for this purpose one week prior to therapy, daily for the first four days of therapy, and weekly thereafter. To determine ascorbate concentrations, plasma samples were stabilized in 3% metaphosphoric acid and sent to the Bio-Center Laboratory (Wichita, KS) for colorimetric analysis by the reduction of 2,6-dichlorophenolindophenol. The lower limit of ascorbate detection was 0.2 mg/dL. The data were analyzed with Systat software (Systat, Inc.) and Kaleidagraph software. Variables are presented as means ± SD, or as medians with corresponding 25th and 75th percentiles. Associations between different conditions and factors were assessed using nonparametric statistics (Spearman's rank correlation coefficient and t-value).
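As a minimal sketch of the nonparametric summary statistics used throughout the paper (medians with 25th/75th percentiles, and Spearman's rank correlation), with hypothetical values standing in for the patient data:

```python
# Minimal sketch of the nonparametric statistics described above; the values
# below are hypothetical stand-ins for the actual patient data.
import numpy as np
from scipy.stats import spearmanr

initial_anc = np.array([8250, 2570, 5100, 2930, 6400, 4800, 7200])  # cells/uL
pct_change = np.array([-25.0, 46.0, 5.0, 94.0, -12.0, 8.0, -18.0])  # %

# Median with 25th and 75th percentiles, the format used in the paper
median = np.median(pct_change)
q1, q3 = np.percentile(pct_change, [25, 75])
print(f"median {median:.1f}% (IQR: {q1:.1f}%, {q3:.1f}%)")

# Spearman's rank correlation between initial value and percentage change
rho, p_two_tailed = spearmanr(initial_anc, pct_change)
print(f"Spearman rho = {rho:.2f}, two-tailed p = {p_two_tailed:.3f}")
```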
General pre-treatment parameters and patient outcome
Median values of blood test results for patients prior to IVC therapy are presented in Table 2, along with first and third quartiles and a breakdown of how many values fell above, below, or within the normal range for each parameter.
According to these data, blood levels of hemoglobin, hematocrit, RBC, lymphocytes, and albumin tended to be below the normal range, while the medians for ALP, LDH, AST, and glucose tended to be above it.
The most common outcome was progressive disease, defined as a twenty-five percent or greater increase in the size of all measurable lesions. This was expected for a patient cohort of this sort. However, one patient had stable disease (no increase in the size of pre-existing lesions and no new lesions). Each patient was treated over an eight-week period, unless an adverse event or progression of disease required that the treatment be stopped. Among those with progressive disease, eleven completed the eight weeks of therapy. Two were removed from the study due to Grade 3 or 4 adverse events, and two elected to stop treatment due to problems with venous catheter occlusion. The remainder were removed from the trial before the eight-week period expired due to progressive disease. Overall, eighteen of the patients received the treatment for at least six weeks. One subject, a man with colon cancer and liver metastasis who was treated with 430 mg/kg/day ascorbate, showed stable symptoms and test results and elected to continue ascorbate therapy for an additional forty-eight weeks. He survived a total of 336 days from the onset of therapy. The median survival of the patients from the beginning of the treatment was 110 days (IQR: 63-304 days).
Plasma ascorbate levels
The concentration of ascorbic acid in blood was measured daily for the first four days and at the end of each week of treatment. The levels of ascorbic acid achieved in the blood are shown in Figure 1. Pre-values are the individual measurements of ascorbic acid for each subject before treatment, and post-values are the average ascorbic acid concentrations in blood for each patient during treatment. The pie chart insert shows the number of subjects with normal (34%) or below-normal (66%) plasma vitamin C concentrations. Two thirds of the subjects had levels below the normal range (0.6 mg/dL - 2 mg/dL), with the majority of them (47%) having levels undetectable by colorimetric assay. We should note that at that time the clinical laboratory used a colorimetric method for measuring ascorbate, which was not very sensitive. During IVC treatment, ascorbate levels increased to a mean value (for all patients) of 1.1 mM. Figure 2 shows examples of plasma concentration versus time graphs for several patients given continuous infusions of 150 mg/kg/day to 710 mg/kg/day IVC. The plasma concentration rose in the first few days and then remained in the range 0.8 mM - 1.7 mM. Average plasma ascorbate concentrations during IVC treatment showed no clear dependence on dosage. The mean steady-state plasma ascorbate level for the three subjects at 150 mg/kg/day was 1.00 ± 0.34 (SD) mM. Higher doses did not significantly increase plasma ascorbate values: the average concentration during IVC treatment was 1.3 ± 1.4 mM for 290 mg/kg/day, 0.9 ± 0.6 mM for 430 mg/kg/day, 1.0 ± 0.5 mM for 570 mg/kg/day, and 1.4 ± 0.8 mM for 710 mg/kg/day. For four patients, the level of ascorbate in blood reached much higher concentrations (2.8 mM to 5 mM), which can probably be explained by decreased renal clearance of ascorbate. One of these patients had rectal cancer, and the other three had metastatic liver and colon cancers.
The leveling off of the ascorbate concentration is consistent with observations that high doses saturate renal tubular ascorbate reabsorption, leading to increased ascorbate excretion [63]. Our data suggest that, when IVC is provided by continuous infusion, plasma concentrations quickly reach ~1 mM and are maintained, and that increasing the dosage provides no added benefit.
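Since the text reports pre-treatment levels in mg/dL and treatment levels in mM, a small conversion helper clarifies the scale of the change; the only constant assumed is the molar mass of ascorbic acid (176.12 g/mol).

```python
# Conversion between the two ascorbate units used in the text.
ASCORBATE_MW = 176.12  # g/mol, ascorbic acid

def mg_dl_to_mM(mg_dl):
    return mg_dl * 10.0 / ASCORBATE_MW  # mg/dL -> mg/L -> mmol/L

def mM_to_mg_dl(mM):
    return mM * ASCORBATE_MW / 10.0

print(f"normal range 0.6-2 mg/dL = {mg_dl_to_mM(0.6):.2f}-{mg_dl_to_mM(2.0):.2f} mM")
print(f"1.1 mM during IVC = {mM_to_mg_dl(1.1):.1f} mg/dL")
```

The ~1 mM plateau during infusion therefore sits roughly an order of magnitude above the top of the normal range (~0.11 mM).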
Neutrophil and lymphocyte counts
As lymphocytes and neutrophils have important roles in tumorigenesis and carcinogenesis, we analyzed the effect of the treatment on these parameters. In chemotherapy, neutrophil and lymphocyte counts typically decrease, with the effect being more severe for lymphocytes [64,65]. For cancer patients in general, increased neutrophil counts are consistent with systemic inflammation. The normal range for absolute neutrophil counts (ANC) is 2000 to 7000 cells/µL, while that for absolute lymphocyte counts (ALC) is 1300 to 4000 cells/µL. Patients in the study did not have neutropenia, nor did they develop it during treatment, but more than half of the patients had lymphopenia.
The mean ANC for all patients was 5240 ± 1850 (SD) cells/µL prior to treatment and 5630 ± 1740 (SD) cells/µL after therapy. Prior to therapy, neutrophil counts were elevated in three subjects. Subjects with low ANC and ALC values tended to see those values increase. We noticed, however, that such increases were not prevalent for patients whose initial ANC or ALC values were in the normal range. We tested this by plotting the changes in ANC or ALC values versus the initial values (Figures 3 A, B).
To find the effect of IVC treatment on neutrophil count, we analyzed the changes in the average absolute neutrophil counts before and after treatment.
The percentage change in ANC for patients who completed 6-8 weeks of treatment is shown in Figure 3 (A). ANC was above the normal range for two patients with metastatic colon cancer (8250 cells/µL and 9417 cells/µL) and decreased by 10-37% after treatment. Another patient with metastatic colon cancer had an ANC at the upper limit of the normal range, and his value increased by 38%, indicating an increased level of inflammation in this patient. Two patients with metastatic pancreatic cancer had initial ANC levels at the lower limit of the normal range (2570 cells/µL and 2930 cells/µL), which increased by 46% and 94% by the end of the treatment. For the remaining patients, the tendency was for low ANC values to increase and for higher values to decrease.
In addition, we investigated the effect of treatment on the absolute lymphocyte count. The mean ALC for all patients prior to treatment was 1570 ± 1590 (SD) cells/µL. The initial ALC was below the normal range (1300-4000 cells/µL) for 14 of the 24 patients who started the intervention.

Figure 3 (A, B, C). Percentage change in absolute neutrophil counts (A), percentage change in absolute lymphocyte counts vs. pre-treatment levels (B), and absolute ALC values before and after intervention for patients with ALCs below the normal range (C). Different dosages in Figures 3(A, B) are indicated by different shapes. Spearman's rank correlation coefficients, non-parametric p-values, and regression lines are given.
Severe lymphopenia (ALC < 1000 cells/µL) was measured in 10 patients, of whom only six completed 6-8 weeks of treatment. The most severe lymphopenia was measured in a patient with pancreatic cancer (245 cells/µL) and in a patient with rectal cancer with liver, lung, and pelvis metastasis (325 cells/µL). The percentage improvement in ALC was calculated from the initial ALC value and the ALC at the end of the treatment. For the six patients with ALC < 1000 cells/µL, the median improvement in lymphocyte count at the end of the treatment was 69% (IQR: −6%, 129%).
The highest improvement was seen in a female with rectal cancer and liver, lung, and pelvis metastasis (initial ALC 355 cells/µL; 135% improvement). Two patients with metastatic colon cancer and ALC counts of 520 cells/µL and 900 cells/µL had improvements in ALC of 73% and 128%. Another patient, a female subject with pancreatic cancer and liver metastasis and an initial ALC of 663 cells/µL, had an improvement of 50%. In the patient with carcinomatosis, with an initial ALC of 550 cells/µL, the increase in lymphocyte count was 87%. There was no improvement for the patient with pancreatic cancer who had the lowest initial ALC of 245 cells/µL.
On average, the improvement in lymphocyte count for all patients who completed 6-8 weeks of treatment and had ALC < 1300 cells/µL was 22% (median; IQR: −24%, 89%). Results are shown in Figure 3(B). According to these data, there was improvement in ALC for patients with ALC below the normal range. In addition, the absolute lymphocyte counts pre- and post-intervention for patients with low initial ALCs are shown in Figure 3 (C). For five patients the ALC values returned to the normal level (ALC > 1300 cells/µL), and for five patients the values reached 1000 cells/µL.
Because ALC values were available from one week before study entry, at the beginning of the study, and at the end of every week, the trend in this parameter before and during treatment could be evaluated. Before treatment, ALC tended to decrease (median −9.8%; IQR: −20%, 13.5%), whereas during treatment the median increase in ALC was 22% (IQR: −8.9%, 73%).
These data show that continuous IVC can improve lymphocyte counts, and thus immune function, in patients with lymphopenia.
There was a correlation between initial values and percentage change in both ANC and ALC, with higher initial values corresponding to larger decreases (Spearman's ρ = −0.43 for each), yielding two-tailed p-values between 0.05 and 0.10. This suggests a potentially useful moderation of ANC and ALC, whereby levels that are decreased due to a factor such as chemotherapy are restored, while levels that are increased due to inflammation are reduced.
We also analyzed whether the absolute lymphocyte count improved with increasing infusion dosage. For all patients who completed 6-8 weeks of treatment, the effect of vitamin C dosage on the change in lymphocyte counts was examined.
At the low doses (150 or 290 mg/kg/day IVC), the median change in lymphocyte counts was 9%, with four subjects seeing increases and three seeing decreases. With the high doses (430, 570, and 710 mg/kg/day), however, the median change in lymphocyte counts was -12 %, with seven subjects seeing decreases and only two seeing increases (with two patients showing virtually no change in ΔALC). These data may indicate that lower doses are more favorable for the improvement of lymphocyte count.
The neutrophil-to-lymphocyte ratio may be a useful prognostic factor in a variety of cancers [53], with higher values indicating lower survival times. We examined the rate of change in this ratio (ΔNLR) for each patient before and during therapy. To calculate the initial ΔNLR (prior to therapy), the NLR measured one week prior to therapy was subtracted from the NLR on day zero, and this difference was divided by the number of days between the two measurements. Similarly, the final ΔNLR was calculated from the last two measurements during therapy. Changes in these values for the thirteen patients who completed 6-8 weeks of treatment and for whom pre-treatment data were available are shown in Figure 4 (A). At the beginning of IVC therapy, 75% of subjects had NLR levels above the normal range (0.78-3.53), and at the end of therapy the same percentage still had elevated NLR; however, comparing the NLR trends over the week before treatment and during treatment showed that the rate of increase had slowed. The median ΔNLR values were 2.19 pre-therapy and 0.21 during therapy (p = 0.045).
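A minimal sketch of the ΔNLR calculation just described, with hypothetical counts in place of the patient data:

```python
# Rate of change of the neutrophil-to-lymphocyte ratio (dNLR, per day),
# computed from two time points as described above. Counts are hypothetical.

def nlr(neutrophils, lymphocytes):
    return neutrophils / lymphocytes

def delta_nlr(nlr_earlier, nlr_later, days_between):
    """Change in NLR per day between two measurements."""
    return (nlr_later - nlr_earlier) / days_between

# Pre-therapy rate: NLR one week before therapy vs. NLR on day zero
pre = delta_nlr(nlr(5200, 1400), nlr(6100, 1100), days_between=7)
# On-therapy rate: the last two measurements during treatment
post = delta_nlr(nlr(5600, 1300), nlr(5500, 1450), days_between=7)
print(f"pre-therapy dNLR = {pre:+.2f}/day, on-therapy dNLR = {post:+.2f}/day")
```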
Data in Figure 4 (A) show the effect of the treatment on the NLR rate of change. The crossed bars in the graph present the rate of increase in NLR before treatment, and the black bars show the NLR growth during treatment. The six subjects with the highest pre-therapy increase in NLR showed a decrease in ΔNLR during therapy. According to these data, the treatment suppressed or prevented progression of the rate of growth of NLR. This improvement in the rate of change of NLR was found for 54% of the patients. For patients with an initial NLR above the upper normal limit of 3.53, improvement was seen in 64% of those who completed 6-8 weeks of treatment.
Our data also demonstrate the relationship between patient survival and the rate of growth of NLR. Figure 4(B) shows a statistically significant (p = 0.013) correlation between post-treatment ΔNLR and survival time, with high ΔNLR coinciding with lower survival times. This suggests that IVC may reduce NLR levels, thus improving prognosis.
Blood chemistry parameters
The normal range for blood lactate dehydrogenase is between 140 U/L and 280 U/L. LDH concentrations before IVC therapy were above the normal range in 50% of the patients (LDH range 300 U/L - 1790 U/L). The median LDH prior to therapy was 240 U/L (IQR: 161, 627 U/L), while that after IVC therapy was 276 U/L (IQR: 156, 739 U/L).
The rate of increase of LDH was calculated before and after treatment. This parameter (LDH rate of growth) decreased in 38% of the patients, increased in 28.6%, and remained unchanged in 33.4%. The finding that LDH decreased in 38% of the subjects is remarkable considering their illness.
Survival was compared between patients with initial blood LDH levels within and above the normal range (NR) in Figure 5. The median survival time for all participants with initial LDH above the normal range (LDH > 245 U/L) was 95 days. In contrast, the median survival time for all subjects with normal initial LDH values was 173 days (p = 0.097). Among patients who were able to complete the 6-8 weeks of the study, the median survival time was 153 days for those with initial LDH above the normal range and 238 days for those with initial LDH within the normal range.
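The paper reports only that nonparametric statistics were used; as one reasonable reading, the group comparison behind Figure 5 can be sketched with a Mann-Whitney U test on hypothetical survival times (both the choice of test and the numbers below are assumptions, not the authors' exact procedure):

```python
# Hedged sketch of comparing survival times between patients with initial LDH
# above vs. within the normal range. Mann-Whitney U is one reasonable
# nonparametric choice; the paper does not name its exact test, and the
# survival times below are hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

survival_high_ldh = np.array([60, 80, 95, 101, 120, 150])       # days
survival_normal_ldh = np.array([110, 160, 173, 210, 300, 336])  # days

stat, p = mannwhitneyu(survival_high_ldh, survival_normal_ldh,
                       alternative="two-sided")
print(f"median (high LDH) = {np.median(survival_high_ldh):.0f} d, "
      f"median (normal LDH) = {np.median(survival_normal_ldh):.0f} d, p = {p:.3f}")
```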
As activation of glycolytic metabolism is a significant characteristic of tumor cells, and since lactate dehydrogenase is an important coenzyme in glycolysis, elevated levels of serum LDH may be useful prognostic biomarkers [55,56].
Blood creatinine levels were also examined. Before the intervention, creatinine was within the normal range (0.5-1.0 mg/dL for women; 0.7-1.2 mg/dL for men) for 23 patients, the exception being a female with pancreatic cancer. During treatment, creatinine levels decreased (median −14%; IQR: −23.5%, −5.5%) and fell below the normal range in seven patients.
Creatinine is a metabolite of L-carnitine, which plays a central role in the metabolism of fatty acids. Low serum creatinine and carnitine levels are associated with the muscle wasting seen in cancer patients, which is particularly severe in patients with colon and pancreatic cancers [59,60]. These cancers were prominent in this study (Table 1).
We calculated the percentage change in the average creatinine concentration from before to during treatment and divided the patients into two groups: those with a greater-than-20% change and those with a smaller change in creatinine levels. The median survival time was 80 days (IQR: 62, 142) for patients with a creatinine decrease greater than 20% and 254 days (IQR: 123, 451) for patients with a creatinine change of less than 20%. Large decreases in creatinine concentration correlated with low survival times (p = 0.043), as shown in Figure 6, consistent with prior results [60].

Hyperglycemia is common in cancer patients. Two thirds of the patients in our study had above-normal blood glucose concentrations (Table 2). Changes in blood glucose during IVC therapy for the patients with the highest blood glucose levels are shown in Figure 7(A). For these patients, glucose concentrations decreased during IVC therapy. For all patients with elevated glucose, concentrations decreased by 11-45% during treatment. Figure 7(B) shows the correlation between initial glucose concentration and the change in glucose levels during IVC therapy. Glucose levels decreased most dramatically during therapy when the initial glucose concentrations were higher.
Safety and side effects
Side effects and safety were discussed in detail in the previous article [37]. Briefly, blood chemistry parameters that serve as indicators of renal function (BUN, creatinine, and uric acid) remained relatively stable or in the case of uric acid, decreased during therapy. Only four subjects experienced BUN increases during therapy.
Adverse events were assessed by the physician using the NCI Common Toxicity Criteria and were attributed to the agent according to the grades "not related", "possibly related", or "probably related". The adverse effects experienced by patients are presented in Figure 8.
A total of five Grade 3 adverse events were observed during the study, with only four of them considered possibly related to the therapy (the only Grade 4 event observed was cardiac arrest, which was considered unrelated to the therapy). One patient developed a kidney stone after thirteen days of treatment at 290 mg/kg/day ascorbate. Acute renal distress is usually accompanied by an increase in glucose, creatinine, urea, and BUN levels; however, this subject's blood creatinine, glucose, and BUN levels remained stable, although the subject had a prior history of kidney stones [37]. Most of the Grade 3 events involved hypokalemia, which is considered possibly related to the ascorbate therapy. Three subjects experienced a Grade 3 decrease in blood potassium level, and one subject had a Grade 2 decrease, while being treated with 430-710 mg/kg/day ascorbate; these patients saw their potassium levels decrease by a quarter to a third during the study. This is also supported by our analysis of electrolytes after high-dose IVC injections in patients treated at the Riordan Clinic (unpublished data), which showed that frequent high-dose IVC injections decreased blood potassium concentrations by 25-36%. It is thus recommended to monitor potassium levels during treatment and, if necessary, give oral potassium supplements. Sodium levels were relatively stable during treatment. Table 3 shows the occurrence of adverse events by IVC dosage. There appear to be fewer side effects at the lowest dose, although the increases at high doses were not dramatic and did not meet the criterion for stopping the trial. In an attempt to quantify this, the total grades per subject were computed at each dose (Table 3). The results show a trend toward more adverse events at higher doses. Grade 2 or 3 hypokalemia was only an issue at the three highest doses tested. Edema was also more frequent with high-dosage infusions: there was a single edema event among the nine subjects given the two lowest doses of IVC, but six cases of edema among patients treated with higher doses. While the highest dose is still safe, the pharmacokinetic data for continuous IVC indicate that plasma ascorbate concentrations do not increase much as the dose is increased, so the better strategy may be to use lower doses with longer administration.
DISCUSSION
The purpose of this study was to perform a deeper analysis of data from a previously published Phase I clinical trial giving continuous IVC infusions to terminal cancer patients [37]. The primary aim of the Phase I study was to assess risks and determine safety thresholds for IVC doses, particularly in regard to renal function. Efficacy was not expected, and any measures of effects on patient outcome were considered secondary. The eight-week trial involved terminal patients with poor prognosis, all but one of whom showed progressive disease during treatment. However, given new information on biological effects of vitamin C potentially relevant to cancer, and on the use of white blood cell counts and blood chemistry parameters as prognostic indicators, we decided to examine blood count and chemistry data from this trial to see whether any changes in these parameters during IVC therapy could be detected.
This additional analysis was motivated by new information concerning IVC therapy in cancer, including a more sophisticated understanding of ascorbate's possible mechanisms of action against tumors. When the clinical trial was conducted, the primary goal was determining whether ascorbate could be administered safely at high doses without compromising renal function, and whether sufficient concentrations could be attained in plasma. These aims were met.
However, with more recent evidence that IVC could benefit cancer patients through multifunctional mechanisms, the present analysis was conducted to ascertain the effects of continuous IVC on key blood count and chemistry parameters.
In addition, this particular data set allows evaluation of continuous infusion, which permits the use of lower ascorbate doses while providing longer exposure times at target concentrations. The results of this analysis suggest several potential benefits of continuous IVC infusion.
The most obvious effect of IVC therapy is to increase patient vitamin C levels. Consistent with other reports, the plasma ascorbate measurements conducted in this trial show that vitamin C depletion in cancer patients is common. In fact, ten out of twenty-four subjects entered the trial with plasma ascorbate concentrations undetectable by the colorimetric ascorbate assay used at that time, and another four had ascorbate concentrations below the normal range.
IVC infusion increased plasma levels to the order of 1 mM. This likely replenished depleted tissue ascorbate stores as well. Interestingly, there did not seem to be a significant benefit in raising the IVC dosage beyond the first or second dose used, suggesting that for intravenous infusion, there exists a situation analogous to that for oral supplementation in which plasma concentrations reach a saturation point, although the potential saturation point with IV infusion is orders of magnitude higher than what can be attained orally. The leveling off of the ascorbate concentration can be explained by observations that high doses will saturate renal tubular ascorbate reabsorption, leading to increased ascorbate excretion.
Analysis of white blood cell counts for patients in this trial indicates the potential for IVC to increase lymphocyte and neutrophil counts in patients in whom these numbers are below normal, while reducing neutrophil counts in patients with elevated ANC.
This was particularly important for lymphocyte counts. Lymphopenia commonly occurs in cancer patients who have undergone chemotherapy, and the high levels of treatment-induced oxidative stress predict a poor prognosis [66].
In our study population, about half of the patients who started the intervention had an ALC below the normal range. For patients with severe lymphopenia who completed 6-8 weeks of treatment, the median improvement in lymphocyte count was 69%, and for all patients with ALC below the normal range the median improvement was 22%. These data indicate that continuous IVC can improve the immune function of cancer patients by increasing ALC, especially in patients with low lymphocyte counts.
The present analysis of neutrophil-to-lymphocyte ratios also demonstrated the regulatory effect of IVC. NLR has been used to assess inflammatory response and has been suggested as a prognostic factor in a variety of cancers [54,67]. In particular, cut-off values ranging between 2.0 and 4.0 were associated with a significant increase in all-cause mortality [54].
In the present study, most of the patients entered the trial with NLRs well above this cut-off. Continuous IVC therapy tended to decrease the rate of growth of NLR. Moreover, we were able to confirm the predictive potential of NLR: NLR increases correlated with lower survival times.
NLR may reflect the balance between activation of the inflammatory pathway and anti-tumor immune function. NLR elevated due to neutrophilia is linked to tumor granulocyte colony-stimulating factor (G-CSF), accelerated tumor development, and increases in the plasma cytokines IL-6 and TNF-α [68]. Since the rate of increase in NLR for patients with initially elevated values decreased during IVC therapy, ascorbate may be decreasing inflammation in these subjects, as suggested in our other work [69]. Also of note is that patients with elevated neutrophil counts tended to have their ANC decrease during IVC treatment (while those with lower initial ANC tended to see increases during therapy). Neutrophils may act as tumor-promoting leukocytes [70,71]. A neutrophilic response is associated with poor prognosis, as it can inhibit the immune system by suppressing the cytotoxic activity of T cells.
In the present study, all but two patients had pre-treatment ANC in the normal range. These two patients had above-normal ANC initially but saw their ANC decrease noticeably during IVC therapy. One patient with an initial ANC at the upper end of the normal range had an increase in this count of about forty percent during IVC therapy. For the rest of the patients, low ANC values tended to increase and higher values tended to decrease.
Concentrations of lactate dehydrogenase, an enzyme that catalyzes the conversion of pyruvate to lactate and is thus considered to be a key checkpoint of anaerobic glycolysis, decreased during therapy in 38% of the patients, increased in 28.6% and held constant in 33.4% of patients.
LDH is elevated in many types of cancers; it has been linked to tumor growth, maintenance, and invasion [57]. Among the transcriptional programs turned on by oncogenes is the stabilization of hypoxia-inducible factor 1 alpha (HIF-1α) [55]. HIF-1α contributes to the upregulation of most of the enzymes and transporters involved in the glycolytic pathway, including lactate dehydrogenase A and the glucose transporters GLUT-1 and GLUT-3 [72].
Since ascorbate is thought to help regulate HIF, it was suspected that IVC might reduce LDH levels, or at least slow down the rate of increase in cancer patients. Data from the present study show an overall decrease in LDH in 40% of patients. In addition, when the subjects are separated into two groups based on initial LDH levels, the group with initial concentrations above the normal range had a significantly lower average survival time than the group with initial LDH levels in the normal range.
Hyperglycemia is another prognostic factor in cancer patients. It is common in cancer patients and represents a challenge during therapy. For example, about 70% of pancreatic cancer patients have impaired glucose tolerance [62]. Moreover, there is a link between the lowering of blood glucose concentration and remission of malignancy. For example, patients under insulin coma therapy for six months (for psychosis) were reported to become free of large tumor burdens considered incurable by their oncologists [62,73].
Excess glucose may impair cellular dehydroascorbate uptake [74], modify redox balance, activate oxidases, and interfere with the mitochondrial electron transport chain [75]. Elevated glucose levels compete with vitamin C for cellular uptake, effectively restricting its entry into cells. Glucose self-oxidation also generates free radicals and oxidative stress [74,75]. In the present study, IVC therapy was associated with decreases in plasma glucose concentrations, consistent with previous reports [76,77]. This is especially encouraging since two thirds of the patients were initially hyperglycemic.
Several clinical trials have established that IVC can be administered safely. In the continuous IVC infusion trial from which the data for the present analysis were obtained, side effects were mostly minor, and the criterion for stopping the clinical trial (two or more Grade 3 or higher adverse events at a given dose at least possibly related to the treatment) was never reached. Adverse events were more frequent at the higher IVC doses used. The adverse event data, along with the pharmacokinetic data, suggest that the use of higher doses in continuous-infusion IVC therapy may produce more side effects without increasing plasma ascorbate concentrations significantly beyond those obtained at lower doses.
In summary, despite the very poor health status of the patients and the absence of therapeutic efficacy on radiographic scans, continuous IVC treatment had positive effects on several important parameters, such as NLR, ALC, ANC, LDH, and blood glucose concentration. Moreover, these data suggest that a therapeutic strategy based on relatively low IVC doses (150 to 290 mg/kg/day, or 10-20 grams per day for a 70 kg patient) with continuous infusion warrants further consideration.
CONCLUSIONS
Vitamin C is a vital nutrient and functional food component with presumed health benefits. Humans must obtain vitamin C from the diet, as they lack gulonolactone oxidase, the final enzyme in the four-enzyme hepatic sequence required for vitamin C biosynthesis from glucose.
The use of ascorbate in cancer remains an area of controversy. How intravenous high-dose administration of ascorbate affects tumor growth is unknown, and possible mechanisms of antitumor activity by ascorbate have not been monitored in an in vivo setting. Currently, there are no clinical data establishing how treatment-schedule factors for high-dose vitamin C, such as dose, frequency, and duration of administration, affect the effectiveness of cancer patients' treatment. Most practitioners administer IV ascorbate to cancer patients by bolus infusions 2-3 times per week. Based on pharmacokinetic data, bolus infusion of high-dose ascorbate (1 g/kg) can reach very high blood levels, but ascorbate is eliminated from the body quickly, with a half-life of about one to two hours. Such treatment is generally well tolerated and safe, with few adverse events reported, and there are supporting case reports of an anticancer effect of such a regimen.
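To make the scheduling argument concrete, a one-compartment sketch contrasts bolus decay with the continuous-infusion plateau; the 1.5 h half-life reflects the "one or two hours" above, while the 20 mM bolus peak, the 1 mM steady state, and the 0.8 mM threshold are illustrative assumptions anchored to values mentioned in the text, not fitted parameters.

```python
# One-compartment illustration: bolus levels spike and decay, while continuous
# infusion holds a plateau. Half-life 1.5 h per the text; the 20 mM bolus peak
# and 1 mM steady state are illustrative assumptions.
import numpy as np

t_half_h = 1.5
k_el = np.log(2) / t_half_h          # first-order elimination constant, 1/h
t = np.linspace(0, 24, 961)          # hours

c_bolus = 20.0 * np.exp(-k_el * t)           # mM, after a single bolus
c_infusion = 1.0 * (1 - np.exp(-k_el * t))   # mM, rising to steady state

target = 0.8  # mM, an arbitrary "therapeutic" threshold for illustration
print(f"bolus falls below {target} mM after ~{np.log(20.0 / target) / k_el:.1f} h")
print(f"infusion stays above {target} mM for ~{(c_infusion > target).mean() * 24:.0f} of 24 h")
```

Under these assumptions the bolus spends only a few hours near the target level, whereas the infusion holds it for most of the day, which is the rationale for the continuous schedule examined here.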
There are reports of anti-tumor effects at relatively low vitamin C concentrations, less than 1.0 mM [78]. In most cases, low concentrations of vitamin C could not induce extensive apoptosis but did suppress tumor proliferation and inhibit growth factor production [78]. Moreover, there are a number of possible anti-tumor mechanisms that do not require very high concentrations of ascorbate. For example, ascorbate can inhibit hypoxia-inducible factor-1 (HIF-1) activation in vitro at intracellular concentrations between 150 and 300 µM [79]. Pharmacokinetic data on ascorbate in tumor tissues following vitamin C administration put the optimal dose regimen for achieving cellular levels suited to HIF-hydroxylase activity at ~1-3 mM [80]. A study of the optimal ascorbate concentration as a cofactor for the hydroxylases that regulate gene transcription and cell signaling pathways showed that ascorbate concentrations below 1,000 µM dose-dependently increase the 5-hmC signal [81].
Ascorbate availability influences immune cell function and tumor environments, affecting the resolution of inflammation and potentially tumor survival. The effect of different ascorbate concentrations on T-cell activation was analyzed in one study [82]: at high doses of ascorbate, proliferation was inhibited and apoptosis increased.
Treatment frequency also affects the mechanisms of tumor suppression. Campbell et al. [83] examined the effects of treatment schedule on the ability of intravenous ascorbate to inhibit HIF-1 expression (and the expression of its target proteins) in tumor-bearing mice. As expected, a single bolus injection inhibited expression only temporarily (it bounced back in 20-24 hours), while daily injections maintained the inhibition (while also reducing VEGF levels, tumor micro-vessel density, and hypoxia). Increased tumor ascorbate was associated with slowed tumor growth, but alternate-day administration of ascorbate resulted in weaker tumor inhibition and did not consistently decrease HIF-1 pathway activity [83].
Clinically, retrospective analysis of prostate cancer patients treated with IVC at the Riordan Clinic (1994-2015) showed that PSA levels increased more slowly in subjects given more frequent IVC treatments [84].
The Riordan Clinic trial patients were treated by continuous infusion, which is given over much longer periods of time. The choice of schedule was based on chemotherapeutic regimens, which show that large intermittent doses are often more toxic and less effective than smaller repeated doses.
The study was conducted in 1998, and the first data analysis, published in 2005, focused on the safety of the treatment. We conducted a more detailed analysis of the available blood and urine tests and found that continuous IVC infusions improved several parameters associated with poor cancer prognosis.
The present analysis demonstrated the regulatory effect of continuous IVC on neutrophil-to-lymphocyte ratios, lymphopenia, neutrophil counts, and hyperglycemia. The data suggest a strategic benefit of using lower IVC doses in continuous infusions, as raising the dose above 20 grams per 70 kg body weight increased the frequency of side effects without noticeably increasing plasma ascorbate levels and tended to reduce the improvement in lymphocyte counts.
In conclusion, continuous infusions benefited cancer patients, and further research in this area, including clinical studies of the efficacy of continuous intravenous vitamin C, is warranted.
Author Contributions: NM and JC analyzed and interpreted data. NM, JC and RH drafted the manuscript. All authors read and approved the final manuscript.
Funding: This research received no external funding.
|
Highly selective acylation of polyamines and aminoglycosides by 5-acyl-5-phenyl-1,5-dihydro-4H-pyrazol-4-ones
A highly selective acylating reagent with remarkable recognition of primary amines in monoacylation of polyamines and aminoglycosides.
Experimental Procedure for the Kinetic Study of the Reaction Between BCPP and Amines
Rate constants for the reaction of BCPP with a series of amines were determined using UV-Vis spectroscopy at 24 ± 0.5 °C in dichloromethane. BCPP (1a) absorbs in the range 300-440 nm (extending into the visible), while the other reactants and products absorb only in the UV range. This allows the disappearance of BCPP to be monitored at a fixed wavelength (e.g., 380 nm). All kinetic measurements were carried out above the DCM cutoff (λ = 245 nm).
The absorbance of a series of BCPP solutions in DCM was measured at 24 °C. The extinction coefficient ε was determined from the Beer-Lambert law, A = εcl (l = 1 cm), as the slope of the linear plot A = f(c); ε = 2920 ± 70 was obtained from three parallel experiments.
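A minimal sketch of this determination; the absorbance readings below are hypothetical stand-ins for the measured series, and ε falls out as the slope of A versus c at l = 1 cm:

```python
# Extinction coefficient from the Beer-Lambert law A = eps*c*l (l = 1 cm):
# eps is the slope of a linear fit of absorbance vs. concentration.
# The absorbance values below are hypothetical stand-ins for the measurements.
import numpy as np

conc = np.array([0.5e-4, 1.0e-4, 1.5e-4, 2.0e-4, 2.5e-4])   # mol/L
absorbance = np.array([0.148, 0.291, 0.439, 0.582, 0.731])  # at 380 nm

eps, intercept = np.polyfit(conc, absorbance, 1)
print(f"eps = {eps:.0f} L mol^-1 cm^-1 (reported: 2920 +/- 70)")
```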
Determination of the second-order reaction rate.
Kinetic runs were performed for the reaction of BCPP with a series of amine solutions of different concentrations (C_amine ≥ 10·C_BCPP).
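Because the amine is in at least tenfold excess, the BCPP decay is pseudo-first-order, and the second-order constant k2 follows as the slope of k_obs versus amine concentration. The sketch below demonstrates this analysis on synthetic decay traces (all numbers are illustrative, not measured data); the experimental details follow.

```python
# Pseudo-first-order analysis: with amine in >= 10-fold excess, the BCPP
# absorbance decays as A = A0*exp(-k_obs*t) with k_obs = k2*[amine]. Fitting
# k_obs at several amine concentrations and taking the slope recovers k2.
# All numbers below are synthetic illustrations, not measured data.
import numpy as np

def k_obs_from_trace(t_s, absorbance):
    """Observed first-order rate constant from a ln(A) vs. t fit."""
    slope, _ = np.polyfit(t_s, np.log(absorbance), 1)
    return -slope

k2_true = 0.5                                    # L mol^-1 s^-1, synthetic
amine_conc = np.array([0.01, 0.02, 0.03, 0.04])  # mol/L
t = np.linspace(0.0, 600.0, 50)                  # s

k_obs = [k_obs_from_trace(t, 0.6 * np.exp(-k2_true * c * t)) for c in amine_conc]
k2_fit, _ = np.polyfit(amine_conc, k_obs, 1)
print(f"recovered k2 = {k2_fit:.3f} L mol^-1 s^-1")
```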
Considering that the extinction coefficient for BCPP at 380 nm is ε = 2920, BCPP solutions were prepared to have an absorbance A = 0.4-0.8 (at 380 nm). Amine concentrations (C_amine ≥ 10·C_BCPP) were chosen to keep reaction times under one hour.

Single-crystal X-ray analysis. A crystal was mounted, using Paratone oil, onto a nylon loop. The data were collected at 98(2) K using a Rigaku AFC12 / Saturn 724 CCD diffractometer fitted with MoKα radiation (λ = 0.71075 Å). Data collection and unit cell refinement were performed using CrystalClear software [10]. Data were measured in the range 6.46° < 2θ < 50.1° using ω scans. Data processing and absorption correction, giving minimum and maximum transmission factors (0.546, 1.000), were accomplished with CrystalClear [10] and ABSCOR [11], respectively. The structure was solved in Olex2 [12] with the ShelXT [13] structure solution program using direct methods and refined (on F²) with the ShelXL [14] refinement package using full-matrix, least-squares techniques. All non-hydrogen atoms were refined with anisotropic displacement parameters.
Electron density peaks were used to determine the hydrogen atoms bound to N2, C5, C6 and C7 atoms. All other hydrogen atom positions were determined by geometry and refined by a riding model.
FeF3 as Reversible Cathode for All‐Solid‐State Fluoride Batteries
Fluoride batteries are attracting intensive attention because they can provide a higher energy density than conventional lithium-ion batteries. Among various metal fluorides, FeF3 is a promising candidate for the cathode material of fluoride batteries because of its high theoretical capacity. In this report, the reversibility of an FeF3 cathode is investigated in conjunction with fluorite-type Ba0.6La0.4F2.4 as the electrolyte and Pb as the counter-electrode material. For the first time, the discharge-charge performance of a fluoride battery using an FeF3 cathode is investigated. The initial discharge capacity is 579 mAh g−1, and a capacity of 461 mAh g−1 is retained at the 10th cycle. The reversible conversion reaction mechanism of FeF3 is clarified by X-ray diffraction and X-ray absorption spectroscopy. The results reveal that FeF3 is reduced to FeF2 at the first-stage plateau and then to Fe metal at the second-stage plateau, and that the reverse process proceeds during charging. Ex situ scanning electron microscopy observations show that the morphology of the cathode changes reversibly and that, when the battery is in the discharged state, voids are present because of shrinkage of the electrode.
Introduction
Fluoride batteries have attracted significant attention because they demonstrate high specific energy densities and can accommodate flexible electrode materials. Fluoride (F−) anions function as charge carriers, meaning that numerous metals can potentially be used as electrode materials in both the anode and cathode. In addition, numerous F−-ion conductors that exhibit high ionic conductivity have been reported. Therefore, in addition to batteries that use a liquid electrolyte, [23-30] some all-solid-state fluoride-shuttle systems have been demonstrated. [12,31-43] As detailed in these previous reports, multivalent electrochemical reactions occur in such devices, and the devices can potentially deliver high energy densities via conversion-type reactions. Table S1, Supporting Information summarizes the theoretical energy densities of some electrode materials. These values were calculated on the basis of the theoretical voltage and the theoretical capacity, with the theoretical voltage obtained from the change in Gibbs free energy.
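For the capacity term, Faraday's law is sufficient. The hedged sketch below reproduces the 713 mAh g−1 theoretical capacity quoted for FeF3 in the next paragraph and the ~2.4 F− per formula unit inferred later from the 579 mAh g−1 first discharge; the atomic masses are standard values, not taken from the paper.

```python
# Theoretical gravimetric capacity from Faraday's law: Q = n*F / (3.6*M),
# with Q in mAh/g, n electrons per formula unit, and M in g/mol.
FARADAY = 96485.0             # C/mol
M_FEF3 = 55.845 + 3 * 18.998  # g/mol, Fe + 3 F (standard atomic masses)

def capacity_mAh_g(n_electrons, molar_mass):
    return n_electrons * FARADAY / (3.6 * molar_mass)

def ions_from_capacity(q_mAh_g, molar_mass):
    return q_mAh_g * 3.6 * molar_mass / FARADAY

print(f"FeF3 theoretical capacity: {capacity_mAh_g(3, M_FEF3):.0f} mAh/g")  # ~713
print(f"F- shuttled at 579 mAh/g:  {ions_from_capacity(579, M_FEF3):.1f}")  # ~2.4
```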
Among the possible cathode materials for fluoride-shuttle batteries, FeF3 has the advantages of a large theoretical capacity (713 mAh g−1) and low cost. The theoretical gravimetric energy density for a full cell composed of an FeF3 cathode and Mg anode is 1178 Wh kg−1, which is substantially higher than that obtainable from a standard Li+-ion battery. In the case of Li+-ion batteries, FeF3 is considered one of the most promising conversion-type cathode materials. [41-46] In a Li+-ion battery with an FeF3-based cathode, the following reaction occurs and an insulator, LiF, is formed, which subsequently blocks the diffusion of Li+ ions:

FeF3 + 3Li+ + 3e− → Fe + 3LiF

However, in the case of the fluoride-shuttle system, the following simple reaction occurs without the formation of an insulator such as LiF:

FeF3 + 3e− → Fe + 3F−

This lack of insulator formation is possibly an advantage of the fluoride-shuttle system over lithiation when FeF3 is used as a cathode material. Although some fluoride batteries have been studied, FeF3 has not been previously reported as a cathode material in a fluoride battery. In the present study, we report all-solid-state fluoride batteries based on FeF3 as the cathode material. Figure S1a,b, Supporting Information show scanning electron microscopy (SEM) images of the FeF3 powder and Ba0.6La0.4F2.4 (BLF, solid electrolyte) powder, respectively. The FeF3 powder was used after being mechanically milled at a rotation rate of 600 rpm for 12 h. The FeF3 particle size was uniform: the primary particles were ≈100 nm and the secondary particles ≈500 nm. By comparison, the BLF particle size was less uniform; although the primary BLF particles were smaller than 1 μm, secondary particles larger than 10 μm were observed. In the present study, fluorite-type BLF was used as the solid electrolyte. BLF exhibits high F−-ion conductivity with an interstitial-type transport mechanism. [21,22] Figure S2, Supporting Information shows the electrical conductivity of BLF prepared by mechanical milling; the electrical conductivity at 160 °C was 3.8 × 10−5 S cm−1. X-ray diffraction (XRD) patterns for the BLF electrolyte and FeF3 composite electrode are shown in Figure 1a, together with the patterns for FeF3 and BaF2.
FeF3 powder was used after being mechanically milled. For the BLF electrolyte, only a single phase with a fluorite structure was observed, suggesting the formation of a solid solution. The peak positions were shifted because of the doping by La³⁺, which has a smaller radius than Ba²⁺. The broadened peaks in the pattern for the BLF electrolyte are due to the small crystallite size after mechanical milling. In the pattern for the composite electrode powder, only peaks corresponding to BLF and FeF3 were observed, indicating that the composite powder was successfully mixed without a significant side reaction. Figure 1b shows an F K-edge X-ray absorption spectroscopy (XAS) spectrum of the FeF3 composite electrode powder after ball milling. The XAS spectra of FeF3 and BLF are also shown in this figure. The spectra of the composite electrode and pure FeF3 show peaks at ≈684 eV. These peaks are assigned to the transition to mixed Fe3d–F2p unoccupied states in iron fluorides. [45,47] By contrast, the BLF does not absorb in this region, and its spectrum shows strong peaks between 686 and 693 eV. The spectrum of the electrode mixture shows peaks related to both FeF3 and BLF. Figure 1c shows an Fe L-edge XAS spectrum of the FeF3 composite electrode powder, along with the spectra of pure FeF3 and BLF. The strong peaks between 707 and 713 eV are approximately the same in the spectrum of the composite electrode and the pure FeF3. These results indicate that FeF3 is chemically stable after being mixed with BLF and acetylene black (AB) by mechanical milling and that the iron remains in the Fe³⁺ state. Figure 1d,e show a field-emission SEM (FE-SEM) image and an energy-dispersive X-ray spectrometry (EDS) mapping image of the FeF3 composite electrode, respectively. They show that the particle sizes of the BLF and FeF3 are ≈5 μm and ≈500 nm, respectively. The BLF particle size is larger than that before compounding (Figure S1, Supporting Information).
In the present study, the electrochemical performance was evaluated at 160°C because of the resistance of the solid electrolyte. Therefore, we evaluated the stability of FeF3 at high temperatures. Figure S3, Supporting Information shows F K-edge and Fe L-edge XAS spectra of the FeF3 powder after it was heated at various temperatures. The spectra overlap within the investigated temperature range, indicating that FeF3 is stable to 200°C. The thermal stability of the composite electrode was also evaluated. Figure S4a, Supporting Information shows XRD patterns for a sample of the composite powder (BLF–FeF3–AB) after the sample was heated under an Ar atmosphere. At 200°C, no pattern changes were observed, in good agreement with the XAS measurement results in Figure S3, Supporting Information. At temperatures greater than 250°C, peaks due to FeF2 were newly observed, indicating that the FeF3 phase is stable in the composite powder to 200°C but decomposes at higher temperatures. XRD patterns for the pure FeF3 powder samples heated under an Ar atmosphere were also acquired (Figure S4b, Supporting Information). Peaks due to Fe2O3 were newly observed in the pattern for the sample heated at 400°C. Therefore, the phase stability of FeF3 differed between the composite powder and the pure FeF3. Figure 2a shows the discharge-charge profiles for the all-solid-state fluoride battery prepared using an FeF3 electrode. The cell had a Pb/PbF2–SnF2–AB/BLF/FeF3–BLF–AB structure as shown in Figure S5, Supporting Information. To evaluate the cathode performance of FeF3, a bilayer-type counter electrode was used. The initial discharge capacity was 579 mAh g⁻¹. Therefore, 2.4 F⁻ ions were shuttled from the FeF3. The observed capacity was 81% of the theoretical capacity of FeF3. The observed capacity is substantially greater than that reported for a CuF2 electrode or a BiF3 electrode in an all-solid-state fluoride battery. [31,32,34] Thieu et al. reported that a CuF2 electrode delivered a capacity of 360 mAh g⁻¹ in the first discharge, which is 68% of the theoretical capacity (527 mAh g⁻¹). [32] Therefore, both the capacity and utilization of our FeF3 electrode are superior to those for a CuF2 electrode. This result is speculatively attributed to the measurement temperature being slightly higher than that for the CuF2 case (150°C) and to the Pb-based negative electrode material exhibiting better fluorination characteristics than La. Bhatia et al. reported that a BiF3 electrode delivered a capacity of ≈245 mAh g⁻¹ in the first discharge, which is 81% of the theoretical capacity (302 mAh g⁻¹). [34] Therefore, the utilization of FeF3 is similar to that of BiF3. Figure 2b shows the cycling characteristics of the all-solid-state fluoride-shuttle battery with an FeF3 electrode. Although some performance loss is apparent, the device was generally stable for 10 cycles. A discharge capacity of 461 mAh g⁻¹ was retained at the 10th cycle. This cycling stability is also superior to that of a BiF3 electrode or a CuF2 electrode in all-solid-state fluoride batteries. [31,32,34] The overpotential in the second plateau of the discharge profile for an FeF3 cathode in a lithium-conversion-type battery has been reported to be larger than that in the first plateau. [44] However, according to the results for the fluoride battery in the present work, the polarization was smaller in the lower-potential plateau.
In addition, although the temperature conditions differed, the voltage difference between charging and discharging in the second plateau was smaller in the fluoride-shuttle battery than in the lithium-ion battery. This result suggests that the resistance differs depending on the discharge-charge mechanism for lithium-conversion-type and fluoride-shuttle-type batteries. Figure S6, Supporting Information shows discharge-charge profiles at various current densities. With increasing current density, both the discharge and charge capacity decreased. When the current density was 0.24 and 0.16 mA cm⁻², the initial discharge capacity was 309 and 510 mAh g⁻¹, respectively. At 0.24 mA cm⁻², the IR drop associated with the BLF electrolyte was calculated to be ≈500 mV from the ionic conductivity of BLF (Figure S2, Supporting Information). The difference in cell voltage between discharging and charging was ≈1 V at the second plateau, which is at a lower potential. Therefore, a large part of the resistance is related to the IR drop in the BLF solid electrolyte layer. Consequently, the resistance of the battery would be substantially decreased by the incorporation of a solid electrolyte with high F⁻-ion conductivity.
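That ≈500 mV estimate can be reproduced with a one-line calculation. The following is a sketch, not the authors' code; the 800 μm electrolyte thickness is taken from the Experimental Section, and the electrode area cancels out of the result.

```python
# Back-of-envelope IR drop across the BLF electrolyte layer: V = j * t / sigma.
sigma = 3.8e-5   # S/cm, BLF ionic conductivity at 160 C (Figure S2)
t = 800e-4       # cm, electrolyte layer thickness (800 um, Experimental Section)
j = 0.24e-3      # A/cm^2, applied current density

print(f"{j * t / sigma * 1e3:.0f} mV")   # ~505 mV, matching the ~500 mV quoted
```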
Discharge-Charge Mechanism of the FeF3 Cathode for the All-Solid-State Fluoride Battery
To clarify the discharge-charge mechanism for the FeF3 cathode, we conducted ex situ XRD and XAS measurements. The measurement points are indicated in Figure S7, Supporting Information. Figure 3 shows ex situ XRD patterns before and after the discharge-charge measurements. At the initial state, only peaks due to BLF and FeF3 were observed. After discharge to 350 mAh g⁻¹, peaks associated with FeF2 were present. After discharge to −2 V, a peak due to Fe metal was newly observed. After charging to 300 mAh g⁻¹, this peak disappeared and the intensity of the peaks assigned to FeF2 increased. Moreover, the intensity of these peaks decreased after charging to 4 V.
These results indicate that the following reactions occur in the FeF3 electrode:

(Discharge)
FeF3 + e⁻ → FeF2 + F⁻
FeF2 + 2e⁻ → Fe + 2F⁻

(Charge)
Fe + 2F⁻ → FeF2 + 2e⁻
FeF2 + F⁻ → FeF3 + e⁻ (6)

Figure 4a shows ex situ F K-edge XAS spectra of the FeF3 electrode before and after the discharge-charge measurement. Before the electrochemical measurement, absorption by FeF3 was observed at 684 eV (Figure 1b). The intensity of this peak gradually decreased as discharging progressed and then increased as charging progressed. However, the absorption peaks between 686 and 693 eV are mainly due to the BLF, with a small contribution from FeF3. The shape of the spectrum corresponding to the discharged state agrees well with that for BLF because of defluorination of FeF3. Figure 4b shows ex situ Fe L-edge XAS spectra of the FeF3 electrode before and after the discharge-charge measurement. The spectra show two intense peaks at the L3-edge (between 705 and 715 eV) and two doublet peaks at the L2-edge (between 718 and 726 eV). The positions of these peaks shifted to lower energy as discharging progressed and returned to almost their original positions upon charging. As already shown in Figure 1c, the peak positions in the Fe L-edge spectrum of the FeF3 composite powder are the same as those in the spectrum of pure FeF3, suggesting that the valence number at the initial state is Fe³⁺ (i.e., FeF3). Miedema et al. reported that the peak position for FeF3 in the L3-edge spectrum is higher than that for FeF2. [48] Senoh et al. reported that the peak position for FeF2 in the L-edge spectrum is higher than that for Fe. [45] These results indicate that Fe³⁺, Fe²⁺, and Fe metal can be distinguished by their peak positions. The shapes of the reported spectra are similar to those of the spectra after discharging to −2 V and 350 mAh g⁻¹. The energy positions of the L3 peaks for Fe are lower than those for FeF2, and the intensity of the Fe peaks is diminished. These results suggest that redox reactions of Fe³⁺/Fe²⁺ and Fe²⁺/Fe occur during the discharge-charge reaction. This result is in good agreement with the XRD results shown in Figure 3. Figure 4c presents ex situ Ba L-edge and La M-edge XAS spectra of the FeF3 electrode before and after the discharge-charge measurements. The peak positions in both spectra were the same before and after the measurements, suggesting that the BLF electrolyte is electrochemically stable and that the redox couple is Fe³⁺/Fe. When a conversion-type reaction occurs in the FeF3 electrode, the resultant volume change is larger than that associated with a typical insertion-type reaction. Theoretically, the volume change is ≈75% according to Equation (2). Therefore, we assumed that degradation is mainly caused by the breaking of ionic or electronic connections as a result of the large volume change. We examined the morphology of the FeF3 electrode before and after the discharge-charge measurements. Figure S8a, Supporting Information shows EDS mapping images of the FeF3 electrode before the discharge-charge measurements. The battery cell was constructed and heated at 160°C for 2 h; the electrode was then removed from the cell without electrochemical treatment. Carbon was found to be uniformly dispersed, and the positions of Fe and La were clearly separated, indicating that FeF3 and BLF were mixed without significant solid solution formation.
The bright regions of the images correspond to La, which is a heavy element, and the BLF and FeF3 components can be clearly distinguished from the light and dark regions in the SEM images. In addition, after the initial discharge (Figure S8b, Supporting Information), a larger proportion of bright areas is observed. This is reasonable given that the volume of iron fluoride is decreased after discharge because the iron is reduced to Fe metal. On the other hand, the number of cracks increased as discharging progressed and decreased as charging progressed. Figure S9a–e, Supporting Information show SEM images (magnification: 50,000×) with different cutoff conditions for the discharge-charge measurements. At the initial state, FeF3 (dark regions) with a thickness of 100–500 nm is coated with a thin BLF layer (≈70 nm). After discharge to −2 V, FeF3 is still coated with a thin BLF layer, although voids are evident. A similar morphology was observed between the initial state (Figure S9a, Supporting Information) and the cell after charging to 4 V (Figure S9e, Supporting Information).

[Displaced figure caption: SEM images of the FeF3 electrode before and after discharge-charge measurements (magnification: 5000×). a) Before the discharge-charge measurement; the battery cell was constructed and heated at 433 K for 2 h, the cell was then cooled to room temperature, and the electrode was collected. b) After discharging to 350 mAh g⁻¹. c) After discharging to −2 V (endpoint of discharge). d) After charging to 300 mAh g⁻¹. e) After charging to 4 V (endpoint of charge).]
These results indicate that no significant size change of the iron fluoride occurred before and after the initial cycle. Better cyclability might be achieved if the potential window were set so that the battery cycles only between FeF3 and FeF2. Discharge-charge profiles for the conversion-type FeF3 cathode of a lithium battery have been reported, and they showed that a better energy efficiency was observed with only one lithium insertion. [49] Therefore, the effect of the cutoff-voltage conditions will be reported in our future work to investigate the degradation mechanism.
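The ≈75% volume change invoked above for Equation (2) can be checked from molar volumes. This is a sketch, not from the paper; the densities are assumed approximate room-temperature values.

```python
# Volume change for FeF3 -> Fe from molar volumes V_m = M / rho.
M_FeF3, rho_FeF3 = 112.84, 3.87   # g/mol, g/cm^3 (assumed approximate density)
M_Fe,   rho_Fe   = 55.85,  7.87   # g/mol, g/cm^3

shrinkage = 1 - (M_Fe / rho_Fe) / (M_FeF3 / rho_FeF3)
print(f"{shrinkage:.0%}")          # ~76%, consistent with the ~75% quoted
```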
Conclusion
In summary, the electrochemical reversibility of FeF3 has been demonstrated in an all-solid-state fluoride battery for the first time. The FeF3 electrode exhibits a reversible capacity of 579 mAh g⁻¹ at the initial cycle and retains a discharge capacity of 461 mAh g⁻¹ at the 10th cycle. The XRD and XAS results suggest that the FeF3 is first reduced to FeF2 during discharging, and then to Fe. The reverse reaction occurs during charging. SEM imaging reveals that the number of cracks increases as discharging progresses and decreases as charging progresses, and that the volume of the FeF3 electrode changes reversibly. The results presented here indicate that FeF3 is a promising electrode material for fluoride batteries with a large energy storage capacity.
Experimental Section
Battery Assembly: BaF2 (99.9%) was purchased from FUJIFILM Wako Pure Chemical. LaF3 (99.95%) was purchased from Kishida Chemical. FeF3 (99%) was purchased from Strem Chemicals. Pb (99.95%), PbF2 (99%), and SnF2 (99%) were purchased from Sigma-Aldrich. Acetylene black (AB) was purchased from Denka. Mechanical milling was used to synthesize Ba0.6La0.4F2.4 (BLF) from a stoichiometric mixture of BaF2 and LaF3; milling was conducted at a rotation rate of 600 rpm for 12 h under Ar. The ball-to-powder mass ratio was kept constant at 20:1 during this synthesis process, and ZrO2 pots (80 mL volume) and 36 g of spheres (3 mm diameter) were used as milling media. Mechanical milling was performed using a planetary-type mill (FRITSCH Pulverisette 7). BLF was used as the electrolyte, and the cathode was prepared by mixing FeF3, BLF, and AB (FeF3–BLF–AB) in a 6:10:1 weight ratio. The counter electrode contained two layers: a Pb layer and a PbF2–SnF2–AB composite layer. The Pb layer was prepared by compacting Pb powder. The PbF2–SnF2–AB composite layer was prepared by pressing a composite powder containing PbF2, SnF2, and AB; this powder was prepared by mechanically milling a mixture of PbF2, SnF2, and AB combined in a 3:1.4:0.286 weight ratio. First, PbF2 and SnF2 with 10 ZrO2 balls (10 mm diameter) were mixed at a rotation rate of 600 rpm for 24 h under Ar. AB was then added to the obtained powder composed of PbF2 and SnF2, and the resultant mixture was mixed at a rotation rate of 600 rpm for 12 h under Ar.
Electrochemical Procedure and Analyses: Impedance measurements were conducted using a potentiostat/galvanostat (Bio-Logic SP-300) over the frequency range from 7 MHz to 0.1 Hz, with the sample under an Ar atmosphere. The specimens were prepared as 10 mm-diameter, ≈0.6 mm-thick pellets via uniaxial cold-pressing under a pressure of 510 MPa. A thin Pt layer was sputtered onto both sides of each pellet to form ion-blocking electrodes. The resultant pellets were sealed in an HS cell (Hohsen) in an Ar-filled glove box. Each all-solid-state fluoride-shuttle cell was assembled using an insulating cell die (PEEK, polyether ether ketone) sandwiched between two stainless steel rods. The cell was assembled under Ar by pressing the cathode, electrolyte, and counter electrode materials together under an applied pressure of 510 MPa to obtain a 10 mm-diameter disc. A four-layer cell (Pb layer, 380 μm/PbF2–SnF2–AB layer, 370 μm/Ba0.6La0.4F2.4 layer, 800 μm/FeF3–BLF–AB layer, 180 μm) was thus obtained. In the present study, a thick FeF3 layer (180 μm) was used for evaluation of the electrochemical performance. Electrochemical charge-discharge measurements were performed in galvanostatic mode using a discharge-charge cycling apparatus (HJ1020mSD8, Hokuto Denko). Each cell was cycled at 160°C at different current densities (0.08, 0.16, or 0.24 mA cm⁻²) in the voltage range −2 to 4 V. XRD patterns of the powder samples and battery pellets were recorded using a Rigaku Miniflex and a Rigaku RINT-TTRIII equipped with a parallel beam. The surface morphology of the pellets was investigated by field-emission SEM (FE-SEM) using a JEOL JSM-IT700HR/LA. XAS data using soft X-rays were acquired at the BL-12 beamline station of the SAGA Light Source. All experiments were performed without exposure to air, except during SEM observations.
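To illustrate why the measurements were run at 160°C, the pellet resistance implied by the reported conductivity can be estimated as follows. This is a sketch with assumed ideal geometry, not a value reported by the authors.

```python
# R = t / (sigma * A) for the 10 mm diameter, ~0.6 mm thick BLF pellet.
import math

sigma = 3.8e-5                        # S/cm, BLF conductivity at 160 C
t, d = 0.06, 1.0                      # cm, thickness and diameter
A = math.pi * (d / 2) ** 2            # cm^2, electrode area
print(f"{t / (sigma * A):.0f} ohm")   # ~2000 ohm even at 160 C
```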
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
The Danish IBBIS Trials for Sickness Absentees with Common Mental Disorders: A Phase 4 Prospective Study Comparing Randomized Trial and Real-World Data
Introduction: In two randomized controlled trials (RCTs) we tested the efficacy of a novel integrated vocational rehabilitation and mental healthcare intervention, coined INT, for sickness absentees with common mental disorders. The aim was to improve vocational outcomes compared to Service As Usual (SAU). Contrary to expectations, the delivered intervention caused worse outcomes within some diagnostic groups and some benefits in others. In this phase 4 study, we examined the effectiveness of the intervention in real-world practice. Method: In this prospective intervention study, we allocated adult sickness absentees with either depression, anxiety, or adjustment disorder to receive INT in a real-world setting in a Danish municipality. We compared the vocational outcomes of this group to a matched group who received INT as part of the RCTs, after randomization to the intervention group therein. The primary outcome was return to work at any point within 12 months. Results: In the real-world group, 151 participants received INT during 2019. From the randomized trials, 302 matched participants who received INT between 2016–2018 were included. On the primary outcome – return to work within 12 months – the real-world group fared worse (48.3 vs 64.6%, OR 0.54 [95%CI: 0.37–0.79], p = 0.001). Across most other vocational outcomes, a similar pattern of statistically significantly poorer outcomes in the real-world group was observed: a lower number of weeks in work and a lower proportion in work at 12 months (42.3% vs. 58.3% (p = 0.002)). Discussion: The real-world group showed significantly worse vocational outcomes. As in many other studies of complex interventions, implementation was difficult in the original randomized trials and perhaps even more difficult in the less structured real-world setting. Since the intervention was less effective for some groups compared to SAU in the original trial, this negative effect may be even more pronounced in a real-world setting.
INTRODUCTION
Common mental disorders (CMD) like anxiety, depression, and stress-related disorders alone account for 40% of all sickness absence longer than eight weeks [1]. These disorders are associated with much suffering and large societal costs due to production loss and expenses for services and sick leave benefits. Furthermore, sick leave duration is associated with risk of permanent labour market exclusion [2]. For these reasons, much intervention research has focused on vocational outcomes, the most frequently used being sick leave duration, although proportion in work at follow-up is also seen [3]. Many interventions have been studied with heterogeneous results, but according to a comprehensive systematic review [4], the most effective interventions seem to be ones that: a) have more than one component, e.g. both psychotherapy and a work-focused intervention, yielding a complex intervention defined as one "made up of various interconnecting parts", as defined by Campbell et al. [5]; furthermore, they should b) focus on early, graded return to work; and finally they should c) emphasize workplace involvement [4].
The interplay between components in a complex intervention can take many forms, e.g. being interconnected as mentioned above, but it has also been suggested that the components should be integrated [6], and another study of such integration of interventions showed positive results [7].
But such complex or integrated interventions are quite hard to deliver with high fidelity, even in the setting of a randomized controlled trial (RCT), despite often detailed intervention protocols and rigorous attempts to achieve high protocol fidelity [8]. Adding further complexity, the value of RCT results depends on their generalizability to real-world settings [9], and if external validity is low due to even lower fidelity when the rigorousness of the RCT setting is exchanged for a real-world setting, the gap between intervention effects in these domains may be significant [10]. For that reason, there is a growing consensus that any intervention showing significant efficacy in an RCT should subsequently be tested in a so-called "phase IV study" (referring to the RCT as the typical methodology in phase III studies), to determine intervention effectiveness, that is, an intervention's effect in a real-world setting [11]. The study presented in this paper is such a phase IV study, where we compare the effect of a complex intervention we trialed in a previous study to the effect of the same intervention in a real-world setting. The principal intervention we test in this phase IV study is an integrated intervention we previously tested in two randomized controlled trials (RCTs), the IBBIS Trials [12,13].
THE ORIGINAL IBBIS TRIALS
"IBBIS" is a Danish acronym translating to "Integrated Health Care and Vocational Rehabilitation for Sick-Leave Benefit Recipients".Both RCTs included adult participants on sick leave with common mental disorders, with an average baseline sick leave duration of 10 weeks (SD 4).One RCT included absentees with depression or anxiety as their main diagnosis (RCT1) and the other absentees with a stress-related disorder (RCT2).Both studied the effect of the IBBIS Integrated Intervention (INT) by comparing it to Service As Usual (SAU), which in Denmark consist of treatment in General Practice, combined with standard work rehabilitation intervention and case management in the local municipality, and these sector do not cooperate, but in some cases exchanges information unidirectionally from GPs to the municipality during sick-leave -see Figure 1 depicting SAU in left column.Both trials were preregistered on ClinicalTrials.organd in two design papers [14,15].
RESULTS OF THE IBBIS TRIALS
In RCT1 (target group: anxiety and depression), we found that INT showed benefits regarding the probability of being in full-time work at 12-month follow-up (a secondary outcome) but no effect on the primary outcome, return to work measured at 12-month follow-up [16]. In RCT2 (target group: stress-related disorders), INT consistently showed significantly worse vocational outcomes across different measures [17].
In both IBBIS Trials, implementation was suboptimal. A process evaluation study showed diverging norms and goals between the staff groups of the different sectors that were supposed to be integrated, and it was concluded that this hindered the integration of the intervention components [18]. The intervention protocol stipulated that the intervention components should imply alignment between the goals of the healthcare staff and those of the employment consultants delivering the vocational rehabilitation intervention, but instead a goal hierarchy gradually developed and settled during the two-year trial period, with the goals of the latter staff group dominating over those of the former [19]. These implementation issues constituted the largest challenges to the studies' external validity. Either the implementation issues meant that the results of the trial could not be ascribed to the true effects of the IBBIS Integrated Intervention, had it been delivered as intended, or the intervention is so hard to deliver, even in a rigorous RCT setting, that it is unfeasible in the real world.
AIM OF THIS STUDY
In accordance with the above discussion, a process evaluation of the IBBIS Trials led to the conclusion that a complex intervention like INT needs substantial managerial attention to be successfully implemented in a real-world setting [18]. After inclusion in the IBBIS Trials was concluded, the Copenhagen Municipality decided to continue to deliver the IBBIS Integrated Intervention to the population of sickness absentees for which it was intended. This gave rise to the opportunity to study the effectiveness of the intervention in a real-world setting, and hence to quantify any differential from the (negative) efficacy measured in the RCT setting. The aim was to compare the vocational outcomes of sickness absentees who received INT in the IBBIS RCTs with a comparable group of sickness absentees who received INT in a real-world setting (henceforth "real-world-INT") after the RCTs ended.
METHODS
This study was registered before final analyses were performed. The preregistered statistical analysis plan can be found at https://osf.io/3ca5m/.
DATA SOURCES
As in the original RCTs, we retrieved vocational information about the absentees from Danish national registers on social benefits and income (the Danish DREAM register). Information about diagnosis and start date of sick leave was collected by research staff. Covariate data on education, social benefit history, and employment status were retrieved from Statistics Denmark.
RECRUITMENT PROCEDURE, ELIGIBILITY ASSESSMENT AND INCLUSION
The study population in real-world-INT was recruited from the agency in Copenhagen Municipality managing the cases of sickness benefit recipients. The recruitment procedure and inclusion criteria were similar to those in the IBBIS RCTs, except that participants were not randomized but offered inclusion in the intervention group if eligibility was established.
In RCT-INT, participants underwent a randomization procedure before eligibility assessment, which in turn was conducted before receiving INT, whereas in the real-world-INT group, participants did not undergo randomization but were allocated to the intervention if they were found eligible and gave written consent. Yet, in both groups, after recruitment and as part of the eligibility assessment, participants underwent a thorough mental health assessment, called the IBBIS Mental Health Assessment (IBBIS-MHA). The IBBIS-MHA was performed by a psychiatrist or by a mental health professional (a nurse, psychologist, or psychiatric medical resident) supervised by a psychiatrist. The IBBIS-MHA consisted of a clinical interview with a focus on current mental health issues. It started with a pragmatic clinical interview in which the participant was asked about main health issues and about what they experienced as the main cause of sick leave. That part of the interview was followed by a) the semistructured MINI International Neuropsychiatric Interview [20], to ensure a systematic approach to the assessment of main mental health symptom domains; furthermore, to screen for personality disorders it was followed by the clinician-rated b) Standardized Assessment of Personality – Abbreviated Scale (SAPAS) [21] and c) the attention deficit hyperactivity disorder symptom checklist for adults, the Adult Self-Report Scale (ASRS) [22]. The Mini-Mental State Examination (MMSE) was used if dementia was clinically suspected by the assessor [23]. Before the interview, participants filled in the validated Danish version of the Four-Dimensional Symptom Questionnaire (4DSQ), which measures levels of depression, anxiety, distress, and somatization [24]. Assessors had access to the results to guide their clinical assessment. We studied the isolated effects of the IBBIS-MHA in a separate study, since we could not rule out that this procedure could itself influence vocational outcomes. Hence, the outcomes in the SAU group in the IBBIS Trials may not totally reflect a true real-world SAU, and the effect of the IBBIS-MHA was needed to inform the discussion of the external validity of the results of the IBBIS Trials. Due to methodological limitations, no strong conclusions could be drawn from that study, but we saw tendencies towards a negative effect of this assessment per se on vocational outcomes [25]. Before study inclusion, participants in RCT-INT filled in self-report questionnaires that participants in real-world-INT did not, but these data were not available to the assessors and were used solely for study purposes in the RCTs, which is why they were not utilized in this study.
Eligibility criteria
To be eligible for inclusion in the study groups, participants had to be at least 18 years of age, be on sick leave for at least 4 consecutive weeks, and have undergone a mental health assessment with a primary diagnosis of either anxiety, depression, or an adjustment disorder, including exhaustion disorder [26] or stress according to the Four-Dimensional Symptom Questionnaire [24].
STUDY INTERVENTION: THE IBBIS INTEGRATED INTERVENTION (INT)
Both study groups received INT, the IBBIS Integrated Intervention, albeit in RCT versus real-world settings, respectively. The IBBIS Integrated Intervention (INT) consists of IBBIS Mental Healthcare and IBBIS Vocational Rehabilitation, and these two intervention components were integrated – see Figure 1, right column.
INT consisted of the integration of IBBIS Vocational Rehabilitation and IBBIS Mental Healthcare. These intervention components and the methods for integration are described in detail in the RCT study design papers [14,15]. IBBIS Mental Healthcare was a stepped-care intervention with treatment options depending on diagnosis. For participants with stress-related disorders, treatment included predominantly clinical monitoring, stress-coaching or, for those with exhaustion disorders, mindfulness-based stress reduction (MBSR). Treatment of participants with anxiety or depression adhered to the guidelines from the National Institute of Clinical Excellence [27], consisting primarily of psychoeducation, clinical monitoring, psychotherapy, medication or, in more severe cases, a combination hereof. Treatment was delivered by care managers, who were mental healthcare professionals with at least one year of experience and training in general mental healthcare and psychotherapy. IBBIS Vocational Rehabilitation was based on the principles of the Sharp-at-work intervention [28] and Individual Placement and Support [29]. The vocational rehabilitation intervention components were provided by employment consultants, who also functioned as the case managers of the participants' sick-leave benefit cases.
Integration of these two principal intervention components, IBBIS Mental Healthcare and IBBIS Vocational Rehabilitation, was sought through the following activities: I) co-location of staff; II) at least one roundtable meeting between the care manager, the employment consultant, and each participant; and III) interdisciplinary training and supervision for care managers and employment consultants.
Protocol adherence in RCT-INT
In RCT-INT, staff adherence to the study intervention manuals was examined through fidelity interviews. While IBBIS Mental Healthcare was implemented with high fidelity, IBBIS Vocational Rehabilitation and the activities to ensure integration were implemented with only fair fidelity. For example, a central intervention activity like workplace contact was delivered to rather few participants. We saw a slight tendency of increasing fidelity over time during the study period, but we did not perform any statistical test.
CONTROL GROUP SELECTION
The control group, RCT-INT, was a selected subset of the participants randomized to INT in the original IBBIS Trials.
We could not rule out that there were differences between the group referred for eligibility assessment for an RCT and a group referred to the same intervention in a less rigorous real-world setting. Therefore, through propensity score matching [30], we selected two controls per participant in the intervention group. Propensity score matching is a statistical method for finding the most similar participant(s) per case when several variables are available and no exact match can be found, but one wishes to find the most similar other case(s). Matching variables were age, sex, primary diagnosis (as established during the IBBIS Mental Health Assessment), employment status (with or without current employment), work branch category (as defined by Statistics Denmark, e.g. farming, production, teaching, administration, etc.), and social benefit history (how many weeks during the two years before baseline the participant had received social benefit payments). These covariates were selected as matching variables since they are likely to influence vocational outcomes [31]. The general employment rate is also known to impact vocational outcomes among sickness absentees but could not be included as a matching variable; employment rates in society at the time of observation differed between the groups, as the groups received the interventions at different points in time, between which the level of employment in society changed (before and after mid-2018, respectively). Crude employment rates are reported without any adjustment, since such adjustment would entail assumptions about the exact causal effect of specific changes in employment rates on the vocational outcomes of the study population. Post hoc, we conducted sensitivity analyses using municipality as a matching variable, since municipality was also associated with vocational outcomes in the RCTs. However, this halved the sample size of the RCT-INT group, as real-world-INT was only delivered in one of the municipalities involved in the RCTs. For that reason, municipality was not a matching variable in the main analyses.
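For illustration, a minimal sketch of such a 1:2 matching procedure follows. This is not the authors' code; the column names are hypothetical placeholders, and the propensity model follows the random-forest choice described under Statistical Analyses below.

```python
# Greedy 1:2 nearest-neighbour matching on a random-forest propensity score.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

COVARIATES = ["age", "sex", "diagnosis", "employed", "branch", "benefit_weeks"]

def match_controls(df: pd.DataFrame, k: int = 2) -> pd.DataFrame:
    """df['group'] is 1 for real-world-INT cases, 0 for the RCT-INT pool."""
    X = pd.get_dummies(df[COVARIATES])
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    ps = rf.fit(X, df["group"]).predict_proba(X)[:, 1]   # propensity scores

    grp = df["group"].to_numpy()
    cases = np.flatnonzero(grp == 1)
    pool = set(np.flatnonzero(grp == 0))
    keep = list(cases)
    for i in cases:                        # match without replacement
        nearest = sorted(pool, key=lambda j: abs(ps[j] - ps[i]))[:k]
        keep.extend(nearest)
        pool.difference_update(nearest)
    return df.iloc[sorted(keep)]
```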
OUTCOMES
The primary outcome of this study was any return to work at any time before 12-month follow-up. We chose this outcome because of its higher sensitivity compared to the more commonly used time-to-event analyses [3] and because, in one of the RCTs, INT seemed to have an effect around 12-month rather than 6-month follow-up [32]. Return to work was defined as a consecutive period of at least four weeks in which the participant did not receive any social benefit, such as sick-leave benefit, and in which the participant earned a salary through working in a competitive ordinary job. Two secondary outcomes were, in a similar fashion, any return to work but measured at 6- and 24-month follow-up, respectively. Further secondary outcomes were, at 6-, 12-, and 24-month follow-up, respectively, the proportion in work at follow-up and the number of weeks in work between baseline and follow-up.
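Operationally, this definition can be applied to weekly register data as in the following sketch (illustrative only, not the authors' code):

```python
# Detect "return to work": >= 4 consecutive weeks with no social benefit
# and a salary from ordinary competitive employment.
def returned_to_work(weeks, min_run: int = 4) -> bool:
    """weeks: iterable of (received_benefit: bool, earned_salary: bool),
    one entry per week from baseline to follow-up."""
    run = 0
    for received_benefit, earned_salary in weeks:
        run = run + 1 if (not received_benefit and earned_salary) else 0
        if run >= min_run:
            return True
    return False
```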
STATISTICAL ANALYSES
Sample size was not determined through calculations, since the cohort size was determined externally, but before analyses were conducted, we calculated that with the number of participants in the study groups, power would be at least 0.8 for a difference of at least 13 %-points between groups, assuming the proportion of positive cases on the primary outcome would be between 0.2 and 0.65 (as observed in the original IBBIS Trials). We applied the intention-to-treat principle in all analyses. Due to the comprehensive nature of the Danish registers, we did not expect missing data and thus did not plan for this eventuality. Treatment effects were estimated using logistic regression for binary outcomes, including the primary outcome. Poisson regression with bootstrapped non-parametric standard errors was used for numeric outcomes, and 1000 resamples were applied. A significance level of 5% in two-sided tests was used throughout. This applied to p-values and all confidence intervals reported. All tests formally reflected a superiority expectation. Type I error rates were controlled for by selecting and prioritizing tests, and therefore no statistical correction for multiple tests was carried out. No distributional assumptions were made. Covariate linearity was controlled using a likelihood ratio test of simple models with and without an added quadratic term. Baseline was defined as the third week of sickness absence, since all participants received sickness benefits, which are only available after at least four weeks of sick leave. Accordingly, no participants had returned to work before that period. The propensity score was estimated using a random forest, that is, an assumption-free tree-based ensemble model that captures non-linear associations well (i.e., interactions).
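That power figure can be reproduced approximately with standard two-proportion power formulas. The following is a sketch, not the authors' code; the base proportions are assumed example values from the stated 0.2–0.65 range.

```python
# Approximate power for a 13 %-point difference with n1 = 151, n2 = 302.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

h = proportion_effectsize(0.33, 0.20)   # Cohen's h for an assumed contrast
power = NormalIndPower().power(effect_size=h, nobs1=151, ratio=2.0, alpha=0.05)
print(round(power, 2))   # ~0.85 here; closer to 0.75-0.8 at higher base rates
```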
RESULTS

OUTCOMES
On the primary outcome (at least 4 consecutive weeks of return to work within 12 months after baseline), 64.6% of the RCT-INT group had experienced RTW but only 48.3% of the real-world-INT group (OR 0.51 [95%CI: 0.35–0.76], p = 0.001). The real-world-INT group had a lower number of weeks in work (p < 0.001) at all follow-ups. Work status at 6-month follow-up was not statistically different between the two groups, but at both 12- and 24-month follow-up, a smaller proportion of the real-world-INT group were working, 42.3% vs. 58.3% in the RCT-INT group (p = 0.002). Similarly, they had fewer weeks in work since baseline at these two follow-ups, though the difference was not statistically significant at 6-month follow-up. All outcome estimates are presented in Table 2, and Figure 2 shows the proportion in stable work per week after baseline.
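As a quick sanity check (not from the paper), the unadjusted odds ratio implied by the two reported proportions matches the reported estimate:

```python
# Crude odds ratio from the reported return-to-work proportions.
p_real_world, p_rct = 0.483, 0.646
odds = lambda p: p / (1 - p)
print(round(odds(p_real_world) / odds(p_rct), 2))   # ~0.51
```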
Sensitivity Analyses
No differences in results were observed between the main analysis and the sensitivity analyses, in which the 151 participants in the real-world-INT group were matched to 151 participants in the RCT-INT group in the same municipality. Yet, on some measures the p-value was lower, thereby increasing the overall level of statistical significance in favour of the RCT-INT group's outcomes. These results are found in the Results Supplement.
DISCUSSION
In this phase IV study, we tested the effectiveness of the IBBIS Integrated Intervention (INT) when delivered in a real-world setting by comparing it to its effect when delivered in an RCT setting. To the authors' knowledge, this is the first phase IV study in the area of return-to-work interventions for target populations of people on sick leave with common mental disorders. On the primary outcome, we found that a statistically significantly lower proportion, 48.3%, had experienced return to work in the real-world-INT group, compared to 64.6% in the RCT-INT group (p = 0.001). This pattern was consistent across almost all other measures, with real-world-INT showing worse outcomes compared to RCT-INT.
We believe that most likely the greatest part of the difference is a true difference in causal effect between the two intervention delivery settings. In the randomized trials, we had observed moderate implementation issues, namely low fidelity particularly regarding the vocational rehabilitation intervention components, but a tendency of rising fidelity over time. We therefore hypothesized that the effect could be similar in the real-world setting, with fidelity perhaps reaching a level at least sufficient to equal the effect in the RCT setting. Contrary to this hypothesis, we saw that the vocational outcomes of real-world-INT were significantly worse across almost all measures and different periods of follow-up. We believe that a substantial part of the negative outcomes reflects the difference between interventions in real-world vs. RCT settings that is often referred to as the difference between effectiveness and efficacy [33]. One possible explanation could be that the real-world implementation was relatively poorer – a tendency also observed in other comparisons of RCTs and real-world settings [33]. During the RCT, managerial focus on protocol adherence was greater to ensure protocol fidelity and, subsequently, internal validity, but the results in this study support the suspicion that the intervention might be rather unfeasible. We cannot completely rule out that unobserved between-group differences confounded the results. It is, for example, possible that the level of disorder was higher in the real-world group, causing worse outcomes. While we tried to handle this potential confounder by matching diagnoses, levels of illness could still vary within diagnostic groups, because the municipal body had different incentives for referring potential participants during recruitment to the real-world study (after the RCTs). Other studies have previously suggested that RCT participants and real-world individuals may differ [34]. Societal factors that change over time, like e.g. employment rates, could explain some of the findings through confounding, but this is not very likely. During observation of the real-world group, the average baseline unemployment rate was lower, and one would therefore have expected increased return-to-work rates among sickness absentees rather than the decreased rates we observed [35]. Similarly, sickness benefit legislation was reformed some years prior to the study, and that could influence the general return-to-work rates differently in the two time periods in which the two study groups were compared. Yet, the RCTs took place during the gradual implementation of this reform, from 2016 to 2019, and we observed that return-to-work rates after sick leave tended to increase over time, so this cannot explain the decrease in return-to-work rates seen in this study. Finally, sensitivity analyses were performed to adjust for municipality, and though the sample size of the control group was halved, statistical significance increased, making it more likely that the results represent a true difference.
STRENGTHS AND LIMITATIONS
The major strength of this study is the ability to track all participants in a highly consistent manner with few-to-no missing data. Given the comprehensive and precise Danish registers, individual observations are probably quite accurate. However, the use of register-based data also has certain limitations, most notably the fact that the available information only describes utilization of sickness benefits and not level of disease. Several other studies have shown that the utilization of sickness benefits is influenced by a range of contextual factors other than level of disease, for example the degree of employment protection in a country [36] and the level of sick leave benefit control [37]. This study is limited by the fact that the two study groups were separated in time, and we cannot exclude the possibility that time is a confounder, not least since contextual factors might have changed between the study periods. However, as discussed above, this effect alone would most likely produce the opposite of what we have found. Another strength is that the study was publicly preregistered before any analysis was performed, thus reducing the risk of Type 1 errors. While not adjusting for multiple tests could be a limitation, it is negligible in this study since all findings of highly statistically significant differences favour only one of the interventions. Also, all tests support the same overall hypothesis of treatment effects on vocational recovery.
CONCLUSION
This study has a number of implications. First and foremost, it highlights that INT is either non-feasible or ineffective for common mental disorders on a group level. While the original IBBIS Trial targeting people with anxiety and depression tended to show some positive effect in the group of people with anxiety, this effect was not seen consistently across outcome measures, and this study casts further doubt on the effectiveness of the intervention, specifically when it is translated into a real-world setting. Assuming that interventions are usually implemented with higher protocol fidelity in RCTs than in real-world settings, this study confirms that the efficacy of a complex intervention like INT requires a high level of managerial control, as previously documented in the process evaluation of the original RCTs [18]. While INT may have positive effects that are not observed in this study, for example on well-being, implementing INT in a real-world setting as was done here will most likely not improve vocational outcomes for the common mental disorder population as a whole.
ADDITIONAL FILE
The additional file for this article can be found as follows: • Results Supplement. This supplement contains the results of the sensitivity analysis. DOI: https://doi.org/10.5334/ijic.7562.s1
ETHICS AND CONSENT
The original trials were registered at https://www.clinicaltrials.gov/ (#NCT02885519 and #NCT02872051). Like these two trials, the present study was evaluated and approved by the Regional Ethics Committees of the Capital Region (#H-16015724) and the Danish Data Protection Agency (#RHP-2016-006). A research team member informed participants about the objective of the study and the implications of participation, and all participants gave oral and written consent before enrolment.
Figure 1
Figure 1 Composition of intervention components. "IBBIS" is a Danish acronym translating to "Integrated Health Care and Vocational Rehabilitation for Sick-Leave Benefit Recipients".
Figure 2
Figure 2 Proportion in stable work, per week. Red line: real-world-INT; greenish line: RCT-INT.
In this study, 151 participants gave written consent to receive real-world-INT. They were included from August 2018 to January 2019. From the group of 416 participants who were randomized to RCT-INT in 2016–2018, we identified and included 302 participants. Baseline characteristics of the included participants are shown in Table 1. After matching, variables were balanced. Most participants were female (>70%) and approx. 43 years of age (SD ~10–11). In the two years preceding the index sick leave, participants had ~16 weeks (SD ~10) of sick leave and ~7–9 weeks (SD ~16–19) of unemployment. The average unemployment rate was 3.8% during the inclusion of the real-world-INT group and 4.2% for RCT-INT.
Table 2
Vocational outcomes. +: primary outcome. Ratio estimates are odds ratios for the binary outcomes and rate ratios for the rate of weeks in employment.
Microstructure and mechanical properties of Mn-Cu based damping alloy fabricated by laser melting deposition
The M2052 (Mn-20Cu-5Ni-2Fe, at%) damping alloy was prepared by laser melting deposition (LMD), and its mechanical properties and microstructure were studied. Columnar dendrites penetrating multiple cladding layers appeared in the as-deposited sample, and the dendrite size tends to coarsen with increasing laser power. Microsegregation and martensitic transformation occur in the as-deposited M2052 alloy. The tensile strength of the samples, both in different directions under the same process and in the same direction under different processes, exceeds 500 MPa in all cases. The fracture surfaces exhibit an intergranular fracture mode. This work shows great potential for fabricating Mn-Cu based damping alloys by LMD technology.
Introduction
Mn-Cu based alloys are structural-functional integrated materials with good mechanical properties and high damping capacity. The micro-twin interfaces generated by martensitic transformation in the alloy move reversibly under periodic stress, causing a phase lag between strain and stress. The vibration energy is then dissipated by conversion into heat, which achieves the damping effect [1,2]. Yin F X developed the commercial M2052 (Mn-20Cu-5Ni-2Fe, at%) alloy, and it has been widely used [3].
At present, Mn-Cu based damping alloys are mainly fabricated by casting [4]. There are some limitations in the casting process. High-manganese damping alloys have poor fluidity and oxidize easily at high temperature. Vacuum induction melting equipment cannot prepare high-manganese damping alloys in large volumes and batches. The process of forging and hot rolling is complex and lengthy, so it cannot respond quickly to design requirements. Laser melting deposition (LMD) technology uses a high-power laser to melt synchronously delivered metal powder and builds up the required near-net-shape three-dimensional parts layer by layer without a mold [5]. The fabrication of Mn-Cu based damping alloys by LMD therefore has urgent practical relevance and broad research prospects. Some researchers have prepared M2052 alloy by selective laser melting (SLM) and explored the influence of forming and heat treatment processes on damping and mechanical properties [6–8]; however, no research on Mn-Cu based high damping alloys fabricated by LMD has been reported. In this study, M2052 high damping alloy is fabricated by LMD technology, and the mechanical properties and microstructure under different process parameters are examined. The results have important practical significance for promoting the application of LMD-fabricated high damping alloys in aerospace, precision instruments, and other fields.
Materials and methods
Vacuum induction melting gas atomization (VIGA) prealloyed M2052 alloy powder was used for preparing the test specimens. The morphology of the M2052 alloy particles is shown in Fig. 1(a), and the distribution of powder sizes is shown in Fig. 1(b). Most of the particles were spherical, and satellite balls can be seen on the surfaces of the particles. The M2052 alloy powder has a typical size of 188.0 μm within the 111.6–280.9 μm range, which is suitable for LMD.

Fig. 1 (a) Particle morphology of M2052 alloy powder, (b) particle size distribution of M2052 alloy powder.

The LMD system utilized in this research consists of a 3 kW semiconductor laser system, a YC52 dedicated coaxial feeding head, a GTV powder delivery system, and a 4-axis CNC motion stage. The experiment was carried out in an atmosphere-protection box, and the O content was controlled to less than 500 ppm. Argon was used as the powder-feeding protective gas and carrier gas. The wavelength of the laser beam is 1060 nm, and the diameter is 3 mm. Table 1 shows the parameters for preparing the M2052 samples, and the as-deposited sample is shown in Fig. 2(a). A 25 mm thick 0Cr13Ni4Mo alloy plate was chosen as the substrate, and its surface was cleaned with sandpaper and acetone. Fig. 2(b) shows the tensile testing sample, and Fig. 2(c) shows a sketch of the tensile testing sample. M2052 samples with dimensions of 85×4×95 (x×y×z) mm, 85×4×45 (x×y×z) mm, and 85×4×45 (x×y×z) mm prepared by LMD are shown in Fig. 2. The microstructure of the as-deposited test samples was studied with a FEI Tecnai G2 F20 S-Twin transmission electron microscope (TEM) equipped with an energy-dispersive spectrometer (EDS), a JSM-6510 scanning electron microscope (SEM), and an Axiovert 200 MAT optical microscope (OM). The samples for OM and SEM observation were etched with a 25 ml H2O + 20 ml HCl + 5 g FeCl3 solution.
Mechanical properties of the as-deposited M2052 samples were characterized by tensile testing, and the results are the average of 3 samples. An NT100 electronic tensile testing machine was used for tensile testing at room temperature with a loading speed of 0.5 mm/min. Fracture surfaces of the tensile samples were studied by SEM.

Results and discussion

Fig. 3 shows the microstructure of the as-deposited M2052 thin-wall samples on the XOY section under different processes. The microstructure consists of randomly distributed dendrites, and the size of the microstructure tends to coarsen with increasing laser power. Therefore, the microstructure of the Mn-Cu based alloy can be regulated by controlling the LMD process. Fig. 4 shows the microstructure of the as-deposited M2052 thin-wall sample produced by Process I on the XOZ section at different magnifications. Columnar dendrites penetrating multiple cladding layers appeared in the sample. Zhao C Y et al. [9] also found columnar dendrites in Mn-Cu binary alloys fabricated by SLM. Based on classical solidification theory, the morphology of the primary grains is mainly determined by the solidification rate and the temperature gradient. During grain growth, solute atoms are enriched at the front of the solid-liquid interface, and constitutional supercooling is then formed through microsegregation. Due to the high solidification rate and temperature gradient, columnar grains dominate the microstructure in the directed energy deposition (DED) process [10,11]. The characteristic size of the columnar grains is controlled by the solidification conditions and is stable only within a certain temperature gradient range. At a sufficiently low temperature gradient, the primary crystal axis develops and grows a secondary crystal axis, and at a still lower temperature gradient it grows a tertiary crystal axis and forms dendrites.

EDS mapping results are shown in Fig. 5(a), and it can be seen that microsegregation occurs in the as-deposited M2052 alloy. Mn and Fe are enriched in the dendrite cores, and Cu and Ni are enriched in the interdendritic regions. The solid solubility of Fe in Mn is 100%, while Fe is almost insoluble in Cu, which is consistent with the experimental results. Twins can be seen in Fig. 5(f), indicating that martensitic transformation has occurred in the as-deposited M2052 alloy. EDS results at different sites in Fig. 5(a) are shown in Table 3. The compositions are consistent with the element distributions shown in Fig. 5(b)–(e). It can also be seen from Table 3 that the Mn content in the Mn-rich area is as high as 95.80%, indicating that the alloy underwent spinodal decomposition during LMD, which provides the conditions for the formation of martensitic twins.

Fig. 6 shows the tensile properties of the as-deposited M2052 alloy. The ultimate tensile strength (UTS) of the X-I specimens is 541±4 MPa and that of the Z-I specimens is 527±3 MPa. The elongation of the X-I specimens is 4.5±0.4%, and the elongation of the Z-I specimens is 4.0±1.1%. The tensile strength parallel to X is slightly higher than that parallel to Z, while the elongation along the different directions is almost the same. Along the Z tensile direction, the tensile strengths for processes I, II and III are basically the same, at 527±3 MPa, 520±30 MPa and 523±31 MPa, respectively. With increasing laser power, the elongation tends to increase, at 4.0±1.1%, 4.2±0% and 6.2±0.4%, respectively.
The mechanical properties of the as-deposited samples can be further adjusted by post heat treatment [12–14], and the related details need further study.

Fig. 6 Tensile test results of as-deposited M2052 samples. X indicates that the tensile direction is parallel to X; Z indicates that the tensile direction is parallel to Z.

Fig. 7 shows the tensile fracture surfaces along the X and Z directions of the sample prepared by process I. Since the X direction is perpendicular to the epitaxial growth direction of the dendrites, it can be seen from Fig. 7(a) that the sample breaks along the dendrite boundaries. As the Z direction is parallel to the epitaxial growth direction of the dendrites, no dendrite-boundary morphology can be seen. The characteristics of intergranular fracture can be seen in both Fig. 7(b) and (d).
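For reference, the constitutional-supercooling argument used above to explain the columnar-dendritic growth can be stated compactly; this is standard solidification theory rather than a result of this paper. A planar solidification front remains stable when G/R ≥ ΔT0/D_L, where G is the temperature gradient at the interface, R the solidification rate, ΔT0 the equilibrium freezing range of the alloy, and D_L the solute diffusivity in the liquid. When G/R falls below this threshold, a constitutionally supercooled zone forms ahead of the interface and growth becomes cellular and then dendritic; the high solidification rates and steep but spatially varying gradients in LMD therefore favour the columnar dendrites observed here.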
Conclusion
Based on the results and discussions presented above, the following conclusions are obtained: (1) Mn-Cu based damping alloy thin-wall samples can be fabricated by LMD. Microsegregation and martensitic transformation occur in the as-deposited M2052 alloy. Columnar dendrites penetrating multiple cladding layers appeared in the as-deposited sample, and the dendrite size tends to coarsen with increasing laser power.
(2) The tensile strength of the as-deposited samples fabricated by LMD exceeds 500 MPa in all cases. The fracture surfaces exhibit an intergranular fracture mode. This work demonstrates great potential for manufacturing Mn-Cu based damping alloys by LMD technology.
iNOS‐inhibitor driven neuroprotection in a porcine retina organ culture model
Abstract Nitric oxide plays an important role in the pathogenesis of various retinal diseases, especially when hypoxic processes are involved. This degeneration can be simulated by incubating porcine retinal explants with CoCl2. Here, the therapeutic potential of the iNOS-inhibitor 1400W was evaluated. Degeneration through CoCl2 and treatment with 1400W were applied simultaneously to porcine retinal explants. Three groups were compared: control, CoCl2, and CoCl2 + iNOS-inhibitor (1400W). At days 4 and 8, retinal ganglion cells (RGCs), bipolar cells, and amacrine cells were analysed. Furthermore, the influence on glial cells and different stress markers was evaluated. Treatment with CoCl2 resulted in a significant loss of RGCs already after 4 days, which was counteracted by the iNOS-inhibitor. Expression of HIF-1α and its downstream targets confirmed the effectiveness of the treatment with 1400W. After 8 days, the CoCl2 group displayed a significant loss of amacrine cells, and a drastic reduction in bipolar cells was also observed, which was prevented by 1400W. The decrease in microglia could not be prevented by the inhibitor. CoCl2 induces strong degeneration in porcine retinae by mimicking hypoxia, damaging certain retinal cell types. Treatment with the iNOS-inhibitor counteracted these effects to some extent by preventing the loss of retinal ganglion and bipolar cells. Hence, this inhibitor seems to be a very promising treatment for retinal diseases.
The physiology of the porcine retina is very similar to that of the human retina. 6 The retina is known to be extremely sensitive to fluctuations in oxygen levels, and hypoxia is known to cause the development of retinopathy and retinal degenerative diseases. Common to all retinal degenerative diseases is the deterioration of the retina caused by the progressive degeneration and death of the different retinal cells. For example, there is evidence for a causal link between oxidative stress and age-related macular degeneration (AMD). [7][8][9][10] Several publications indicate that oxidative stress and ischaemia, an early event which occurs under the high ocular pressure present in many forms of glaucoma, induce retinal ganglion cell (RGC) damage. [11][12][13][14][15] Vascular occlusions of the retina, including arterial and venous obstructions, are among the most frequent causes of vision loss. 16 During ischaemia of the inner retina, a large number of different cell types are affected; of these, the RGCs represent the most sensitive population and usually die first. 17 As with all neurons, regeneration is not possible. Even worse, the preservation of damaged cells is very difficult due to the environment and the signals generated by the surrounding tissue (eg glial cells). Hence, there is an urgent need to develop new and more effective therapeutic strategies to combat these devastating diseases. In order to find new treatment approaches for these diseases, models with which the pathophysiology can be simulated are necessary. Hence, an ex vivo model for retinal hypoxia in the pig retina was developed. 11 The treatment of retinal explants with cobalt chloride (CoCl2) induces degeneration in the target tissue corresponding to the clinical picture of ischaemic retinopathies. In the study presented here, the protective effect of the inducible nitric oxide synthase (iNOS) inhibitor 1400W was investigated. Cytokine-inducible nitric oxide synthase is an immune regulator in the retina and is mainly found in Müller cells and in the retinal pigment epithelium. 18 iNOS is induced under pathological conditions by endotoxins, inflammation, and cytokines and causes pathophysiological reactions leading to optic nerve and retinal degeneration. 8 It is involved in phagocytosis during infectious and ischaemic processes. Once induced, iNOS produces large amounts of nitric oxide (NO). 19 Nitric oxide is an essential signalling molecule, which plays a role in neurotransmission, host cell defence, and vasodilation. 20,21 There are three isoforms of nitric oxide synthase (NOS), the enzyme that produces NO: the neuronal, immunologic, and endothelial isoforms. The first two are present in the retina. 22 The immunologic isoform is not constitutively expressed and requires induction, usually by immunologic activation; calcium is not necessary for its activation as it is for the other two forms. 23 A pathophysiological increase of NO through iNOS has major effects in all tissues, but especially in neuronal tissue like the retina. NO mediates many of the destructive effects of interleukin (IL)-1 in inflamed tissues. NO has been reported to activate matrix metalloproteinases, 24 inhibit collagen synthesis, and induce retinal apoptosis. 22,25 The resulting molecules nitrogen dioxide (NO2), nitrite, peroxynitrite, and free radicals are responsible for the retinal degeneration that occurs in glaucoma, ischaemic retinopathies, and AMD. 19
The inhibition of iNOS has been researched for years in cancer therapy and has also made its way into ophthalmology. [25][26][27][28] Here, we present a study investigating the effect of the iNOS-inhibitor 1400W on retinal cells in the CoCl2 degeneration model.
This iNOS-inhibitor had a neuroprotective effect on neuronal cells in the CoCl2-induced hypoxic degeneration model.
For this study, a cultivation time of 4 and 8 days was chosen.
Retinal explants were exposed to CoCl2 from day 1 to day 3, for 48 hours (300 mmol/L; Figure 1). At the beginning of the degeneration, treatment with 500 µmol/L of the iNOS-inhibitor 1400W (Merck Millipore) was started simultaneously and lasted 72 hours, until day 4. In preliminary studies, shorter time periods were also investigated, but they did not achieve the desired effects. Control retinae were cultivated continuously at 37°C without any treatment. The medium was exchanged completely on days 0, 1, 2 and 3. In addition, 50% of the medium was exchanged on day 6. On days 4 and 8, samples were frozen for the subsequent analyses, immunohistochemistry and quantitative real-time PCR (qRT-PCR).
| Histology
Fixation of retinal explants was performed for 15 minutes using 4% paraformaldehyde. Each explant was cryo-protected with 15% sucrose for 15 minutes and 30% sucrose for 30 minutes, and then frozen in liquid nitrogen. Slices of 10 µm thickness were cut using a cryostat.

FIGURE 1 Procedure of the 1400W treatment in the CoCl2 degeneration model. CoCl2, cobalt chloride; IHC, immunohistochemistry; iNOS-inh., iNOS-inhibitor 1400W
| Immunohistology
Retinal cross-sections were pre-incubated for one hour with blocking buffer containing a 0.1%-0.2% PBS/Triton X-100 mixture (Merck Millipore) and 10%-20% normal donkey serum (Dianova). Primary antibodies (Table 1) were diluted in the blocking buffer, and the slices were incubated overnight at room temperature. On the next day, the retinal cross-sections were incubated with fluorescence-labelled secondary antibodies diluted in the same blocking buffer (Table 1).
| Immunohistological examination
Six retinal slices per explant were used for the evaluation. In the end, 24 masked images were counted for each staining. Cells were counted as positive when the specific marker (RNA-binding protein with multiple splicing [RBPMS], calretinin, or protein kinase C alpha [PKCα]) was co-localized with DAPI. The total microglia population was evaluated by counting all ionized calcium-binding adapter molecule 1 (Iba1+) and DAPI+ cells. Microglia were counted as active when Fcγ-receptor (Fcγ-R+) signals were additionally seen.
All cell numbers are given in cells/mm.
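As a simple illustration of how such counts translate into the reported cells/mm densities, the following sketch averages per-image counts over a section length; the counts and the 1.5 mm section length are made-up values, not the study's raw data.

```python
import statistics

def linear_density(counts, section_length_mm):
    """Convert per-image marker+/DAPI+ cell counts into cells/mm and
    return the mean and the standard error of the mean (SEM)."""
    densities = [c / section_length_mm for c in counts]
    mean = statistics.mean(densities)
    sem = statistics.stdev(densities) / len(densities) ** 0.5
    return mean, sem

# Hypothetical RBPMS+/DAPI+ counts from six masked images of one explant:
counts = [48, 52, 45, 50, 47, 51]
mean, sem = linear_density(counts, section_length_mm=1.5)
print(f"{mean:.1f} ± {sem:.1f} cells/mm")
```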
| Quantitative real-time PCR
The expression of cell-specific markers, such as parvalbumin (PVALB), was analysed by qRT-PCR.
| Statistical analysis
For the immunohistological data, ANOVA followed by Tukey's post-hoc test was applied to analyse differences between groups (Statistica, V 12). Accordingly, the qRT-PCR data were also analysed using ANOVA followed by Tukey's post-hoc test (GraphPad Prism 8). For all statistical tests, significance with respect to the control group is indicated using the following symbols and significance levels: *P < .05; **P < .01; ***P < .001.
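The same two-step analysis (one-way ANOVA followed by Tukey's post-hoc test) can be reproduced outside Statistica or GraphPad Prism. The sketch below uses SciPy and statsmodels on made-up cell densities for the three groups; the numbers are illustrative only.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical RGC densities (cells/mm) per group; not the study's raw data.
control = [30.1, 32.5, 31.0, 33.2]
cocl2   = [19.5, 18.2, 20.1, 19.8]
treated = [28.0, 29.5, 27.1, 28.9]  # CoCl2 + iNOS-inhibitor 1400W

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, cocl2, treated)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Tukey's post-hoc test for the pairwise comparisons.
values = np.concatenate([control, cocl2, treated])
groups = ["control"] * 4 + ["CoCl2"] * 4 + ["CoCl2+1400W"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```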
| Effect of 1400W on oxidative stress in retinal organ cultures
First, it was examined whether the iNOS-inhibitor exerts its effect in the retinal organ cultures, by testing the protein and mRNA expression of the hypoxia marker HIF-1α. As previously described, induction with CoCl2 leads to an increase in oxidative stress and activates the transcription of HIF-1α. 3,4 This is a specific oxygen-sensitive subunit that regulates the activity of the transcription factor HIF-1, which increases after ischaemia and can either promote or prevent neuronal survival.
Histologically, an HIF-1α signal could be observed after 4 days in the CoCl2 group, which seemed to be only slightly reduced under treatment with the iNOS-inhibitor 1400W (Figure 2A). In the 4-day groups, no significant changes in the HIF-1α mRNA level were observable (Figure 2B). After 8 days of cultivation, the mRNA expression of HIF-1α was increased 1.9-fold (P = .061) in the CoCl2 group. 1400W significantly reduced this mRNA expression (P = .037; Figure 2B).
| Influence of 1400W on iNOS and HSP70 expression
HIF-1 is known to induce the transcription of more than 60 genes, including vascular endothelial growth factor (VEGF; data not shown) and iNOS. In order to check whether this is the case in our model, we also analysed the mRNA expression of these markers.
Investigation of the mRNA expression of iNOS revealed no alterations in the CoCl2 group and a non-significant twofold reduction of the mRNA in the 1400W-treated group after 4 days. At the later time point, a significant (fourfold, P = .0036) mRNA increase induced by CoCl2 was observed, which was prevented by 1400W (P = .0085; Figure 3A).
After 4 days of cultivation, a significant loss of RGCs was observed in the CoCl2-treated retinae. This loss could be counteracted by treatment with 1400W: in the retinae of the treatment group, significantly more RGCs were detected compared with the untreated group (CoCl2 + iNOS-inh.: 33.9 ± 2.2 cells/mm, P = .021), and no significant difference from the control group was noticed (P = .109). After 8 days of cultivation, a significant decline of the RGCs in the CoCl2-treated retinae was also observed (control: 31.7 ± 2.0 cells/mm; CoCl2: 19.4 ± 0.9 cells/mm, P = .0002).
Treatment with the iNOS-inhibitor significantly reduced this effect, as these retinae contained significantly more RGCs than the untreated ones (CoCl 2 + iNOS-inh.: 28.5 ± 1.8 cells/mm, P = .002). Furthermore, there was no significant difference in the number of RGCs between the treatment and the control group (P = .374; Figure 4B).
| CoCl2 induces irreversible degeneration of microglia and decreases their activity
Immunohistochemical staining was used to examine the total population of microglia in the retina (anti-Iba1, red; Figure 5A). Activated microglia exhibited a strongly increased expression of the Fcγ-receptor; thus, all Iba1+ and Fcγ-R+ cells (green) were evaluated as active microglia. As previously observed, 3 the addition of CoCl2 triggered a significant loss of microglia, which could not be prevented by the 1400W treatment (Figure 5B,C).
In addition, the relative mRNA expression of CD11b, another microglia marker, and CCL2, a CC chemokine that regulates the activation and recruitment of macrophages, was tested. Here, a down-regulation of immunocompetent cells was also clearly evident. After 8 days of cultivation, there was still a reduction of CD11b mRNA, which was not significant (P = .0512; Figure 5D), and there were no differences in CCL2 mRNA expression (Figure 5E).
FIGURE 3 mRNA expression of the HIF-1α target genes. A, CoCl2 had a significant effect on iNOS mRNA expression after 4 d, and the additional treatment with 1400W only led to a small decrease in mRNA. After 8 d, CoCl2 induced a fourfold increase in iNOS mRNA expression, which could be prevented by the 1400W treatment. B, After 4 d, a massive increase in the mRNA expression of HSP70 was observed in the CoCl2 retinae, which was significantly reduced by treatment with 1400W. After 8 d, a significantly increased expression was still observed, but not as pronounced as at the previous time point. However, the protective effect of the inhibitor was still detectable after 8 d. All data are shown as mean ± SEM; **P < .01
| No rescue of amacrine cells through iNOS-inhibitor treatment
The degenerative influence of CoCl2 and the potential protective effect of 1400W on amacrine cells were investigated using immunohistochemistry and relative PVALB mRNA expression. The loss of calretinin+ amacrine cells could not be prevented by the inhibitor (CoCl2 + iNOS-inh.: 5.1 ± 3.6 calretinin+ cells/mm, P = .001). Also, no significant difference in relative PVALB expression could be measured between the groups after 4 days (Figure 6C). At day 8, a 2.8-fold decrease in relative PVALB expression was noted in the CoCl2 group compared with the controls (P = .0518), which could not be eliminated by treatment with the iNOS-inhibitor (4-fold decrease, P = .0265; Figure 6C). CoCl2 also induced a loss of bipolar cells, which was prevented by the 1400W treatment (45.5 ± 7.1 PKCα+ cells/mm; P = .0017; Figure 7B).
FIGURE 4 Rescue of retinal ganglion cells after CoCl2-induced degeneration. A, Representative images of the immunohistological staining. RGCs were stained with an antibody against RBPMS (red) and cell nuclei with DAPI (blue). A significant loss of RGCs in the untreated degeneration group (CoCl2) was observed over the cultivation periods of 4 and 8 d. B, After 4 d, neuroprotection of the RGCs was observed through treatment with the iNOS-inhibitor compared with the CoCl2 group. Even after 8 d of cultivation, a protection of the RGCs by 1400W could be noticed. The retinae of the treatment groups contained significantly more RGCs than the untreated retinae. GCL, ganglion cell layer; IPL, inner plexiform layer. Scale bar = 20 µm. All data are shown as mean ± SEM; *P < .05; **P < .01; ***P < .001

FIGURE 5 CoCl2 degeneration irreversibly reduces the microglia. A, Microglia were stained with anti-Iba1 (red) on days 4 and 8 of cultivation. Fcγ-R (green, arrows) in combination with Iba1 served as an activity marker of the microglia. Cell nuclei are shown in blue. B, The addition of CoCl2 triggered a significant loss of microglia in comparison with controls at both time points. In the 1400W-treated group, significantly fewer microglia were also present than in the control. C, In addition, the number of activated microglia was significantly lower in the CoCl2 group than in the control group after 4 and 8 d. Again, treatment with the iNOS-inhibitor had no protective effect. D, The relative CD11b mRNA expression was also significantly reduced in the CoCl2 group after 4 and 8 d of cultivation. The 1400W treatment did not result in an improvement compared with the control at either time point. E, The analysis of the relative CCL2 mRNA expression showed that it was significantly reduced after 4 d in the retinae of the CoCl2 group. Again, the mRNA expression could not be altered by 1400W. After 8 d of cultivation, no differences between the groups were observed. GCL, ganglion cell layer; INL, inner nuclear layer; IPL, inner plexiform layer; ONL, outer nuclear layer; OPL, outer plexiform layer. Scale bar = 20 µm. All data are shown as mean ± SEM; **P < .01; ***P < .001
| DISCUSSION
The degeneration processes of many retinal diseases have not been fully investigated yet. In order to understand the pathological changes, reliable models are needed in which eye diseases can be simulated. In this study, we present a promising option in which not only the degeneration process in retinal tissue can be simulated in a straightforward and standardized way, but drug therapy testing can also be performed. The advantages of these ex vivo cultures are obvious: the anatomy of pig eyes is morphologically and physiologically much more similar to that of human eyes than that of rodents. 6 In addition, the complex structure of the retina is preserved, and the reproducibility is much higher, as a larger number of samples can be obtained. 2,30 Retinopathy is the main cause of blindness and visual impairment in people of all ages. The pathogenesis of retinopathy is caused by numerous factors. Considering only the 'hypoxic factors', these include changes that contribute to oxidative stress, like increased nitric oxide and superoxide production, changes in the expression of various isoforms of nitric oxide synthase, or the endogenous antioxidant system. 31 Hypoxia is a main trigger of the pathogenic mechanism in retinal diseases. This is a multifactorial, dynamic process involving oxidative stress, inflammation, and cell death as well as the activation of regenerative mechanisms dependent on the hypoxia-inducible transcription factor HIF-1α. 32,33 HIF-1α is one of the key regulatory components of the cell's hypoxia response. CoCl2, similar to hypoxia, prevents the degradation of the α-subunit of the hypoxia-inducible factor and thus mediates its stabilization. 40 With our CoCl2-induced retinal degeneration model, we have already proven the neuroprotective effect of hypothermia. 4 Here, we specifically examined the influence of the inhibition of iNOS on the course of degeneration.
CoCl 2 induced hypoxia led to a significant loss of RGCs after 4 and 8 days, which was counteracted by treatment with 1400W.
As described before, incubation with CoCl2 induces apoptosis, especially in RGCs. 3,41 We have already shown that CoCl2 not only mimics hypoxia by stabilizing HIF-1α, but also leads to an elevated ROS level by disrupting the mitochondrial respiratory chain. 4,33 The mechanism behind the 1400W-mediated protection of RGCs is possibly based on reduced NO production and thus on the prevention of apoptosis. In our model, it could be observed that treatment with the iNOS-inhibitor significantly reduced iNOS and HIF-1α mRNA expression. 42 It is known that hypoxia induces HIF-1α and its target genes, such as VEGF and iNOS, in many tissues. 43 The pathophysiological accumulation of these factors has been associated with neuronal death under hypoxic-ischaemic conditions. Moreover, their overproduction leads to increased extracellular accumulation of glutamate and inflammatory cytokines, which damage the neurons. 44 Another marker of cellular stress is HSP70, whose expression can be induced by HIF-1α. 45 In CoCl2-stressed retinae, the mRNA expression of HSP70 was strongly elevated. 46 HSPs are chaperones that are up-regulated during cellular stress. Their task is to prevent protein misfolding and protein aggregation. Thus, HSPs play an important role in the accumulation and function of HIF-1α. 47 Müller cells, in which iNOS is mainly found, can be activated in vitro, and they may therefore be a major source of iNOS expression.
Furthermore, other cell types, such as amacrine, horizontal, bipolar, and microglial cells, contribute to NO production during ischaemic proliferative retinopathy. 48 Microglia are an essential mediator of neuroinflammation in many neurological disorders and are susceptible to HIF-1α. 49 Likewise, there are reports describing that, besides oligodendrocytes, microglia are the glial cell type most susceptible to hypoxia 50 and extremely sensitive to their microenvironment. 44,51 We observed the same effects in our ex vivo model (Figure 5). Incubation with CoCl2 has massive degenerative effects on microglia. 3 The inhibition of iNOS is not an obvious way to increase the microglia number in this case. Based on the type of cultivation, with just a piece of the retina and no interaction with the optic nerve or the retinal pigment epithelium, the microglia are shifted towards the pro-inflammatory M1 subtype. iNOS is a marker that is only produced by M1 microglia/macrophages. 52 The reduction of iNOS inhibited the microglia. Furthermore, high levels of VEGF can reduce the number of M1 microglia as well, as has already been shown in an ischaemic rat brain model. 53 Therefore, the treatment with the iNOS-inhibitor had no beneficial effect on the microglia number, in contrast to the already published treatment with hypothermia. 4 Other publications describe a balance between harmful and protective factors in the retina after hypoxia. It is therefore conceivable that microglia react early to hypoxic stress but are down-regulated after 4 or 8 days to protect the retina. 44,54,55 The inner layers of the retina are known to be most sensitive to hypoxic challenges, whereas the outer retina is more resistant to hypoxic stress. 56,57 Investigations of other cell types of the inner retina revealed that CoCl2 led to a loss of calretinin-positive amacrine cells and PKCα-positive bipolar cells after 8 days, which has also been described before. 3 Furthermore, increased production of NO is believed to mediate neuronal injury caused by glutamate acting on NMDA receptors. 44,61 This might be one mechanism by which CoCl2 induced the loss of amacrine cells and why it could not be prevented by the iNOS-inhibitor.
| CONCLUSION
The iNOS-inhibitor 1400W led to neuroprotective effects in the retina, and many, but not all, cell types responded to the therapy with an increased survival rate. This allowed us to prove the neuroprotective properties of 1400W and at the same time demonstrate that ex vivo organ cultures are very suitable for drug therapy testing.
ACKNOWLEDGEMENTS
This project is supported in part by the SET Stiftung, Germany.
CONFLICT OF INTEREST
The authors confirm that there are no conflicts of interest.
AUTHORS' CONTRIBUTIONS
AMM-B, FH and LH cultivated the retinal explants and performed the histological examinations of the explants. SK supported the statistical analysis of the data. JH performed the qRT-PCR examination and was a major contributor in writing the manuscript. SS and SCJ revised the manuscript and planned and designed the study. All authors read and approved the final manuscript.
DATA AVAILABILITY STATEMENT
All data generated or analysed during this study are included in this published article.
Investigating the Behaviour of Air–Water Upward and Downward Flows: Are You Seeing What I Am Seeing?
Understanding the behaviour of gas–liquid flows in upward and downward pipe configurations in the chemical, petroleum, and nuclear industries is vital when optimal design, operation, production, and safety are of paramount concern. Unfortunately, information concerning the behaviour of such flows in large pipe diameters is rare. This article aims to bridge that gap by reporting air–water upward and downward flows in 127 mm internal diameter pipes using advanced conductance ring probes located at two measurement locations. The liquid and gas flow rates are 0.021 to 0.33 m/s and 3.52 to 16.1 m/s, respectively, covering churn and annular flows. To achieve the desired objectives, several parameters were employed: the probability density function (PDF), power spectral density (PSD), Slippage Number (SN), drift velocity (Ugd), and distribution coefficient (C0). The flow regimes encountered in the two pipe configurations were distinguished employing a flow regime map available in the literature and statistical analysis. The obtained results were supported by visual inspection. The comparison of the present study against reported studies reveals the same tendency for the measured experimental data. The Root Mean Square Error (RMSE) method, with a threshold of 4%, was utilized in recommending the best void fraction prediction correlations for the downward and upward flows.
Gas-Liquid Upward Flow in Small and Large Pipe Diameters
Gas-liquid upward flow finds application in chemical engineering for mass transfer, in the petroleum sector for the concomitant transport of oil and natural gas, and in the energy sector for heat transfer [1]. Consequently, it is imperative to have firm knowledge of the behaviour of gas-liquid flow, a vital variable for the precise design of oil and gas production systems. A significant amount of effort has been dedicated by many researchers over many decades to achieving a comprehensive understanding of the behaviour of gas-liquid upward flows. Unfortunately, many of the reported works on such flows focus on small internal diameter pipes.
A small internal diameter pipe, according to [2][3][4], is one of 9-55 mm (Abdulkadir et al. [4]). Notwithstanding, progress in several industries, from heat exchangers to large internal diameter deepwater risers, requires an understanding of gas-liquid flows in large diameter pipes, where the flow behaviour may be considerably dissimilar from that in small ones [4]. On the other hand, a large diameter pipe, in line with [5][6][7][8][9], is a pipe with an internal diameter > 100 mm [4]. Consequently, more attention needs to be given to downward gas-liquid flow in large diameter pipes, as has been given to the upward flow.
Downward Gas-Liquid Flow
Gas-liquid downward flow in large diameter pipes is widely applied in many engineering applications, such as nuclear reactors, steam injection wells, enriched gas injection wells where liquid condenses as pressure increases, and riser pipes from offshore production platforms to the sea floor [10,11]. The knowledge of these flows in nuclear reactors is necessary for the safety analysis of loss-of-coolant accidents in these reactors and plays a vital role in the precise measurement of pressure drops during oil and gas production and transportation over long distances. According to Wang et al. [12,13], the appropriateness of using upward-flow experimental data to predict loss-of-coolant accidents in downward-flow nuclear reactors is questionable. Consequently, none of the correlations developed specifically for upward flow can be used for downward flow.
Gas-Liquid Upward and Downward Flows
The nature of the flow regimes and the liquid fraction distributions obtained from upward and downward flows are expected to be significantly dissimilar, which has been confirmed by the conclusions of [14,15]. They concluded that the liquid fraction is presumed to be affected by the flow direction, the buoyancy force, and the gravity force. The liquid fraction is of significance in the establishment of the flow pattern; it is the fraction of the pipe's cross-sectional area occupied by the liquid phase [16]. Its determination, according to Abdulkadir et al. [16], is of considerable value in a range of engineering applications, like enhancing safety and performance in industrial systems such as nuclear reactors, petroleum, and biomedical processing systems.
According to Bouyahiaoui et al. [17], the flow patterns and the void fraction disparity between vertical upward and vertical downward air-water flows in 12.7 mm internal diameter pipes were investigated by [1]. They observed significant discrepancies in the presence of the bubbly and slug flow patterns for the vertical upward and vertical downward flows. They reported the absence of churn flow in the vertical downward flow. Bhagwat and Ghajar [1] concluded that the drift-flux correlations of upward flow can be realized for the downward flow by reversing the sign of the drift velocity.
Two years later, [18] studied the local interfacial characteristics in upward and downward bubbly flows in 50.8 mm internal diameter pipes, utilizing a four-sensor optical probe to measure local interfacial parameters, including the void fraction, interfacial area concentration (IAC), bubble frequency, interfacial velocity, and Sauter mean diameter. They compared the radial profiles of these parameters in the downward flow against those in the upward flow. They concluded that the void fraction showed a core-peaked distribution for the downward flow at low void fractions, but a wall-peaked distribution for the upward flow.
Chalgeri and Jeong [19] conducted two-phase flow experiments and plotted flow pattern maps for the vertical upward and downward flows from the measured data sets. They utilized a high-speed camera to visualize the flows, while a void fraction analysis was carried out by means of the electrical impedance technique and digital image analysis. They identified four and seven dissimilar flow regimes for the vertical upward and downward flows, respectively.
Recently, Bouyahiaoui et al. [17] examined the similarities and differences between upward and downward air-water churn flow in a 34 mm internal diameter pipe for two arrangements: vertical upward (51 cases) and downward (48 cases). They used conductance probes and pressure transducers to measure cross-sectional averaged void fraction time series and the pressure drop along the pipe, respectively. They also attempted to explicitly understand how gravity could influence the behaviour of the liquid structures existing in the flow. They used various parameters, such as the probability density function (PDF), the distribution coefficient in the drift-flux model, the structure velocity, the slippage number (SN), and the dimensionless pressure gradient, to achieve the objectives of their work. They reported that, in both orientations, the dimensionless pressure gradient and the SN showed a strong correlation with the mixture Froude number. They nevertheless observed some inconsistencies in the PDFs and structure velocities of flow in the two arrangements.
A summary of the reviewed papers concerning upward and downward flows, listing the pipe geometries and experimental flow conditions, is shown in Table 1. The reviewed papers revealed that the current state of understanding of upward and downward flows is limited because they are mainly concerned with small diameter pipes. The emphasis on research in large diameter pipes was necessitated by the realization that models based on data from small diameter pipes do not satisfactorily reflect the flow scenario in larger pipes. In addition, the ability to correctly predict gas-liquid flow in large diameter pipes is remarkably essential for pump systems and nuclear safety.
The correlations developed specifically for upward flow cannot be utilized for downward flow, since doing so may lead to uncertain designs and operations. To examine the liquid fraction behaviour in upward and downward flows quantitatively and in more detail, in pipes with diameters applicable to the energy industry, air-water liquid fraction data were gathered using advanced conductance probes in 127 mm internal diameter pipes. Thus, this work reveals the effect of the flow direction, buoyancy force, and gravity force on the behaviour of the liquid fraction in the upward and downward flows.
Models Utilized in Gas-Liquid Flow
The three common types of gas-liquid flow models utilized in the energy sector are Empirical correlations, Homogeneous models, and Mechanistic models.
Empirical correlations are established by curve fitting of experimental data and are usually restricted to the confined range of variables examined in the experiments. Homogeneous models depict the fluid properties with mixture properties and utilize single-phase flow procedures to handle the two-phase flow mixture.
The mechanistic drift-flux model is one of the most realistic and reliable models for gas-liquid flow studies (Abdulkadir et al. [30,31]). The model, according to [30], recognises the influence of the non-uniform flow and void fraction profiles, including the local relative velocity between the liquid and gas phases.
Materials and Methods
The two-phase air-water experiments reported in this work were conducted in a large flow loop facility. The test sections in the upward and downward flow pipe arrangements are made of polyvinyl chloride (PVC); the visualisation section is made of polymethyl methacrylate (PMMA). The test section in the upward arrangement is 11 m tall and is equipped with an advanced conductance ring probe to measure the time-varying liquid fraction. The probe is located 8.4 m above the air-water mixer, as shown in Figure 1, which corresponds to 66 pipe diameters above the air-water mixer region.
The air-water downward flow loop comprises three principal parts: an inverted 180° bend (bend radius/pipe diameter = 3); a 9.6 m long downward pipe with an advanced conductance ring probe fitted at about 21 pipe diameters from the bend, that is, 2.667 m downstream of the bend; and a 1.5 m long horizontal pipe to the separator. The experimental flow facility shown in Figure 1 has been described earlier by several authors, namely [4,6,[32][33][34][35][36][37]; hence, specifics of the experimental facility are obtainable from the published articles. However, a concise description of the experimental facility is given below to improve the reader's comprehension. The facility works as follows: two large liquid ring-pump compressors actuated by two 55-kW motors were employed to provide air; the air was metered by a calibrated vortex meter and supplied through the pipe base, thus facilitating its mixture with water collected from the liquid storage tank (which is also a phase separator). The mixed air-water system is then delivered through one of the turbine flow meters (both turbine flow meters are installed in parallel). The 4 m high, 1 m diameter, 4 m³ cylindrical liquid storage tank is made of high-grade stainless steel. For the present experimental study, 1.6 m³ of water was stored in the tank. According to Abdulkadir et al. [4], the maximum calculated uncertainties associated with the flow meters are ±0.5% and ±0.6% for the water and air, respectively.
The air-water mixer, an annular injection mixing device, according to [16], is made of a 0.105 m diameter tube placed at the middle, concentric with the 0.127 m internal diameter test section. The combined air-water system then flows through the upward pipe before reaching the inverted 180° bend [4].
On reaching and exiting the bend, the air and tap water mixture travels 9.6 m downwards, then 1.5 m in a horizontal direction to the separator, where the two phases (the air-water system) are separated before the pump compressors deliver the separated phases back [16].
The liquid and gas flow rates considered in this work span 0.021-0.33 m/s and 3.52-16.1 m/s, respectively. The measurements were obtained at an operating temperature of 20 °C and a system pressure of 2 bar (gauge). The gauge pressure was used because the system pressure was greater than the local atmospheric pressure; it was set at 2 bar, above the roughly 1 bar at which the flow process operated. The advanced conductance probes placed in the two pipe configurations were utilized to record time-varying liquid fraction data every 0.001 s for 15 s per experimental run. Each run was repeated three times to ensure the reproducibility and replicability of the data. Table 2 shows the air-water properties and the range of liquid fractions examined in this work. Figure 2 shows the conductance ring probes used to obtain the liquid fraction data. The probes were designed carefully by Omebere-Iyari [32] to guarantee that the electrodes had the same diameter, D, as the test section (127 mm), ensuring flush mounting with the pipe wall [4]. According to Abdulkadir et al. [4], [32] ensured that the distance between each pair of stainless-steel electrode plates, De, and the width, S, shown in Figure 2, are 25 and 0.3 mm, respectively. The outcome is a De/D of 0.20 and an S/D of 0.024. Omebere-Iyari [32] achieved a liquid fraction/dimensionless conductance relationship by repeating the calibration with plastic rods of various diameters; the reader is referred to [32] for more details. Omebere-Iyari [32] simulated annular and churn flow patterns by placing a dielectric plastic rod in the pipe while the annulus between the pipe and the plastic rod was filled with a conducting liquid [4]. Unfortunately, the conductance ring probes of [32] failed to account for the gas bubbles within the liquid film. As a result, the utilization of probes that can account for gas bubbles entrained within the liquid film became necessary.
Liquid Fraction Measurement Using the New Conductance Ring Probes
Van der Meulen et al. [33] adapted the method of [32] to account for the influence of gas bubbles entrained within the liquid film, by simulating the entrained gas bubbles and then recalibrating the probes. They achieved this by occupying the region between the pipe wall and the non-conducting rod with a known quantity of spherical glass beads of varying diameters, from 0.003 to 0.006 m [4]. The output of the conductance ring probes is proportional to the combined resistance of the air-water system and varies from 0 to 0.32 V.
For this reason, the newly re-calibrated probes are designated in the present work as advanced conductance ring probes, because they account for the influence of gas bubbles entrained within the liquid film. These re-calibrated probes have been utilized by several researchers, including [4,6,16,[34][35][36][37], among others. A personal computer equipped with a National Instruments data acquisition card was used to gather the liquid fraction data. It is worth mentioning that Van der Meulen [6] modified the data retrieval program developed by [32] in LabVIEW, which applies a third-order polynomial fit:

Liquid fraction = h + e(Ge*) + f(Ge*)² + g(Ge*)³ (1)

where Ge* is the normalised voltage response of the probe. Equation (1) was utilized to obtain the characteristic calibration curve applied for each individual probe. The calibration curves of [32,33], covering the range of liquid fractions in the present study, are presented in Figure 3. The reader is referred to Van der Meulen [6] for additional information on the re-calibrated probes (calibration with the glass beads). To improve measurement accuracy, and in line with Fossa [38], the conductance ring probes used in the present work were recalibrated.
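The third-order calibration of Equation (1) can be recovered by least squares from a set of calibration pairs. In the sketch below, the (Ge*, liquid fraction) pairs are invented for illustration; the real pairs come from the rod-and-glass-bead procedure described above.

```python
import numpy as np

# Hypothetical calibration pairs: normalised probe response Ge* versus the
# liquid fraction set with the rods and glass beads.
ge_star = np.array([0.00, 0.15, 0.30, 0.55, 0.80, 1.00])
liquid_fraction = np.array([0.00, 0.04, 0.09, 0.17, 0.28, 0.40])

# Least-squares fit of Equation (1):
# liquid fraction = h + e*Ge* + f*(Ge*)^2 + g*(Ge*)^3
# np.polyfit returns the highest-order coefficient first.
g, f, e, h = np.polyfit(ge_star, liquid_fraction, deg=3)
print(f"h = {h:.4f}, e = {e:.4f}, f = {f:.4f}, g = {g:.4f}")

def liquid_fraction_from_probe(ge):
    """Apply the fitted calibration curve to a normalised probe reading."""
    return h + e * ge + f * ge**2 + g * ge**3

print(liquid_fraction_from_probe(0.42))
```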
Results and Discussion
The liquid and gas volumetric flow rates shown in Table 3 are the operating parameters used in this study. In total, 170 liquid fraction data sets were acquired for the downward and upward flow conditions throughout the experimental campaigns.
The Accuracy of the Conductance Ring Probes
Omebere-Iyari [32] has previously provided a comprehensive explanation of the design of the conductance ring probes. The reproducibility of the calibration and measurement procedures was a priority in this study. It is important to mention that the uncertainty in the liquid fraction measurement, expressed as an absolute error, according to Abdulkadir et al. [16], was found to range from 0.018 to 0.027 for all measurements taken; this corresponds to a relative error of 1.8% to 5%.
Comparing This Study with Those of Godbole et al. and Bhagwat and Ghajar
The average void fractions for the upward and downward flows obtained from the present work are compared against the [42] and Bhagwat and Ghajar [1] data. Both [42] and [1] employed the same experimental rig (a pipe with an internal diameter of 0.0127 m), the same working fluids (air and water), and the quick-closing valve technique to gather void fraction data. It is interesting to note that the pipe employed in the present work is ten times bigger than the pipes employed by both.
The upward flow comparison is based on present work against the experimental void fraction data of [42] at the same gas and liquid superficial velocities of 4.64-4.8 m/s and 0.1 m/s, respectively. The outcome of this comparison are presented in Figure 6. However, for the downward flow direction, this present work will be compared against the [1] experimental void fraction data at the same gas and liquid superficial velocities of 5.72-15.2 m/s and 0.08 m/s, respectively. Figure 7 shows the results of the comparison. [42] for upward flow at the same gas and liquid superficial velocities of 4.64-4.8 m/s and 0.1 m/s, respectively. The absolute errors are between 0.018 and 0.027, this corresponds to 1.8% to 5% relative error for most of the data. Figure 6 reveals that the void fraction from the present work exhibits the same tendency as the [42]'s experimental data, though the values of the void fraction obtained from [42] are lower. The observed trend maybe because the quantity of drops of liquid entrained in the gas matrix is lower in the smaller diameter pipe than in the larger diameter one, leading to a higher observed void fraction in the large-diameter pipe. Similarly, the values of void fraction obtained from [1] work, as shown in Figure 7, are also lower than those of the present work.
Comparison between the Present Study and Those of Zangana and Abdulkadir et al.
The comparison between the current work and those of Zangana [34] and Abdulkadir et al. [4] was carried out at the same liquid and gas superficial velocities of 0.33 and 6.2-14.2 m/s, respectively, using average liquid fraction data. The current work employed the same experimental rig utilized by [4,34] to carry out their experimental work. However, [4], [34], and the present study placed their measuring instruments at 8.2, 8.3, and 8.4 m, respectively. The outcome of the comparison is shown in Figure 8. Although with some insignificant variations, the graph displays a similar trend at some gas superficial velocities. The observed variations might be because the measurement stations are not the same.
The outcome of the comparison is shown in Figure 8. Although with some insignificant variations, the graph displays the similar trend at some gas superficial velocities. The observed variations might be because the measurement stations are not the same. Figure 10 shows based on the liquid film thickness plot that the liquid film is irregular and shows significant disturbances with liquid film thickness up to or greater than 0.10. These disturbances, otherwise called waves acting on the liquid fraction or liquid film thickness time trace, are created because of the enormous gas shear stress exerting the gas-liquid interface. Visual inspection was used to confirm the presence of the waves. In this work, the Equation (10) that was employed to determine the individual liquid film thickness was derived from the average cross-sectional liquid fraction as follows with the assumption that the liquid film is symmetrical about the pipe axis. From
Typical Time Varying, Liquid Fraction and Liquid Film Thickness, Power Spectral Density (PSD), and Probability Density Function (PDF) Plots for Downward and Upward Flows
(2) Derived from Figure 11, 2 .
where , dcore, and D, represents the liquid film thickness, diameter of the gas core and the pipe internal diameter, respectively.
Substituting Equations (4) and (6) into (2) − 2 /4 4/ (7) Substituting the void fraction, , with liquid fraction, , and bearing in mind that 1 − . Therefore, When the direction of flow is downwards, the gas moves towards the pipe centre while the liquid travels to the pipe walls. The observed behaviour can be associated with the fact that both flow and gravity act in the same (downward) path for the liquid, whereas for the gas, buoyancy force plays in the opposing (upward) path; thus, the flow regime changes to annular flow. In addition, the flow pattern, according to Figure 9a, is annular for the downward flow because the liquid fractions from the time series are continually below 0.07 with very slight disturbances compared to those seen in churn flow.
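Equation (10) maps an area-averaged liquid fraction directly to a film thickness, so the film-thickness time trace of Figure 10 can be generated from the probe signal in a few lines. The sketch below assumes the symmetric-film idealization above; the synthetic time trace is a stand-in for the measured one.

```python
import numpy as np

D = 0.127  # pipe internal diameter, m

def film_thickness(eps_l, diameter=D):
    """Equation (10): delta = (D/2) * (1 - sqrt(1 - eps_L)), assuming a
    liquid film that is symmetric about the pipe axis."""
    eps_l = np.asarray(eps_l)
    return 0.5 * diameter * (1.0 - np.sqrt(1.0 - eps_l))

# Stand-in for a measured liquid-fraction trace (1 kHz for 15 s).
rng = np.random.default_rng(1)
eps_l = 0.05 + 0.02 * rng.random(15000)
delta = film_thickness(eps_l)
print(f"mean film thickness = {1e3 * delta.mean():.2f} mm")
```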
(b) Probability density function (PDF) of the liquid fraction for the upward and downward flows: The PDF is employed in this work, as shown in Figure 9b, to reveal the dominant liquid fraction observed for every flow condition. The figure shows that the flow regime is churn flow for the upward flow: the PDF plot depicts a single peak at a low liquid fraction of 0.07, but with a broad base stretching between liquid fractions of 0.03 and 0.12. This is in line with the observation of Costigan and Whalley [43]. In contrast, the flow pattern is annular for the downward flow because the PDF depicts a single peak at a low liquid fraction with a narrow base.
(c) Power spectral density (PSD) against frequency: The PSD analysis shown in Figure 9c was carried out in this work to remove the subjectivity inherent in frequency determination. The figure shows how the PSD varies with frequency for the downward and upward flows at gas and liquid superficial velocities of 9.9 m/s and 0.08 m/s, respectively. According to Figure 9c, the PSD plot for the upward flow contains a peak at about zero frequency. According to Abdulkadir et al. [4], this kind of response is associated with churn flow. In contrast, for the downward flow, the PSD plot possesses a flat and relatively uniform spectrum, akin to annular flow.
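Both statistical signatures used above can be computed directly from a liquid-fraction time trace. The sketch below builds the PDF as a normalised histogram and the PSD with Welch's method; the file name is hypothetical, and the flow-regime readings follow the qualitative descriptions above rather than any formal criterion.

```python
import numpy as np
from scipy import signal

fs = 1000.0  # sampling rate, Hz (one sample every 0.001 s, 15 s per run)
# Hypothetical stored trace; in practice this is the probe output converted
# to liquid fraction through the calibration of Equation (1).
eps_l = np.load("liquid_fraction_run.npy")

# PDF: a single narrow peak at a low liquid fraction suggests annular flow;
# a single peak with a broad base suggests churn flow.
pdf, edges = np.histogram(eps_l, bins=50, range=(0.0, 1.0), density=True)
peak_eps = 0.5 * (edges[np.argmax(pdf)] + edges[np.argmax(pdf) + 1])
print(f"dominant liquid fraction: {peak_eps:.3f}")

# PSD via Welch's method: a peak near zero frequency is associated with
# churn flow, a flat spectrum with annular flow.
freq, psd = signal.welch(eps_l, fs=fs, nperseg=2048)
print(f"dominant frequency: {freq[np.argmax(psd)]:.2f} Hz")
```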
The Effect of Flow Direction, Buoyancy, and Gravity Forces on the Average Liquid Fraction
This section interrogates the influence of the flow direction, buoyancy, and gravity forces on the liquid fraction behaviour. To achieve this aim, the average liquid fraction obtained from the upward flow is matched against that from the downward flow under the same flow conditions. Figure 12a-d therefore reveals the effect of the flow direction, buoyancy, and gravity forces on the liquid film, and how the average liquid fraction at various liquid superficial velocities varies with the gas superficial velocity. The figure shows that the liquid fractions for the upward and downward flows reduce with increasing gas superficial velocity. The observed trend could be the result of an increase in gas throughput leading to a corresponding decrease in the liquid fraction as the gas superficial velocity increases. The observed liquid fractions at lower gas superficial velocities are, however, significantly lower for the downward flow than for the upward flow. This behaviour is not surprising because, in the upward flow, the gas phase's buoyant force supports the flow direction while the gravity force acts in the opposite direction. On the other hand, the gravity force and the flow direction counteract the gas phase's buoyant force in the downward flow. As a result, higher liquid fractions are seen for the upward flow, due to a decrease in the gas phase volume caused by the tendency of the gas to move more swiftly than the liquid in comparison with the downward flow scenario, for the same liquid and gas superficial velocities. The disparity in the liquid fraction values for the downward and upward flows decreases with an increase in the gas superficial velocity.
Correlation of Slippage Number (SN) with Mixture Froude Number (FrM)
The relationship between the Slippage Number (SN) and the Mixture Froude Number (FrM) at various liquid superficial velocities is shown in Figure 13, where the SN is plotted in the current work versus FrM. The Froude number is a non-dimensional number that describes the ratio of the inertial forces to the gravity forces. According to [44], FrM may be written as:

FrM = Um/√(gD) (15)

When USL << USG, so that Um ≈ USG, FrM is replaced with FrSG, and hence Equation (15) becomes:

FrSG = USG/√(gD) (16)

The figure shows that for larger FrM, the values of SN for the upward and downward flows are nearly identical. This demonstrates that the liquid and gas flow together as a homogeneous mixture. Furthermore, the values of SN change from the lowest for annular flow (Figure 13a,b) to the highest for churn flow (Figure 13c,d). This is so because the gas superficial velocities encountered in annular flow are moderately greater than those in churn flow and, as expected, the slippage between the gas and liquid is lower than in churn flow. As a result, the difference between the local two-phase mixture density and the homogeneous mixture density, and hence the SN, are lower in annular flow than in churn flow.
A closer look at Equation (15) shows that when USG dominates, the x-axis becomes approximately USG, as shown in Equation (16). A test of this assumption was obtained by plotting SN against FrSG. To further test the validity of assuming USG to be approximately equal to Um, the experimental data of Abdulkadir et al. [31] for an air-silicone oil system flowing in a vertical pipe with an internal diameter of 67 mm were used. Silicone oil is a liquid whose viscosity is five times that of water. The gathered data were sorted into the prevailing flow patterns and plotted, as shown in Figures 15 and 16. Figure 15 shows a significantly large SN, which is provoked by a large difference between the homogeneous mixture density and the two-phase mixture density. This large difference arises from the significant slippage between the liquid and gas phases.
The figure also shows that there is a vast difference between the observed plots of SN versus FrM and SN versus FrSG, as shown by the correlations obtained through curve fitting. Thus, the assumption that USG can be used to replace Um is not valid in this case. On the other hand, Figure 16 depicts churn flow, which occurs at relatively higher gas superficial velocities than slug flow. As expected, the slippage between the liquid and gas phases is smaller than in the slug flow regime. As a consequence, the disparity between the two densities is lower, and hence the SN is also lower.
A comparable observation to that seen in Figure 15 is also made in Figure 16. Figure 16 displays a noteworthy variance between the experimental plots of SN versus FrM and SN versus FrSG, based on the correlations obtained through curve fitting. Thus, the assumption that USG can be used as a replacement for Um is also not valid in this case.
It can be concluded, therefore, that the validity of the assumption that USG is approximately equal to Um depends strongly on the range of values of USG/Um and the void fraction.
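The curve fits referred to in the preceding paragraphs can be reproduced generically. The sketch below fits a power law, SN = a·Fr^b, to hypothetical (Fr, SN) points; the data values are placeholders, not the points of Figures 15 and 16, and the power-law form is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(fr, a, b):
    return a * fr**b

# Hypothetical (FrM, SN) pairs for one flow regime.
fr_m = np.array([5.0, 8.0, 12.0, 18.0, 25.0])
sn   = np.array([2.10, 1.20, 0.70, 0.40, 0.25])

(a, b), _ = curve_fit(power_law, fr_m, sn, p0=(10.0, -1.0))
print(f"SN = {a:.2f} * FrM^({b:.2f})")

# Repeating the fit with FrSG on the x-axis and comparing the fitted (a, b)
# pairs reveals whether replacing Um by USG changes the correlation materially.
```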
Zuber and Findlay's Drift-Flux Model Approach
The Zuber and Findlay [45] drift-flux model was employed in this work to correlate the actual gas velocity, VG, and the mixture velocity, Um, utilizing the two drift-flux variables, C0 and Ugd, and is of the following form:

VG = C0 Um + Ugd (25)

where VG, C0, Um, and Ugd are the actual gas velocity averaged across the pipe area, the distribution coefficient describing the influence of the velocity and concentration profiles within the two-phase fluid mixture, the mixture velocity, and the drift velocity of the gas describing the buoyancy effect, respectively. According to the model presented in Equation (25), the values of C0 and Ugd are obtained from a graph of VG against Um for the upward and downward flows: C0 is the gradient of the line, while Ugd is the intercept on the y-axis. Figure 17a,b shows that a straight-line relationship is confirmed between VG and Um for both the downward and upward flows, as suggested by [45] and endorsed by several investigators. The values of C0 and Ugd obtained from the downward and upward flow plots are 1.03 and 0.14 m/s, and 1.00 and 0.37 m/s, respectively. The justification for the observed trend can be explained by considering the phase concentration profiles in upward churn and annular flows and in downward annular flow. The overall gas distribution is consistent in upward churn flow because some droplets are entrained uniformly within the gas core, and consequently C0 is approximately equal to one. Similarly, in upward or downward annular flows, where the liquid moves partially as entrained droplets in the gas core and as a thin film on the pipe walls, C0 is also approximately equal to one. It can therefore be concluded that the C0 of upward flows is slightly lower than that of downward flows. Table 4 and Figures 18 and 19 show the corresponding results.
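Because Equation (25) is linear in Um, the two drift-flux parameters follow from a straight-line fit. A minimal sketch, with invented (Um, VG) pairs standing in for the data of Figure 17:

```python
import numpy as np

# Hypothetical area-averaged gas velocities VG against mixture velocities Um.
u_m = np.array([3.6, 5.1, 7.2, 9.9, 12.4, 16.3])
v_g = np.array([3.9, 5.5, 7.6, 10.4, 12.9, 17.0])

# Equation (25): VG = C0 * Um + Ugd -> slope = C0, intercept = Ugd.
c0, u_gd = np.polyfit(u_m, v_g, deg=1)
print(f"C0 = {c0:.2f}, Ugd = {u_gd:.2f} m/s")
```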
Additionally, a plot of C0 against USL reveals that increasing liquid superficial velocity is associated with a corresponding linear increase of the C0 value for both downward and upward flows; a linear correlation is established between C0 and USL. Figure 19 shows that the drift velocity (Ugd) initially increases linearly with liquid superficial velocity from 0.02 to 0.08 m/s, and then decreases linearly at liquid superficial velocities greater than 0.08 m/s. The initial increase in the drift velocity results from the higher gas buoyant force acting on the gas phase along the average flow path. On increasing the liquid superficial velocity from 0.1 to 0.2 m/s, the gas phase moves along the average flow path and, as a result, the liquid phase moves faster than the gas phase. This accounts for the observed drop in Ugd for the upward and downward flows.
As noted by Bhagwat and Ghajar [1] and confirmed in this work, the Ugd values for the upward and downward flows at a liquid superficial velocity of 0.2 m/s displayed in Table 4 may be applied interchangeably by changing the sign of Ugd from plus to minus, with the assumption that the direction of flow of the phase velocities is positive.
Performance Investigation of Empirical Correlations for Estimating Void Fraction
The performance of ten selected void fraction correlations was analysed to find the ones that can accurately predict the void fraction for downward and upward flows. The ten considered correlations are those of [14,25,[46][47][48][49][50][51][52]. The Root Mean Square Error (RMSE) was used to analyse the performance of these correlations:

RMSE = √[(1/N) Σ ((εpred − εexp)/εexp)²] × 100 (26)

where N denotes the number of data points analysed and the void fraction is 1 − HL. Figure 20 reveals that, for the upward flow, the best performing correlations, whose RMSE does not exceed 4%, are those of Usui and Sato [14], Dix [47], and Woldesemayat and Ghajar [52]. For the downward flow, the best performing correlations, whose RMSE also does not exceed 4%, are those of Usui and Sato [14] and Woldesemayat and Ghajar [52]. It can therefore be concluded that the Usui and Sato [14] correlation is, based on the RMSE, the best performing correlation for estimating the void fraction for flows in both the upward and downward configurations.
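A minimal sketch of the screening in Equation (26), with invented void fractions standing in for the measured data and the correlation predictions:

```python
import numpy as np

def rmse_percent(predicted, measured):
    """Equation (26): percentage root mean square error of a correlation's
    predicted void fractions against the measured ones."""
    predicted, measured = np.asarray(predicted), np.asarray(measured)
    return 100.0 * np.sqrt(np.mean(((predicted - measured) / measured) ** 2))

measured = np.array([0.82, 0.86, 0.90, 0.93])
correlations = {
    "correlation A": np.array([0.80, 0.85, 0.91, 0.94]),
    "correlation B": np.array([0.70, 0.78, 0.84, 0.88]),
}
for name, pred in correlations.items():
    print(f"{name}: RMSE = {rmse_percent(pred, measured):.1f}%")
# Correlations with an RMSE below the 4% threshold are the ones recommended.
```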
Conclusions
To address the gap in the knowledge of liquid fraction behaviour in large diameter pipes in upward and downward flows, the present work undertook experimental research with air and water in pipes of 127 mm internal diameter.
The liquid fraction was measured using advanced conductance ring probes. To accomplish the goals of the current work, the investigated parameters were the PDF, PSD, SN, C0, and Ugd. The examination of the air-water flow characteristics and behaviour for the upward (85 cases) and downward (85 cases) flows leads to the following conclusions:
• The flow patterns encountered in the upward flow are churn and annular flows, whereas annular flow was seen in the downward flow scenario at the same flow conditions.
• The comparison of the present work against the published Godbole et al. [42] (upward flow) and Bhagwat and Ghajar [1] (downward flow) void fraction data revealed the same tendency.
• The average liquid fractions obtained at low gas superficial velocities for the upward flow were seen to be considerably higher than those for the downward flow.
• An excellent relationship was established between the SN and FrM for the two pipe configurations. The assumption that USG is approximately equal to Um is strongly dependent on the range of values of USG/Um and the void fraction.
• The SN values for the upward and downward flows at higher values of the mixture Froude number are nearly equal, showing that the gas and liquid flow together as a homogeneous mixture.
• In support of the conclusions of Al-Sarkhi et al. [44], the SN can be employed as a swift flow regime discrimination procedure.
• The C0 of the upward flow is lower than that of the downward flow. The Ugd for the upward flow, on the other hand, was found to be larger than that of the downward flow.
• An excellent relationship was observed between C0 and the liquid superficial velocity for the two pipe configurations.
• The correlation suggested by Usui and Sato [14] for estimating the void fraction in the two pipe configurations was the best performing correlation based on the Root Mean Square Error (RMSE), which was less than 4% for all scenarios investigated.
On the Application of Model Reduction Techniques to Real Time Simulation of Non-Linear Tissues
In this paper we introduce a new technique for the real-time simulation of non-linear tissue behavior based on a model reduction technique known as Proper Orthogonal Decomposition (POD) or Karhunen-Loève decomposition. The technique is based upon the construction of a complete model (using Finite Element modelling or another numerical technique, for instance, but possibly from experimental data) and the extraction and storage of the relevant information in order to construct a model with very few degrees of freedom that nevertheless accounts for the highly non-linear response of most living tissues. We present its application to the simulation of the palpation of a human cornea and study the limitations and future needs of the proposed technique.
Introduction
Real-time surgery simulation [5] has attracted the attention of a wide community of researchers. The utility of such techniques is obvious; applications include, for instance, surgery planning and the training of surgeons in image-guided or minimally-invasive surgery.
The state of the art has evolved very rapidly; see for instance [8] or [11] for interesting surveys. Starting from spring-mass systems, today's real-time surgical simulators are mostly based on Finite Element (FE) or Boundary Element (BE) technologies, able to account for quite realistic behavior and even large deformations; see, for instance, [2][3][6].
Such simulators should provide a physically more or less accurate response such that, with the use of haptic devices, a realistic feedback is transmitted to the surgeon in terms of both visual feedback and force feedback. Following [4], "... the model may be physically correct if it looks right".
For that to be possible, it is commonly accepted that a minimum bandwidth of 20-60 Hz for visual feedback and 300-1000 Hz for haptic display is necessary, see [7]. In this paper we focus our attention on the second requirement for the deformable model. All the simulations performed were designed to run under that requirement.
Very recently, geometric non-linearities have been taken into account in a work also based on model reduction, see [2]. But in that case, only linear materials were considered (i.e., the so-called Saint Venant-Kirchhoff models, or homogeneous isotropic linear elastic materials undergoing large deformations). Most soft tissues, however, exhibit complex non-linear responses, possibly with anisotropic characteristics, and are frequently incompressible or quasi-incompressible. Geometric non-linearities (those deriving from large strains) should also be taken into consideration on top of this complex material behavior. The correct simulation of these materials requires the use of Newton-Raphson or similar techniques in an iterative framework. This makes existing engineering FE codes impractical for real-time simulations.
The technique here presented is based upon existing data on the behavior of the simulated tissues. These data can be obtained from numerical simulations performed off-line and stored in memory, but they can also be obtained from physical experiments, for instance. For the work presented here we have chosen the first option, and FE models of the organs being simulated will be considered as an "exact" reference to compare with. From these data we extract the relevant information about the (non-linear) behavior of the tissues with the help of Karhunen-Loève decompositions and employ it to construct a very fast Galerkin method with very few degrees of freedom. To this end, we employ model reduction techniques based on proper orthogonal decompositions [9][10][12].
In order to show the performance of the method, we have chosen to simulate the behavior of the human cornea, although the technique is equally applicable to any other soft tissue. The cornea presents a highly non-linear response, with anisotropic and heterogeneous behavior due to its internal collagen fiber reinforcement. As an accurate enough model we have implemented that employed in [1]. This model is briefly reviewed in Section 2.
A Hyperelastic Mechanical Model for the Human Cornea
As mentioned before, we have chosen the human cornea as an example of a highly non-linear tissue. This non-linearity comes from a variety of sources, such as the internal collagen fiber reinforcement (material non-linearity) and also the very large strains it can suffer. The human cornea is composed of a highly porous material, nearly 80% of which is water, and is thus quasi-incompressible. Most of the cornea's thickness (around 90%) constitutes the stroma, which is composed of 300-500 plies of collagen fibers distributed parallel to the surface of the cornea. This microstructure induces a highly non-linear and heterogeneous behavior in the corneal tissue.
The model here employed for the simulation of the human cornea [1] considers the cornea as a hyperelastic material. Reinforcing fibers, which move continuously together with the cornea, possess a direction m0, with |m0| = 1. The fiber stretch after deformation is given by λm(x, t) = F m0, where F = dx/dX represents the deformation gradient. A second family of fibers, n0, is also considered as reinforcement at each point.
Due to the dependence of strain on the considered direction, the existence of a strain energy density functional, $\Psi$, depending on the right Cauchy-Green tensor, $C = F^T F$, and the initial fiber orientations, $m_0$ and $n_0$, is postulated. Based on the volumetric incompressibility restrictions, this functional can be expressed as [1]

$$\Psi = \Psi_{\mathrm{vol}}(J) + \bar{\Psi}(\bar{C},\, m_0 \otimes m_0,\, n_0 \otimes n_0),$$

where $\Psi_{\mathrm{vol}}(J)$ describes the volumetric change and $\bar{\Psi}(\bar{C}, m_0 \otimes m_0, n_0 \otimes n_0)$ the change in shape. Both are scalar functions of $J = \det F$ and $\bar{C} = \bar{F}^T \bar{F}$, where $\bar{F} = J^{-1/3} F$. Once this energy density functional is known, the second Piola-Kirchhoff stress tensor, $S$, and the fourth-order tangent constitutive tensor, $\mathbb{C}$, can be determined by

$$S = 2\,\frac{\partial \Psi}{\partial C}, \qquad \mathbb{C} = 2\,\frac{\partial S}{\partial C}.$$

A detailed derivation of the model can be obtained in [1]. The interested reader is referred to this paper for reference.
Fundamentals: Karhunen-Loève or Proper Orthogonal Decomposition
In Karhunen-Loève techniques [9] we assume that the evolution of a certain field $T(x, t)$ is known. In practical applications (assume that we have performed off-line some numerical simulations, for instance), this field is expressed in a discrete form, known at the nodes of a spatial mesh and for some times $t_m$, i.e. $T^m(x_i) = T(x_i, t_m)$. We can also write $T^m$ for the vector containing the nodal degrees of freedom at time $t_m$. The main idea of the Karhunen-Loève (KL) decomposition is to obtain the most typical or characteristic structure $\phi(x)$ among these $T^m(x)$, $\forall m$. This is equivalent to obtaining a function that maximizes

$$\alpha = \frac{\sum_{m=1}^{M}\left[\sum_{i=1}^{N}\phi(x_i)\,T^m(x_i)\right]^2}{\sum_{i=1}^{N}\phi(x_i)^2},$$

where $N$ represents the number of nodes of the complete model and $M$ the number of computed time steps. The maximization leads to an eigenvalue problem of the form

$$c\,\phi = \alpha\,\phi,$$

where we have defined the vector $\phi$ such that its $i$-th component is $\phi(x_i)$, and the two-point correlation matrix, $c$, is given by

$$c_{ij} = \sum_{m=1}^{M} T^m(x_i)\,T^m(x_j),$$

which is symmetric and positive definite. If we define the matrix $Q$ containing the discrete field history,

$$Q = \begin{bmatrix} T^1 & T^2 & \cdots & T^M \end{bmatrix},$$

then it is easy to verify that the matrix $c$ in Eq. (4) results in $c = Q\,Q^T$.
A Posteriori Reduced Modelling of Transient Models
If some direct simulations have been carried out, we can determine $T_i^m$, $\forall i \in [1, \cdots, N]$ and $\forall m \in [1, \cdots, M]$, and from these solutions the $n$ eigenvectors related to the $n$ highest eigenvalues, which are expected to contain the most important information about the problem solution. For this purpose we solve the eigenvalue problem defined by Eq. (4), retaining all the eigenvectors $\phi_k$ whose eigenvalues belong to the interval defined by the highest eigenvalue and that value divided by a large enough number ($10^8$ in our simulations). In practice $n$ is much lower than $N$, and this constitutes the main advantage of the technique. Thus, we can try to use these $n$ eigenfunctions $\phi_k$ for approximating the solution of a problem slightly different from the one that served to define $T_i^m$. For this purpose we need to define the matrix $B = [\phi_1 \cdots \phi_n]$.
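The construction above can be condensed into a few lines of linear algebra. The following Python sketch, under the assumption that the snapshots are stored columnwise in Q, computes the retained basis B via the SVD of Q (equivalent to the eigendecomposition of c = QQ^T, but cheaper and more stable) and applies the 10^8 truncation criterion described in the text. The toy snapshot data are purely illustrative, not the cornea model.

```python
import numpy as np

def pod_basis(Q: np.ndarray, ratio: float = 1e8) -> np.ndarray:
    """Return B = [phi_1 ... phi_n]: eigenvectors of c = Q @ Q.T whose
    eigenvalues lie within `ratio` of the largest one, computed via the
    SVD of the N x M snapshot matrix Q."""
    U, s, _ = np.linalg.svd(Q, full_matrices=False)
    eigvals = s**2                        # eigenvalues of c = Q Q^T, descending
    keep = eigvals >= eigvals[0] / ratio  # truncation criterion from the text
    return U[:, keep]

# Illustrative snapshots: N = 300 dofs, M = 50 stored time steps of a
# field that is (noisily) spanned by three smooth modes.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 300)
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), np.sin(3 * np.pi * x)], axis=1)
Q = modes @ rng.normal(size=(3, 50)) + 1e-6 * rng.normal(size=(300, 50))

B = pod_basis(Q)
print("retained modes:", B.shape[1])      # expect the 3 dominant modes
```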
If we now consider the linear system of equations coming from the discretization of a generic problem, in the form $G\,T^m = H^{m-1}$, where the superscript refers to the time step, then, assuming that the unknown vector contains the nodal degrees of freedom, it can be expressed as a linear combination of eigenmodes,

$$T^m = \sum_{i=1}^{n} \zeta_i^m\,\phi_i = B\,\zeta^m,$$

where $\zeta_i^m$ represent the new degrees of freedom of the problem, from which we obtain

$$G\,B\,\zeta^m = H^{m-1},$$

and by multiplying both terms by $B^T$ we obtain $B^T G\,B\,\zeta^m = B^T H^{m-1}$, which proves that the final system of equations is of low order, i.e. the dimension of $B^T G\,B$ is $n \times n$, with $n \ll N$.
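A minimal sketch of the resulting reduced solve, assuming a generic symmetric positive definite system matrix G and an orthonormal basis B (here a random stand-in rather than an actual POD basis), might look as follows; it is not the authors' implementation.

```python
import numpy as np

def reduced_solve(G: np.ndarray, H: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Solve the reduced system B^T G B zeta = B^T H (n x n) and
    reconstruct the full nodal vector T = B zeta."""
    Gr = B.T @ G @ B            # n x n reduced operator
    Hr = B.T @ H                # reduced right-hand side
    zeta = np.linalg.solve(Gr, Hr)
    return B @ zeta             # back to the full N-dimensional space

# Illustrative problem: an SPD operator on N = 300 dofs and n = 3 modes.
rng = np.random.default_rng(1)
N, n = 300, 3
A = rng.normal(size=(N, N))
G = A @ A.T + N * np.eye(N)     # symmetric positive definite
H = rng.normal(size=N)
B, _ = np.linalg.qr(rng.normal(size=(N, n)))  # stand-in orthonormal basis

T_approx = reduced_solve(G, H, B)
print("reduced solution norm:", np.linalg.norm(T_approx))
```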
Numerical Results
In order to test the performance of the proposed technique, we have focused our attention mainly on two aspects: first, the accuracy of the results; second, compliance with the requirements of haptic feedback, i.e., all results must be obtained at a frequency between 300 and 1000 Hz. A set of tests was accomplished, all based on the model of the human cornea presented before. Inertia effects are neglected in this problem, due to the typically slow velocity of load application in this kind of organ. The cornea was discretized with trilinear three-dimensional finite elements. The mesh consisted of 8514 nodes and 7182 elements. A view of the geometry of the model is shown in Fig. 1.
Palpation of the Cornea
The first test for the proposed technique consists of simulating the palpation of the cornea with a surgical instrument. In order to validate the results, a load was applied to the complete FE model in its central region. The obtained result was compared to the one obtained by employing the model reduction techniques presented before, for a load applied at the same location. Once the complete model is solved, the most important eigenmodes are extracted from the computed displacement field, together with the initial tangent stiffness matrix. The number of eigenmodes employed in this case was only six, which is, in our experience, the minimum number of modes that should be employed in such a simulation. The modes are depicted in Fig. 2. The associated eigenvalues are, from the largest to the smallest, 9.02·10^4, 690, 27, 2.63, 0.221 and 0.0028. As can be seen, the relative importance of these modes in the overall solution, measured by the associated eigenvalue, decreases very rapidly. Note that the reduced model employed only six degrees of freedom, while the complete model employed 8514 nodes with three degrees of freedom each, giving 25542 degrees of freedom. Of course, if more accurate solutions are needed, a higher number of modes can be employed. The displacement field obtained for the complete model was compared to that of the reduced model. We chose different positions of the load and compared the results. For a first location of the load, the obtained vertical displacement is shown in Fig. 3. The L2 error norm ranged from very low values (0.08) in the early steps of the simulation to higher values (around 0.34) for the last step. In our experience, this is a typical upper bound of the obtained error, even if very large deformations are imposed on the simulated organ, as is the case.
The simulations ran at 472-483 Hz, which is within the limits imposed by haptic feedback realism, as mentioned before. Of course, the use of more sophisticated, possibly parallelized, codes could give even faster results.
Force Prediction
The architecture of a real-time simulator requires, however, the prediction of the response force to a given displacement imposed on the model by means of the haptic device. Thus, a vertical displacement with linearly increasing value was imposed on node 4144, located roughly at the center of the cornea. While the complete model took around 3 hours to solve the problem, due to the large displacement imposed at the last steps of the simulation, the reduced model still ran at 400-500 Hz. The results are summarized in Table 1. As can be noticed, the predicted response is very accurate in the middle of the simulation, and shows some error both at the very beginning of the simulation and for very large strains.
Conclusions
In this paper a novel strategy is presented for real-time interactive simulation of non-linear anisotropic tissues. The presented technique is based on model reduction techniques and, unlike previous works [2], it allows for the consideration of both geometrical and material non-linearities.
The reduced models are constructed by employing a set of "high quality" global basis functions (as opposed to general-purpose, locally supported FE shape functions) in a Galerkin framework. These functions are constructed after off-line high-fidelity simulations, from which the most relevant modes of the response are extracted by means of the Karhunen-Loève decomposition.
Propagation characteristics of pulverized coal and gas two-phase flow during an outburst
Coal and gas outbursts are dynamic failures that can involve the ejection of thousands of tons of pulverized coal, as well as considerable volumes of gas, into a limited working space within a short period. The two-phase flow of gas and pulverized coal that occurs during an outburst can lead to fatalities and destroy underground equipment. This article examines the interaction mechanism between pulverized coal and gas flow. Based on the role of gas expansion energy in the development stage of outbursts, a numerical simulation method is proposed for investigating the propagation characteristics of the two-phase flow. This simulation method was verified by a shock tube experiment involving pulverized coal and gas flow. The experimental and simulated results both demonstrate that the instantaneous ejection of pulverized coal and gas flow can form outburst shock waves. These are attenuated along the propagation direction, and the volume fraction of pulverized coal in the two-phase flow has a significant influence on the attenuation of the outburst shock wave. As a whole, the pulverized coal flow has a negative impact on the gas flow, dissipating a large amount of the initial energy and blocking the propagation of the gas flow. According to a comparison of numerical results for different roadway types, the attenuation effect of T-type roadways is the strongest. During shock wave propagation, reflection and diffraction of the shock wave interact through the complex roadway geometries.
Introduction
Coal and gas outbursts are an extremely complex dynamic phenomenon [1][2][3][4][5]. During an outburst, the coal and rock around the mining face are rapidly broken and ejected, releasing large amounts of gas from the pulverized coal [6,7]. The pulverized coal and gas flow induced by an outburst carry enormous energy [8], which can lead to fatalities and destroy underground equipment. In recent years, many coal and gas outburst accidents have occurred in China. For example, on October 20, 2004, a serious outburst occurred in the Daping coal mine of the Zheng Coal Group in Henan province. In this accident, the outburst coal and rock were estimated at 1894 t, plus approximately 250 thousand m³ of outburst gas. Because the pressure of the outburst gas flow was great, some underground ventilation facilities were destroyed and a large volume of gas flowed into adjacent intake roadways, such that the gas concentration within these roadways exceeded the gas explosion limit and a gas explosion occurred. 148 people were killed and 32 people were injured.
Extensive research has been carried out on coal and gas outbursts [9][10][11][12][13][14][15], and many models and theories have been developed. However, these achievements mainly focus on outburst mechanisms, prediction, and prevention technology; little investigation has been conducted of outburst gas flows during an outburst.
Cheng et al [16] theoretically analysed the outburst shock wave formation process as well as its propagation law. Otuonye et al [17] simulated outburst shock waves based on a simplified outburst initiation model. A field investigation of outburst gas flow pressure was carried out in the Zhongliangshan coal mine, China [18]. The results showed outburst shock wave pressures of 0.3~0.6 MPa, confirming the enormous destructive potential of outburst shock waves. Wang et al [19][20][21] analysed outburst shock wave propagation characteristics in different roadway types. However, none of the above studies considered the role pulverized coal plays in the propagation of outburst shock waves. During an outburst, the interaction between pulverized coal and gas flow is quite obvious.
In this study, combining theoretical analysis, numerical simulation, and experimental methods, the interaction mechanism between pulverized coal and gas flow was analysed, a three-dimensional unsteady model of pulverized coal and gas two-phase flow was established, and the outburst pressure attenuation law was investigated.
Methodology
Numerical method

Initial conditions. Due to the complexity and variability of outbursts, it is more appropriate to analyse this phenomenon from an energy perspective. It is generally concluded that the gas expansion energy, in addition to the elastic energy of coal, is transferred to the coal crushing energy, the transport energy, and the remaining kinetic energy of the gas after carrying the pulverized coal [22]. Zhao et al [18] calculated the elastic energy of coal and the expansion energy of gas for several outburst accidents, and showed that the elastic energy only accounts for a few thousandths of the total outburst energy. Thus, in the outburst development stage, the elastic energy of coal can be ignored and the transport energy of coal derives entirely from the gas expansion energy, which can be expressed as follows:

$$W = \frac{P_1 V_1 - P_0 V_0}{n - 1} \qquad (6)$$

where $P_0$ and $P_1$ are the atmospheric pressure and the gas pressure in the outburst hole, respectively; $n$ is the adiabatic coefficient, usually taken as 1.3; and $V_0$ and $V_1$ represent the gas volumes under pressures $P_0$ and $P_1$. As shown in Eq (6), the transport energy for the two-phase flow of gas and pulverized coal is mainly dominated by the gas volume and gas pressure in the outburst hole.
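As a numerical illustration of this energy balance, the hedged Python sketch below evaluates the expansion work of Eq. (6) for assumed outburst-hole conditions, obtaining the expanded volume from the adiabatic relation. The pressure and volume values are hypothetical and are not taken from the cited accidents.

```python
def gas_expansion_energy(p0: float, p1: float, v1: float, n: float = 1.3) -> float:
    """Transport energy released by adiabatic expansion of gas from
    (p1, v1) in the outburst hole down to atmospheric pressure p0,
    per Eq. (6): W = (p1*v1 - p0*v0) / (n - 1), with v0 obtained from
    the adiabatic relation p1*v1^n = p0*v0^n.
    Pressures in Pa, volumes in m^3, W in J."""
    v0 = v1 * (p1 / p0) ** (1.0 / n)   # gas volume after expansion to p0
    return (p1 * v1 - p0 * v0) / (n - 1.0)

# Illustrative values only: 1 MPa gas in a 10 m^3 outburst hole
# expanding to atmospheric pressure (0.101 MPa).
w = gas_expansion_energy(p0=0.101e6, p1=1.0e6, v1=10.0, n=1.3)
print(f"expansion energy: {w / 1e6:.1f} MJ")
```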
For numerical simulation of the propagation characteristics of pulverized coal and gas two-phase flow, the initial values of the parameters in the simulation region need to be assumed. Fig 1 shows a geometric diagram of a coal mine roadway at the critical state of an outburst. For the outburst hole, the length is assumed to be L; the high-pressure gas is essentially stationary at the critical state; the relative gas concentration $C_1$ of the outburst zone is 1 (assumed to be pure methane); and the temperature $T_1$ of the gas in the outburst hole is assumed to be 300 K. Based on the energy analysis for the gas/pulverized coal two-phase flow, at the critical state the initial condition of the gas in the outburst zone is given by Eq (2):

$$p = p_1, \quad u = 0, \quad C = 1, \quad T = 300\ \mathrm{K} \qquad (2)$$

where $p$ and $p_1$ are the gas pressure in the outburst hole. In general, mine roadways are arranged at a certain depth beneath the surface. There are some differences in air pressure between the roadway and the surface; however, these are sufficiently small that the air pressure in the roadway is taken as the atmospheric pressure $p_a$. Although the airflow speed is nonzero, it does not exceed 15 m/s in the roadway and 8 m/s in the return airway. The airflow is thus much slower than the outburst gas propagation, and the airflow speed in the roadways can be assumed to be zero. The gas concentration within the airflow is very low, not exceeding 1%; therefore, the volume concentration of gas in the roadway is assumed to be zero. The temperature in the roadways is the same as that in the outburst hole. According to the above analysis, the initial airflow conditions in the roadways at the critical state are

$$p = p_a, \quad u = 0, \quad C = 0, \quad T = 300\ \mathrm{K}.$$

Experimental method

Experimental equipment. To validate the numerical simulation, an experimental system was constructed for simulating the propagation law of outburst shock waves [10]. Because stresses only dominate the coal elastic energy, which can be ignored based on the energy analysis for the pulverized coal and gas two-phase flow, the experimental system can be constructed as a kind of shock tube [20].
Experimental procedure.
(1) Coal sample preparation. A predetermined pressure is applied to the coal sample and held constant for about 30 minutes in order to release the gas contained in the sample; this process is repeated several times, and the coal sample is then loaded into the outburst hole.
(2) Determining air-tightness. The outburst hole is checked for air-tightness using soapy water to detect any leaks.
(3) Coal gas adsorption. Prior to gas sorption, the coal is degassed under vacuum for 12 h with a vacuum pump, and then charged with gas for 48 h to achieve adsorption equilibrium.
(4) After preparing the pressure transducers and data acquisition system, a pressure much higher than the adsorption equilibrium pressure is applied to the canvas, resulting in an outburst.
Results and discussions
To analyse the influence of pulverized coal on the outburst shock waves, simulations were conducted for different volume fractions of pulverized coal and different roadway types: a pulverized coal volume fraction of 0% represents a pure gas outburst, while a volume fraction of 5% simulates the coal-gas two-phase flow process. Three different roadway types (straight, T-shaped, and bifurcated) were also simulated.

Numerical result for straight roadways

As shown in Fig 3, the peak overpressure at cross-section CD appears later than that at section AB, whether or not pulverized coal participates in the outburst process. Another explicit phenomenon worth pointing out is that both pressure drops are minor, due to the small span between the two neighbouring sections. When pulverized coal is present, the peak value declines from the original 0.27 MPa to 0.11 MPa, and the arrival time of the overpressure is delayed from 0.035 s to 0.046 s, indicating that the pulverized coal flow plays a negative role, consuming a large amount of energy and blocking the gas flow. In the absence of pulverized coal, the overpressure decays more rapidly, in contrast to the slower attenuation once pulverized coal is considered, which again illustrates the blocking effect.

Numerical result for T-shape roadways

Fig 4 shows the geometry of a T-shape roadway. The basic size is nearly the same as that of the straight roadway, and the gas pressure in the outburst hole is 1 MPa. In contrast to the straight roadway, cross-sections AB and EF are marked.
As shown in Fig 5 and Fig 6, the outburst shock waves propagate at high speed. For a pulverized coal volume fraction of 0%, the peak overpressure of the outburst shock wave reaches cross-section AB at 0.0404 s, compared with 0.0478 s when the pulverized coal volume fraction is 5%. Due to the interaction between the pulverized coal and the gas flow, part of the outburst shock wave energy is consumed, such that the peak overpressure with pulverized coal is lower than that without. Furthermore, the intensity of the outburst shock waves is attenuated along the propagation direction: for a 0% pulverized coal volume fraction, the attenuation coefficient from cross-section AB to section EF is 1.86, compared with 2.22 when the pulverized coal volume fraction is 5%. Fig 7 shows the propagation characteristics of the pulverized coal and the gas flow along the roadway direction at 0.02 s during the outburst.
It can be seen that the transport speed of the pulverized coal is much slower than that of the gas flow. At time 0.02 s, the transport distance of the pulverized coal is about 5 m along the axial direction of the roadway, whereas the migration distance of the gas flow is almost 8 m. In the bifurcated roadway, a peak overpressure appears immediately. When the shock wave passes through the bifurcation, a large pressure drop at section AB is found in a very short time, which can be explained by the abrupt area-expansion theory. In fact, as the coal-gas flow passes, the bifurcation functions just like an abruptly enlarged cross-section. A small part of the coal-gas flow enters the sub-roadway due to the diffraction that takes place at the near-wall corner of the bifurcation, while most of the coal-gas flow propagates in the direction of the downstream main roadway. The attenuation trends of the overpressure at sections CD and EF exemplify the above theory: the peak overpressure at section CD is 0.092 MPa, higher than the section EF value of 0.073 MPa, while both are smaller than the upstream peak value.
Experimental results
An experimental study of the propagation law of outburst shock waves was conducted with the experimental equipment. The initial gas pressure in the outburst hole was 0.9 MPa.
As shown in Fig 10, the distances between the outburst hole and the three pressure sensors were 3.4 m, 8.0 m, and 12.0 m, respectively. The pressure transducers were connected to the data acquisition system, and the voltage signals output by the sensors were collected by the acquisition system and then converted into overpressure values. Fig 11 shows profiles of the gas overpressure variation with time at the three points. As shown in Fig 11, the maximum overpressures of the outburst shock waves measured at points 1, 2, and 3 are 0.255 MPa, 0.251 MPa, and 0.116 MPa, respectively. When the shock waves reach the measuring points, the pressure changes suddenly and then decays with time, which is consistent with the results of the numerical simulation. Comparison of the maximum overpressure values between measurement points 2 and 1 shows that the pressure drop is not great; this is mainly due to continuous diffraction of the shock airflow at the corner, collision with the wall, and reflection.
Numerical results for different roadway types

Fig 12 shows that, compared with the straight roadway, the T-shaped and bifurcated roadways attenuate the overpressure effectively, due to the increased transmitted cross-section of the roadway, which is conducive to the rapid release of the upstream coal-gas flow and pressure relief. The overpressure of the T-type roadway decreases most significantly, mainly because the transmitted shock wave front collides strongly with the facing rigid wall; the resulting reflected wave then collides with the positive transmitted shock wave, weakening its energy. Shock wave reflection is not obvious in bifurcated roadways, where diffraction instead has a prominent effect.
As shown in Fig 13, the shock wave is mitigated in the sub-roadway for the different roadway types. The peak overpressure of the sub-roadway for the T type is 0.062 MPa, while that of the bifurcation is 0.075 MPa, obviously higher than the former. This also indicates that the attenuation effect of the T type is more significant, which is mainly attributable to the blocking effect mentioned before. In the subsequent decline of the overpressure, however, the bifurcation type responds more rapidly than the T type. On one hand, the interaction of the blocking effect and collisions in T-type roadways slows down the coal-gas flow entering the branch; on the other hand, the diffraction effect of the bifurcation accelerates the flow.
Comparison of numerical results and experimental results
Based on the experiment in Fig 10, a numerical simulation was conducted. Fig 14 presents the characteristics of outburst shock wave propagation. At measurement point 1, the maximum overpressure is 0.231 MPa, which is approximately equal to the experimental result, and the outburst shock wave attenuation laws are similar between the numerical and experimental simulations.
Conclusions
(1) The elastic energy of coal only accounts for a few thousandths of the total outburst energy. Thus, in the outburst development stage, the elastic energy of coal can be ignored and the transport energy of coal derives entirely from the gas expansion energy.
(2) Based on the role of gas expansion energy in the propagation of pulverized coal and gas two-phase flow during an outburst, a numerical simulation method and an experimental system were constructed to reveal the attenuation law of this two-phase flow.
(3) Pulverized coal and gas at high pressure are instantly ejected from the outburst hole, rapidly expand, and compress the air in the roadway, thereby producing outburst shock waves, which rapidly propagate in the axial direction along the roadway.
(4) The outburst shock wave induced by the pulverized coal and gas flow attenuates along the roadway in the axial direction, and the volume fraction of pulverized coal plays an important role in the attenuation of the outburst shock wave.
(5) Compared with the straight roadway, the T-shaped and bifurcated roadways attenuate the overpressure effectively, with the overpressure of the T-type roadway decreasing most significantly; the abrupt area-expansion theory can be applied to explain this phenomenon. For T-type roadways, the interaction of the blocking effect and collisions plays the leading part in the propagation of the shock wave, while the diffraction effect is dominant in bifurcated roadways.
Comparative Analysis of Chloroplast Genome and New Insights Into Phylogenetic Relationships of Polygonatum and Tribe Polygonateae
Members of Polygonatum are perennial herbs that have been widely used in traditional Chinese medicine to invigorate Qi, moisten the lung, and benefit the kidney and spleen. However, the phylogenetic relationships and intrageneric taxonomy within Polygonatum have long been controversial because of the complexity of their morphological variation and the lack of high-resolution molecular markers. The chloroplast (cp) genome is an optimal model for deciphering phylogenetic relationships in related families. In the present study, the complete cp genomes of 26 species of Trib. Polygonateae were de novo assembled and characterized; all species exhibited a conserved quadripartite structure, that is, two inverted repeats (IR) containing most of the ribosomal RNA genes, separated by the large single-copy (LSC) and small single-copy (SSC) regions. A total of 8 highly variable regions (rps16-trnQ-UUG, trnS-GCU-trnG-UCC, rpl32-trnL-UAG, matK-rps16, petA-psbJ, trnT-UGU-trnL-UAA, accD-psaI, and trnC-GCA-petN) that might be useful as potential molecular markers for identifying Polygonatum species were identified. The molecular clock analysis showed that the divergence of Polygonatum might have occurred at ∼14.71 Ma, and the verticillate leaf might be the ancestral state of this genus. Moreover, phylogenetic analysis based on 88 cp genomes strongly supported the monophyly of Polygonatum. The phylogenetic analysis also suggested that Heteropolygonatum may be the sister group of Polygonatum, whereas Disporopsis, Maianthemum, and Disporum may have diverged earlier. This study provides valuable information for further species identification, evolutionary, and phylogenetic research on Polygonatum.
INTRODUCTION
Polygonatum Mill. (1754) is a genus of essential medicinal and edible plants widely distributed in the warm-temperate zones of the Northern Hemisphere and Northeastern Asia (Floden, 2015). Approximately 70 species are recognized worldwide (Floden and Schilling, 2018), with 39 present in China, 20 of which are endemic to the region (Chen and Tamura, 2000). The underground rhizomes of Polygonatum have crucial medicinal value in moistening the lungs, relieving thirst, replenishing the spleen, and increasing immunity (Jiao et al., 2018a). Among them, four species [Polygonatum odoratum (Mill.) Druce, Polygonatum sibiricum Red., Polygonatum cyrtonema Hua, and Polygonatum kingianum Coll. et Hemsl] were listed in the Chinese Pharmacopoeia (Chinese Pharmacopoeia Commission, 2020). Modern studies have demonstrated that some Polygonatum species are rich in nutrients and functional components and are regarded as a miscellaneous grain with enormous potential (Si and Zhu, 2021). Previous surveys have revealed that Polygonati rhizoma is often contaminated with several common adulterants in herbal markets, such as Polygonatum cirrhifolium, Polygonatum humile, Polygonatum stenophyllum, Polygonatum filipes, and Polygonatum verticillatum (Yang et al., 2015; Jiao et al., 2018b; Wang Y. et al., 2019; Wang Z. W. et al., 2019). Because the morphology of these species is similar, changeable, and difficult to distinguish, this seriously affects the safety and effectiveness of clinical drug use.
Despite these potential issues, the chloroplast (cp) genome generally shows promise for resolving deep phylogenetic relationships among plant lineages (Nie et al., 2020). Compared with traditional DNA fragments, the cp genome is relatively conserved and only slightly variable. The approach has recently been applied in many research fields, such as taxonomic revision, systematic evolution, and species identification (Henriquez et al., 2020). Floden and Schilling (2018) and Xia et al. (2022) used cp genome data to reconstruct the phylogeny of Polygonatum, and the results supported the three groups and their sister relationship with Heteropolygonatum. Although these studies resolved the phylogenetic relationships of some species of Polygonatum, the relationships among the genera of Trib. Polygonateae and some species of Polygonatum remain unclear. In addition, the reliability of some analyses still needs further clarification owing to the limited number of samples in previous studies. It is therefore necessary to provide further support for the intra-generic relationships, divergence times, and genomic characteristics of Trib. Polygonateae based on a larger sample size. In the present study, we de novo assembled and annotated the cp genomes of 26 species, including 23 species of Asparagaceae (18 species of Polygonatum, four species of Disporopsis, and one species of Maianthemum) and three species of Disporum (Colchicaceae). In addition, comparative analyses and a phylogenetic evolution study of the cp genome were conducted. The present results provide a basis for species identification, phylogenetic studies, and the development and utilization of Polygonatum medicinal plant resources.
Plant Material and DNA Sequencing
The fresh and healthy leaves of Polygonatum, Disporopsis, Maianthemum, and Disporum were collected in the field or from the Germplasm Resource Garden (China), and the leaf tissue was then frozen at -20 °C. Numbers after taxa names refer to the locality, and sample information is shown in Supplementary Figure 1 and Supplementary Table 1. The specimens were identified following the taxonomic key and external morphological diagnosis proposed in the related literature (Tang, 1978). The voucher specimens have been deposited at the herbarium of Dali University. Total genomic DNA was extracted from tissue samples using the Plant Genomic DNA kit (Tiangen, Beijing, China). The extracted DNA was quantified on a Nanodrop 2000 spectrophotometer (Nanodrop Technologies, Thermo Scientific, United States), and all PCR products were checked for amplified products on agarose gels. The library of each sample was prepared using 30 µl of high-quality (>100 ng/) genomic DNA. All libraries were sequenced on the Illumina NovaSeq system (Illumina, San Diego, CA, United States).
Genome Assembly and Annotation
The pair-end reads were trimmed for adapters and low-quality reads (Phred score < 30) using NGS QC Toolkit v.2.3.3. The cp genomes of P. sibiricum (NC029485), Disporopsis fuscopicta (MW248136), Maianthemum bicolor (NC035970), and Disporum cantoniense (MW759302) were downloaded from the National Center for Biotechnology Information (NCBI) and used as reference sequences for assembly (Jin et al., 2020). All clean reads were mapped to the database, and the mapping data were extracted based on similarity and coverage. Subsequently, the assembled contigs were visualized, and redundant sequences were removed, with Bandage v.0.8 to generate the complete circular cp genome (Wick et al., 2015). Finally, the reads were remapped to the assembled cp genome with Bowtie2, and Jellyfish v.2.2.3 was then used to determine the inverted repeat region boundaries. After assembly, the circular cp genomes were annotated using the online tools CPGAVAS2 and GeSeq based on the reference cp genomes (Michael et al., 2017; Shi et al., 2019). Apollo was used to correct the start codons, stop codons, and intron/exon boundaries (Lewis et al., 2002). Annotated cp genome sequences were submitted to the GenBank database of the NCBI to obtain accession numbers (Table 1). Fully annotated cp genome maps were drawn with OrganellarGenomeDRAW (OGDRAW) online (Lohse et al., 2007).
Genome Structure and Comparisons Analysis
The GC content was analyzed using Geneious v.9.0. Four types of dispersed repeat sequences, namely forward (F), complementary (C), palindromic (P), and reverse (R), were detected using the REPuter program (Stefan et al., 2001). Tandem repeats were detected using Phobos v.3.3.12 (Mayer et al., 2010) with default parameter values. The cp genome of P. sibiricum (MZ029093) was selected as a reference for coordinate positions, and indels and SNPs were counted within non-overlapping 150 bp windows for the 18 Polygonatum plastomes (Supplementary Table 7a; Liu et al., 2020). The IRb region was removed for the repeat analyses to avoid over-representing the repeats, following Abdullah et al. (2019). Spearman's Rho correlations were calculated based on substitutions, indels, and oligonucleotide repeats using Minitab v.18 (Akoglu, 2018). The criteria for repeat determination included a minimum repeat size of 20 bp and a similarity between repeat pairs of 90%, set by an edit value of 3. Furthermore, simple sequence repeats (SSRs) were analyzed using the MISA software (Beier et al., 2017) with minimum repeat counts of "10" for mono-, "5" for di-, "4" for tri-, and "3" for tetra- and penta-nucleotide motifs. The cp genomes were compared with mVISTA under the Shuffle-LAGAN mode. The cp genome junctions were visualized and compared using IRscope online (Amiryousefi et al., 2018). The cp genomes were aligned using MAFFT (Katoh and Standley, 2013). Additionally, the nucleotide variability across the cp genome sequences was analyzed using DnaSP v.6.12.03, with a window length of 600 sites and a step size of 200 sites.
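For readers wishing to reproduce a comparable analysis outside DnaSP, the following Python sketch computes a simplified sliding-window nucleotide diversity (pi) with the same window length (600) and step size (200). The toy alignment and the pairwise gap handling are simplifying assumptions, not DnaSP's exact algorithm.

```python
import numpy as np

def nucleotide_diversity(window: np.ndarray) -> float:
    """Average pairwise difference per site (pi) for an alignment window,
    given as a (sequences x sites) array of single characters. Sites with
    gaps or ambiguity codes are skipped pair by pair, as a simplification."""
    n_seq = window.shape[0]
    diffs, comps = 0, 0
    for i in range(n_seq):
        for j in range(i + 1, n_seq):
            valid = np.isin(window[i], list("ACGT")) & np.isin(window[j], list("ACGT"))
            comps += int(valid.sum())
            diffs += int((window[i][valid] != window[j][valid]).sum())
    return diffs / comps if comps else 0.0

def sliding_pi(alignment: np.ndarray, window: int = 600, step: int = 200):
    """Yield (window start, pi), mirroring the DnaSP settings used here."""
    for start in range(0, alignment.shape[1] - window + 1, step):
        yield start, nucleotide_diversity(alignment[:, start:start + window])

# Illustrative toy alignment (4 sequences x 1200 sites) with ~1% substitutions.
rng = np.random.default_rng(2)
base = rng.choice(list("ACGT"), size=1200)
aln = np.tile(base, (4, 1))
mut = rng.random(aln.shape) < 0.01
aln[mut] = rng.choice(list("ACGT"), size=int(mut.sum()))

for start, pi in sliding_pi(aln):
    print(f"window {start:4d}-{start + 600}: pi = {pi:.4f}")
```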
Phylogenetic Analyses and Ancestral Character State Reconstruction
Phylogenetic reconstruction included 27 de novo assembled sequences (Table 1) and 61 cp genomes downloaded from NCBI (Supplementary Table 2). Two species, Dioscorea esculenta (NC052854) and Dioscorea schimperiana (NC039855), were used as outgroups. A total of 88 sequences were aligned using MAFFT with default parameters and trimmed using trimAl v.1.4 with the automated option. Neighbor-Joining (NJ) analyses were performed in MEGA X with 1,000 bootstrap replicates at each branch node (Sudhir et al., 2018). A maximum likelihood (ML) tree was also inferred with IQ-TREE using 1,000 bootstrap replicates under the best-fit nucleotide substitution model (Nguyen et al., 2015), with the parameters: iqtree -s input -m MFP -b 1000 -nt AUTO -o NC052854, NC039855.
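The NJ step can be prototyped with Biopython rather than MEGA X; the sketch below, using a tiny invented alignment and simple p-distances, is only a schematic stand-in for the full cp genome analysis described here, not the authors' pipeline. The sequences and taxon labels are assumptions for illustration.

```python
from io import StringIO

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Tiny in-memory alignment standing in for the trimmed cp genome alignment.
aln = AlignIO.read(StringIO(
    ">Polygonatum_sibiricum\nATGGCTACCTAGGCTA\n"
    ">Polygonatum_odoratum\nATGGCTACTTAGGCTA\n"
    ">Heteropolygonatum_sp\nATGACTACTTAGGTTA\n"
    ">Dioscorea_esculenta\nTTGACTGCTTAAGTTA\n"), "fasta")

calculator = DistanceCalculator("identity")          # p-distance matrix
tree = DistanceTreeConstructor().nj(calculator.get_distance(aln))
tree.root_with_outgroup("Dioscorea_esculenta")       # root on the outgroup
Phylo.draw_ascii(tree)
```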
The leaf arrangement was selected to analyse the phyllotaxy evolution of Polygonatum. The phyllotaxy information was obtained from taxonomic literature and the Flora of China (Chen and Tamura, 2000; Jiao, 2018). The phyllotaxy states were coded as: alternate (A), verticillate (B), opposite (C), and crowded (D). For species with more than two character states, we coded the character based on the dominant state. For example, the leaf arrangement of Polygonatum prattii was coded as alternate because it usually has alternate leaves, although an opposite arrangement or three verticillate leaves occur occasionally. The leaves of Polygonatum hunanense are mainly verticillate, sometimes with a few alternate or opposite leaves, and were thus coded as verticillate in the analyses.
Divergence Time Estimation
The divergence times of Polygonatum were calculated using the MCMCTree program of PAML (Puttick, 2019). IQ-TREE was used to estimate the best tree topology for the data set. Following previous studies (Chen et al., 2013; Eguchi and Tamura, 2016; Wang et al., 2016; Xia et al., 2022), we used four calibration points to constrain the nodes: (F1) 115.9-137.4 Ma for the root node, (F2) 58.3-76.6 Ma for the Asparagaceae stem age, (F3) 56.4-72.7 Ma for the Asparagaceae crown age, and (F4) 14.34-27.54 Ma for Polygonatum and Heteropolygonatum. The clock model used was the independent rates model (IRM), which follows a lognormal distribution. For nucleotide substitution, the HKY model was selected, with the alpha parameter for gamma rates across sites set to 0.5. The birth-death process was used to generate uniform node age priors on the tree, with the default parameters (λ = 1, µ = 1, s = 0.1). The posterior probabilities of the parameters were calculated from the MCMC samples. The first 10% of trees were discarded as burn-in, and sampling was then performed every 10 iterations until 20,000 samples were gathered.
Sequencing, Assembly, and Annotation
The raw data of the 27 individuals were filtered to remove adapters and low-quality reads; 3-5 Gb of data were obtained for each species in this study. After assembly and splicing, complete cp genomes with the circular quadripartite structure were obtained (Figure 1). The annotation results showed that the cp genome of P. odoratum (154,576 bp) was the smallest and that of M. fuscum (156,711 bp) the largest among the 27 individuals. The length of the LSC region ranged from 83,493 bp (P. odoratum) to 85,218 bp (M. fuscum). The length of the SSC region ranged from 18,002 bp (D. cantoniense) to 18,573 bp (P. cirrhifolium), and the length of the IRa and IRb regions ranged from 26,242 bp (D. fuscopicta) to 26,815 bp (D. cantoniense). The GC content of the cp genomes ranged from 37.6 to 37.8% and varied among the different regions of the cp genomes. In addition, the number of genes and introns was highly conserved (Table 1), and the same suite of rRNA and tRNA genes was found in all taxa. All genomes have 85-88 protein-coding genes, except for D. cantoniense, D. megalanthum, and D. uniflorum, which have 83 protein-coding genes (lacking the rps16 and rpl32 genes) (Supplementary Table 4). It is worth noting that 19 genes, involved in photosynthesis and self-replication, were duplicated in Polygonatum (Supplementary Table 3).
Repeat Analysis
Repetitive sequences in the cp genome play a critical role in genome evolution and rearrangements. Analysis of the four types of oligonucleotide repeats in the cp genome, namely Forward (F), Reverse (R), Palindromic (P), and Complementary (C), was performed with REPuter. The number of repeat types varied among the 26 cp genomes in a seemingly random pattern, but most repeat sequences were 20-29 bp long (Supplementary Figure 2). The abundance of F and P repeats was higher than that of R and C repeats (Supplementary Figure 2). The minimum number of repeats was found in Disporum uniflorum (60), whereas the maximum was found in P. cirrhifolium, P. sibiricum, and P. zanlanscianense (86). Complete details are listed in Supplementary Tables 5, 6. Moreover, Spearman's Rho correlation coefficients were obtained between tandem repeats, indels, and SNPs (Supplementary Table 7b). All correlations were significant (tandem repeats and indels: p < 0.001; tandem repeats and SNPs: p < 0.001; indels and SNPs: p < 0.001). The average correlation values between tandem repeats and indels, indels and SNPs, and tandem repeats and SNPs in the 26 species were 0.469, 0.351, and 0.267, respectively. Furthermore, we identified 67 (P. franchetii) to 87 (M. fuscum) SSRs per cp genome, consisting of mono- to hexa-nucleotide repeat units (Supplementary Table 8). Most of the SSRs were located in intergenic regions. More than half of these SSRs (52.94-66.23%) were mononucleotide A/T motifs, followed by dinucleotides (18.29-26.47%) with a predominant motif of AT/TA, tetranucleotide repeats (10.81-13.89%) with predominant motifs of AAAT/ATTT and AATC/ATTG, trinucleotides (2.94-6.94%), and pentanucleotides (2.44-4.35%) with a predominant motif of AAACG/CGTTT; hexanucleotides (0-1.35%) were absent in the cp genome of M. fuscum (Supplementary Figure 3).
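The MISA thresholds quoted in the methods can be mimicked with a simple regular-expression scan. The Python sketch below is a simplified stand-in for MISA (perfect repeats only, no compound SSRs); the example sequence is invented for demonstration.

```python
import re

# MISA-style minimum repeat counts used in the text: 10 for mono-, 5 for di-,
# 4 for tri-, and 3 for tetra- and penta-nucleotide motifs.
MIN_REPEATS = {1: 10, 2: 5, 3: 4, 4: 3, 5: 3}

def _is_redundant(motif: str) -> bool:
    """True if the motif is itself a repetition of a shorter unit (e.g. ATAT)."""
    return any(len(motif) % d == 0 and motif == motif[:d] * (len(motif) // d)
               for d in range(1, len(motif)))

def find_ssrs(seq: str):
    """Return (position, motif, repeat count) for perfect SSRs meeting
    the minimum repeat count for their motif length."""
    hits = []
    for unit, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (unit, min_rep - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if unit > 1 and _is_redundant(motif):
                continue  # already reported under a shorter motif length
            hits.append((m.start(), motif, len(m.group(0)) // unit))
    return sorted(hits)

# Invented example: a T mononucleotide run, an AT dinucleotide repeat,
# and an AAAT tetranucleotide repeat.
seq = "GGA" + "T" * 11 + "CA" + "AT" * 6 + "GGCC" + "AAAT" * 3 + "CC"
for pos, motif, count in find_ssrs(seq):
    print(f"pos {pos:3d}: ({motif}){count}")
```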
Inverted Repeats Regions Contraction and Expansion
The contraction and expansion of the IR regions revealed variation at the LSC/IR/SSC boundaries (Figure 2). The rpl22 gene was present in the LSC region, and rpl2 and trnH lay entirely in the IRb region. The rps19 gene was present at the IRa/IRb/LSC junction in four genera (Polygonatum, Heteropolygonatum, Disporopsis, and Maianthemum), but in Disporum the rps19 gene was absent from the IRa/LSC junction and lay completely in the IRb region. Notably, in P. cyrtonema, rps19 started in the IRa region and extended 60 base pairs into the LSC, whereas in all other species of Polygonatum the rps19 gene lies completely in the LSC region. Additionally, the ndhF gene was observed at the IRb/SSC junction, extending 21 to 34 bp into the SSC. Another truncated copy of the ycf1 gene was observed in all species at the IRa/SSC junction, starting in the IRa region and extending into the SSC. In addition, rpl2 and trnH were present in the IRa, and psbA in the LSC. It is worth noting that the rps19 and trnN genes lay entirely in the IRa region of the four genera, whereas these two genes were missing in Disporum at the IRa/LSC junction (Supplementary Figure 4). Moreover, the genome alignment analysis showed that the cp genomes of the 26 species were relatively conserved, and no inversions, translocations, or genomic rearrangements were detected (Supplementary Figures 5, 6).
Comparative Chloroplast Genomic Analysis
Comparison of the overall sequence variation showed that the cp genome within Polygonatum is highly conserved. The IR regions had lower sequence divergence than the LSC and SSC regions, and the coding regions were more conserved than the non-coding regions. Furthermore, except for the more remarkable variation in the ndhA, ycf1, and ycf2 genes, most of the protein-coding genes of Polygonatum were quite conserved. The highest divergence in intergenic regions was found in rps16-trnQ-UUG, trnS-GCU-trnG-UCC, trnT-UGU-trnL-UAA, ndhC-trnV-UAC, rpl32-trnL-UAG, trnV-GAC-rps7, and accD-psaI. The most divergent coding regions were the ycf1 and ycf2 open reading frames (Supplementary Figure 7). Moreover, the sliding window analysis demonstrated that seven regions had high nucleotide diversity values (>0.01), including matK-rps16, trnS-GCU-trnG-UCC, rpl32-trnL-UAG, trnC-GCA-petN, petA-psbJ, ccsA, and ycf1 (Figure 3). The polymorphic loci of these variable regions are listed in Supplementary Table 9. Among these regions, the nucleotide diversity values of rps16-trnQ-UUG, trnS-GCU-trnG-UCC, rpl32-trnL-UAG, and trnC-GCA-petN were greater than 0.01, and that of the ycf1 gene was the lowest (0.00047). The insertion/deletion (indel) diversity of trnS-GCU-trnG-UCC and matK-rps16 was 7.076 and 5.181, respectively, with no indel events detected in the ccsA gene. Furthermore, based on the global comparison, the cp genomes of Polygonatum, Heteropolygonatum, Disporopsis, and Maianthemum are similar, with an average similarity of 99%, but differ from that of Disporum, with an average similarity of 85% (Supplementary Figure 8).
Phylogenetic Analysis
The ML and NJ phylogenetic trees were inferred using 87 species, with Dioscorea as the outgroup. The consensus trees obtained from the inference analyses were well resolved, and most nodes received maximum support (100% bootstrap support; Figure 4 and Supplementary Figure 9). The core Asparagaceae comprises the subfamilies Scilloideae, Nolinoideae, and Agavoideae, which form a monophyletic group (group I). Scilloideae and Agavoideae were sister taxa within the three subfamilies, and Nolinoideae was sister to the Scilloideae + Agavoideae clade. In addition, the results showed that most species of Trib. Polygonateae, including Polygonatum, Heteropolygonatum, and Disporopsis, were placed in the crown of the phylogenetic tree, supporting the monophyly of the three genera. However, within this clade, Maianthemum and Ophiopogoneae were sister to the clade formed by Disporopsis, Heteropolygonatum, and Polygonatum, while Disporum was polyphyletic across three separate clades and distantly related to Trib. Polygonateae. Polygonatum is further divided into sect. Sibirica, sect. Polygonatum, and sect. Verticillata; a sister relationship was recovered between sect. Sibirica and sect. Polygonatum, whereas sect. Verticillata was placed as sister to sect. Polygonatum + sect. Sibirica with high support (100% BS). It is worth mentioning that sect. Sibirica includes only the species P. sibiricum. The NJ and ML analyses produced trees with similar topologies, although some poorly supported groups were sensitive to changes in the mode of inference. The position of several species, including P. hunanense and P. kingianum, was unresolved and varied among trees recovered using distinct phylogenetic inference methods.
Divergence Time Estimation
Divergence time estimates and 95% highest posterior density (HPD) intervals for each node are shown in Supplementary Table 10, and the complete chronogram is shown in Figure 5. The extant genera Polygonatum and Disporopsis shared a common ancestor at the beginning of the Eocene (41.68 Ma; 31.97-57.29 Ma, 95% HPD), while the split between Polygonatum and Heteropolygonatum is estimated to have occurred at 16.56 Ma (13.57-20.56 Ma, 95% HPD). Sect. Verticillata, sect. Polygonatum, and sect. Sibirica might have shared a common ancestor at 14.71 Ma (1.32-18.57 Ma, 95% HPD), and the divergence between sect. Polygonatum and sect. Sibirica occurred at approximately 11.80 Ma. The sampled specimens of Maianthemum and Ophiopogon were estimated to have originated 52.22 Ma. Moreover, the divergence of Disporum occurred at 128.56 Ma, when it shared a common ancestor with the Asparagaceae.
Reconstruction of Leaf Morphological Character
In the classification of Polygonatum, the arrangement of leaves has usually received particular attention. Phyllotaxy is one of the main characters in Polygonatum taxonomy, and the character transformation of leaf arrangement is essential to understanding the evolution of the genus. Phyllotaxy was therefore used to reconstruct the ancestral traits of Polygonatum and its relatives. As illustrated in Figure 6, the S-DIVA results showed that the verticillate leaf arrangement was the most likely ancestral state of Polygonatum, consistent with the MRBT method (B: p = 0.95). In addition, sect. Polygonatum is marked by an alternate leaf arrangement, except for P. hunanense. In its sister clade, sect. Sibirica mostly has a verticillate leaf arrangement. Sect. Verticillata includes species with a combination of opposite, alternate, and verticillate leaves. In addition, alternate and verticillate leaves evolved more than once. Notably, Heteropolygonatum and Disporopsis show an alternate phyllotaxy.
Chloroplast Genome Structure and Comparative Analysis
In the present study, we de novo assembled the cp genomes of 26 species of Trib. Polygonateae and performed comparative analyses. The cp genome of each species exhibited a quadripartite structure, with two IR regions separated by the LSC and SSC regions. Higher GC content was observed in the IR regions than in the LSC and SSC regions, consistent with previous reports (Liang et al., 2021). In addition, our results showed that the total length, GC content, and gene composition of the cp genome were almost identical in all species. Previous studies have found that angiosperm cp genomes are highly conserved at the genus level (Yu et al., 2019; Shahzadi et al., 2020), but we found that the rps16 and rpl32 genes were lost in Disporum, and this variation may be specific to Disporum.
Inverted repeat contraction and expansion can cause gene duplication, the origination of pseudogenes, and length variation in the cp genome, and are considered critical evolutionary phenomena. In the present study, rps19 is present in the IR region in four genera (Polygonatum, Heteropolygonatum, Disporopsis, and Maianthemum), but not in Disporum. Previous studies of monocot cp genomes revealed that the rps19 gene lies in the IR region (Ahmed et al., 2012; Nock et al., 2014; Henriquez et al., 2020), whereas the de novo assembled genome of Disporum contrasts with previous studies, revealing integration of rps19 into the LSC region. Moreover, the ycf1 gene was duplicated in the IRa and IRb regions in Polygonatum and Heteropolygonatum, while in the other three genera (Disporopsis, Maianthemum, and Disporum) this gene exists only at the junction of the SSC/IRa region, which is consistent with a previous study (Wang et al., 2011).
At the genus level, weak-to-strong correlations among tandem repeats, SNPs, and indels were observed in Polygonatum. We found a weak correlation between tandem repeats and SNPs, a moderate correlation between indels and SNPs, and a strong correlation between tandem repeats and indels in Polygonatum plastomes. A recent study confirmed that plastomes exhibit strong associations between tandem repeats, indels, and substitutions in Araceae and Malvaceae (Abdullah et al., 2021). Our results also support the prior finding that tandem repeats play an important role in generating indels and SNPs. These results have practical implications for selecting appropriate loci for comparative analyses.
Phylogenetic and Taxonomic Resolution
The phylogeny and classification of Polygonatum have long been debated (Feng et al., 2020). This study used 88 cp genomes, covering most major lineages of Asparagaceae, to construct the phylogenetic tree. The NJ and ML phylogenetic analyses confirmed the position of Polygonatum within the Asparagaceae, congruent and largely concordant with recent phylogenomic studies (Zhao et al., 2019; Xia et al., 2022). There is strong support for the monophyly of many major clades of Asparagaceae, including Polygonatum, Heteropolygonatum, Disporopsis, Maianthemum, and Rohdea. In a previous study, Polygonatum was subdivided into two sections, sect. Polygonatum and sect. Verticillata, based on trnK (Tamura et al., 1997); in our phylogeny, however, Polygonatum was recovered as monophyletic in the NJ and ML analyses and divided into three sections: sect. Sibirica, sect. Polygonatum, and sect. Verticillata. Strong support was found for sister relationships between (1) Polygonatum and Heteropolygonatum, (2) sect. Sibirica and sect. Polygonatum, and (3) sect. Verticillata and sect. Polygonatum + sect. Sibirica, consistent with previous studies (Meng et al., 2014; Floden, 2017; Zhao et al., 2019; Xia et al., 2022).
Moreover, we found several interesting implications of phylogeny in this study. First, both NJ and ML analyses provided strong evidence for the monophyly of P. sibiricum in sect. Sibirica. These results were supported by the findings of other studies. Previous phylogenetic analyses based on multiple plastid markers (atpB, ndhF, rbcL, matK, psbA-trnH, trnC-petN, atpB-rbcL, and rps16) also support the placement of Maianthemum in Trib. Polygonateae (Chen et al., 2013; Meng et al., 2014; Zhao et al., 2019). However, the ML analysis in our study reveals that Maianthemum was deeply nested within Trib. Ophiopogoneae rather than Trib. Polygonateae, which agrees with former studies based on multiple loci (e.g., petA-psbJ, ETS, ITS, and rps10; or trnL-F, rps16, rpl16, psbA-trnH, rbcL, trnK, trnC-petN, and ITS) (Floden, 2017; Floden and Schilling, 2018; Meng et al., 2021). Our results suggest that Trib. Polygonateae should include only three genera (Disporopsis, Heteropolygonatum, and Polygonatum). Therefore, convergent evolution of some traits may have misled previously inferred relationships. Further phylogenetic analysis of Maianthemum is needed.
FIGURE 5 | Divergence time estimation based on cp genome sequences. The divergence times are exhibited on each node, whereas the blue bars represent the 95% highest posterior density interval for each node age.
Diversification History and Leaf Arrangements
Results of the divergence time estimates suggest that elevated diversification rates of Polygonatum occurred from approximately 15 to 0.1 Ma, during the late Miocene and early Pliocene. The two main lineages, sect. Verticillata and sect. Sibirica + sect. Polygonatum, seem to have radiated since the mid-Miocene (sect. Verticillata: 11.52 Ma; sect. Sibirica + sect. Polygonatum: 11.80 Ma; Figure 5; Supplementary Table 10).
Notably, the diversification rates of Polygonatum slowly increased during this period, which can be attributed to uplifts of the Qinghai-Tibetan Plateau in the early Miocene (Xue et al., 2021). In addition to the aforementioned tectonic rearrangements and mountain formation in East Asia, the global climatic fluctuations and aridification that occurred around the Mid-Miocene Climatic Optimum (MMCO, 15-17 Ma; Zachos et al., 2001) also accelerated the diversification rates of Polygonatum. Global warming occurred at approximately 15 Ma (MMCO), followed by a gradual decrease in temperature (Zachos et al., 2001). These climatic changes might have influenced plant diversification and promoted the radiation of Polygonatum species.
Variation in phyllotaxy morphology represents an important character source for species delimitation. The phyllotaxy diversity (alternate, opposite, and verticillate) has caused some confusion in classifying Polygonatum. Our study showed an evolutionary trend in Polygonatum from verticillate leaves to alternate leaves, suggesting that the verticillate leaf is the ancestral state, in agreement with previous molecular studies of Polygonatum (Xia et al., 2022). It is noteworthy that the phyllotaxy of Polygonatum is an unstable character, even within the same species. Therefore, phyllotaxy cannot be used as the sole taxonomic feature for classifying Polygonatum.
CONCLUSION
In this study, the complete cp genomes of 26 species of Trib. Polygonateae were de novo assembled from Illumina reads. In all of our analyses, these cp genomes were generally conserved and exhibited similar gene content and genomic structure. A total of 8 highly variable loci were identified across the Polygonatum cp genomes, which could serve as potential markers for phylogenetic and population genetics studies. The monophyly of Polygonatum was confirmed, and phylogenetic analysis indicated that the genus consists of three sections (sect. Sibirica, sect. Polygonatum, and sect. Verticillata). Meanwhile, the phylogenetic analysis suggested that Heteropolygonatum may be the sister group of Polygonatum, whereas Disporopsis, Maianthemum, and Disporum may have diverged earlier. In conclusion, our results enhance the genomic information available for Polygonatum and provide valuable insight into the phylogenetic relationships among the genera of Trib. Polygonateae. The results also contribute to the bioprospecting and conservation of Polygonatum.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Capacity Building for the Integration of Climate Adaptation into Urban Planning Processes: The Dutch Experience
The institutions of the Dutch (urban) planning system face four challenging characteristics of climate adaptation measures. These measures are uncertain in their effects, in competition with other interests, multifaceted, and inherently complex. Capacity building is a key issue for the implementation of climate adaptation measures in urban planning processes, which aim to achieve Climate-Proof Cities (CPC). For successful capacity building, it is important to define the relevant stakeholders and tailor the adaptation strategies first to (the position of) these stakeholders and next to the specific urban conditions and issues. In addition, scientific insights and tools can be of assistance, and the use of climate maps can help to create a common language. Such common understanding of climate problems can lead to “goal entwinement” between actors, which can support the implementation of climate adaptation strategies in urban planning. Awareness, recognition and urgency are the most important components of this common understanding, which may differ for each stage in every urban planning process. In order to overcome the pragmatism that rules in day-to-day urban planning processes, multi-level arrangements between different tiers of government must be employed to improve the penetration of climate adaptation measures. After all, it still remains a soft interest in a hard process.
Introduction
Throughout the world, cities in delta areas must deal with the effects of climate change. Over time, in the Netherlands, the awareness and the sense of urgency for climate adaptation have grown. The latter can easily be understood when one considers that the Dutch authorities have to deal with rising sea levels as well as rising river discharges. Heavy rainfalls and rising ground water levels are also matters of concern. Therefore, water threatens Dutch cities from multiple sides. Of course, other effects of climate change include heavy winds and urban heat islands, but these seem less important within the Dutch context, where water is clearly the dominant issue, as most of the country lies below sea level. Dutch local governments have only recently started to explore the possibilities of coping with the effects of climate change in terms of adaptation in urban planning. In this vein, the Climate-Proof City Programme (CPC) has been launched. The types of adaptation planned in this program anticipate climate changes that can already be observed; the adaptations include different kinds of structural measures, such as adjustments in urban design, temporary water storage, etc. [1].
In this contribution, we explore the possibilities of integrating climate adaptation measures in urban planning processes in the Netherlands.
The results are based on the early experiences of some of the larger municipalities in the Netherlands that participate in the Knowledge for Climate research programme (KFC). These municipalities are Amsterdam, Rotterdam, The Hague and Arnhem. Practical experience in integrating climate adaptation measures is rather new, and it is mainly limited (until now) to policy making and small technical adjustments, as examples in Rotterdam and Amsterdam have shown [1] [2]. Therefore, the analysis remains mostly theoretical and explorative. Also, as the debate on this integration is rather fierce, we intend to shed some light on the dilemmas that are revealed by this debate, as we expect that similar dilemmas will emerge throughout the world.
The very subject of climate change brings along with it specific challenges. These challenges, related to climate adaptation, are identified in the next section. Because these challenges are dealt with at the local level, a short outline of the institutional background of Dutch urban planning follows to aid the reader's understanding of the context of the challenges. Next, a sketch of the dilemmas related to climate adaptation in the context of urban planning will be presented. Finally, a three-stage procedure for facing these dilemmas will be proposed that leads to capacity building for the integration of climate adaptation into urban planning processes.
Challenges of Climate Adaptation
Research within the KFC and CPC identified many types of challenges that planning systems and planning processes face when attempting to implement climate adaptation measures. Van Buuren et al. defined four types of such challenges [3].
Specific Uncertainties
The first type of challenge is that the planning system has to deal with the uncertainties that come along with the issue of climate change. Although knowledge about climate change (see the IPCC reports) and its effects is increasing [3]-[6], much uncertainty remains regarding the degree, the time, and the manner in which climate change will affect local communities. This uncertainty can result in indecisiveness and hesitation [7]. Responding to this uncertainty is thus crucial for climate-proof cities.
Competing Issues
The second challenge comes from the competition of climate change with other interests, particularly in spatial planning. Within local spatial planning processes, the issue of climate adaptation has to compete with other interests. Additionally, some sort of cost-benefit balance must always be taken into account. As climate adaptation measures mostly require a long-term horizon, they can quickly transform into a so-called weak interest in local decision-making processes. A good example of such measures can be found in the way the municipality of Arnhem is trying to incorporate climate-proof strategies and measures (e.g. to cope with an abundance of rainwater) into its municipal structure plan [8]. It would be misguided to blame spatial planning; rather, balancing various spatial issues, whether they be long- or short-term issues, is an inherent feature and task of planning. Planners must neither overestimate nor underrate certain spatial demands [9]-[11]. But the fact that climate measures are both controversial and a weak interest demands that they be given particular attention in planning decisions.
Multifaceted Character
The third challenge is the multifaceted character of climate adaptation. As climate change can result in dry periods as well as extremely wet periods, the opposite sides of the same coin have to be addressed. This situation can be seen in Australian experiences, where the changing climate has already led to more dry periods with bush fires on the one hand and to more severe flooding on the other hand. Also, in Central Europe, opposite extremes in the effects of climate change have been seen: while 2002 witnessed an extreme flood, 2003 saw the navigability of the Rhine river threatened by drought. On the city level, this multifaceted characteristic of climate change requires measures preparing for vastly different situations, which makes implementing adaptation measures more complex, varying in application from urban heat island problems to flooding problems as a result of heavy rainfall. This means that the effects of certain measures might have unforeseen and probably contradictory results. Climate adaptation can thus become a "wicked problem" [12] because the solution to one problem might be the cause of another problem. Since climate change tends to connect to many other domains and functions, it is not hard to imagine its far-reaching effects on a local level.
So, three challenges arise from implementing climate adaptation measures: the measures are uncertain in their effects, in competition with other interests, and multifaceted. Coping with these challenges is crucial for successfully pursuing the ideal of climate-proof cities.
The Institutional Framework: Integrating Climate Change Adaptation into Urban Planning in the Netherlands
What does climate adaptation mean for urban planning at the local level? To facilitate understanding of the impact of climate change on the underlying planning paradigm, this section briefly recaps the development of urban planning in the Netherlands over the past decades before addressing how the three challenges mentioned above can be dealt with in Dutch urban planning.
In the Netherlands, from the 1920s onwards, there has been a growing concern that urban expansion would engulf the whole western part of the Netherlands, forming one huge conurbation [13]. In reality, this was never a serious threat until the 1960s. Then, a major wave of suburbanization threatened to spread pockets of urban land use across the Green Heart [14]. From the late sixties onwards, social circumstances began to change. In this period, pressures from different interest groups succeeded more and more in convincing the authorities to focus attention on predominantly social objectives in urban policy.
The development described above was mirrored in planning documents from the central government. The Second Report on Physical Planning in the Netherlands [15], for example, included a proposal to divert populations from the crowded western part of the country to the north and south. But it also took a powerful stand against the suburban sprawl that was developing into a real threat, particularly in the Green Heart [14]. The proposed alternative was to channel suburbanization into "concentrated deconcentration". More specifically, this meant accommodating new urban growth outside of the existing cities in a number of designated overspill centres. Jenks et al. [16] described this policy as a feasible compromise between concentration and low-density dispersal of urban activities. In Randstad, Holland, it was put into practice in the late 1970s and the early 1980s. Compact urban development has remained the cornerstone of Dutch physical planning ever since. In the view of Faludi and van der Valk [13], the policy of concentrated deconcentration was successful; half a million people moved into the designated growth centres, and urban sprawl was stemmed by prohibiting the growth of villages in the Green Heart.
In the course of the 1980s, the policy of compact urban development changed track. The main cause of the shift was the decline of the old urban cores. Concern about urban decay was first voiced by the cities' administrations. Later, the issue was placed on the national political agenda [17]. City officials blamed inner city decline, in part, on the policy of concentrated deconcentration. Eventually, this policy was abolished. In its place, a new concept of compact urban growth has been developed and put into practice. Under this new policy, the government tries to guide new urban (re)development towards locations within existing cities (towards "brown" sites) and later on towards new greenfield sites directly adjacent to the cities of Amsterdam, Rotterdam, The Hague, and Utrecht. Within a ten-year period, a total of some 227,200 dwellings should be built on these sites in a relatively compact form. In addition, places of work and service premises will also be built [18].
This short recapitulation of Dutch urban planning illustrates, against the background of the issue of climate change, the intense shift in urban planning, namely, the shift from a brand of urban planning focused on economic growth towards a more sustainable approach.
In the course of developing the ideal of compact urban growth, a heated debate on the merits, feasibility, and costs of such growth has started throughout administrative and policy circles. The arguments used are reminiscent of those brought forward in the international literature on urbanization processes [19]. The debate shows that the implementation of urban development will not be an easy task, even if a broad consensus on its merits exists [16]. Within these urbanization processes, the Dutch governance approach already tries to balance too many interests, and the introduction of the need to adapt to the effects of climate change further complicates an already very complex decision-making process.
Dilemmas of Integrating Climate Adaptation
Based upon the earlier-mentioned challenges, it is possible to identify four dilemmas of integrating climate adaptation measures in urban planning processes. These dilemmas are addressed below.
Dilemma of the Law: Flexibility versus Robustness
Together, the three previously discussed challenges seem to demand flexibility, learning, and experiments from a governance approach [20] [21]. Some even propose flexible coalitions between all kinds of societal actors [22] [23] or bottom-up processes of self-organization aimed at increasing the resilience of a local community towards climate change [24]. Such approaches, however, largely neglect issues like legal responsibility, liability and legal security, which are essential for functioning spatial development [25] [26]. Adaptation to climate change demands not only flexibility but also robustness and reliability of spatial planning decisions [2], because planning processes must always strike a balance between the desire to change the built environment and the obligation to support legal certainty for those who have possessions in that particular area [25]. Impacts of climate change are always locally measured and tackled, and responses therefore have to be tailor-made locally [27], responding to the institutional arrangements of the local urban planning system. Hence, in order to be effective and balance flexibility and robustness, adaptation needs to match the institutional arrangements for planning and fit into the local planning processes [1]. Accordingly, in order to deal with the challenges of climate adaptation, the planning system needs a legal system that is firmly rooted in broadly accepted values as well as flexible enough to adjust to local circumstances. This can be considered the backbone of the governance approach needed to adapt to climate change. It, more or less, guarantees continuity and legal certainty in otherwise unpredictable processes. It must guarantee a minimum quality and cannot be too detailed. In the latter respect, the Dutch system does not fit this requirement, because its detailed legal system sets limits to the much-needed flexibility in urban planning.
Dilemma of Applying Principles of Decision-Making to Climate Adaptation Measures
In the implementation of climate adaptation measures, three principles of decision-making must be considered: the precautionary principle, the proportionality principle, and the cost recovery principle [2].
The precautionary principle applies to situations that are difficult to grasp or calculate and for which negative effects of an activity or event are likely but not yet sufficiently proven by scientific evidence. In such situations, the principle demands that decision makers must not defer measures that avert or reduce the potential damage on the basis of insufficient evidence [28] [29]. Their decision is thus based on discretion rather than measurable facts [29]. This principle, however, requires that the issues at stake are on the agenda of decision makers and, moreover, "that this agenda has a high priority" [28]. The precautionary principle is applied in many international treaties, but it can also be found in Dutch law [29].
The proportionality principle also gives a guideline for deciding on specific measures. Measures need to match three requirements: they must be suitable to protect the interest at stake (there needs to be a causal relationship), they must be the least restrictive choice (compared to alternatives), and "the restriction caused by the measure must not be out of proportion to the objective pursued" [30]. This, of course, conflicts with the three challenges mentioned earlier, in particular the first two: the uncertainty of the effects measures may have and their competing character.
Third, the cost recovery principle implies that all costs of a measure need to be covered by some form of revenue. In environmental issues (and climate adaptation can be considered as such), the "polluter pays principle" is applicable as cost recovery. It fulfils an informational, steering, and financing function [31]. The difficulty with climate adaptation is that the polluter often cannot be clearly identified. Even the beneficiary of climate adaptation measures is not always evident, which is a consequence of the multifaceted character of climate adaptation.
So, each of these principles plays an important role in every decision-making process about adaptation to climate change. Ideally, these principles should play a fairly equal role in each planning process. However, as shown above, these principles all have difficulties associated with them that affect how the challenges of climate change should be approached.
Dilemma of Technical Safety Standards: Protection versus Moral Hazard
While the above principles can be applied to climate adaptation, most of them also sustain technical safety standards (such as design levels for flood protection), as these are the best justifiable solutions according to the precautionary principle, the principle of proportionality, and the cost recovery principle.
Generally, it is expected that the use of such standards will raise the general physical quality of an area. Yet, the use of norms is in conflict with the need for flexibility in specific local circumstances. Especially in situations of uncertainty and complexity, flexibility in planning rules and norms is essential [32].
The dilemma with technical safety standards is the following: such standards are politically approved and very common in environmental policy, for example, in the European Water Framework Directive or the European Air Quality Directive, but they are also common in flood risk management [33]. These standards are essential for building structures like levees, measuring dams and polders, sizing drainage systems, or drawing flood risk maps [34]. A safety standard determines the maximum impact on the subject of protection, such as the maximal reasonable pollution or damage. However, reasonable impact events also need to be determined. In this context, "reasonable" means that the severity of an event must not exceed a determined impact on the values. This indicates that the technical measure would have to protect against rare but heavy events while also tolerating frequent smaller events. Most real-world applications, however, fall short of this idea. Often, technical measures protect against small and frequent events and fail against infrequent and heavy events. This can be seen in the use of dikes against flooding [33] or the famous Dutch barriers, the "Deltaworks", against storm floods. Moreover, security cannot be guaranteed by technical safety standards; rather, they provide the illusion of security [35]. Such technical safety standards provoke additional value accumulation, because people feel safe behind them and become complacent [36]-[38]. In environmental research, this dilemma is called a moral hazard [28] [39].
Dilemma of Science and Knowledge: Facts versus Social Constructions
Scientific information might seem to play a role in unraveling the complexity of applying climate adaptation measures. Yet, research shows that scientific information in general (including GIS tools) is not easily incorporated into planning processes [40] [41]. This applies even more so to information about the effects of climate change. Ren et al. [8] concluded from their study on the Dutch city of Arnhem: "The existing gap between the practical world of spatial planning and the scientific world of climate studies will never be closed. The tool remains just a fragile bridge to cover a huge gap".
To bridge the gap Ren described, the earlier-identified challenges must be met [8]. It is necessary to organize the support of the different actors that have a stake in the process of adaptation to climate change. A first requisite is the use of a common knowledge base. Without an agreement between the relevant actors as to the basic analysis of the common problems that must be addressed, there is no chance of a (common) solution. The information from the analysis of urban climate maps can provide not only this knowledge, but also a common "language" about the effects of climate change. If the relevant actors agree upon language and analysis, the necessary basis is created for common solutions, although these requirements do not necessarily and automatically lead to consensual solutions [42].
Capacity Building in a Three-Stage Procedure
The key question that remains is how to organize these solutions. The section above identified four dilemmas: first, coping with the requirement for flexible governance schemes on the one hand while providing legal security for the implementation of measures on the other hand; second, applying principles of decision-making to climate adaptation; third, managing technical quality standards on the one hand and avoiding their moral hazards on the other hand; and fourth, balancing measurable knowledge with a socially constructed common understanding of the problem.
This leads to the next question: How can public and private actors be mobilized in order to develop such an adaptation strategy?
It is important to realize that a new adaptation strategy has to be integrated into an existing (and functioning) urban planning system. A strategy, firmly supported by a common understanding of the effects of climate change, starts with building common ground, which can function as a basis for the development of "goal entwinement" [1]. This can be understood as capacity building. The core characteristic of goal entwinement is the acceptance of differences between actors [43]. This acceptance provides a basis for a cooperative attitude between actors and the creation of opportunities, for example, sharing costs and resources in the case of an integrated design for area development. In fact, the introduction of climate-proof elements can provide benefits for public actors as well as for private actors, because it can easily be embedded as replacement investments in day-to-day issues in urban planning, such as maintenance issues [1]. Thus, not only does the urban quality improve, but the value of the real estate involved may also increase. The latter implies a positive incentive not only for the owner, but also for municipalities, as it may result in higher revenue from property taxation [44]. So, this acceptance might be able to prevent conflicts of interest.
Analytically, three stages can be distinguished in such a strategy: awareness, recognition, and urgency. The process always starts by generating an awareness of the problem. It is easy to see how the role of information and science fits into this stage. In Rotterdam, in the case of Feijenoord (an unembanked neighbourhood), the local government made the first step towards awareness by bringing public and private parties together in "expert meetings". In interviews, the participants stated they were satisfied with these meetings but that more steps should be taken [1]. The latter statement showed that recognition of the problem was around the corner but not yet reached. The same is true for a sense of urgency, although the ever-present threat of flooding in Feijenoord ensures a constant sense of urgency in this particular unembanked neighbourhood.
The stage of awareness must lead to recognition of the problem, as the case of Feijenoord shows. This recognition, in its turn, has to be translated into some sense of urgency. For, without recognition, the problem will be denied, and without some sense of urgency, there will be no need for policy actions. In both cases, nothing will happen. Common recognition of the problem, in combination with a common sense of urgency, forms the basic condition for a joint strategy that can bring about the adaptation to climate change within urban planning processes.
In the third stage, a common strategy can be developed in which priorities are set, instruments developed, and investment schemes agreed upon. The case of Feijenoord showed the mutual dependency of market parties and local officials. Both perceived a shared responsibility towards climate change [1]. However, the Feijenoord case also showed that hierarchical steering by public actors strengthened the dissatisfaction of market parties in this process. From a traditional water management perspective, this type of steering is understandable, because it refers to (legal) certainty and equality of rights, which is strongly embedded in water policy in the Netherlands. This third stage can, however, benefit significantly if the strategy is based upon the three principles mentioned above because, in that case, it is optimally tailored to the issues as well as to the governance context in which it has been developed. The case of Feijenoord shows that small, flexible measures in public space support growth in trust between public and private partners [1], as do protective measures in vulnerable public areas. On the other hand, the same study shows that it seemed hard to gain trust through larger projects, such as the elevation of land.
Conclusions: Soft Interests in a Hard Process
Why is such a careful arrangement for adapting to climate change in urban planning necessary? One has to keep in mind that the relevant actors will first view the development of an adaptation strategy as no more than some form of external integrative interest that is eager to penetrate into the urban planning system, as so many sorts of interests do all the time. From this perspective, the attempt will trigger all the usual defense mechanisms. One may expect a lot of opposition to any attempt to start an adaptation strategy due to its more disagreeable characteristics [2]. Consequently, it will be interpreted as an extremely weak interest without an economic and/or financial position to back it up. Its long-term character and weak financial position will especially put any adaptation strategy in urban planning in this relatively weak position, because relevant actors will first see the associated extra costs for an issue of lower priority. However, if a carefully laid out strategy, organized in stages, is accompanied by support from higher tiers of government, this position can change significantly [45]. As ideological arguments (with a weak economic impact) work better at higher administrative levels of government, it should be possible to arrange support. As pragmatism rules the lower tiers of government, a lot of support is needed to overcome the existing (and new) barriers to make the adaptation to climate change in urban planning a success. Although this road may appear a tedious one, it is nevertheless an exciting challenge in which both practitioners and scientists can play an important role.
Particle production and chemical freezeout from the hybrid UrQMD approach at NICA energies
The energy dependence of various particle ratios is calculated within the Ultra-Relativistic Quantum Molecular Dynamics approach and compared with the hadron resonance gas (HRG) model and measurements from various experiments, including RHIC-BES, SPS and AGS. It is found that the UrQMD particle ratios agree well with the experimental results at the RHIC-BES energies. Thus, we have utilized UrQMD in simulating particle ratios at other beam energies down to 3 GeV, which will be accessed at NICA and FAIR future facilities. We observe that the particle ratios for crossover and first-order phase transition, implemented in the hybrid UrQMD v3.4, are nearly indistinguishable, especially at low energies (at large baryon chemical potentials or high density).
I. INTRODUCTION
One of the main goals of heavy-ion experiments is the characterization of strongly interacting matter under extreme conditions of high temperature and density [1]. Examining the possible quark-hadron phase transition(s) plays a crucial role in verifying quantum chromodynamics (QCD), the theory of strong interactions, which predicts that confined hadrons likely undergo phase transition(s) to partonic matter called quark-gluon plasma (QGP) [2]. So far, various signatures for QGP formation have been verified experimentally [3]. The statistical-thermal models [4-11] are successful approaches explaining, among other observables, the produced particle yields and their ratios. At chemical equilibrium, it is conjectured that the particle ratios are well described by at least two parameters, the baryon chemical potential (µ_b) and the freezeout temperature (T_ch). The chemical freezeout is defined as the stage during the evolution of the high-energy collision at which inelastic collisions are assumed to cease, so that the numbers of produced particles become fixed. Particle production measured at the Schwerionensynchrotron (SIS18) [12,13], the Alternating Gradient Synchrotron (AGS) [5], the Super Proton Synchrotron (SPS) [7], and the Relativistic Heavy-Ion Collider (RHIC) [14-17] has been successfully reproduced within the statistical-thermal approaches [4].
The dependence of both freezeout parameters (T_ch and µ_b) on the nucleon-nucleon center-of-mass energy (√s_NN), known as the chemical freezeout boundary, looks very similar to the QCD phase diagram separating confined hadrons from the deconfined QGP [18]. In lattice QCD simulations [19,20], which are very reliable at µ_b/T ≤ 1, i.e., at √s_NN greater than the top SPS energies, the dependence of T_ch on µ_b appears very close to the QCD critical line. At larger µ_b (lower energies), the two boundaries become distinguishable [21]. In this region, lattice QCD simulations suffer from serious numerical difficulties (the so-called sign problem). Thus, we are left with effective models such as statistical-thermal models and QCD-like approaches, including the linear-sigma and Nambu-Jona-Lasinio models. So far, there are various phenomenological proposals suggesting universal conditions describing the chemical freezeout boundary. For a recent review, the readers are kindly advised to consult Ref. [4]. Recently, possible interrelations among the various freezeout conditions have been derived [21].
While the region of high temperature and low baryonic density in the QCD phase diagram is explored by the experiments at RHIC and the LHC, the region of relatively low and intermediate energies will be covered by the future programs: BES-II at RHIC, the Nuclotron-based Ion Collider fAcility (NICA) at the Joint Institute for Nuclear Research (Dubna), and the Facility for Antiproton and Ion Research (FAIR, Germany). NICA will host the fixed-target experiment BM@N, with beam energies E_kin = 1-4.5 AGeV, and the collider experiment MPD, covering the collision energy range 4 ≤ √s_NN ≤ 11 GeV. At FAIR, the fixed-target experiment CBM will operate at E_kin up to 11 AGeV (SIS100).
In the present work we utilize the hadron resonance gas (HRG) model [21] and the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model v3.4 [22] in order to estimate various particle ratios at energies ranging from √s_NN = 3 to 19.6 GeV. The freezeout parameters (T_ch and µ_b) are determined from the statistical fit of various particle ratios from UrQMD simulations of Au-Au collisions at √s_NN = 3, 5, 7.7, 11.5 and 19.6 GeV. The last three energies are part of the RHIC beam energy scan program (BES-I), for which experimental values of the freezeout parameters are also available. A comparison between the simulated and experimental results is discussed. The convincing agreement between the UrQMD-based and experimental parameters encourages us to extend the study, through UrQMD simulations, down to 3 GeV, where the baryon density likely reaches its maximum and which shall be covered by the future NICA and FAIR facilities.
It is obvious that the HRG is an effective statistical model that is only applicable to the produced particles in the final stages of the temporal and spatial evolution of the high-energy collision. Thus, neither the chiral nor the deconfinement phase transition(s) can be modelled in such statistical approaches, which are based on the Hagedorn bootstrap picture [21].
The present paper is organized as follows. Section II gives short reviews of both approaches, the HRG and UrQMD models. In Section III, the energy dependence of various particle ratios (Section III A) and the deduced freezeout parameters (Section III B) are presented. In Section IV, the conclusions are outlined.
II. APPROACHES
The hybrid UrQMD model is used to calculate various particle ratios at energies ranging from √s_NN = 3 to 19.6 GeV. This is appropriate as long as the location of the critical endpoint is not yet known; its estimated location varies widely in both the µ and T directions. The freezeout parameters, the temperature (T) and the baryon chemical potential (µ_b), are calculated from the HRG model through a statistical fit of the particle ratios obtained from the UrQMD data. These data were generated with either a first-order or a crossover phase transition. Our UrQMD ensembles contain 10,000 and 150,000 events for high and low energies, respectively.
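For orientation, the statistical error assigned to a ratio built from such event ensembles can be estimated by simple Poisson error propagation, as in the sketch below (the particle counts are placeholders, not actual UrQMD output):

```python
# Sketch: a particle ratio and its statistical error from ensemble counts,
# assuming independent Poisson-distributed yields.
import math

N_km, N_kp = 41_250, 118_400          # total K- and K+ counts (illustrative)
ratio = N_km / N_kp
err = ratio * math.sqrt(1.0 / N_km + 1.0 / N_kp)
print(f"K-/K+ = {ratio:.4f} +- {err:.4f}")
```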
A. Hadron Resonance Gas (HRG) model
In the grand-canonical ensemble, the partition function of an ideal gas consisting of hadrons and resonances is given as [4]

Z(T, V, \mu) = \mathrm{Tr}\,\exp\!\left[\frac{\mu N - H}{T}\right], \qquad (1)

where H is the Hamiltonian, and µ and T are the chemical potential and temperature of the system of interest, respectively. The Hamiltonian counts the relevant degrees of freedom of the confined, strongly interacting medium. Interactions (correlations) can be included implicitly, for instance, the ones responsible for resonance formation, i.e., strong interactions. In the HRG model, Eq. (1) sums up contributions from a large number of hadron resonances [4] consisting of light and strange quark flavors, as listed in the most recent particle data group compilation with masses ≤ 2 GeV [23]. This corresponds to 388 different states of mesons and baryons besides their anti-particles. More details can be found in Ref. [4]. The decay branching ratios are also taken into consideration. For decay channels with not-yet-measured probabilities, we follow the rules given in Ref. [24]. No finite-size correction was applied [6]. The grand-canonical partition function then reads

\ln Z(T, V, \mu) = \sum_i \pm \frac{V g_i}{2\pi^2} \int_0^{\infty} p^2\, dp\, \ln\!\left[1 \pm \lambda_i \exp\!\left(-\frac{\varepsilon_i(p)}{T}\right)\right], \qquad (2)

where ± represent fermions and bosons, respectively, \varepsilon_i = \sqrt{p^2 + m_i^2} is the dispersion relation of the i-th particle, and λ_i is its fugacity factor [4],

\lambda_i(T, \mu) = \exp\!\left(\frac{b_i \mu_b + S_i \mu_S + Q_i \mu_Q}{T}\right), \qquad (3)

where b_i (µ_b), S_i (µ_S) and Q_i (µ_Q) are the baryon, strangeness and electric-charge quantum numbers (and the corresponding chemical potentials) of the i-th hadron, respectively. The number density of the i-th particle can be derived as the derivative of ln Z with respect to the chemical potential of the corresponding quantum number. Such a particle can be a stable hadron or a decay product of heavier resonances,

n_i^{\mathrm{final}} = n_i + \sum_j b_{j \to i}\, n_j, \qquad (4)

where b_{j→i} is the decay branching ratio of the j-th hadron resonance into the i-th stable particle or antiparticle. In a statistical fit of various particle ratios with the UrQMD simulations, or with the measurements at different energies, T and µ_b are taken as free parameters. Details about the statistical fit can be found in Refs. [25,26]. The resulting values can be related to each other and, separately, each of them to the center-of-mass energy (√s_NN) [4].
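As a rough numerical illustration of Eqs. (2)-(3), the sketch below evaluates the ideal quantum-gas number density for a single species at an example freezeout point. The masses and the (T, µ_b) point used are illustrative only; a full HRG calculation would sum all 388 states and add the resonance feed-down of Eq. (4):

```python
# Sketch of the ideal quantum-gas number density behind Eqs. (2)-(3):
#   n_i = g_i/(2 pi^2) * int p^2 dp / [exp((eps_i - mu_i)/T) +- 1],
# with eps_i = sqrt(p^2 + m_i^2); natural units (GeV), converted to fm^-3.
import numpy as np
from scipy.integrate import quad

HBARC = 0.19733  # GeV * fm

def number_density(m, g, mu, T, eta):
    """eta = +1 for fermions, -1 for bosons; result in fm^-3."""
    def integrand(p):
        eps = np.sqrt(p * p + m * m)
        return p * p / (np.exp((eps - mu) / T) + eta)
    val, _ = quad(integrand, 0.0, 50.0 * T + 10.0 * m)
    return g / (2.0 * np.pi ** 2) * val / HBARC ** 3

T, mu_b = 0.160, 0.025  # GeV; an example freezeout point near top BES energies
n_pip = number_density(m=0.138, g=1, mu=0.0, T=T, eta=-1)   # pi+
n_p = number_density(m=0.938, g=2, mu=mu_b, T=T, eta=+1)    # proton
print(f"n(pi+) = {n_pip:.4f} fm^-3, n(p) = {n_p:.4f} fm^-3")
```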
It is noteworthy to emphasize where and when the HRG model can be applied. As mentioned, different numerical methods and algorithms seem to fail in reproducing even the well-identified particles (low-lying states such as pions, kaons and protons) at very low beam energies or, equivalently, very large baryon chemical potentials. In this energy limit, some particle species cannot be accessed. This is illustrated in Section III, especially Fig. 4. In general, the HRG model is a very powerful statistical approach. In spite of its simplicity, it has found a wide range of applications, especially in describing various aspects of lattice QCD thermodynamics and particle production in heavy-ion collisions. The latter is limited to the final state, after the chemical freezeout era. All prior eras of the relativistic collision are not accessible to the HRG model; these are the subject of transport approaches. UrQMD characterizes almost the entire evolution of the colliding system, from the very early stages up to particle production, including the hydrodynamical evolution and particlization. This transport approach is elaborated in the section that follows.
B. Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model
The UrQMD event generator [27] is a well-known simulation approach enabling the characterization of high-energy collisions. It simulates the phase space of such collisions and implements a large set of Monte Carlo solutions for a large number of coupled partial differential equations describing the evolution of phase space densities. The UrQMD model simulates the development of the colliding system from a possibly very early stage (depending on the chosen configuration) up to the final state of particle production. Its large number of unknown parameters can be fixed from experimental results and by theoretical assumptions.
In the present calculations, we use hybrid UrQMD v3.4 [22], which has been tested and gives reasonable results in the energy range E_lab = 2-160 AGeV with standard parameter settings. Furthermore, hybrid UrQMD v3.4 provides the possibility to use two types of phase transition: first-order and crossover. This allows us to study the possible effects of the hadronization process on the final-state particle production.
In hybrid UrQMD 3.4, for the case of a crossover, the equation of state (EoS) for the fluid dynamical evolution is borrowed from the SU(3) parity doublet model, in which quark degrees of freedom and the thermal contribution of the Polyakov loop are included [28,29]. This EoS qualitatively agrees with the lattice QCD results at vanishing baryon chemical potential and, most importantly, is conjectured to be applicable at finite baryon chemical potentials as well. For the first-order phase transition, an EoS from an SU(2) bag model is included. By the end of the hydrodynamical evolution, the active EoS is changed to the one characterizing the hadron gas. Accordingly, it is assured that the active degrees of freedom on both sides of the transition hypersurface are exactly equivalent [28-30].
For the first-order phase transition, UrQMD v3.4 uses the approach proposed in [31]. The nuclear matter is described by a σ-ω-type model in the hadronic phase and by the MIT bag model in the quark-gluon plasma phase, with a first-order phase transition between the two phases.
For the sake of completeness, we emphasize that two differences between crossover and first-order phase transitions are the latent heat and the degrees of freedom; in a first-order phase transition both are larger than in a crossover. Furthermore, a crossover takes place smoothly, i.e., a relatively wide range of temperatures is needed to convert the QCD matter from purely hadronic to partonic matter or vice versa, while there is a prompt jump in the case of a first-order phase transition, i.e., the critical temperature is very sharp [33].
As a limitation, a mere technical aspect of UrQMD has some influence on its physical outcome. When the program switches from the hydrodynamical treatment of the high-density stage of the hadronic medium back to the "normal" particle-based transport code, some bias may occur in the resulting particle statistics [34]. Because we selected the same particlization procedure in both cases, the differences in particle ratios between the first-order and crossover cases might appear smaller than implied by the physical model.
III. RESULTS AND DISCUSSION
Particle ratios in this work are studied at the energies √s_NN = 3, 5, 7, 7.7, 9, 11, 11.5, 13, 19 and 19.6 GeV. The energies 7.7, 11.5 and 19.6 GeV are part of the STAR BES program, and for these energies comparable measurements also exist from experiments at the Super Proton Synchrotron (SPS), such as NA49, NA44, and NA57 [24]. The energy range √s_NN = 3-11 GeV will be reached at the future NICA facility, so it is interesting to study particle ratios in both regions, experimentally and with model approaches. If the hybrid UrQMD model reproduces the STAR results relatively well, further UrQMD simulations at 3 and 5 GeV can serve as predictions for the future experiments at NICA. From an ensemble of events created by the hybrid UrQMD model at various energies, taking into account two types of quark-hadron phase transition (crossover and first-order), we study the ratios of various particle species. To begin, we determine their energy dependence. From the statistical fit of the HRG model to the ratios simulated with UrQMD, and independently to the data from the STAR experiments, both freezeout parameters were calculated.
The HRG particle ratios are determined from Eq. (4), in which the baryon chemical potential (µ_b) is related to √s_NN via [25]

\mu_b = \frac{a}{1 + b\,\sqrt{s_{NN}}}, \qquad (5)

where a = 1.245 ± 0.094 GeV and b = 0.264 ± 0.028 GeV^{-1}. The HRG calculations are in good agreement with both the measurements and the UrQMD predictions, at least qualitatively. For some particle ratios the agreement is better than for others. It should be noticed that these calculations will be fine-tuned in order to reproduce both the UrQMD and the experimental results. In doing this, both freezeout parameters will be taken as free variables. Adjusting both parameters brings the HRG calculations to quantitative agreement with the UrQMD and experimental results. It is worthwhile noticing that the particle ratios from the two types of phase transition are almost indistinguishable, especially at lower energies (larger baryon chemical potentials). In Fig. 2, the energy dependence of the UrQMD π−/π+, K−/K+, p̄/p, Λ̄/Λ and Σ̄/Σ ratios is illustrated and compared with HRG calculations and various measured ratios: π−/π+ (a) [35-38], K−/K+ (b) [35-38], p̄/p (c) [35-37,43], Λ̄/Λ (d) [35-41] and Σ̄/Σ (e) [44-46]. Again, it is obvious that both orders of the phase transition implemented in the hybrid UrQMD reproduce, at least qualitatively, both the measured (STAR) and calculated (HRG) particle ratios. Concretely, the particle ratios from the crossover phase transition are slightly higher than the ones from the first-order transition. Furthermore, we observe that the agreement between the UrQMD simulations or HRG calculations for these particle-antiparticle ratios and their measurements is fairly convincing, at least qualitatively.
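Evaluated over the energies considered here, the parametrization of Eq. (5) gives the following illustrative values (a minimal sketch, assuming the functional form reconstructed above):

```python
# Sketch: mu_b(sqrt(s_NN)) = a / (1 + b*sqrt(s_NN)), Eq. (5).
a, b = 1.245, 0.264  # GeV and 1/GeV

def mu_b(sqrt_s_gev):
    return a / (1.0 + b * sqrt_s_gev)

for s in (3.0, 5.0, 7.7, 11.5, 19.6):
    print(f"sqrt(s_NN) = {s:5.1f} GeV -> mu_b ~ {1000.0 * mu_b(s):5.0f} MeV")
```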
Characterizing the energy dependence of ten particle ratios from simulations, calculations and measurements, and successfully reproducing, at least qualitatively, both the UrQMD predictions and the STAR measurements by means of the statistical-thermal HRG model, furnishes us with solid grounds for deducing the freezeout parameters from the given data sets. In doing this, we assume that the UrQMD simulations take the position of experiments such as STAR. The freezeout parameters determined at the energies covered by STAR BES (7.7, 11.5 and 19.6 GeV) are compatible with the UrQMD simulations with a crossover phase transition. The results from the HRG statistical fits, which agree well with the STAR BES measurements [47-50], can be summarized as follows.
B. Determining freezeout parameters
The study of the energy dependence of various particle ratios paves the way towards determining the freezeout parameters, which are taken as free parameters in the HRG approach, from the UrQMD simulations. The statistical fit of the HRG calculations to the UrQMD results is motivated by the excellent agreement between the UrQMD and STAR particle ratios at the given RHIC-BES energies. The quality of the statistical fit is measured by the minima of

\chi^2 = \sum_i \frac{\left(R_i^{\mathrm{exp}} - R_i^{\mathrm{theor}}\right)^2}{\sigma_i^2}, \qquad (6)

q^2 = \sum_i \frac{\left(R_i^{\mathrm{exp}} - R_i^{\mathrm{theor}}\right)^2}{\left(R_i^{\mathrm{theor}}\right)^2}, \qquad (7)

where R_i^exp and R_i^theor are the i-th measured and calculated particle ratios, respectively, and σ_i represents the error in its measurement. In UrQMD, σ_i is restricted to the statistical errors of each particle ratio.
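Schematically, the two-parameter fit amounts to minimizing Eq. (6) over (T, µ_b). The sketch below illustrates this with a deliberately simplified stand-in for the HRG side (toy Boltzmann-like expressions and invented input numbers), not the full calculation of Eq. (4):

```python
# Sketch: extracting (T_ch, mu_b) by minimizing the chi^2 of Eq. (6).
import numpy as np
from scipy.optimize import minimize

def toy_hrg_ratios(T, mu_b):
    # Placeholders for Eq. (4): one T-sensitive strange/non-strange ratio and
    # two mu_b-sensitive antiparticle/particle ratios (Boltzmann-like toys).
    return np.array([
        np.exp(-2.0 * mu_b / T),   # toy pbar/p
        2.0 * np.exp(-0.35 / T),   # toy K+/pi+ with a ~0.35 GeV mass gap
        np.exp(-mu_b / T),         # toy strange antibaryon/baryon
    ])

R_sim = np.array([0.02, 0.16, 0.15])   # "simulated" ratios (invented)
sigma = np.array([0.005, 0.01, 0.02])  # statistical errors (invented)

def chi2(params):
    T, mu_b = params
    return np.sum(((R_sim - toy_hrg_ratios(T, mu_b)) / sigma) ** 2)

res = minimize(chi2, x0=[0.150, 0.300], bounds=[(0.05, 0.25), (0.0, 0.9)])
T_fit, mu_fit = res.x
dof = len(R_sim) - 2
print(f"T_ch ~ {1000 * T_fit:.0f} MeV, mu_b ~ {1000 * mu_fit:.0f} MeV, "
      f"chi2/dof = {res.fun / dof:.2f}")
```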
For the particle ratios K+/π+, K−/π−, π−/π+, K−/K+, Λ/π−, p̄/p, Λ̄/Λ, Σ̄/Σ, p/π− and Ω/π−, a comparison of the HRG statistical fits (dashed lines) with the UrQMD simulations with a crossover phase transition (solid lines) and with the STAR measurements [47-50] at 7.7, 11.5 and 19.6 GeV (open symbols) is illustrated in panels (a), (c) and (e) of Fig. 3. It is apparent that the ability of hybrid UrQMD to reproduce the STAR particle ratios increases with the beam energy. This is also reflected in the corresponding χ² per degree of freedom (dof), Tab. I. The same observation can be made from the qualities of the HRG statistical fits to both the UrQMD and STAR results. The comparison between the hybrid UrQMD simulations and the STAR measurements is shown in these three panels in order to argue for further UrQMD simulations at other energies, such as 3, 5, 7, 9, 11, 13 and 19 GeV.
Table I: Estimated freezeout parameters, T_ch and µ_b in MeV, from the statistical fits of the HRG calculations to the hybrid UrQMD simulations in which a crossover phase transition is taken into consideration.
These ten particle ratios for the crossover phase transition are also depicted in panels (b) and (c). Fits for the first-order phase transition are illustrated in Fig. 4 [47-50]; the smallest χ²/dof is given in each graph.
The resulting freezeout parameters from the hybrid UrQMD simulations with crossover and first-order phase transitions, Tab. I and Tab. II, respectively, are depicted as a thick solid curve (crossover) and a dashed curve (first-order) in Fig. 5. The present calculations are also compared with other estimations (symbols): phenomenologically deduced freezeout parameters from measured particle ratios, Cleymans et al. [51], Tawfik et al. [25,26], HADES [52] and FOPI [53], and from measured higher-order moments of the net-proton multiplicity, the SU(3) Polyakov linear-sigma model (PLSM) and HRG [54]. The thin curve represents the HRG estimations at the freezeout condition s/T³ = 7. At a given µ_b, which is related to the beam energy √s_NN via Eq. (5), the freezeout temperature has been determined from the HRG model (Section II A) as the temperature at which the condition s/T³ ≃ 7 is nearly fulfilled.
It is obvious that the UrQMD results agree well with the thermal-model calculations based on the higher-order moments of the net-proton multiplicity [54]. Also, the calculations from the SU(3) PLSM are slightly lower than both UrQMD variants. There is a very small difference between UrQMD with crossover and with first-order phase transition, as can be seen from Tab. I and Tab. II. Accordingly, we conclude that the resulting freezeout parameters are not affected by the type of the phase transition.
The main reason for the small difference between the crossover and first-order transitions could be that our UrQMD simulations used the same value of the particlization criterion in both cases. The influence of this criterion will therefore be the subject of future investigations.
When we determine the freezeout parameters from statistical fits of the HRG calculations to the measured particle ratios and compare them with the fits to UrQMD, we observe that the former are relatively higher. This might be due to the assumptions that the constituents of the HRG model are point-like, i.e., no excluded-volume corrections were taken into account, and that the light- and strange-quark occupation factors γ_f, where f runs over the quark flavors, are unity, i.e., that the quarks are in chemical equilibrium. These factors multiply the fugacity parameters λ_i in Eq. (2); at equilibrium they are omitted, as their values are unity, while in non-equilibrium the quark occupation factors should be stated explicitly.
IV. CONCLUSION
Ten particle ratios are generated from the hybrid UrQMD v3.4 at different nucleon-nucleon center-of-mass energies. Two types of quark-hadron phase transition, crossover and first-order, are taken into consideration. The energy dependence of the resulting particle ratios is compared with the HRG calculations and with various measured results from the STAR experiments. Within the energy range considered in this study, good agreement is observed, at least qualitatively.
We observe that almost all particle ratios from both types of phase transition are nearly indistinguishable, especially at lower energies (larger baryon chemical potentials). This might be interpreted in such a way that the chemical freezeout, at which the particle numbers should be fixed, apparently takes place immediately after the hadronization process, and accordingly the particle production at this chemical-equilibrium stage does not differ with respect to its origin. Concretely, we find that for some particle ratios the simulations with a crossover phase transition result in slightly higher temperatures than those with a first-order transition, and vice versa for other ratios. All particle-to-antiparticle ratios regularly result in slightly higher temperatures for the crossover phase transition. For these ratios, the agreement between the UrQMD or HRG calculations and the measurements is fairly good.
From the energy dependence of the UrQMD particle ratios and the conclusion that the HRG model qualitatively reproduces them, as well as the STAR measurements, we have deduced both freezeout parameters. In doing this, we assume that the UrQMD simulations serve as experimental inputs; the corresponding uncertainty is determined by the statistical errors. We have determined the freezeout parameters at the STAR BES energies 7.7, 11.5 and 19.6 GeV, whose particle ratios are found to be compatible with the UrQMD simulations with the crossover phase transition. The resulting freezeout parameters agree well with the ones determined from the statistical-thermal fits of the STAR particle ratios at these energies.
It is found that the resulting freezeout parameters from hybrid UrQMD agree well with the HRG calculations in which higher-order moments of the net-proton multiplicity are utilized. Furthermore, the freezeout temperatures deduced from the SU(3) PLSM are slightly lower than the ones from both of them. We conclude that the resulting freezeout parameters are not influenced by the order of the quark-hadron phase transition, or that the aforementioned particlization bias has a possibly small influence.
The HRG freezeout parameters determined from the statistical fit of the measured particle ratios are relatively higher. This might be understood as a consequence of the assumptions of point-like constituents and of equilibrium light- and strange-quark occupation factors in the HRG model. Furthermore, the Parton-Hadron-String Dynamics (PHSD) [55] and the Three-Fluid Hydrodynamics (3FH) [56] approaches are conjectured to perform much better than UrQMD at low energies, towards the NICA energy range. In a future study, we plan to compare all of these approaches at NICA energies.
“Going the extra mile”: A descriptive exploratory study of Primary Health Services based on the experiences of Pacific Primary Health Organisation Service managers and providers
INTRODUCTION: This exploratory study is part of a larger evaluation of the primary health care strategy (PHCS) in Aotearoa New Zealand, using a mixed methods research approach. The aims of this qualitative arm of the research were to explore the extent of use and satisfaction with the PHCS through the operation of Pacific-led Primary Health Organisations (PHOs) in relation to service provision and delivery from the service providers’ and managers’ perspectives. METHOD: The exploratory study was conducted using a case study design and in-depth interviews with service managers and health providers at six Pacific-led PHOs. A review of the literature on primary healthcare was conducted prior to undertaking the research. In this literature review, several themes were noted from the review of policy documents providing background to the development of primary healthcare in New Zealand. CONCLUSION: The themes from interviews suggest a core tension between the business model, Ministry reporting requirements, and more altruistic values of both managers and service providers in their delivery of services. Overall, there was a positive response to the lowered cost of healthcare from the providers and managers interviewed in the Pacific-led primary health services, mirroring the findings of the larger evaluation report of PHOs (Cumming et al., 2005). The availability of wrap-around, holistically based, accessible services delivered by culturally responsive health providers who were considered to “go the extra mile” for their clients was the predominant theme accounting for an increased uptake and use of the services. The implications for health social work are discussed.
Introduction
This article begins by providing the demographic and historical context in which the Pacific-led PHOs were developed in Aotearoa New Zealand from the mid-2000s. Definitions of the terms Pacific and Pasifika are given, and some of the key barriers in the provision and uptake of services are then outlined. The literature review underpinning the research is reported, along with the aims and objectives of the project and its research design and methods. Finally, the results of the data analysis are summarised and the implications for health social work are discussed.
Definitions and demographic trends
Pacific peoples, within the context of the present study, is an umbrella term used to describe those residents and citizens living in Aotearoa New Zealand who self-identify culturally with one or more of the predominant Pacific cultures living there. The predominant Pacific cultures represented in the present Aotearoa New Zealand population include: Samoan, Tokelau Islander, Cook Island Māori, Niuean, Tongan and Fijian. The term Pasifika relates to those born in Aotearoa New Zealand with a Pacific heritage, and has been used to distinguish those residents from those born in the Pacific Islands who later migrated to live in Aotearoa New Zealand. The current research on the Pacific-led PHOs encompasses both groups, who together comprise the term Pacific peoples, or those who identify culturally with one or more of the Pacific cultures represented in Aotearoa New Zealand, regardless of place of birth.
In 2013, the Pacific ethnicity with the highest proportion of Aotearoa New Zealand-born people was Niuean, with 78.9% born in Aotearoa New Zealand. Of those self-identifying as Cook Island Māori, 77.4% were born in Aotearoa New Zealand; Tokelauan, 73.9%; Samoan, 62.7%; and Tongan, 59.8% (Statistics New Zealand, 2013). In 2013, 7.4% of the population (295,941 people) identified with one or more Pacific ethnic groups, compared with 6.9% (265,974 people) in 2006. However, the rate of growth for the Pacific peoples ethnic group slowed across recent censuses, growing 14.7% between 2001 and 2006 but only 11.3% between 2006 and 2013. The Pacific peoples ethnic groups whose growth slowed between 2006 and 2013 included Tongan, Samoan, Cook Island Māori, Niuean and Tokelauan. In contrast, the Fijian ethnic group grew by a bigger percentage between 2006 and 2013 (46.5%) than between 2001 and 2006 (40.1%) (Statistics New Zealand, 2013).
Literature review: Barriers to Pacific healthcare
The following section summarises the predominant themes found in the literature review which framed the research. These themes include the barriers faced by Pacific peoples to accessing and using healthcare in Aotearoa New Zealand; the prevalence of long-term health conditions and low uptake of health services amongst this population; and the evolution of the PHO service network aiming to ameliorate the barriers to accessing and using healthcare by Pacific peoples. Removing the business imperative from healthcare, which enables innovation including a culturally appropriate healthcare model, is the predominant theme of the literature review.
Many barriers have been identified involving Pacific peoples' access to and use of health care in Aotearoa New Zealand. Pacific peoples are disproportionately represented in the most deprived areas of the country and have poorer health status than other New Zealanders (Pack, Minister, Churchward, & Fa'asalele Tanuvasa, 2013). Thus, Pacific citizens and residents in New Zealand are a key priority group for the primary health services, given the focus on reducing inequalities in health. The PHCS was implemented by the Labour Government in Aotearoa New Zealand in the mid-2000s. The services established were evaluated to determine their impact on the delivery of primary health care services nationally and the resulting changes in the health of local geographic populations of enrolled residents. The PHCS had a focus on services for Pacific peoples provided by Pacific peoples, active involvement of Pacific communities in service delivery, further building of Pacific provider capacity, the formation of Pacific-led services, and leadership at a national level. All providers of PHOs were to identify, reach out to and address Pacific health needs (King, 2001).
The key intention of the PHCS is the removal of the business emphasis in primary health, thus opening the way to the development of culturally relevant models of healthcare provision and delivery. Re-structuring care teams to include allied health beyond simply nursing and medical staff, delivering education to patients on how to manage healthcare, and using Pacific languages in healthcare delivery are some of the ways suggested to overcome barriers to addressing Pacific health needs (Beddoe & Deeney, 2012; Döbl, Beddoe, & Huggard, 2017; Keating & Jaine, 2016; Southwick, Kenealy, & Ryan, 2012).
Accessibility of services and long-term health conditions
Concerns about the accessibility of health care, influenced by increases in the prevalence of chronic conditions and an ageing provider workforce, have dominated the literature on primary health service evaluations worldwide (Hogg, Rowan, Russell, Geneau, & Muldoon, 2008). Recent frameworks for primary healthcare internationally have emphasised the service delivery aspects guided by principles of "comprehensiveness, integration and accessibility" (Hogg et al., 2008, p. 308). In the Canadian and Aotearoa New Zealand contexts, indigenous populations have been consulted and new models of healthcare provision have thus developed. These models are designed to tackle the social determinants of health which contribute to poorer health outcomes and lower life expectancy than for European service users (Barnett & Barnett, 2004). The context of historical colonisation has been cited as influencing equity and as a social determinant of health among Pacific nations (Anderson et al., 2006). Under the Treaty of Waitangi, the founding charter between Māori and Pākehā in Aotearoa New Zealand, partnership, participation and protection are guiding principles, which necessitate a focus on identifying and addressing inequities in health as in other areas of life (Anderson et al., 2006). Social work in Aotearoa New Zealand has enshrined in its professional standards of practice standards aimed at working for greater equity under the Treaty, encompassing all areas of service provision including health (Beddoe & Deeney, 2012; Döbl et al., 2017; Pockett & Beddoe, 2017). Barriers to accessing primary health care in Aotearoa New Zealand continue to revolve around the financial cost of seeing a general practitioner, with the survival strategies of service users including delaying seeking care, lack of uptake of medication and putting others in the family first, such as children and the elderly (Barnett & Barnett, 2004; Hawley & McGarvey, 2015; Pulotu-Endemann & Faleafa, 2017).
Alongside these principles underpinning health models, the broader focus in primary healthcare has been on community empowerment, education and the demographic and cultural aspects of health (Hogg et al., 2008). Western models of health care involving diagnosis and treatment often do not conform to the cultural norms of Pacific service users and their aiga (family) and the wider nu'u (village, community). Pacific models of healthcare that address these differences need to integrate principles of choice, self-determination, and culturally relevant models of health care delivery. This goal has been achieved in the field of mental health care by translating health information into Pacific languages, providing choices of provider, a range of support services and integrating hospitality as part of the care (Agnew et al., 2004; Pulotu-Endemann & Faleafa, 2017; Southwick, Kenealy, & Ryan, 2012; Suaalii-Sauni et al., 2009; Tamasese, Peteru, Waldegrave, & Bush, 2005).
Primary health care in Aotearoa New Zealand
In relation to the history of primary health care services, in February 2001 the New Zealand government released the Primary Health Care Strategy (PHCS) with the aim of improving the health of New Zealanders and reducing health inequalities. The five- to 10-year vision of the strategy was to shift primary health care (PHC) services to focus more on the health of the population by providing services which are easy to access; improving and maintaining people's health; and coordinating their on-going care (King, 2001). Underlying this vision was a greater emphasis on the role of community participation in health improvement. PHC was seen to encompass a wide variety of services, including health promotion and preventive care, which necessitated the involvement of a wide range of health professionals (multidisciplinary teams) in the service delivery model.
To achieve the vision, the strategy emphasised six key directions for the future development of PHC in Aotearoa New Zealand: 1) work with local communities and geographic populations of enrolled residents; 2) identify and remove health inequalities; 3) offer access to comprehensive services to improve, maintain and restore people's health; 4) coordinate care across service areas; 5) develop the primary health workforce; and 6) continuously improve quality using good information (King, 2001).
A large number of PHOs were established between 2002 and 2005 whose brief was to address these aims. By mid-2008 there were 80 PHOs in operation, with additional funding to the value of $2.2 billion having been provided for further PHC service developments since 2001 (Cumming & Mays, 2011). Early evaluations have noted the unique way in which each PHO has been adapted to the communities in which they have developed. The dilemma is the struggle for smaller and remote PHOs to stay local when there have been pressures to amalgamate with larger PHOs to effect economies of scale (Gauld & Mays, 2006). These amalgamations lead to a dilemma over control of services trying to remain relevant to local resident populations whilst maintaining altruism over a concern to show a profit (Gauld & Mays, 2006).
The contribution of social workers to establishing PHO services based on social justice principles within these evaluations has indicated a synergy between social work and primary health care aims. Both are ideally structured and delivered by adhering to culturally relevant principles that acknowledge the holistic nature of health, which includes the role of spirituality, community and family participation in healthcare (Pack, 2008). Jantrana and Crampton (2009) found that ethnicity and gender were significantly associated with higher odds of deferring buying a prescription. The low uptake of dental care due to high cost was identified as a compounding factor in the escalation of physical health problems, including exacerbations of chronic conditions. Social workers, through advocating for a holistic vision of health, are ideally placed to highlight where barriers in health exist (Beddoe & Deeney, 2012; Döbl et al., 2017; Pockett & Beddoe, 2017). Social work is well placed to suggest alternative models of health care. This comprehensive, holistic model acknowledges the importance of four facets of primary care service delivery, premised on the importance of the patient and treatment provider relationship, awareness of the whole person, and gender, culture and family (Hogg et al., 2008). To evaluate the model, provider satisfaction is considered pivotal, as treatment providers who are satisfied with the services they are working within are found to be more open to alternative processes and a holistic and individually tailored approach when working in primary healthcare (Hogg et al., 2008).
Method

Research aims
The two main aims of the exploratory study were: 1) to identify the environmental and organisational context that impacted treatment providers and service managers of the PHOs; and 2) to identify the structural aspects of the policy and governance of the practice agency and its impact on the delivery of services by the provider, and, therefore, its impact on health outcomes.
In undertaking this exploratory study, our research team, comprising four Pacific health researchers, had earlier completed the interviews and transcribed the audio recordings. The author was then invited to analyse the data, report the major themes from the interviews, and develop recommendations that were to sit alongside the larger mixed methods study on Pacific patients and their families' perspectives of the same PHOs. As I had been involved earlier in the establishment, development and service management of a culturally led PHO that was not part of the research, the team requested my involvement, as they valued my background to provide rich and in-depth knowledge of the field of PHO development. The overarching study received Research Ethics Approval from Victoria University of Wellington's Research Ethics Committee.
Research design and methodology
An exploratory, descriptive, qualitative research design and methodology were used to explore the service managers' and providers' perspectives of the structure and the day-to-day operation of their PHOs. The researchers adopted a case study approach based on Yin's description of case study (Yin, 2009). Each PHO was considered to be an example or a case, in the sense that each PHO had developed uniquely relative to its management structure/governance, service establishment and delivery due to a range of factors such as geography, size of resident population, funding or budget and local health demographics. Yin (2009) discusses the importance of triangulation in case study research for its potential to assemble different narrations on a theme. Thus we were able, in the current study, to incorporate service managers' views to explore how the service setup and structure impacted on the service delivery from a health provider's perspective within each PHO. A case study research approach enabled the context and structure of each PHO to be described alongside the accounts of health providers/managers and brought together with the service user accounts in the broader research project.
Findings
The following section presents the themes from the interviews conducted with managers and service providers. These themes related to: 1) lowered costs of healthcare; 2) publicising the availability of services offered; 3) access to a range of services; 4) an ethic of care and "going the extra mile" for clients; 5) holistically based/integrated models of care; 6) incorporating culturally appropriate models of well-being; 7) relationship with community: PHO partnerships with NGOs, residents and local communities; 8) building workforce capability; and 9) providing services in a shared language. Due to the differing perspectives of the groups of participants, some themes were more salient for one group, for example the managers, than for service providers. In some themes both groups were in agreement about the issues. Therefore, in some themes managers' perspectives predominated, while in others, service providers' views did.
Lowered costs of healthcare
There was an enthusiastic response to the lowered cost of healthcare from the stakeholders interviewed. Reducing the costs of medical consultations was a primary motivation for practices to initially become involved in the PHOs.
Publicising the availability of low-cost services
The availability of low-cost health care was not, however, widely known in the local community initially, which necessitated promotion of the PHO service. There was also a need to publicise the specific services that were offered. Information dissemination about how patients could enrol themselves and their family members in order to obtain access to low- or no-cost consultations, lower prescription fees and other services was part of the implementation strategy of each of the PHO managers interviewed. For example, the use of promotional campaigns on Pacific Radio, spoken in a range of Pacific languages, was one way of publicising the availability to an audience of Pacific clients that was discussed at interview with one urban Pacific-led PHO. Community meetings with local groups were another way in which this PHO publicised the range of services their organisation offered. Fono organised by this PHO provided an opportunity to distribute more general information about health promotion to a range of audiences in face-to-face mode. It was considered important to follow up any presentations to answer queries and to hold meetings with the professional groups working at the PHO: We have a very strong Samoan Residents' Association and we tell them about health stuff and the PHO as well and then we also have a meeting of other nurses of different communities and we tell them about the PHO. (Manager)
Access to a range of services
Access to a range of other services, such as free transport to treatment and lower prescription costs, were important incentives to establishing a PHO. This widespread appeal was seen by participants as a means of improving access to comprehensive health care services for residents. Another PHO organised health days to introduce a range of health services to local residents, including promoting their own services: … we just have a health day, we go to a hall and stakeholders are invited to come and display their information and tell people about the services that they provide. (Manager)
An ethic of care and "going the extra mile" for clients
The attitudes of Pacific PHO staff towards their work were reported by participants to differ from the business orientation of many medical practices which worked from a business-centred model. This difference in philosophy was thought to partly stem from the values underpinning PHOs being supported by charitable trusts. A workplace based in a shared enthusiasm for helping under-resourced communities was the major motivation described by one general practitioner working in a PHO where 97% of the local enrolled population is described as "low income and Pacific Island": The philosophy of this practice is improved access with lower fees … affordability has always been an important part of the organisation really for us and for other members of the PHO… We provide a free taxi service for people who can't get to their appointments as well. We have access to free PHO funded prescriptions. (General practitioner) Altruism, and the not-for-profit motivation to remain working within the PHO, was seen by PHO health workers as important for putting funds back into the community, as the same general practitioner interviewed suggested: I'm a salaried GP so I don't get the financial incentives, it's not my business that I'm safeguarding, that's a different model from the sort of third sector where there's a long history of community ownership and not-for-profit being part of the way that we operate. (General practitioner)
Holistically based/integrated models of care
A community model of care facilitated by the PHOs was described as a positive development across the providers interviewed. This model consisted of several elements - remaining small enough to know the local community, which enabled treatment providers to remain aware and responsive to locally defined needs. Coordination of services and communication across practices meant that duplication of services in a geographic area could be avoided, as the following excerpt from an interview with a general practitioner in a Pacific-led PHO illustrates: You know it's good to have that sort of relationship because of referral - we're basically seeing people from the same community. It helps avoid duplication of services and knowing what people are doing, having input with different families without knowing that each other is involved in, which I think happened a lot more under pre-PHOs. (General practitioner)
Incorporating culturally appropriate models of well-being
Pacific PHOs looked at health more holistically, as deriving in part from social inequalities, and so they actively advocated on behalf of patients. A nurse described advocating with income support agencies on behalf of sickness and invalid beneficiaries who could not afford to see a general practitioner for review of their medical condition, to avoid a cessation of weekly income benefit payments. She encouraged those patients with long-term or complex presentations who had debts to pay to continue coming to the practice for treatment despite lacking the means to pay for their health care. This kind of advocacy was common and seen as part of the responsibility of providers in the Pacific models of health reviewed earlier (Agnew et al., 2004). The Treaty of Waitangi principles aim to guide health care delivery in Aotearoa New Zealand to ensure equity of uptake of services and satisfaction of the service user's healthcare experience as far as possible (Barnett & Barnett, 2004; Barnett, Smith, & Cumming, 2009).
Relationship with community
The establishment of PHOs was seen as a positive move by their managers, as it provides an opportunity to collaborate to provide culturally appropriate services designed and delivered by Pacific clinicians. As one manager of a Pacific health organisation stated, it was envisaged that PHO funding would build capability in the workforce for care of Pacific by Pacific.
As services move into the community, we are organising the Pacific community to work as a team. Pacific people need to work together as a team; that is how it works best. (Manager)
Providing services in a shared language
Another Pacific-led PHO used the services of a medical specialist to run a clinic to see patients who had been screened by a self-administered patient questionnaire to identify health issues. This consultant was unique in being able to speak a number of Pacific languages, which enabled him to engage more easily with the majority of patients at that service. The shared language was an important means of building relationship with Pacific clients. This is in contrast to comments made by a non-Pacific PHO about the difficulty of engaging with Pacific communities when the process was not relational (for example, one PHO mailed out about 5,000 letters to Pacific families and received less than 100 responses). This illustrates the importance of understanding how to engage Pacific communities and the value that Pacific practices bring in their capability to do this. Establishing processes and protocols for making decisions and acknowledging shared values, including the spiritual dimensions of care, aided success, as a CEO of a Pacific PHO explains: We're bound by a common philosophy… I think fundamentally in essence we are a Christian organisation bound by a set of Christian values that hold us together in quite hard times and they are around all of those things, you know like…, integrity, respect… we do have hard times and we have our difficulties and battle but we try to work through them and there is a lot of passion. It's still trying to work through that respect and just wanting the best for our community. (CEO/Manager)
Collaboration, co-ordination and team work across services
Since joining a Pacific PHO, a common experience amongst participants was improved communication between diverse social and statutory agencies, avoiding the silo effect of services acting independently of one another. These social connections and networks enabled more comprehensive wrap-around services to be offered to Pacific patients.
The difference between [name of another PHO] and [name of participant's Pacific PHO] is the community focused, community driven, focus on, you know, the health needs of the people. Whereas [name of other PHO] is very much doctor driven now…. (General practitioner) Having a manager who shared a vision and philosophy of working with under-resourced communities was seen to be advantageous by colleagues working at the same PHO, as a shared vision of the local community was facilitated. A common purpose for continuing to work within the PHO was a passion for work with what were considered to be under-resourced communities. As one participant commented: Our manager [name] who is Māori understands where the lower socioeconomic people are coming from. She has a passion for this population here. And that's why we are getting that support because we know that she's there because of that passion. (Nurse) Another participant, who worked as a general practitioner in a not-for-profit PHO, described this collaboration as "a collective approach to providing a service." This was seen by those interviewed as being part of this shared vision for work in the PHO: We are not alone as [Pacific Islanders] within this PHO. We are here working alongside others and do collectively have a very strong communication strategy, making sure the population focus on their needs. (General practitioner)
Barriers
Initially there was enthusiasm about the funding available for services to improve access. There were many initiatives that participants considered were working effectively for people accessing the health care they required. However, high and complex patient needs inevitably increased the length of consultations, which impacted on the workload of the PHOs' treatment providers such as general practitioners and nurses, as the following comment from a general practitioner working in a Pacific PHO illustrates: The heavy workload is helping them [patients] with social issues, so, sickness benefit, housing, all immigration issues. There is a lot of expectation that we will help them with that. We do quite a lot of it which prolongs our consultation time with the doctor or nurse. There are social workers in public health that we pass things on to ... very nice to have social workers except that their contracts are all around youth ... But the strategy needs to cater for elderly and social issues a bit better. (General practitioner) General practitioners working in Pacific-led PHOs found that they needed to take longer to explain medical screening procedures prior to undertaking them with Pacific patients. This work needed to be done in face-to-face mode, as contact by telephone and letter did not work as effectively with Pacific patients. The unavailability of funded transport to treatment was seen as an obstacle by a clinical manager/general practitioner of one Pacific PHO: We had a lot of DNAs [did not attends] and she [nurse] said to me yesterday that she thinks transport has got something to do with it and that if we could provide transport, that would really help. (General practitioner) Social problems were tackled by the nurses interviewed. For example, one nurse who had an established relationship with local social services organised food parcels from a local food bank for a patient who had not been eating an adequate diet due to lack of money to spend on grocery items. The lack of food had meant that he had become dizzy and fallen from scaffolding at work, resulting in a trip to the local hospital's accident and emergency department. Through the PHO's nurse liaising with the accident and emergency department at the hospital, the reason for the accident was clarified with the patient and advocacy arranged with the social services.
Discussion: The implications for Health Social Work
Participants in the Pacific-led PHOs have suggested in this study the need to consider co-ordinated approaches to health care which are comprehensive, culturally appropriate and flexible to respond to local needs. These approaches derive from traditional Pacific beliefs which include "going the extra mile" to meet the consumer where they live, in a diversity of local and cultural contexts. The importance of incorporating Pacific values and ways of being in primary health cannot be overstated in the uptake of services (Agnew et al., 2004; Beddoe & Deeney, 2012; Döbl et al., 2017; Pockett & Beddoe, 2017). Previous studies provide evidence that community-based models of intervention contribute to positive health outcomes (Barwick, 2000; Beddoe & Deeney, 2012; Döbl et al., 2017; Pockett & Beddoe, 2017).
The service providers mentioned a number of Pacific models they drew from in their work that were used alongside clinical models of assessment and treatment. Many of these frameworks adopt a focus on wellness in the community and are underpinned by an ethos of altruism, interpersonal relationship and social inclusion. These same principles need also to guide the provision of secondary health services, including social work in hospitals, where the tasks involve returning clients to extended family in the community to support ongoing care.
Building trust and support at the first point of contact requires what has been termed "a roundabout Pacific rapport building approach" which is learned by healthcare providers in practice rather than in theory (Agnew et al., 2004, p. ix). This approach involves ensuring that patients feel comfortable in their surroundings as an integrated part of the health service delivery. Rapport building to engage patients and their families is considered an important requirement when working with Pacific peoples (Agnew et al., 2004). Pacific models and modes of service delivery are distinct from western models of care and remain implicit in the practices of the health care providers who use them. These styles of service delivery follow the principles underpinning the government's strategic direction for Pacific health care. These principles are: respecting Pacific culture; valuing family; quality health care; and working together (Minister of Health & Minister of Pacific Island Affairs, 2010, p. 5). Social workers require a detailed understanding of Pacific principles and models in their tertiary training, including both theory in the lecture room and practice in the field placement. Clinical supervision attending to cultural safety needs to be factored into the wider learning of social workers both before and after their courses of study, as a programme of ongoing professional development. The Aotearoa New Zealand Association of Social Workers would be well placed to attend to such professional development nationally with the support of the workforce (Beddoe & Deeney, 2012; Döbl et al., 2017; Pockett & Beddoe, 2017).
Conclusion
The results from this exploratory study reveal that the implementation of Pacific PHOs has provided a capability for better communication between various parties, from board level down to those working at the community level. It has increased the cultural relevance of healthcare approaches offered by removing the economic imperative to manage health as a business. For example, providers in the Pacific-led PHOs offered a broader range of services, including efforts in health promotion, advocacy and education in programmes. This development has meant expanded roles and responsibilities beyond the medical model. The expectation of work in these PHOs can sometimes test the reality for some of the providers interviewed and is clearly impacting on the recruitment and retention of staff, as the role is less bounded and consultation times are longer due to the complexity of assessing social issues that inevitably impact health and wellbeing. The use of Pacific language was considered an important component of engaging successfully and working well with Pacific patients. Further research is needed to more clearly delineate what is uniquely Pacific in the approach of the Pacific PHOs.
Engagement in community is a core competency when practising social work within Pacific models of healthcare. These competencies need to be reflected in the learning outcomes for social work programmes of education at undergraduate and graduate levels in Aotearoa New Zealand. Spirituality and a holistic approach are key aspects of Pacific models of healthcare, involving collaboration and relationship at their core, which differ from more linear, expert-knows-best approaches. Ongoing professional development in Pacific models of healthcare and culturally based models of clinical supervision are areas for further research and development for social work and other healthcare providers in Aotearoa.
As one manager of a Pacific-led PHO explained during the first round of interviews in 2005: …You know the first benefit to us is no or low cost and they [patients] don't pay. (Manager) The CEO from another Pacific PHO stated in 2009 that the fee-paying structure still offered a means of providing a targeted approach to those patients most in need of low- or no-cost consultations: PHO I think it's a success with 90 plus percent enrolment throughout the country plus Pacific people, people are saying that they [lowered costs] are the advantage of PHOs. Low cost consultation fees I think is the main product … So a lot of people have enrolled and are making use of the services. (Manager/CEO)
No adverse effects of transgenic maize on population dynamics of endophytic Bacillus subtilis strain B916‐gfp
Abstract Endophytic bacterial communities play a key role in promoting plant growth and combating plant diseases. However, little is known about their population dynamics in plant tissues and bulk soil, especially in transgenic crops. This study investigated the colonization of transgenic maize harboring the Bacillus thuringiensis (Bt) cry1Ah gene by Bacillus subtilis strain B916‐gfp, in plant tissues and soil. Bt and nontransgenic maize were inoculated with B916‐gfp by seed soaking or root irrigation, under both laboratory greenhouse and field conditions. During the growing season, B916‐gfp colonized transgenic as well as nontransgenic plants by both inoculation methods. No differences were observed in B916‐gfp population size between transgenic and nontransgenic plants, except at one or two time points in the roots and stems that did not persist over the examination period. Furthermore, planting transgenic maize did not affect the number of B916‐gfp in bulk soil in either laboratory or field trials. These results indicate that transgenic modification of maize with the cry1Ah gene has no influence on colonization by the endophytic bacteria B916‐gfp present in the plant and in bulk soil.
| INTRODUCTION
Since their introduction in 1996, genetically modified (GM) crops had come to cover 181.5 million hectares globally by 2014 (James, 2015). These crops, modified to express Bacillus thuringiensis (Bt) proteins, have reduced pesticide use by 500 million kilograms of active ingredient, and had a market value of $15.7 billion in 2014. Currently, 93% of maize planted worldwide is genetically modified (James, 2015). Despite huge economic benefits, GM crops - especially Bt maize - still trigger controversial debates over biosafety and ecological compatibility, including their effects on soil microbial community structure.
Rhizosphere soil microbiota play an important role in plant growth, nutrient accumulation, and resistance to biotic and abiotic stressors (de Zelicourt, Al-Yousif, & Hirt, 2013; Prischl, Hackl, Pastar, Pfeiffer, & Sessitsch, 2012). Some microbial communities in the rhizosphere can move into plant roots and become root endophytes, whose composition is mainly determined by soil type (Bulgarelli et al., 2012). Microbial communities in the rhizosphere have been investigated in terms of the effects of root exudation changes caused by transgenic modification.
Although the effects of GM crops on fungal diversity and density have been well studied, few reports have focused on the effects on the endophytic community. Recent studies reported that plants of different species, or even different genotypes, assemble specific endophytic communities (Cotta et al., 2013; Donn, Kirkegaard, Perera, Richardson, & Watt, 2014; Gaiero, McCall, Thompson, Day, & Dunfield, 2013). Two studies have investigated the influence of crops transformed with Bt genes on endophytic bacterial communities, and reported no significant effects (da Silva et al., 2014; Prischl et al., 2012).
This study investigated the effects of transgenic maize 33-7, harboring the cry1Ah gene, on the population size of Bacillus subtilis (Bs) strain B916. Bt maize 33-7 is effective in controlling the growth of Ostrinia furnacalis larvae under both laboratory and field conditions (Wang et al., 2008). Previous studies in our laboratory found no obvious adverse effects of Bt-cry1Ah maize on microorganisms in rhizosphere soil (Cui, Shu, Song, Gao, & Zhang, 2011), or on the midgut bacterial structure or development of honey bees fed with it (Geng et al., 2013; Jiang et al., 2013); however, its effects on endophytic bacteria are unknown.
| Cry1Ah protein purification
Cry1Ah protein was extracted from B. thuringiensis strain Biot1Ah by the alkaline solubilization method and purified as described in Xue et al. (2008). Protoxin was dissolved in 50 mmol/L Na2CO3 (pH 9.6), and then purified by anion-exchange chromatography using an AKTA FPLC system. To evaluate the effects of Cry1Ah protein, a growth curve of strain B916-gfp was generated by monitoring the OD600 of samples at different time points over 24 hr. This procedure was repeated three times.
| Inoculation methods and sampling strategy
For inoculation by seed soaking, X090 and 33-7 maize seeds (n = 80 each) were surface-sterilized by treatment with 2% sodium hypochlorite for 5 min. The seeds were soaked in 30 ml of B916-gfp suspension (10^11 CFU/ml) on a shaker (120 rpm) at 28°C for 8 hr, then transferred to glass tubes (diameter × height, 5 × 40 cm; three seeds per tube) that were covered with sterilized film, and cultured in a greenhouse.
For inoculation by root irrigation, X090 and 33-7 maize seeds (n = 60 each) were grown in plastic pots (80 × 60 × 30 cm). Seedlings were root-irrigated with a B916-gfp suspension (10^11 CFU/ml) 7 days after germination in the greenhouse, or 11 days after sowing in the field trial. Seedlings and soil samples were collected 7, 14, 21, 28, and 35 days after inoculation.
| Colonization assessment
About 500 mg of roots, stems, and leaves collected from three samples were sterilized for 10 min in 0.2% mercuric chloride, then washed four times with sterile water. Surface-sterilized tissue was ground after adding 1 ml of phosphate-buffered saline (pH 7.4). Tissue fluids were diluted 10-, 100-, and 1000-fold and spread on Luria Bertani medium agar plates supplemented with 5 μg/ml chloromycetin, which were cultured at 30°C for 72 hr. The number of clones exhibiting green fluorescence was counted under UV light (366 nm). Plasmids were extracted from the cells and the gfp gene was amplified by PCR using the following primers: gfpF, 5′-TAA GGG GGA AAT CAC ATG AGT AAA GGA GAA GAA-3′ and gfpR, 5′-GGG GTA CCA TTA TTT TTG ACA CCA GA-3′ under the conditions: 94°C for 10 min, followed by 30 cycles of 94°C for 1 min, 56°C for 1 min, and 72°C for 2 min.
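As a worked example of how plate counts of this kind convert to the log10 CFU/g values reported in the results, the following is a minimal Python sketch; the plated volume (0.1 ml) and the colony count are assumptions for illustration, not values taken from the study.

# Converting fluorescent colony counts to log10 CFU per gram of tissue.
# Assumed for illustration: 0.5 g tissue ground in 1 ml PBS, 0.1 ml of each
# dilution spread per plate; the count below is hypothetical.
import math

tissue_mass_g = 0.5
homogenate_ml = 1.0
plated_ml = 0.1                       # assumed plating volume

def log10_cfu_per_g(colonies, dilution_factor):
    cfu_per_ml = colonies * dilution_factor / plated_ml   # in the homogenate
    cfu_per_g = cfu_per_ml * homogenate_ml / tissue_mass_g
    return math.log10(cfu_per_g)

# e.g. 23 fluorescent colonies on the 10-fold dilution plate:
print(f"{log10_cfu_per_g(23, 10):.2f} log10 CFU/g")       # ~3.66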
The plasmids were also digested with the restriction enzymes KpnI and SphI.
Genomic DNA was isolated from plant roots, stems, and leaves.
| Quantitative real-time PCR
Quantitative real-time PCR analysis was carried out using a 7500 Real-Time PCR System (Applied Biosystems) with SYBR Premix Dimer Eraser (Perfect Real Time; Takara Bio, Otsu, Japan) and genomic DNA isolated from maize plant roots, stems, and leaves as the template. The reaction conditions were as follows: 95°C for 15 min, followed by 40 cycles of 95°C for 10 s, 56°C for 20 s, and 72°C for 32 s. Reactions were performed in triplicate. Primers 640F and 702R were used to amplify the gfp gene. A five-step dilution series of the gfp gene (ranging from 10^4 to 10^8 copies) was used as a template to generate a standard curve.
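As a sketch of how a dilution-series standard curve of this kind is typically used for absolute quantification of gfp copy number, consider the following; the Ct values are hypothetical, not the study's data.

# Absolute quantification from a qPCR standard curve: regress Ct on
# log10(copies), then invert the fit for unknown samples. Cts are hypothetical.
import numpy as np

log_copies = np.arange(4, 9)                               # 10^4 .. 10^8 copies
ct_standards = np.array([30.1, 26.8, 23.4, 20.0, 16.7])    # assumed Ct values

slope, intercept = np.polyfit(log_copies, ct_standards, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0    # ~1.0 corresponds to 100% efficiency
print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")

def copies_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

print(f"{copies_from_ct(22.0):.3g} gfp copies")            # an unknown sample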
| Data analysis
Quantitative real-time PCR data were analyzed with the least significant difference multiple comparisons test using SPSS v.13.0 software (SPSS Inc., Chicago, IL).
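For readers without SPSS, the least significant difference (LSD) procedure reduces to pairwise comparisons that share the pooled error term from a one-way ANOVA; the following Python sketch reproduces that logic on hypothetical group data (the numbers are illustrative only, not the study's measurements).

# Fisher's LSD: pairwise comparisons using the pooled mean-square error
# from a one-way ANOVA. Group data are hypothetical, for illustration only.
import numpy as np
from scipy import stats
from itertools import combinations

groups = {"transgenic": np.array([2.1, 2.4, 2.2]),
          "nontransgenic": np.array([2.0, 2.3, 2.5])}

k = len(groups)
n_total = sum(len(g) for g in groups.values())
df_error = n_total - k
mse = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / df_error

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    diff = abs(a.mean() - b.mean())
    lsd = stats.t.ppf(0.975, df_error) * np.sqrt(mse * (1/len(a) + 1/len(b)))
    verdict = "significant" if diff > lsd else "not significant"
    print(f"{name_a} vs {name_b}: |diff| = {diff:.3f}, LSD = {lsd:.3f} -> {verdict}")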
| Colonization of transgenic maize by Bs strain B916-gfp (green fluorescent protein, GFP) inoculated by seed soaking under greenhouse conditions
We first investigated the effects of Cry1Ah protein on the growth of Bs strain B916-gfp. There were no differences in the growth curves of strain B916-gfp without or with 15, 150, and 300 μg/ml Cry1Ah protein supplementation (Fig. S1). To compare the ability of Bs strain B916-gfp to colonize transgenic (33-7, harboring the cry1Ah gene from Bt) and nontransgenic (X090) maize plants, surface-sterilized seeds were soaked in a B916-gfp cell suspension (10^11 CFU/ml) for 8 hr before they were grown in a greenhouse. Colonization was verified by confocal microscopy detection of GFP expression and molecular analysis of strain B916-gfp DNA in the roots, stems, and leaves of each plant. Green fluorescence was observed in all examined parts of transgenic and nontransgenic plants (Fig. 1). After surface sterilization, endophytic bacteria were isolated from maize roots, stems, and leaves and cultured by conventional methods; plasmids were extracted from green fluorescent clones detected under ultraviolet (UV) light (366 nm) (Fig. S2). An 800-bp fragment was amplified from these clones using gfp-specific primers (Fig. S3A), and 5900- and 2300-bp fragments were obtained by digestion with KpnI and SphI restriction enzymes, as for the positive control plasmid (Fig. S3B).
The number of B916-gfp cells colonizing roots, stems, and leaves over 17 days' cultivation (Fig. 2A) was counted under UV light (Fig. S2B). B916-gfp population dynamics were similar in the three tissues in both transgenic and nontransgenic maize, with the number of colonies reaching a peak at 10 days (range: 1.96-2.62 log10 CFU/g) before decreasing thereafter (p > .05) (Fig. 2B-D). The concentration of the gfp gene in the roots, stems, and leaves was comparable for transgenic and nontransgenic maize at 5 days, as determined by quantitative PCR analysis (p > .05) (Fig. 3A). Hence, the ability of strain B916-gfp to colonize maize plants was unaffected by the presence of the transgene.
| Colonization of transgenic maize by Bs strain B916-gfp inoculated by root irrigation under greenhouse and field conditions
Transgenic and nontransgenic maize were root-irrigated with Bs strain B916-gfp 7 days after seed germination. Roots, stems, and leaves were collected for analysis every 7 days after inoculation. Under greenhouse conditions, the B916-gfp population in the roots of transgenic maize reached a maximum value of 2.59 log10 CFU/g on day 21 after inoculation (Table 1). In the other tissues of both transgenic and nontransgenic maize, the number of B916-gfp cells peaked at 2.05-3.17 log10 CFU/g on day 14 after inoculation (Table 1). There were no differences in colony numbers in leaves between transgenic and nontransgenic plants from 7 to 35 days after root irrigation (Table 1). The number of B916-gfp colonies differed significantly in root samples collected on day 14 and stem samples collected on days 14 and 21, but this difference did not endure for the duration of the examination period (Table 1). Under field conditions, the B916-gfp population in leaves reached the highest values (2.01 log10 CFU/g for transgenic and 2.04 log10 CFU/g for nontransgenic maize) 14 or 21 days after inoculation (Table 1). There were no differences in the number of colonies in leaves between transgenic and nontransgenic plants from 7 to 35 days after root irrigation (Table 1), nor were there differences in B916-gfp population dynamics in the rhizospheres of transgenic and nontransgenic plants under greenhouse and field conditions (Fig. S4A and B).
| DISCUSSION
This laboratory and field study investigated the impact of cry1Ah maize on the colonization ability and population size of B916-gfp endophytic bacteria. We found that planting transgenic cry1Ah maize did not affect the size of B916-gfp populations in the rhizosphere soil, and that the cells could colonize transgenic and nontransgenic maize with equal efficiency by seed-soaking and root-irrigation inoculation methods. Moreover, there were no significant differences observed in terms of colony number between transgenic and nontransgenic plants, except for one or two time points in roots and stems that did not persist over the period of examination.
Soil bacteria are exposed to Bt toxins released into rhizosphere soil through root exudates from Bt crops (Gruber, Paul, Meyer, & Müller, 2012; Wang et al., 2013; Xue, Diaz, & Thies, 2014). A recent study reported that desorbed Bt protein was quickly mineralized by microbial degradation (Valldor, Miethling-Graff, Martens, & Tebbe, 2015). Several studies have investigated the effect of growing Bt maize on the diversity and population size of microbial rhizosphere communities, especially arbuscular mycorrhizal fungi (AMF) (Castaldini et al., 2005; Cheeke et al., 2011, 2014; Turrini et al., 2005; Zeng et al., 2014); only two examined endophytic bacterial communities, which were not significantly affected by transformation of maize with cry1Ab (da Silva et al., 2014), cry3Bb1, cry1A105, or cry1Ab2 genes (Prischl et al., 2012). Here, we used Bs strain B916 expressing the GFP marker to monitor the population dynamics and colonization of Bt maize and its parental line. Bs strain B916 was confirmed as an endophytic bacterium based on the green fluorescence detected in the roots, stems, and leaves of plants and by PCR amplification of the gfp gene. B916-gfp cells were transferred from the outside to the inside of maize plant tissues by inoculation of seeds and roots; they also moved from the inoculated parts of the plant to other parts, such as the leaves and stem, following root irrigation. Thus, Bs strain B916 was exposed to Cry1Ah protein in the tissues of maize as well as in the soil. There were no significant differences in the population dynamics of B916-gfp colonies in transgenic and nontransgenic maize under laboratory or field conditions. This is consistent with the colonization of AMF (Cheeke et al., 2011, 2014; Zeng et al., 2014) and endophytic bacterial communities (da Silva et al., 2014; Prischl et al., 2012) in Bt maize. After inoculation, the growth curve of B916-gfp followed the same trend of first increasing then subsequently declining in tissues of cry1Ah maize and isogenic lines. In leaves collected 14 days after root-irrigation treatment, the number of B916-gfp cells was about 2.5 log10 CFU/g in the laboratory trial as compared to 2.0 log10 CFU/g in the field trial. This difference was likely due to temperature, humidity, light intensity, and soil texture.

[Figure 3 caption: Concentration of the gfp gene in roots of transgenic and nontransgenic maize, as determined by quantitative PCR analysis. X090 and 33-7 maize seeds soaked in a B916-gfp suspension were grown under greenhouse conditions; 150 mg of tissue from each seedling were collected at 5 days (n = 5), and total genomic DNA was extracted for quantitative PCR analysis.]
In conclusion, we found that B916 is an endophytic bacterium that is translocated from the roots of maize plants to the stem and leaves. There were no significant differences in population size or dynamics between B916-gfp colonies in transgenic and nontransgenic plants under either laboratory or field conditions. Since endophytic bacteria can promote plant growth and improve resistance to disease and environmental stress, this study provides new insight into the biosafety analysis of GM crops.
ACKNOWLEDGMENTS
This work was funded by the 863 Project of China (no. 2011AA10A203).
AN INDUSTRY ANALYSIS OF THE POWER OF HUMAN CAPITAL FOR CORPORATE PERFORMANCE: EVIDENCE FROM SOUTH AFRICA
Even in industrialised emerging economies, the value-generating competencies of a workforce, known as its human capital efficiency, are a key resource for commercial success. The objective of this research is to empirically investigate the relationship between human capital efficiency (as measured by value-added human capital) and the financial and market performance of companies listed on the Main Board and Alternative Exchange (ALT-X) of the Johannesburg Stock Exchange. Return on assets, revenue growth and headline earnings per share were used as financial performance indicators; while market-to-book ratio and total share return were used to measure market performance. Multivariate regressions were performed, with panel data covering 390 companies in the financial, basic materials, consumer services, consumer goods, industrial and technology industries from 2001 to 2011. First, human capital efficiency was found to have no effect on the market performance of listed companies in South Africa. Secondly, higher human capital efficiency was found to result in the extraction of greater returns from both tangible and intangible assets in all industries. Thirdly, higher profitability was found to be associated with higher human capital efficiency in almost every industry in South Africa, with the exception of the technology industry, where human capital efficiency was found to be independent of headline earnings per share. Finally, higher revenue growth was found to be positively associated with human capital efficiency in those industries which are not consumer-driven. In the consumer-driven industries, human capital efficiency contributes to bottom line profitability even though it is not a driver for revenue growth. Overall, the results of this study confirm that human capital efficiency enhances a company’s financial performance, whether it be through a greater capacity for production and service delivery, tighter cost controls or better use of company resources. Management in all South African industries are encouraged to develop the value-creating abilities of their employees through employer-driven personnel enrichment and training programs and by incentivising workers to pursue further education.
Introduction
In an industrialised economic environment, such as that in South Africa, the effective use of physical resources is considered to carry more weight than that of human and other intellectual resources in the production of goods and services (Firer & Williams, 2003:357). Consequently, companies may take less care in the development and management of their human capital assets than they do in managing the efficiency and productivity of their tangible assets. However, human capital is the essence of innovation and is therefore crucial to the development of commercial products and the improvement of business processes (Stewart, 1998:76; Sullivan, 2000:9). The validity of this assertion may, however, differ from industry to industry.
According to the World Economic Forum, South Africa's Global Competitiveness Index world ranking has fallen from 36th to 53rd since 2006 (The Global Competitiveness Report 2007-2008, 2007:10; The Global Competitiveness Report 2013-2014, 2013:15). The corresponding drop in South Africa's ranking in both Higher Education and Training, and Labour Market Efficiency implies a connection between these factors. To be truly competitive in the long run, whether locally or in the global arena, it is clear that South African businesses will be forced to cultivate their knowledge-based intangible assets, starting with their human capital.
Creating opportunities for their employees to complete secondary or tertiary education or to attend in-house training courses is one way in which businesses in emerging economies can achieve this commercial imperative, and in so doing, indirectly serve the country's need for socioeconomic growth. However, the prevailing corporate culture is one in which human capital development expenditure is regarded as an opportunity cost. The aforementioned trade-off in South Africa between tangible and intellectual assets (Firer & Stainbank, 2003:36) means that firms would rather spend available funds on the acquisition or development of property, plant and machinery for production. If employee education and skills development received more attention from the South African private sector and government, the outcome would be a workforce better equipped to bolster the country's economic growth in this age of global competition. This argument is supported by Judson (2002:229), who analysed data on educational spending, enrolment and educational attainment to confirm a positive relationship between economic growth and human capital accumulation.
Human capital and its development are difficult to quantify. In this study, human capital is measured by the efficiency with which it creates value for a business (hereafter known as "human capital efficiency"). The metric for human capital efficiency that has been used in this study, Value-Added Human Capital (VAHU), is best described as the value added per unit of employee-related input cost (Pulic, 2000:706). Human capital efficiency should not be confused with production efficiency, which refers to an employee's ability to deliver maximum output of the highest quality, using the least inputs, as fast as possible. Production efficiency is related to physical productivity, while the subject matter of this research - human capital efficiency - is related to value creation.
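To make the metric concrete, the following is a minimal sketch of the calculation, using Pulic's formulation in which value added is computed as operating profit plus employee costs plus depreciation and amortisation, and VAHU is value added divided by total employee costs; the rand figures are hypothetical.

# Value-Added Human Capital (VAHU) per Pulic (2000): VAHU = VA / HC,
# where VA = operating profit + employee costs + depreciation + amortisation
# and HC = total employee costs. Figures below are hypothetical.
def vahu(operating_profit, employee_costs, depreciation, amortisation):
    value_added = operating_profit + employee_costs + depreciation + amortisation
    return value_added / employee_costs

# e.g. a firm with R120m operating profit, R80m salaries, R30m depreciation
# and R10m amortisation: VA = 240, so each rand spent on employees
# generates three rand of value added.
print(f"VAHU = {vahu(120, 80, 30, 10):.2f}")   # VAHU = 3.00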
It is hoped that this study will spark a change in the collective perception of education, training and skills development as grudge expenses, rather than essential investments for corporate performance. Empirical research is needed to reinforce the notion that human capital efficiency is important to the success of South African commerce, and consequently the local economy. The primary objective of this research is therefore to investigate the relationship between human capital efficiency and the financial and market performance of South African companies across all industries.
The remainder of the article is organised as follows: the second section summarises prior local and international research relevant to the topic. The underlying conceptual framework is developed and the research methodology presented in the third section. The empirical results are presented and discussed in the penultimate sections, while the final section comprises conclusions reached and suggestions for further research.
Literature review

Ioannidis (2005:0696) argued that the strength of a research finding is based on the statistical power of the study, the expected probability of that outcome, the extent of replication, and the consistency of the conclusions reached across similar research. Most of the existing empirical studies about the impact of human capital on corporate performance were performed using Pulic's (2000:706) VAHU as the proxy for human capital. In addition, most studies examined emerging markets that were similar to the South African market. Based on Ioannidis' (2005:0696) criteria, the body of prior research on human capital and firm performance is limited in volume and scope and offers inconsistent, mixed results.
This study intends to further the pioneering exploratory research by Firer and Williams (2003:348), who examined the relationship between intellectual capital and corporate performance in South Africa. Due to the exploratory nature of their research, their sample was restricted to single-period data (2001) for 75 companies in only those industry sectors considered to have inherently high intellectual capital intensity - banking, electronic, information technology and services. Firer and Williams (2003:357) concluded that human capital efficiency is negatively associated with corporate productivity (as measured by asset turnover). This result supported Firer and Stainbank's (2003:36) finding of a trade-off between intellectual capital and tangible assets in South Africa. In order to enhance their productivity, firms were inclined to incur costs in improving the efficiency of their physical assets rather than that of their human capital resources. Although Firer and Williams (2003:357) found that market values declined when companies focused on better use of human capital instead of physical assets, no relationship could be established between VAHU and profitability.
Using the Ohlson (1995) value relevance model, Swartz, Swartz and Firer (2006:78) empirically confirmed that human capital efficiency (measured by VAHU) has a significant and robust positive effect on share prices on the Johannesburg Stock Exchange (JSE). Although the Ohlson (1995) model was deemed unsuitable for this study due to the extent of risk estimations required, their use of share prices three months after each company's financial year-end was adopted. JSE-listed companies are granted three months after year-end to disseminate either their unaudited provisional financial statements or the audited financial statements (JSE Limited Listing Requirements - Service Issue 13, 2010:3-7). This timing difference between the financial and share data is needed to allow the impact of investor and market reactions to the financial statements to reflect in the share prices. Firer and Williams (2003) and Swartz et al. (2006) examined only the JSE Main Board. The JSE Alternative Exchange (ALT-X) raises development funding for high growth, small market capitalisation companies to encourage entrepreneurship and black economic empowerment. Research encompassing both the Main Board and ALT-X may be considered a better reflection of the true South African market.
The studies by Firer and Williams (2003) and Swartz et al. (2006) deliver contradictory results, leaving no clear consensus on the impact in South Africa of human capital efficiency on the various measures of firm performance. Unfortunately, the research results of international studies do not provide much clarification.
Chen, Cheng and Hwang (2005:159) found a weak positive link between human capital efficiency and the financial and market performance of companies listed on the Taiwan Stock Exchange from 1992 to 2002. Taiwanese firms that display higher human capital efficiency perform only slightly better in terms of market valuation and profitability, as measured by market-to-book ratio, return on equity, return on assets, revenue growth and employee productivity. Shiu (2006:363) examined the technology sector in Taiwan and his conclusions contradicted those of Chen et al. (2005:159). Shiu found no relationship between human capital efficiency and return on assets or market-to-book ratio, yet found VAHU to have a positive impact on asset turnover. Shiu may be criticised for restricting all negative company VAHU to zero in order to derive "meaningful" correlation (Shiu, 2006:359). Negative VAHU data should be used as is and should not be transformed - correlation analysis describes the direction and strength of the linear association between two variables, without imposing requirements on the sign or amount of each variable. Gan and Saleh (2008:113) found that, in Malaysia, the value-creating efficiency of a firm's human capital resource base is a direct determinant of its profitability and productivity (as measured by return on assets and asset turnover respectively). They found that human capital efficiency had no effect on market-to-book ratio, and suggested that share prices in a young, emerging market such as Malaysia may be driven more by fundamental theory than a more mature stock market would be (Gan & Saleh, 2008:127). They warned that the results may not be representative of the entire Malaysian market because their data was restricted to technology-intensive companies listed on the MESDAQ, a sub-division of the Bursa Malaysia Berhad similar to the JSE ALT-X (Gan & Saleh, 2008:127).
An empirical study of four sectors of the Athens Stock Exchange by Maditinos, Chatzoudes, Tsairidis and Theriou (2011:146) confirmed that human capital development is a necessary factor for corporate success as it is a determinant in share pricing. They confirmed a positive relationship between VAHU and return on equity, but could not establish any between human capital and revenue growth or return on assets. Puntillo (2009:112) concluded that human capital efficiency does not influence return on assets or market-to-book ratio in the Italian financial sector. However, the degree of approximation and extrapolation in her calculation of VAHU poses a strong argument for using staff costs as disclosed in audited financial statements. Appuhami (2007:24) confirmed a strong positive relationship between VAHU and investor capital gains in the Thai financial sector in 2005. Muhammad and Ismail (2009:210) investigated the impact of human capital efficiency on financial and market performance in the Malaysian financial sector, but regarded their regression results as inconclusive due to their small sample size and coverage of a single year (2007). They could not establish any significant relationship between VAHU and company performance (Muhammad & Ismail, 2009:210). Using a different research sample - the top 25 drug and pharmaceutical companies on the Bangladesh Stock Exchange from 1996 to 2006 - Kamath (2008:700) reached similar conclusions. He confirmed that corporate performance is independent of human capital efficiency.
Several shortcomings were identified in the review of prior literature. Although single-period data was used in most of the prior research, analyses covering a longer time period may yield more meaningful results (Firer & Stainbank, 2003:41; Firer & Williams, 2003:358; Maditinos et al., 2011:146; Tseng & Goo, 2005:199). There is a greater risk of "sampling within a sample" if research is limited to heavily intellectual capital-based sectors only (Firer & Williams, 2003:358; Maditinos et al., 2011:146). By addressing these limitations through cross-sectional, time-series analysis incorporating all industries of the JSE Main Board and ALT-X, it is hoped that this study will add value to the existing body of human capital research in South Africa.
Developing the regression models
The research population of this study consisted of all 390 companies listed on the Main Board and ALT-X of the JSE for the financial years falling in the period 31 December 2001 to 30 June 2011, resulting in 1765 company-years of empirical data. Time-series cross-sectional multivariate regressions were used to analyse the impact of human capital on various financial and market performance measures in different South African industries over the period under review. The regressions for H1-a to H1-e were each performed for six industries - Financials, Basic Materials, Consumer Services, Consumer Goods, Industrials and Technology - resulting in a total of thirty regressions. The use of panel data decreases regression errors in samples where significant time-series depth is lacking (De Jager, 2008:56). All empirical data - audited annual financial statements, monthly share data and market indicators - was obtained from the McGregor Bureau of Financial Analysis database. Survivorship bias was avoided by explicitly including all companies listed on the JSE at any time during the research period, regardless of whether they had subsequently delisted or remained listed.
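To make the regression design concrete, a minimal sketch of one such industry-level regression is given below. This is an illustration rather than the authors' code: the data layout, the column names and the use of Python's statsmodels with an HC3 heteroskedasticity-consistent covariance estimator (in keeping with the robust covariance estimation described under the tests of statistical integrity) are all assumptions.

```python
import pandas as pd
import statsmodels.api as sm

def run_industry_regressions(panel: pd.DataFrame, dependent: str = "ROA") -> dict:
    """Pooled OLS of a performance measure on VAHU plus the control
    factors, fitted separately for each JSE industry.

    `panel` is assumed to hold one row per company-year, with columns
    "industry", the dependent variable, "VAHU", "LMC" and "DR".
    """
    results = {}
    for industry, group in panel.groupby("industry"):
        X = sm.add_constant(group[["VAHU", "LMC", "DR"]])
        y = group[dependent]
        # HC3 yields heteroskedasticity-robust standard errors, in the
        # spirit of the robust covariance estimation cited later on.
        results[industry] = sm.OLS(y, X, missing="drop").fit(cov_type="HC3")
    return results
```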
Measuring human capital efficiency
Pulic's (2000:707) measure of human capital efficiency, VAHU, was used as the measure of the independent variable in this study. He calculated VAHU as value-added per unit cost of salaries and wages. Refining Pulic's concept, Riahi-Belkaoui (2003:220) and Chen et al. (2005:166) proposed that value-added be calculated as net profit before interest, taxes and salaries and wages. Therefore, the independent variable was calculated as:

VAHU = (NP + I + T + W) / W

where NP = net profit after tax, I = interest expense, T = total of all taxes and W = salaries and wages.

Directors' emoluments are often much higher, more subjective and determined in a less market-related manner than those of management and other employees. To avoid distortion of the intended meaning of the VAHU variable, directors' remuneration has been excluded from its calculation. This exclusion is further supported by Pantzalis and Park's (2009:1610) assertion that the compensation received by employees reflects the value of human capital to the labour market, as it is a market pricing effect of the need to attract employees amidst labour market competition.
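Expressed as code, the calculation is a one-line ratio. The sketch below assumes all four inputs are taken from the audited income statement and that the wage figure already excludes directors' emoluments, as described above:

```python
def vahu(net_profit_after_tax: float, interest_expense: float,
         total_taxes: float, wages: float) -> float:
    """Value-Added Human Capital: value-added per unit of employee cost.

    `wages` is salaries and wages excluding directors' emoluments,
    per the adjustment described in the text.
    """
    value_added = net_profit_after_tax + interest_expense + total_taxes + wages
    return value_added / wages

# Example (hypothetical figures): NP = 120, I = 15, T = 40, W = 200
# gives VAHU = 375 / 200 = 1.875, i.e. R1.875 of value-added per Rand
# of employee cost.
print(vahu(120.0, 15.0, 40.0, 200.0))
```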
JSE-listed companies are required to have their financial statements audited by independent, external auditors and to make them available to their shareholders periodically. VAHU therefore offers a uniform, standardised calculation based on reliable and readily available financial statement information, which is both simple to replicate and allows for ease of comparison between companies. Firer and Stainbank (2003:32), Firer and Williams (2003:353) and Swartz et al. (2006:74) provided similar motivation for their use of Pulic's human capital metric. VAHU is also very similar to the metric for wealth creation efficiency, P2, in the annual Value-Added Scoreboard report of the United Kingdom Department for Business Innovation and Skills (United Kingdom, 2009:55).
The selection of company size (LMC) and financial leverage (DR) as control factors is supported by Kamath (2008:692), Riahi-Belkaoui (2003:221), Shiu (2006:360), Firer and Stainbank (2003:32) and Firer and Williams (2003:354). ROE was included as an additional control factor in H1-d and H1-e to encapsulate the effect of a company's financial performance on its market performance. It is calculated by dividing earnings before interest, tax, depreciation and amortisation by the average book value of equity.
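For illustration, the control factors might be derived as follows. Only the ROE formula is taken directly from the text; the precise definitions of LMC and DR are not restated in this section, so the log of market capitalisation and the ratio of total debt to total assets are used here as plausible stand-ins and should be treated as assumptions.

```python
import numpy as np

def control_factors(ebitda, equity_open, equity_close,
                    market_cap, total_debt, total_assets):
    """Control variables for the performance regressions (illustrative)."""
    # ROE as defined in the text: EBITDA over the average book value of equity.
    roe = ebitda / ((equity_open + equity_close) / 2.0)
    lmc = np.log(market_cap)          # assumed: natural log of market capitalisation
    dr = total_debt / total_assets    # assumed: total debt over total assets
    return roe, lmc, dr
```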
Measuring corporate financial and market performance
ROA, GR and published HEPS were considered to be appropriate proxies for financial performance: ROA is a measure of the efficiency, effectiveness and economy with which a company utilises its assets to generate profits, while GR is indicative of its potential for future growth. HEPS is a sophisticated earnings per share figure (SAICA Circular 08/07 Headline earnings, 2010:4), which is mandatorily disclosed by all JSE-listed companies (JSE Limited Listing Requirements - Service Issue 13, 2010:8-20), and is commonly used in South African analyst reports to assess company performance.
M/B and TSR were chosen to represent market performance: M/B reflects stock market performance because the higher the M/B, the better the company's ability to influence its stock market value through the management of its net assets. TSR represents the total return gained by an ordinary shareholder as it incorporates both the capital gain and the dividend declared per share (Tan, Plowman & Hancock, 2007:82).
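The two market measures can be sketched in code as follows. The single-period TSR formula shown is the conventional one implied by the description above (capital gain plus the dividend declared, relative to the opening price), not a formula quoted from the paper:

```python
def total_shareholder_return(price_start: float, price_end: float,
                             dividend_per_share: float) -> float:
    """Single-period TSR: capital gain plus the dividend declared per
    share, expressed relative to the opening share price."""
    return (price_end - price_start + dividend_per_share) / price_start

def market_to_book(market_capitalisation: float,
                   book_value_of_equity: float) -> float:
    """M/B: market value of equity over its book value."""
    return market_capitalisation / book_value_of_equity
```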
Tests of statistical integrity
Various data transformations were performed to ensure the statistical reliability of the regressions, with minimal adjustment to the underlying data. Share volumes in the research data were adjusted for share splits and consolidations, to ensure consistency over the research time period. Although the control factors and ROA were found to be normally distributed, VAHU and the remaining dependent variables were positively skewed and displayed leptokurtosis (refer to Table 1). This non-normality is considered acceptable, as it is common in financial ratios (Barnes, 1982:51; Deakin, 1976:95; So, 1987:488), since positive skewness would prevail in ratios with a lower limit of zero and no real upper limit (Ezzamel, Mar-Molinero & Beecher, 1987:466). Outlier bias was addressed conservatively by winsorising only the outlier portion of extreme values to three standard deviations from the population mean - a technique preferable to other marginal models (Tsay, Pena & Pankratz, 2000:803). Heteroskedasticity was rectified through robust covariance matrix estimation (Hayes & Cai, 2007:714).
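The winsorisation step lends itself to a short sketch. This is an illustrative implementation, not the authors' code; it clips only the extreme tails back to three standard deviations from the mean, leaving all other observations unchanged:

```python
import numpy as np

def winsorise_3sd(x: np.ndarray) -> np.ndarray:
    """Clip values lying beyond three standard deviations from the mean
    back to that boundary, leaving the rest of the sample intact."""
    mu, sd = np.nanmean(x), np.nanstd(x)
    return np.clip(x, mu - 3.0 * sd, mu + 3.0 * sd)
```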
Descriptive statistics and correlation analysis
The descriptive statistics of the research population are presented in Table 1. While the median ROA was fairly high at 17.5%, the median VAHU clearly indicates that South African companies on the JSE were able to generate value from their human capital. The financial health of South African listed companies over the period under review appears to be strong, as the median TSR of 16.8% is much higher than the prime rate of interest in South Africa (which declined from 13% to 9% over the period) and the median GR of 12.9% is considerably higher than the target inflation rate of 3% to 6% over the period under review. Those favourable conditions translated into higher share valuations, attested by a median M/B greater than 1, despite the usual concerns surrounding emerging markets, such as weaker information environments, questionable corporate governance, lack of transparency and liquidity, and governmental corruption (Bruner, Conroy, Estrada, Kritzman & Li, 2002:319).

Table 2 presents the bivariate pairwise correlation analyses performed to establish a linear relationship between the independent, dependent and control variables prior to undertaking the regressions. No excessively strong correlations (0.7 and higher or -0.7 and lower) were found between any of the variables. VAHU was found to have a positive and statistically significant linear relationship with all measures of financial and market performance. This supports the traditional thinking that human capital efficiency has a positive effect on corporate performance. VAHU was more strongly associated with ROA and HEPS than the other dependent variables. GR showed a moderate association with VAHU, while the relationships between VAHU and both M/B and TSR were weak.
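A screen like the one described, using the same 0.7 cut-off for excessively strong correlations, might look as follows (illustrative only; the variable columns are assumptions):

```python
import pandas as pd

def flag_strong_correlations(df: pd.DataFrame, threshold: float = 0.7):
    """Pairwise Pearson correlations between the regression variables,
    flagging any pair with |r| at or above the cut-off."""
    corr = df.corr()
    cols = list(corr.columns)
    flags = [(a, b, corr.loc[a, b])
             for i, a in enumerate(cols) for b in cols[i + 1:]
             if abs(corr.loc[a, b]) >= threshold]
    return corr, flags
```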
Industry VAHU
Traditional thinking would imply that the value-generating ability of human capital would be expected to be higher in those industries where the quality, expertise, training and skill of the employee base are perceived as better. Therefore, the average VAHU would be expected to be higher in Financials, Technology and perhaps Consumer Services. Conversely, the average VAHU in Industrials, Consumer Goods and Basic Materials would intuitively be expected to be lower.
Although the median VAHU of each industry (presented in Table 3) roughly resembles this instinctive industry ranking, human capital efficiency was found to be higher than expected in Basic Materials and lower than expected in Technology.
The high human capital efficiency of workers in Basic Materials may be attributable to the mining situation in South Africa, where heavily under-paid, unskilled labourers produce mineral outputs that are worth many times more than the Rand cost of their wages. Industrial action in response to this inequity has resulted in the loss of more working hours in this industry than in any other (Republic of South Africa, 2011:17). Workers in the field of computer services, telecommunications and information technology are presumed to hold specialised knowledge and expertise. Yet this knowledge capital does not guarantee that value will be added, as many working hours may be spent on research and development or experimental projects which may never become profitable. The lower than expected VAHU in Technology is therefore likely to be attributable to the nature of the industry, in that certain human capital expenditure might not result in earnings.
Regression results
Given the large number of regressions performed, the regression results are presented diagrammatically in panels in Figure 1. For ease of reading, the regression outputs were limited to the β coefficients for VAHU - i.e. those which directly describe the impact of human capital efficiency on the dependent variables. A very high VAHU β was observed with respect to HEPS in all industries (H1-c in Panel I to VI of Figure 1). These coefficients were all statistically significant (p<0.05), with the exception of Technology. The magnitude of these standardised coefficients is a strong indication of multicollinearity. Therefore the β coefficients may still be used to describe the direction of the relationship between VAHU and HEPS, but they grossly overstate the strength of that relationship.
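The multicollinearity the authors infer from the inflated standardised coefficients can be checked directly with variance inflation factors. The paper does not report VIFs, so the sketch below is offered purely as a standard supplementary diagnostic:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.Series:
    """Variance inflation factor for each regressor; values well above
    10 are a common rule-of-thumb signal of multicollinearity."""
    Xc = sm.add_constant(X)
    return pd.Series(
        [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
        index=X.columns,
    )
```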
Intra-industry analysis
The coefficient for VAHU was positive and statistically significant (p<0.05) with respect to ROA, GR and HEPS for Basic Materials, Financials and Industrials. The VAHU β coefficient was statistically significant (p<0.05) and positive with respect to ROA and HEPS in Consumer Goods and Consumer Services. However, their VAHU coefficients relating to GR were not statistically significant. The VAHU coefficients relating to HEPS were not statistically significant in Technology, but the β coefficients relating to ROA (p<0.01) and GR (p<0.10) were significantly positive.
The β coefficient was not statistically significant with respect to M/B and TSR in any industry.
Inter-industry analysis
The regression results for each measure of financial and market performance were analysed across the different industries in order to identify any trends. The VAHU β relating to GR was statistically significant (p<0.10) and positive in Basic Materials, Financials and Industrials, although it was not significant in Consumer Goods and Consumer Services. VAHU was found to have a significantly positive β (p<0.05) for HEPS in all industries, except Technology (where it was not significant). The β coefficient for VAHU was found to be positive and statistically very highly significant (p<0.001) with respect to ROA in all the industries. ROA also displayed the strongest association with VAHU in the preliminary correlation analysis (refer to Table 2). On the other hand, no significant β coefficients were identified for VAHU with respect to M/B and TSR in any industry.
Discussion
The conclusions formulated from the individual industry regressions have been summarised in Table 4. Much of the prior research on this subject matter was inconsistent or inconclusive, at best. The outcomes of this study, however, paint a far more definitive picture of the relationship between human capital efficiency and firm performance across the various industries in South Africa. Human capital efficiency has little to no effect on a company's market performance in South Africa, irrespective of the industry in which it operates. Neither the premium to net asset value at which a company trades (i.e. M/B), nor capital and dividend returns (i.e. TSR), appear to be influenced by VAHU. This contradicts the positive impact of VAHU on share prices (Swartz et al., 2006:78) and negative impact on M/B (Firer & Williams, 2003:357) observed in earlier exploratory South African studies. In addition, this result challenges prior research in Greece (Maditinos et al., 2011:146) and Italy (Puntillo, 2009:112) which confirmed a positive relationship between the variables. As experienced in other emerging economies such as Malaysia (Gan & Saleh, 2008:127) and Thailand (Appuhami, 2007:24), the independence of market performance from human capital efficiency may be due to a lack of sophistication in a relatively young stock exchange. In emerging stock markets, market sentiment is a stronger driver of share prices than fundamental analysis (Gan & Saleh, 2008:127). It is therefore plausible that, in South Africa, public perceptions about corruption, crime and other prominent macroeconomic and microeconomic conditions play a bigger role in share pricing than human capital efficiency does.
Higher VAHU was found to be associated with higher ROA in all the industries in South Africa. This finding is in contrast with that by Firer and Williams (2003:356), who could not identify any meaningful relationship. Human capital enhancement directly influences operational performance in a manufacturing environment by improving staff productivity, machine efficiency and customer satisfaction (Youndt, Snell, James & Lepak, 1996:858). In an industrialised economy where intellectual capital investment is overlooked in favour of investment in physical assets, propagating the trade-off between physical assets and human capital may result in poorer financial performance.
Consumer Goods and Consumer Services are composed of food and beverage retailers, the fishing and farming sectors, motor manufacturers, other retailers and personal services providers. Although service delivery and production in these industries remains dependent on human capital to some degree, the financial performance of the companies in these two industries is driven primarily by consumer demand. As could therefore be expected, VAHU was found to offer little to no explanatory power for GR in these industries despite having a favourable effect on the bottom line (through higher ROA and HEPS).
HEPS was found to be independent of VAHU in Technology. This is possibly due to the research and development side of the industry's operations, which is also considered to be the cause of the unexpectedly low value-generating ability of its workforce. Firer and Williams (2003:357) observed a similar disassociation between profitability and human capital efficiency in the information sector. VAHU was still found to positively impact all the other measures of financial performance in Technology and Industrials.
VAHU contributes positively to ROA, GR and HEPS in Financials and Basic Materials. Therefore, higher human capital efficiency results in a stronger financial performance in those industries where employees have a greater capacity for deriving company value from their knowledge and skills (i.e. those industries with higher VAHU). This clarifies prior research in the Malaysian financial sector (Muhammad & Ismail, 2009:210) and in South Africa (Firer & Williams, 2003:357), which found no clear association between VAHU and profitability.
Conclusion
The scope of this investigation into the relationship between human capital efficiency and corporate performance in South Africa was broad. Thirty multivariate regressions were performed, involving both inter-industry and intra-industry testing of the impact of VAHU on five measures of financial and market performance (ROA, GR, HEPS, M/B and TSR) across all six industries of the JSE over a ten-year period. These analyses consequently offer deeper insights than prior studies, which were either exploratory, lacking in time depth or largely restricted to economic sectors not inherently intensive in intellectual capital.
Firstly, human capital efficiency was found to have little to no direct effect on the market performance of listed companies in South Africa. This may be because investor perceptions carry more weight than fundamental analysis in share pricing in emerging markets (Gan & Saleh, 2008:127).
Secondly, higher human capital efficiency was found to result in the extraction of greater returns from both tangible and intangible assets in all the industries. Management are advised to build the value-creating competencies of their workforce through skills development and training of workers in order to derive greater benefit from the company's physical capital resources. A more competent employee would also be better able to draw value from the company's intangible assets - through innovation, improvements in organisational culture and exploiting stakeholder relationships for competitive advantage. This finding debunks the South African tendency to trade off intellectual capital expenditure in favour of investment in physical assets (Firer & Stainbank, 2003:36).
Thirdly, higher profitability was found to be associated with higher human capital efficiency in almost every industry in South Africa. Employee remuneration is often the largest expense in any company's income statement. Because minimising this cost usually means a reduction in the size of the workforce that is accompanied by diminished production capacity, it does not guarantee higher headline earnings. Instead, South African companies should focus on improving the value-creating ability of their employees through the provision of job-specific training and incentivising workers to pursue further education. The exception to this finding is in Technology, where human capital efficiency was found to be independent of profitability - further industry-specific investigation may shed light on this unusual phenomenon.
Finally, higher revenue growth was found to be positively associated with human capital efficiency in those industries which are not consumer-driven. The perceived degree of intensity in intellectual capital or knowledge capital of an industry does not appear to affect the relationship between human capital and revenues. In the non-consumer-driven industries - Financials, Basic Materials, Industrials and Technology - optimising the value-added per Rand of employee costs is associated with stronger growth in business revenues. In the consumer-driven industries (Consumer Services and Consumer Goods), however, it was established that human capital efficiency is not a driver for revenue growth. In such industries, the microeconomics of supply and demand is the chief driving force behind a company's turnover. As with all the other industries, management in consumer-driven industries are nevertheless encouraged to develop the value-generating abilities of their workers through employee enrichment and training programs. Overall, the results of this study indicate that human capital efficiency enhances a company's financial performance - be it through a greater capacity for production and service delivery, tighter cost controls or better use of company resources.
As discussed, the measure of human capital efficiency used in this research does not incorporate the cost of employer-funded employee skills development and excludes the remuneration of directors. Incorporating the value-creating efficiency of directors may yield different results. Alternatively, isolating it may lend support for or against the common practice of paying directors a substantially higher level of remuneration. Finally, provided that empirical training cost data could be collected, a separate investigation into the value-added per Rand spent on employee training might cultivate a stronger corporate commitment to staff enrichment and empowerment.
Figure 1 Regression of corporate performance on human capital efficiency
Table 1
Descriptive statistics of the research population
Table 2
Correlation matrix for the regression variables
Table 3
Human capital efficiency per industry
Table 4
Summary of the effect of human capital efficiency on corporate performance
In Vitro and In Vivo Screening of Wild Bitter Melon Leaf for Anti-Inflammatory Activity against Cutibacterium acnes
Cutibacterium acnes (formerly Propionibacterium acnes) is a key pathogen involved in the development and progression of acne inflammation. The numerous bioactive properties of wild bitter melon (WBM) leaf extract and their medicinal applications have been recognized for many years. In this study, we examined the suppressive effect of a methanolic extract (ME) of WBM leaf and fractionated components thereof on live C. acnes-induced in vitro and in vivo inflammation. Following methanol extraction of WBM leaves, we confirmed the anti-inflammatory properties of the ME in C. acnes-treated human THP-1 monocyte and mouse ear edema models. Using a bioassay-monitored isolation approach and a combination of liquid–liquid extraction and column chromatography, the ME was then separated into n-hexane, ethyl acetate, n-butanol and water-soluble fractions. The hexane fraction exerted the most potent anti-inflammatory effect, suppressing C. acnes-induced interleukin-8 (IL-8) production by 36%. The ethanol-soluble fraction (ESF), which was separated from the n-hexane fraction, significantly inhibited C. acnes-induced activation of mitogen-activated protein kinase (MAPK)-mediated cellular IL-8 production. Similarly, the ESF protected against C. acnes-stimulated mouse ear swelling, as measured by ear thickness (20%) and biopsy weight (23%). Twenty-four compounds in the ESF were identified using gas chromatography–mass spectrometry (GC/MS) analysis. Using co-cultures of C. acnes and THP-1 cells, β-ionone, a compound of the ESF, reduced the production of IL-1β and IL-8 by up to 40% and 18%, respectively. β-ionone also reduced epidermal microabscess, neutrophilic infiltration and IL-1β expression in mouse ear. We also found evidence of the presence of anti-inflammatory substances in an unfractionated phenolic extract of WBM leaf, and demonstrated that the ESF is a potential anti-inflammatory agent for modulating in vitro and in vivo C. acnes-induced inflammatory responses.
Introduction
Acne vulgaris (acne) is one of the most common and chronic inflammatory skin diseases associated with abnormal keratinization, increasing sebum production, bacterial colonization and inflammation [1]. It is widely accepted that the commensal bacterium Cutibacterium acnes (previously named Propionibacterium acnes) is involved in the initiation and prolongation of inflammation [1,2]. For example, C. acnes produces numerous hydrolytic enzymes, including lipases, proteases and hyaluronidases, that can damage skin [3], and modulates inflammatory responses triggered when monocytes are stimulated by C. acnes through the activation of toll-like receptor 2 (TLR2) [1,3]. During the inflammation, pro-inflammatory cytokines secreted from monocytes attract neutrophils, basophils and T cells to the infected pilosebaceous unit, leading to the process of chronic inflammation [4]. Over-expression and excessive secretion of pro-inflammatory cytokines have been highly correlated with the severity of acne in patients [5]. Since anti-cytokine therapies have been applied for various inflammatory diseases [6], identifying new agents that suppress C. acnes-induced inflammation might be useful in treating acne vulgaris.
Wild bitter melon (Momordica charantia L. var. abbreviate Seringe; WBM), a member of the family Cucurbitaceae, is a tropical and subtropical vine that is native to Asia, Africa and the Caribbean region. WBM fruits and leaves are widely consumed on a daily basis, as well as used as folk remedies to treat numerous conditions and symptoms, including relieving metabolic syndromes [7], diabetes [8] and inflammatory responses [9], lowering blood lipid and glucose levels [10] and also for slowing the progression of certain cancers [8]. Like many other plants, WBM fruit and leaves are rich sources of a wide variety of carotenoids and polyphenols [11] that may have beneficial applications in humans. For example, WBM fruit extracts inhibited C. acnes-induced in vitro or in vivo inflammatory responses [12]. Furthermore, WBM leaf extracts possessed significant antioxidant, cytoprotective and anti-melanogenic activities [13], and suppressed C. acnes-induced inflammation [14]. These findings indicate that extracts of WBM contain antioxidant and anti-inflammatory substances that might be used to treat acne vulgaris and perhaps other ailments as well. We therefore explored the bioactive components in WBM leaf extracts that might account for these anti-inflammatory properties.
To this end, we first used bioassay-guided fractionation methods to isolate and identify active compounds in a methanolic extract (ME) of WBM leaves. Next, using two C. acnes-induced models of human THP-1 monocytes and mouse ear edema, we determined the anti-inflammatory effect of WBM leaf extract and its components against C. acnes. In addition, the mechanisms by which WBM leaf extract and its components suppress inflammation were explored. This study provides new understanding regarding how active ingredients of WBM extracts modulate in vitro and in vivo inflammatory responses.
Effects of ME of WBM Leaf on C. acnes-Induced Cellular IL-8 Production and Mouse Ear Edema
To investigate whether incubation of THP-1 cells with the ME of WBM leaf affected cell viability, the culture medium was supplemented with various concentrations (up to 100 µg/mL) of tested samples. No negative effect on cell proliferation was observed when the concentration of WBM leaf ME was 100 µg/mL or less (data not shown). Since IL-8 is a neutrophilic chemokine which plays a critical role in the development of acne vulgaris, we measured the production of IL-8 by THP-1 cells in order to determine whether tested samples suppressed C. acnes-induced inflammation. Figure 1a shows that IL-8 production was significantly increased following C. acnes stimulation. However, supplementation of the medium with different concentrations of the ME of WBM leaf significantly reduced the production of IL-8 by as much as 36%. Using the C. acnes-induced mouse ear edema model, we investigated the in vivo anti-inflammatory effects of the ME of WBM leaf. Inoculation of mouse ears with C. acnes increased edema 2.1-fold (p < 0.001) relative to the control. WBM leaf ME (0.25 and 0.5 mg) significantly reduced ear swelling as measured by ear thickness (by 25%) and ear disc weight (by 32%) (Figure 1b). These findings indicate that topical injection of the ME of WBM leaf suppresses C. acnes-induced inflammation in vivo. Our results are in accordance with previous findings reported by Huang and colleagues, who demonstrated that total phenolic extracts of WBM leaves mitigated inflammation by suppressing C. acnes-induced production of cellular cytokines and mouse ear swelling [14]. Furthermore, ethanol/ethyl acetate extract of WBM leaf inhibited heat-killed P. gingivalis-induced cytokine production by THP-1 cells [15]. Collectively, these findings document that WBM leaf extracts suppressed bacteria-induced inflammation in vitro and in vivo.
Lu and colleagues previously demonstrated that WBM extracts inhibit the growth of a variety of bacterial species [16]. In the present study, having documented that the ME of WBM leaf reduces cytokine production by virtue of its anti-inflammatory properties, we speculated that the decrease in pro-inflammatory mediator production might be due to the anti-bacterial effect of the ME of WBM leaf. In fact, the results of our antibacterial assay showed that the minimal inhibitory concentration (MIC) of WBM leaf ME was higher than 500 μg/mL; thus, the MIC value was at least 5-fold higher than the highest concentration (100 μg/mL) of WBM leaf ME included in the THP-1 cell culture medium. We conclude, therefore, that the anti-bacterial properties of WBM leaf ME do not contribute to the reduction of C. acnes-induced pro-inflammatory cytokine production. (25,50 or 100 μg/mL) of ME for 24 h. The culture supernatants were subsequently collected and analyzed for the IL-8 levels (a). In the ear edema mouse model, ME (0.25 or 0.5 mg/site) or vehicle (phosphatebuffered saline; PBS) was intradermally injected, immediately followed by the C. acnes injection. The inhibitory effects of ME on C. acnes-induced ear swelling were evaluated by measuring the ear thickness and ear biopsy weight (b). Each value shows the mean ± SD. Values with different symbols are significantly different from the C. acnes control (C. acnes alone) at p < 0.05 (*) and p < 0.001 (***).
Effects of Four Partitioned Fractions from ME of WBM Leaf on C. acnes-Induced Cellular IL-8 Production
In a separate study, to further inquire if four partitioned fractions from the ME of WBM leaf would suppress IL-8 production, the culture medium was supplemented with various concentrations (up to 800 μg/mL) of tested samples. There was no cytotoxic effect when THP-1 cells were incubated with culture medium containing as much as 200 μg/mL of the n-hexane (Hex), or 400 μg/mL of the other three sub-extracts, EtA, BuOH and H2O (data not shown). The production of IL-8 by C. acnesstimulated THP-1 cells was markedly decreased by Hex (up to 51.2% at 200 μg/mL), followed by BuOH (46.6% at 300 μg/mL), EtA (40.0% at 300 μg/mL) and H2O (27.5% at 300 μg/mL) ( Figure 2). Among the four sub-extracts, Hex exerted the most potent suppressive effect on IL-8 production. Lu and colleagues previously demonstrated that WBM extracts inhibit the growth of a variety of bacterial species [16]. In the present study, having documented that the ME of WBM leaf reduces cytokine production by virtue of its anti-inflammatory properties, we speculated that the decrease in pro-inflammatory mediator production might be due to the anti-bacterial effect of the ME of WBM leaf. In fact, the results of our antibacterial assay showed that the minimal inhibitory concentration (MIC) of WBM leaf ME was higher than 500 µg/mL; thus, the MIC value was at least 5-fold higher than the highest concentration (100 µg/mL) of WBM leaf ME included in the THP-1 cell culture medium. We conclude, therefore, that the anti-bacterial properties of WBM leaf ME do not contribute to the reduction of C. acnes-induced pro-inflammatory cytokine production.
Effects of Four Partitioned Fractions from ME of WBM Leaf on C. acnes-Induced Cellular IL-8 Production
In a separate study, to further inquire if four partitioned fractions from the ME of WBM leaf would suppress IL-8 production, the culture medium was supplemented with various concentrations (up to 800 µg/mL) of tested samples. There was no cytotoxic effect when THP-1 cells were incubated with culture medium containing as much as 200 µg/mL of the n-hexane (Hex), or 400 µg/mL of the other three sub-extracts, EtA, BuOH and H2O (data not shown). The production of IL-8 by C. acnes-stimulated THP-1 cells was markedly decreased by Hex (up to 51.2% at 200 µg/mL), followed by BuOH (46.6% at 300 µg/mL), EtA (40.0% at 300 µg/mL) and H2O (27.5% at 300 µg/mL) (Figure 2). Among the four sub-extracts, Hex exerted the most potent suppressive effect on IL-8 production.
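The suppression percentages quoted above are presumably expressed relative to the C. acnes-only control. The exact formula is not stated in this excerpt, so the function below shows the conventional calculation as an assumption:

```python
def percent_inhibition(il8_treated: float, il8_stimulated: float,
                       il8_baseline: float = 0.0) -> float:
    """Percent suppression of IL-8 relative to the C. acnes-only control,
    optionally net of the unstimulated baseline (conventional form)."""
    return 100.0 * (il8_stimulated - il8_treated) / (il8_stimulated - il8_baseline)

# Example with hypothetical concentrations: 490 pg/mL with Hex versus
# 1000 pg/mL with C. acnes alone gives 51% inhibition.
print(percent_inhibition(490.0, 1000.0))
```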
Figure 2. Effects of four partitioned fractions from ME of WBM leaf on C. acnes-induced IL-8 production in vitro. Four partitioned fractions as described in Figure 6, including n-hexane (Hex), ethyl acetate (EtA), n-butanol (n-BuOH) and water (H2O) sub-extracts. THP-1 cells were cultured with DMSO as the negative control, or co-incubated with C. acnes (M.O.I. = 75) and different concentrations of the four respective sub-extracts for 24 h. IL-8 level was examined by the method described above. Each value shows the mean ± SD. Values with different symbols are significantly different from the C. acnes control (C. acnes alone) at p < 0.05 (*), p < 0.01 (**) and p < 0.001 (***).
Certain plant phytochemicals, such as polyphenols and flavonoids, can suppress the production of pro-inflammatory mediators, such as cytokines and eicosanoids, involved in the process of acute inflammation in humans [17]. In fact, one of the mechanisms underlying the anti-inflammatory effects of herbal extracts may be due, in part, to the antioxidant compounds they contain. It is widely recognized that excessive production of free radicals, such as hydroxyl, superoxide and nitric oxide in sebum produced by C. acnes-infected sebaceous glands, and their uncontrolled regulation play a critical role in initiating and prolonging inflammation [18,19]. Phytochemicals exert their antioxidant properties by scavenging free radicals generated during the process of inflammation, thereby reducing oxidative stress and cell damage. We report that the total phenolic contents of the Hex, EtA, BuOH and H2O sub-extracts were 37.2 ± 2.42, 21.0 ± 0.76, 31.6 ± 1.22 and 26.6 ± 2.32 mg GAE/g, respectively. The presence of phytochemical compounds in the fractions may be partially responsible for the anti-inflammatory properties observed.
Effects of Ethanol-Soluble Fraction (ESF) on C. acnes-Induced Cellular IL-8 Production and Mouse Ear Edema
Since the Hex fraction from the ME of WBM leaf exerted the most potent suppressive effect on IL-8 production, it was therefore subjected to further separation techniques. Using silica gel column chromatography, three sub-fractions of the Hex fraction were collected. Among the three sub-fractions, the ethanol phase of the first sub-fraction (Hex-1) was cytotoxic at doses above 50 µg/mL. There was no cytotoxicity of Hex-3 at concentrations below 200 µg/mL, and no inhibitory effect on IL-8 production was observed (data not shown). The ethanol solution of the second sub-fraction (Hex-2) had no inhibitory effect on cell proliferation; however, Hex-2 had a significant suppressive effect on C. acnes-induced IL-8 production at concentrations below 100 µg/mL.
We further explored the modulatory effect of different concentrations (12.5, 25, 50 or 100 µg/mL) of Hex-2 (designated as ESF) on C. acnes-induced IL-8 production, and we found that the ESF significantly lowered levels of IL-8 when the dose was 25 µg/mL or higher (Figure 3a), indicating that the ESF exerted in vitro anti-inflammatory effects in THP-1 monocytes. Since C. acnes-induced inflammatory responses are strongly regulated by the mitogen-activated protein kinase (MAPK) pathway [20], we inquired whether the ESF might be inhibiting IL-8 production by way of inactivating MAPK. Figure 3b shows that C. acnes stimulation significantly increased levels of phosphorylated p38, extracellular signal-regulated kinase (ERK) and c-Jun N-terminal kinase (JNK). However, incubation of C. acnes-stimulated THP-1 monocytes with the ESF significantly suppressed the expression of phosphorylated p38 by up to 48%, ERK by up to 43% and JNK by up to 46%.
Furthermore, using the mouse ear edema model, we observed that the ESF exerted in vivo anti-inflammatory properties by lowering C. acnes-stimulated ear thickness by up to 20% and ear biopsy weight by up to 23% (Figure 3c). The anti-inflammatory agent luteolin suppressed mouse ear swelling by 11.5%. We found a similar extent of mouse ear swelling relieved by the injection of the ESF (2-4 µg) or luteolin (50 µg), indicating that certain components in the ESF might account for the potent suppressive effect. This finding prompted us to identify which bioactive components might be responsible for the anti-inflammatory properties of the ESF sample.
MAPK, a family of serine/threonine protein kinases that includes p38, ERK and JNK, is involved in the regulation of a wide variety of fundamental cellular responses, physiologic functions and pathological processes [21]. It is widely accepted that the MAPK signaling cascade is activated by bacterial stimuli through TLR2, resulting in the over-expression and production of various pro-inflammatory mediators [22]. For example, MAPK plays a central role in Streptococcus pneumoniae- or C. acnes-stimulated inflammatory responses in murine microglia and human neonatal epidermal keratinocytes [20,22]. The results of the present study demonstrate that incubation of C. acnes-stimulated THP-1 cells with the ESF significantly attenuates MAPK activation, resulting in the suppression of IL-8 production (Figure 4). This finding corroborated our previous finding that expression of phosphorylated MAPK is suppressed by the unfractionated phenolic extract (TPE) of WBM leaf in THP-1 cells co-stimulated with C. acnes [14]. This mechanism might be strongly related to the ability of the ESF to lower the production of downstream pro-inflammatory mediators, thereby limiting in vivo inflammatory responses.
GC-MS Analysis of ESF
Using GC-MS, we separated and identified 24 known biological compounds in the ESF (Table 1). The identification and characterization of compounds were based on the order of elution in an HP-5MS column. The retention time, electron ionization mass spectrum (EIMS) fragments, molecular formula and the percentage of each of these compounds were also recorded. There were 11 fatty acid alkyl esters, 7 terpene-related compounds, 3 ketones, 2 alkanes and 1 aldehyde present in the ESF, and the three major compounds were 3-phytylmenadione (vitamin K1) (15.8% of total), α-tocopherol (13.6%) and squalene (12.2%).
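The compound percentages reported above are of the kind conventionally derived from relative chromatographic peak areas. The sketch below is an illustration only; the paper's exact integration procedure may differ:

```python
def percent_composition(peak_areas: dict) -> dict:
    """Relative abundance of each identified compound as a percentage of
    the total integrated peak area (conventional GC-MS normalisation)."""
    total = sum(peak_areas.values())
    return {compound: 100.0 * area / total
            for compound, area in peak_areas.items()}
```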
Figure 3. Effects of ESF on C. acnes-induced IL-8 production and MAPK activation in THP-1 cells, and on mouse ear edema. IL-8 concentration was determined using the method described above (a). MAPK activation was determined by western blot (b). In the ear edema mouse model, PBS, ESF (2, 4 or 6 µg/site) or luteolin (50 µg/site) was intradermally injected, immediately followed by the C. acnes injection. Infiltrated neutrophils were observed in a hematoxylin and eosin-stained cross section of the C. acnes-injected ear (×1000 magnification panel). Arrow: neutrophil infiltration. Scale bars represent 200 µm. The inhibitory effects of ESF and luteolin on C. acnes-stimulated mouse ear edema were quantified as described above (c). Each value shows the mean ± SD. Values with different symbols are significantly different from the C. acnes control (C. acnes alone) at p < 0.05 (*), p < 0.01 (**) and p < 0.001 (***).
Previously, 3-phytylmenadione had been reported to suppress lipopolysaccharide (LPS)-stimulated IL-6 production by inactivating the nuclear factor kappa B (NF-κB) signaling pathway in murine RAW264.7 macrophages, human THP-1 monocytes and primary human fibroblasts [23,24]. Alpha-tocopherol exerted anti-inflammatory properties through the modulation of cell signaling cascades, such as protein kinase C (PKC), NF-κB and peroxisome proliferator-activated receptors (PPARs) [25]. Furthermore, the natural triterpene squalene suppressed production and expression of pro-inflammatory mediators by murine peritoneal macrophages and human monocytes and neutrophils [26]. Collectively, since these three bioactive components might account for the suppressive effect of the ESF on IL-8 production, they warrant further investigation.
In addition, methyl palmitate (9.9% of total), methyl linolenate (7%) and the other nine fatty acid alkyl esters (together 6.6%) were identified in the ESF sample. Methyl palmitate has been shown to reduce chemical-induced hepatotoxicity in rats [27], and to inhibit phagocytosis and the release of pro-inflammatory mediators in primary rat Kupffer cells [28]. Methyl linolenate significantly blocks melanogenesis and inhibits tyrosinase activity in mouse B16 melanoma cells [29]. However, to date, there are no published reports of the anti-inflammatory properties of fatty acid alkyl esters. In a separate work, we found that oleic acid, linoleic acid and α-linolenic acid did not inhibit IL-8 production in C. acnes-stimulated THP-1 cells (data not shown).
We have long been interested in the anti-acne properties of non-nutrient phytochemicals in WBM. For example, we had previously reported that phytol, a precursor of vitamin E and vitamin K1, suppressed C. acnes-induced IL-8 production in THP-1 cells [12]. In this study, β-ionone and dihydroactinidiolide were further identified by comparison of the recorded mass spectra with those of authentic standards (Supplementary Figure S1). Since β-ionone and dihydroactinidiolide are both precursors and metabolites of carotenoids, we inquired whether the anti-inflammatory properties of carotenoid-rich WBM extracts might be due, in part at least, to one or both of these compounds. Based on published reports, β-ionone is a metabolite of β-carotene and other carotenoids produced by enzymatic degradation [31]. It is also a precursor in the synthesis of vitamin A and β-carotene [32,33]. Recently, β-ionone has been reported to have vitamin A activity, exert anticarcinogenic and antitumor effects in human colon cancer cells [34], and reduce LPS-stimulated inflammation in murine BV-2 microglial cells [35]. Furthermore, dihydroactinidiolide, a structural analog of the anti-inflammatory agent loliolide, is a C11-terpene flavor compound present in a wide range of plants. A recent report demonstrated that dihydroactinidiolide exerted free radical scavenging activity against 1,1-diphenyl-2-picrylhydrazyl (DPPH). It also has metal-chelating activity, and is neuroprotective against the Alzheimer's amyloid β-peptide (Aβ25-35)-induced cytotoxicity in mouse neuroblastoma Neuro2A cells [36]. However, little is known about the effects of dihydroactinidiolide in immune cells. Thus, we further determined the suppressive effect of β-ionone and dihydroactinidiolide on C. acnes-stimulated inflammation in the THP-1 cell and mouse ear edema models.
Effects of β-Ionone and Dihydroactinidiolide on C. acnes-Induced Cellular IL-8 Production and Mouse Ear Edema
As shown in Figure 4a, β-ionone and dihydroactinidiolide significantly reduced IL-8 production (up to 46%) in C. acnes-stimulated THP-1 cells, and this suppressive effect on IL-8 production was comparable with that of the anti-inflammatory agent luteolin. When both compounds were tested to determine if they also exerted an in vivo anti-inflammatory effect by reducing mouse ear edema, we found that a live culture of C. acnes stimulated the mouse ear to swell 2.5-fold (p < 0.001) and ear biopsy weight to increase 2.5-fold (p < 0.001) as compared to the untreated control (Figure 4b). Injection of β-ionone into mouse ears suppressed ear swelling by 24% (thickness) and 17% (weight), as did the injection of luteolin, by 12% (thickness) and 13% (weight) (Figure 4b). However, dihydroactinidiolide had no effect on ear thickness, but did slightly increase ear biopsy weight. These findings suggest that the anti-inflammatory properties of β-ionone and dihydroactinidiolide partially contributed to the suppressive effect of the ESF on IL-8 production in C. acnes-stimulated THP-1 cells. However, only β-ionone exerted an in vivo anti-inflammatory effect, and it plays a critical role in the relief of edema in mouse ears co-inoculated with the ESF and live C. acnes.
In a separate study, we inquired if β-ionone affected immune responses in C. acnes-stimulated mouse ear edema. The results of histological analysis demonstrated that inoculation of the mouse ear with live C. acnes caused epidermal microabscesses (Figure 4b). However, C. acnes-induced swelling was attenuated by β-ionone or luteolin treatment (Figure 4b). Flow cytometric analysis also showed that infiltration of leukocytes (CD45+) and neutrophils (CD45+Ly6G+) in mouse ear tissue was evident after 12 h of C. acnes stimulation (Figure 4c). Topical injection of β-ionone or luteolin along with the C. acnes administration significantly lowered the proportions of infiltrated inflammatory leukocytes (from 48% to 38%) and neutrophils (from 46% to 25-30%), and reduced the expression of inflammatory IL-1β in both cell populations from approximately 35% to 23% or 17%, respectively (Figure 4c).
A recent study by Fenini and colleagues demonstrated that increased amounts of IL-1β were released upon activation of inflammatory cells, keratinocytes and sebocytes with C. acnes. IL-1β and several other mediators markedly induced neutrophil recruitment to the skin [37], and infiltrated neutrophils produced large amounts of IL-1β to further promote the development of acne [38]. Since high levels of IL-1β were observed in C. acnes-induced human acne lesions and mouse skin lesions, Kistowska and colleagues suggested that IL-1β could play a critical role in the pathogenic action of C. acnes in acne vulgaris [38]. In the present study, β-ionone reduced immune cell migration and inflammatory cytokine gene expression, thereby decreasing epidermal microabscess and edema formation. We speculate that the in vivo anti-inflammatory effect of β-ionone on C. acnes-induced skin inflammation is attributable to the suppression of immune cell infiltration and IL-1β expression.
Figure 4. THP-1 cells were co-incubated with C. acnes and β-ionone or dihydroactinidiolide (10, 20 or 50 µM) for 24 h, and the IL-8 level was examined by the method described above (a). PBS, β-ionone, dihydroactinidiolide or luteolin (50 µg/site) was intradermally injected, immediately followed by the C. acnes injection; infiltrated neutrophils were observed in hematoxylin and eosin-stained cross sections of the C. acnes-injected ear, and the inhibitory effects on C. acnes-stimulated mouse ear edema were evaluated by the method described above (b). Evaluation of C. acnes-induced immune cells by flow cytometry in the mouse ear after intradermal injection of PBS, β-ionone or luteolin: twelve hours after the injection, inflammatory cells harvested from C. acnes-induced ear tissues were incubated with anti-CD45/PerCP and anti-Ly6G/FITC and analyzed by flow cytometry (c). Each value shows the mean ± SD. Values with different symbols are considered significantly different from the C. acnes control (C. acnes alone) at p < 0.05 (*), p < 0.01 (**) and p < 0.001 (***).
Effects of β-Ionone on Cellular IL-1β Production and Caspase-1 Expression
Using human THP-1 monocytic cells, we investigated a possible mechanism underlying the inhibitory effect of β-ionone on C. acnes-stimulated IL-1β expression. Figure 5a shows that C. acnes significantly induced IL-1β production in THP-1 monocytes. The induction of IL-1β by C. acnes is considered a major factor causing and prolonging skin inflammatory responses. When cells were co-incubated with C. acnes and β-ionone, levels of secreted IL-1β were significantly reduced, by up to 39% (Figure 5a). We further inquired whether the increased production of IL-1β by C. acnes could be attributed to the induced expression of a specific protease, caspase-1, previously called interleukin-1β converting enzyme (ICE), which is the rate-limiting enzyme involved in the cleavage of pro-IL-1β (the precursor of IL-1β) to form IL-1β [39]. We observed that C. acnes stimulated the over-expression of pro-caspase-1 (the inactive zymogen of caspase-1) and of the proteolytic subunit p10 of pro-caspase-1 (active caspase-1) (Figure 5b). β-ionone did not affect the expression of pro-caspase-1; however, it significantly suppressed cleaved caspase-1, by up to 54% (50 µM), as compared to the C. acnes-stimulated control (Figure 5b). Values with different symbols are considered significantly different from the C. acnes control (C. acnes alone) at p < 0.01 (**) and p < 0.001 (***).
The data in Figures 4 and 5 indicate that the increased levels of IL-1β expression and production in mouse ear edema and murine monocytes might be due, in part, to the stimulation of active caspase-1 expression by live C. acnes. Our findings are in accordance with previous reports showing that C. acnes-upregulated caspase-1 gene expression triggers the cleavage of pro-IL-1β to promote the production and secretion of IL-1β [40,41]. It is widely recognized that caspase-1 is a critical regulator of the inflammatory response, and that this highly specific protease is activated by a cytosolic multi-protein complex, named the inflammasome (a caspase-1-activating platform), in response to exposure to C. acnes. Since excessive levels of IL-1β, caspase-1 and the NLRP3 inflammasome were detected in C. acnes-infected pilosebaceous follicles and skin lesions, advancing the understanding of the molecular mechanisms involving IL-1β, caspase-1 and the inflammasome may provide insight into acne pathogenesis and identify targets for the therapy of acne.
Using the bioassay-guided isolation approach, we herein demonstrated that the ME of WBM leaf and its four partitioned fractions suppressed C. acnes-induced inflammatory responses in THP-1 cells. Furthermore, the ESF collected from the most potent hexane fraction significantly reduced C. acnes-induced cellular IL-8 production through the inactivation of MAPK cell signaling, and alleviated C. acnes-stimulated mouse ear edema. Our results show that the ESF is more potent than luteolin, an anti-inflammatory agent, in suppressing C. acnes-induced inflammation. We therefore suggest that this ESF should be considered as a therapeutic means to alleviate or relieve inflammatory skin diseases. Future in vivo investigations of the anti-inflammatory, toxicological and physiological aspects of the effects of the ESF on skin inflammation are warranted.
In addition to WBM leaf extracts, β-ionone significantly reduced C. acnes-stimulated pro-inflammatory IL-1β and IL-8 production and mouse ear edema. The anti-inflammatory effect of β-ionone on immune responses might be due, in part, to the suppression of C. acnes-induced upregulated caspase-1 gene expression. A limitation of the present study is that β-ionone was not one of the major compounds among the 24 identified components of the ESF, so our findings account for only part of the anti-inflammatory properties of the ESF we observed. Additional studies are needed to further determine the key components involved in the suppressive effects of the ESF on inflammatory responses.
In conclusion, we prepared methanolic extracts of WBM leaf, four fractions partitioned from the ME, and the ESF using bioassay-guided isolation techniques. The ESF significantly suppresses C. acnes-stimulated, MAPK-mediated pro-inflammatory IL-8 production in human monocytes, and reduces ear swelling in a C. acnes-induced mouse ear edema model. Furthermore, β-ionone from the ESF lowered caspase-1 over-expression and pro-inflammatory mediator production in THP-1 monocytes, and lessened epidermal microabscess formation, neutrophilic infiltration and IL-1β expression in the mouse ear. Collectively, our findings support that WBM leaf extracts exert anti-inflammatory properties, and also demonstrate that the ESF is a potential anti-inflammatory agent for modulating in vitro and in vivo inflammatory responses.
Isolation and Determination of Active Compounds from WBM Leaf Extract
Wild bitter melon leaves (Hualien No. 1) were obtained from the Hualien District Agricultural Research and Extension Station (Hualien, Taiwan). The leaves were washed, air dried, finely ground and extracted with methanol. As shown in Figure 6, ground WBM leaf powder was immersed and extracted with methanol (1:20, w/v) at room temperature for 4 h. The residue was re-extracted overnight with methanol (1:20, w/v). The combined filtrates were then centrifuged at 12,000× g for 10 min and evaporated to dryness to yield the ME (19.1% of the original weight). Next, combined MEs of WBM leaf (71 g) were dissolved and suspended in 200 mL of water in a separatory funnel prior to being partitioned sequentially with n-hexane, ethyl acetate and n-butanol (200 mL each, one time). The yield of the fractionated extracts was based on the weight of the crude methanol extract. After reduced-pressure evaporation or freeze drying, the four sub-extract weights were as follows: n-hexane (Hex) (24.9 g, 35% yield), ethyl acetate (EtA) (5.90 g, 8.3% yield), n-butanol (BuOH) (7.35 g, 10.3% yield) and aqueous solution (H2O) (27.1 g, 38.2% yield). All extracts were stored frozen at −20 °C until used. Following bioassay-guided procedures, the active sub-extract Hex was further fractionated using chromatographic techniques. Hex (10 g) was re-dissolved in n-hexane and separated using silica gel column chromatography (Silicycle SiliaFlash P60, 230-400 mesh). Three fractions were collected and evaporated to dryness to give "Hex-1" (184 mg; yellow), "Hex-2" (729 mg; brownish-red) and "Hex-3" (47.9 mg; light yellow). The three sub-extracts were dissolved in ethanol: Hex-3 dissolved completely, whereas Hex-1 and Hex-2 dissolved only partially, yielding an insoluble residue and an ethanol-soluble fraction. The ethanol solution of Hex-2 was then evaporated under reduced pressure and designated the "ethanol-soluble fraction (ESF)" extract (438 mg).
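As a quick cross-check of the mass balance, the quoted fraction yields can be recomputed from the stated sub-extract masses relative to the 71 g of combined ME; a minimal Python sketch:

```python
# Cross-check of the fractionation yields quoted in the text: sub-extract
# masses (g) relative to the 71 g of combined crude methanol extract (ME).
me_mass = 71.0  # g, combined ME of WBM leaf

fractions = {
    "n-hexane (Hex)": 24.9,
    "ethyl acetate (EtA)": 5.90,
    "n-butanol (BuOH)": 7.35,
    "aqueous (H2O)": 27.1,
}

for name, mass in fractions.items():
    print(f"{name}: {mass:5.2f} g -> {100.0 * mass / me_mass:4.1f}% of ME")
# Reproduces the quoted 35%, 8.3%, 10.3% and 38.2% yields within rounding.
```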
The ME of WBM leaf and the various fractions of the n-hexane layer were weighed and reconstituted with a known volume of DMSO for use in the subsequent experiments in C. acnes-stimulated THP-1 cells. Their inhibitory effects against C. acnes-induced inflammatory responses were then determined.
Figure 6. Flowchart of the extraction methods used to separate the anti-inflammatory components from WBM leaf. Fractions were designated as Hex-1, Hex-2 and Hex-3, as indicated in the figure. The structures of 1 (β-ionone) and 2 (dihydroactinidiolide) isolated from the Hex-2 layer.
Analysis of ESF by Gas Chromatography-Mass Spectrometry
The ESF from the n-hexane layer was analyzed by gas chromatography (Agilent Technologies, Palo Alto, CA, USA) and mass spectrometry using electron impact ionization mode with a quadrupole mass analyzer.
Funding: MOST-108-2320-B-264-001 MY-2 (to L.-T.C.). The APC was funded by National Taiwan Normal University, Taipei, Taiwan.
Simultaneous Realization of Wavelength Conversion, 2R Regeneration, and All-Optical Multiple Logic Gates with OR, NOR, XOR, and XNOR Functions Based on Self-Polarization Rotation in a Single SOA: An Experimental Approach
We highlight the feasibility of experimental implementation of both inverted and noninverted wavelength conversion, 2R regeneration, and all-optical logic functions, such as OR, NOR, XOR, and XNOR optical gates, by exploiting the self-polarization rotation in a semiconductor optical amplifier (SOA) device without changing the setup configuration. Switching between the optical functions is done by adjusting only the input optical power level. In order to allow optimum control and preserve the polarization state of the injected and collected signals, the polarimetric measurements have been carried out in free space.
Introduction and State of the Art
The semiconductor optical amplifier (SOA) is a promising and fundamental component in today's photonic networks and next-generation optical networks. It is characterized by high nonlinearities, compactness, multifunctionality, and a high ability of integration. It has proven to be a versatile and multifunctional device used to achieve different functions in access, core, and metropolitan networks. In particular, it has been envisioned for all-optical signal processing tasks at very high bit rates that cannot be handled by electronics, such as wavelength conversion [1][2][3][4], signal regeneration [5,6], optical switching [7], and optical logic operations [8][9][10].
All-optical wavelength converters and optical regenerators can be achieved by exploiting SOA nonlinearities such as cross-gain modulation (XGM) [11], cross-phase modulation (XPM) [3,12,13], four-wave mixing (FWM) [14,15], and cross-polarization modulation (XPolM) [6,16,17]. They have attracted much interest thanks to features such as small size, fast carrier dynamics, multifunctionality, low electrical power consumption, optical power efficiency, and high potential for integration. The main features of wavelength converters include their transparency to bit rate and signal format, operation at moderate optical power levels, low electrical power consumption, small frequency chirp, cascadability of multiple stages of converters, and signal reshaping.
All-optical wavelength converters at bit rates from 10 up to 100 Gbit/s were experimentally and theoretically investigated by Leuthold et al., using a fully integrated SOA-delayed interference configuration [1]. Furthermore, Randhawa et al. [3] have simulated a wavelength converter for future broadcast networks at 40 Gbit/s using low-cost SOAs. Their performance analysis is carried out for an all-optical frequency converter based on XPM in two SOAs arranged in a Mach-Zehnder interferometer (MZI) configuration to evaluate the conversion efficiency. Their results show that conversion is possible over a wavelength separation of 1 nm between the pump and the input wavelength. They have demonstrated that increasing the driving current can decrease the XPM effect, and that the XGM scheme shows extinction ratio degradation for conversion to longer wavelengths [3]. In addition, Spyropoulou et al. [4] have presented a theoretical and experimental performance analysis of 40 Gbit/s non-return-to-zero (NRZ) all-optical wavelength conversion using a differentially biased SOA-MZI. Their theoretical results are confirmed through experiments that demonstrate successful 40 Gbit/s wavelength conversion functionality for NRZ data signals only when a differentially biased SOA-MZI configuration is employed, whereas an error floor is obtained when 40 Gbit/s NRZ all-optical wavelength conversion with the standard single-control SOA-MZI scheme is attempted [4]. Moreover, Turkiewicz et al. [16] have reported all-optical 1310 to 1550 nm wavelength conversion based on nonlinear polarization rotation in an SOA, at a bit rate of 10 Gbit/s, in between two transmission links using two standard single-mode fiber-based spans.
Wavelength conversion based on FWM process in SOAs is an attractive technique, compared to XGM and XPM, since it is independent of modulation format, ultrafast, and capable of dispersion compensation. It offers strict transparency, including modulation-format and bit-rate transparency and is capable of multiwavelength conversions. However, it has low conversion efficiency and needs careful control of the input signal polarization. The main drawbacks of wavelength conversion based on FWM are polarization sensitivity and the frequency-shift dependent conversion efficiency. However, wavelength conversion based on XPolM is another promising approach. It uses the optically induced birefringence and dichroism in an SOA and has great potential to offer wavelength conversion with high extinction ratio.
Optical logic gates can be realized by exploiting SOA nonlinearities such as XGM [18][19][20], FWM [20,21], and XPolM [10,22,23]. Berrettini et al. [20] have demonstrated an integrable scheme of a reconfigurable and ultrafast photonic logic gate, based on a single SOA and able to process ultrafast signals. They have implemented the XNOR function exploiting XGM and FWM in an SOA, and have shown that the same scheme can be easily reconfigured to obtain AND, NOR, and NOT logic gates [20].
Although the principle of all-optical gates, wavelength conversion, and 2R optical regeneration based on nonlinear polarization rotation has already been demonstrated by other authors [6,10,[22][23][24][25], we propose and argue, in the next sections of this paper, a promising approach, which to our knowledge has not been reported yet, for implementing optical OR, NOR, XOR, and XNOR gates, a wavelength converter, and a 2R optical regenerator by exploiting the self-polarization rotation (SPR) in an SOA structure. All of these functions are implemented with the same setup configuration in free space, which allows optimum control and preservation of the polarization state of the injected and collected signals. Switching between the optical functions is done by adjusting only the input optical power level.
Presentation of the Experimental Setup
To allow optimum control and preservation of the polarization state of the injected and collected signals, the experiments were carried out in free space in the research laboratory in electronics, signal, optoelectronics and telecommunications (RESO), Brest National Engineering School (ENIB), France. We used a commercial bulk tensile-strained SOA structure (reference 1550 CRI/P-SN 2106, manufactured by OptoSpeed), based on InP/GaInAsP, with an active layer length L = 500 μm, an active zone width W = 2.5 μm, and an active layer height d = 0.2 μm. The experimental setup is shown in Figure 1. The SOA is placed in such a way that its TE and TM axes correspond, respectively, to the horizontal and vertical axes of the lab referential.
As the experiment was done in free space, the risk of errors is high. In order to reduce it, we adopted three calibration steps: (i) alignment of the optical beams, (ii) alignment of the optical elements, and (iii) calibration of the bench polarimeter under operating light.
Light emitted from the SOA was collected and collimated with a microscope objective, then passed through the equivalent of a polarization controller, which is formed with a quarter-wave plate (QWP) and a half-wave plate (HWP). Subsequently, it was passed through a linear polarizer (LP) acting as an analyzer. Then, it was recollected with a fibred collimator (FC) that is connected to an optical spectrum analyzer (OSA), having a resolution of 0.07 nm in order to reject the amplified spontaneous emission (ASE) of the SOA. The passing axis of the linear polarizer, when set vertically, coincided with the TM axis in the sample and defined a reference direction from which the orientation θ of the fast axis of the quarter-wave plate was estimated. This orientation could be modified, as the quarter-wave plate was mounted on a rotation stage whose movements were accurately determined by a computer-controlled step motor.
In order to inject a linear polarization while ensuring equal TE and TM powers, the linear input polarizer was fixed at an angle θ = 135° with respect to the horizontal axis. The linearization was made with the output polarization controller, which consists of the QWP and the HWP, whereas the signal blocking was made with the output polarizer (LP) around a power known as the blocking power. We varied both the QWP and the HWP in order to obtain the lowest possible output power.
After several tests, we chose an SOA blocking power of −2 dBm, which lies in the saturation regime of the device, because it provides a strong variation of the output power for a slight variation of the input power. This value seems to be the best compromise to optimize the static performance of the optical signal processing functions; indeed, it allows a very good improvement of the extinction ratio of the injected signal. The evolution of the transfer function of the SOA after blocking the output signal at an input power P_in = −2 dBm, for bias currents of 150 mA and 200 mA, is illustrated in Figure 2. We notice that the SOA output power takes much larger values when the injected current increases, which corresponds to a lower contribution of the ASE. We can also note that the curve of the measured static transfer function of the SOA shows three regimes according to the injected optical power. The first regime corresponds to a slow increase of the output power with increasing input power. The second regime corresponds to a fast diminution of the output power due to the blocking. In the third regime, the SOA output power again becomes more and more important with the increase of the injected optical power.
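To make the three-regime behaviour easier to picture, the following minimal Python sketch evaluates a purely phenomenological transfer function (an illustrative model, not the device physics): a saturable gain combined with a hypothetical power-dependent polarization rotation whose output analyzer is nulled at the −2 dBm blocking power. All functional forms and constants are assumptions chosen only to reproduce the qualitative slow-rise, dip, and rise shape described above.

```python
import numpy as np

def dbm_to_mw(p_dbm):
    """Convert a power level from dBm to mW."""
    return 10.0 ** (np.asarray(p_dbm, dtype=float) / 10.0)

def toy_transfer(p_in_dbm, p_block_dbm=-2.0, phi_scale=1.5, g0=100.0):
    """Qualitative SOA + analyzer output power (arbitrary units)."""
    p_in = dbm_to_mw(p_in_dbm)
    p_blk = dbm_to_mw(p_block_dbm)
    phi = phi_scale * np.log1p(p_in)        # assumed power-dependent rotation
    phi_blk = phi_scale * np.log1p(p_blk)   # rotation at the blocking power
    gain = g0 / (1.0 + p_in)                # saturable amplifier gain
    analyzer = np.sin(0.5 * (phi - phi_blk)) ** 2  # analyzer nulls P_block
    return gain * p_in * analyzer

p_dbm = np.linspace(-20.0, 8.0, 15)         # grid includes the -2 dBm null
for p, out in zip(p_dbm, toy_transfer(p_dbm)):
    print(f"P_in = {p:6.1f} dBm -> P_out = {out:8.3f} a.u.")
```

Running the sketch shows output rising slowly at low input power, collapsing to zero at −2 dBm, and growing steeply beyond it, mirroring the measured curve of Figure 2 qualitatively.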
Experimental Implementation of both Inverted and Noninverted Wavelength Conversion and 2R Optical Regeneration
All-optical wavelength conversion refers to the operation of transferring the information carried in one wavelength channel to another wavelength channel in the optical domain. It is a key requirement for optical networks, since it extends the degree of freedom to the wavelength domain. Moreover, all-optical wavelength conversion is indispensable in future optical packet switching networks to optimize the network performance metrics. It is very useful in the implementation of switches in WDM networks. In addition, it is crucial to lower the access blocking probability and therefore increase the utilization efficiency of the network resources in wavelength-routed optical networks. Referring to Figure 3, we can underline that, by exploiting the nonlinear rotation of polarization in a single SOA, we can realize both inverted and noninverted wavelength conversion according to the choice of the average power of the signal to be injected (pump). Indeed, if this value is lower than the blocking power, an inverted wavelength conversion is achieved. In the opposite case, a noninverted conversion is accomplished.
According to Figure 4, we can notice that the output extinction ratio is higher than the input extinction ratio (ER_in < ER_out). This result allows us to note that, by exploiting the self-polarization rotation, it is possible to accomplish 2R optical regeneration of a signal. The improvement of the extinction ratio is about 11 dB if the input power is fixed at 0 dBm. To benefit from the extinction ratio improvement, the power corresponding to the low level of the signal to be regenerated must be slightly above the blocking power, and the power corresponding to its high level must not be too high, in order to limit the SOA saturation phenomenon.

Concept of All-Optical OR Gate Implementation.

Figure 5 exhibits the measured static transfer function of the OR gate that can be achieved and implemented using the same experimental setup based on SPR (Figure 5: measured static transfer function and principle of operation for the OR function at a blocking power equal to −2 dBm and a wavelength equal to 1550 nm). The principle of operation for the OR function is as follows: we consider that the pump signal is composed of signals E_1 and E_2, which serve as the logical inputs of the gate. The output probe signal (E_s = E_1 + E_2) of the device serves as the logical output. The three signals E_1, E_2, and E_s are simultaneously injected into the SOA. Then, the output stage of the experimental setup is adjusted to block the signal when both pump signals are at their minimum power level, which corresponds to the low logic combination (00); consequently, the output logic level is low (0). The other cases correspond to the high logic level (1). This completes the realization of the optical OR logic gate.
Concept of All-Optical NOR Gate Implementation.
Referring to Figure 6, we can note that the optical NOR function can be achieved and implemented. Its operating principle is the following: we assume that the pump signal is composed of signals E_1 and E_2, which are considered as the logical inputs of the gate. The output probe signal (E_s = E_1 + E_2) of the device serves as the logical output. The three signals E_1, E_2, and E_s are simultaneously injected into the SOA. Then, the output stage of the experimental setup is adjusted to block the signal when at least one of the two pump signals is at its maximum power level, which corresponds to the input combinations (01, 10, 11); as a result, the output logic level is low (0). The remaining case (00) corresponds to the high logic level (1). Consequently, the same experimental setup serves to accomplish the optical NOR logic gate implementation.

Concept of All-Optical XOR Gate Implementation.

Figure 7 displays the measured static transfer function of the optical XOR gate that can be achieved and implemented using SPR. Its operating principle is the following: the pump signal is assumed to be composed of signals E_1 and E_2, which are considered as the logical inputs of the gate. The output probe signal (E_S = E_1 · Ē_2 + Ē_1 · E_2) of the device serves as the logical output. The three signals E_1, E_2, and E_s are simultaneously injected into the SOA. Then, the output stage of the experimental setup is adjusted to block the signal when both pump signals are at their maximum power level or when both are at their minimum power level, which corresponds, respectively, to the input combinations (11) and (00); in these cases the output logic level is low (0). The other cases correspond to the high logic level (1). As a result, the optical XOR logic function is implemented with the same experimental setup.
Concept of All-Optical XNOR Gate Implementation.
Referring to the measured static transfer function presented in Figure 8, we can notice that the XNOR gate can be achieved and implemented. Its principle of operation is the following: we consider that the pump signal is composed of signals E_1 and E_2, which play the role of the logical inputs of the gate. The output probe signal (E_S = E_1 · E_2 + Ē_1 · Ē_2) of the device serves as the logical output. The three signals E_1, E_2, and E_s are simultaneously injected into the SOA. Then, the output stage of the experimental setup is adjusted to block the signal when only one pump signal is at its maximum power, which corresponds to the input combinations (01 and 10); as a result, the output logic level is low (0). The other cases correspond to the high logic level (1). This corresponds to the fulfillment of the optical XNOR logic gate.
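The gate selection rule described in the preceding subsections can be summarized compactly: the analyzer stage extinguishes the probe whenever the summed pump power falls inside a blocking window, and the gate type follows from where the input power levels place each logic combination relative to that window. The short Python sketch below (with hypothetical power levels and window boundaries, in arbitrary units) reproduces the four truth tables from this single rule.

```python
LOW, HIGH = 0.1, 1.0   # hypothetical pump power levels for logic 0 / 1 (a.u.)

def blocked(total, windows):
    """True if the summed pump power falls inside any blocking window."""
    return any(lo <= total <= hi for lo, hi in windows)

# Power windows (a.u.) in which the analyzer stage extinguishes the probe.
# Possible totals here: 0.2 (00), 1.1 (01 or 10), 2.0 (11).
gates = {
    "OR":   [(0.0, 0.5)],                # block only the 00 level
    "NOR":  [(0.6, 2.5)],                # block the 01, 10 and 11 levels
    "XOR":  [(0.0, 0.5), (1.5, 2.5)],    # block the 00 and 11 levels
    "XNOR": [(0.6, 1.4)],                # block the 01 and 10 levels
}

for name, windows in gates.items():
    row = []
    for a in (0, 1):
        for b in (0, 1):
            total = (HIGH if a else LOW) + (HIGH if b else LOW)
            out = 0 if blocked(total, windows) else 1
            row.append(f"{a}{b}->{out}")
    print(f"{name:5s} " + "  ".join(row))
```

In the real setup a single blocking dip exists, so the XOR case, which needs the 00 and 11 combinations both suppressed, corresponds to adjusting the input levels so that both extreme combinations land in low-transmission regions of the measured curve, as the text describes.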
Conclusion
We have proposed and experimentally demonstrated the realization of a wavelength converter, a 2R optical regenerator, and optical OR, NOR, XOR, and XNOR logic gates by exploiting the self-polarization rotation in an SOA structure, using the same setup configuration. The implementation of these all-optical signal processing functions is based on an experimental investigation in free space, in order to allow optimum control and preserve the polarization state of the injected and collected signals. Switching between these functions is done by adjusting only the input optical power level. Since each of the proposed functions can be applied to various networking applications, they will play an important role in future high-capacity optical communication networks. This study can be extended to exploit the obtained static response of the SOA using the SPR and to demonstrate its capabilities in the dynamic regime.
Simulation of a liquid drop on a vibrating hydrophobic surface
Mathematical simulation is used to study the processes describing the oscillation of a liquid droplet on a solid surface. The pattern of the generated internal flows is characterized by a complex interaction between capillary and gravity waves, the free surface, and the contact angle. Factors of the interaction process are analyzed. The results are compared with experimental ones.
Introduction
The problem of a heaving solid with Faraday wave generation in a liquid is an example of complex dynamic interaction in a solid-liquid system [1]. When a liquid layer with a free surface oscillates in heave, surface capillary waves (Faraday waves) can be observed. Their dynamics depends on the control parameters of the system, such as the viscosity, surface tension, and density of the liquid, as well as on the characteristics of the external action. Such waves are initiated once the oscillation amplitude reaches a threshold value.
Let us consider the preliminary stage of the phenomenon, with amplitudes small enough that the heave oscillations of the liquid do not yet initiate capillary-gravity waves on its surface. The present work is concerned with research on internal flows in a small amount of liquid (a droplet) and their influence on the generation and initiation threshold of the waves.
Most experimental hydrodynamics articles studying parametric resonance in a liquid have the same model in common: as a rule, an oscillating vessel is considered [2,3]. The frequency range of gravity wave initiation on the free surface of a liquid droplet is experimentally studied in [4,5,6,7]. Thus, the work [4] is concerned with the detection of various oscillation modes of the free surface of a droplet placed on a vibrating waterproof surface with small contact angle hysteresis, as well as with the impact of the contact angle and its hysteresis on the liquid oscillation modes. Works [5,6,7] study oscillations of a liquid of very small volume (5 µl) lying on a solid surface vibrating at low frequencies (less than 800 Hz) and small amplitudes (up to 22 µm). Data on the droplet free surface depending on the vibrating surface position are provided, and instantaneous droplet profiles and internal capillary flows obtained using the PIV method are also considered. Moreover, articles [5,6,7] present the droplet shapes corresponding to various surface vibration modes.
There are only a few studies [3,8,9,10] dedicated to the simulation of solid surface vibration with liquids of various volumes. At the same time, the mathematical simulation of a single droplet placed on a (solid) surface vibrating vertically at low frequencies and small amplitudes is addressed only in article [10], which studies the oscillation and spraying of a single 30 µl liquid droplet on a solid rod. In that study the physical problem statement involves several rather strong assumptions: the initial droplet configuration was taken as a half sphere; the possible movement of the contact line was not considered, a sticking (no-slip) condition being assumed at the solid surface boundary; and in the hydrodynamic equations the cyclic acceleration of the body is added to the gravity acceleration (a system parameter) through the equation coefficients or boundary conditions. The Navier-Stokes equations in an axially symmetric statement are solved by a marker-and-cell projection method. The equation system is approximated in space by the finite volume method on a structured grid. Flow reconstruction on the cell faces is performed using central difference schemes, and the convective terms are sampled using the hybrid Nichols difference scheme. The resulting algebraic systems are solved by a Cholesky-preconditioned conjugate gradient scheme.
Experimental research shows that internal flows in a droplet are clearly three-dimensional, and the wetting angle is one of the key parameters of the system. Solution methods for such free-surface problems require high resolution of thin layers (near the gas-liquid-solid contact line) and treatment of strongly developed free surfaces.
The present study is organized as follows. The first chapter provides the problem description and general assumptions. The second chapter discusses the features of calculating the free surface using the control volume method and VOF. The third chapter provides results of testing the selected schemes and algorithms against the experimentally documented problem [6], an analysis of the obtained instantaneous capillary flow patterns in an oscillating droplet, and typical topological features of the flows.
Problem statement
We consider the problem of liquid droplet motion caused by vertical movements of a solid surface. The experimental apparatus is described in article [6]: the surface was carefully cleaned and covered by a hydrophobic coating. A de-ionized water droplet of 5 µl volume was placed at the center of the surface (on the center line of the rigid surface, fig. 1) using a dropper; the wetting angle was equal to 115°, and the contact diameter and droplet height were 2.02 mm and 1.52 mm, respectively.
Admittedly, the vertical oscillations of the solid surface recorded in [6] are not harmonic, but since the differences are insignificant they were treated as sinusoidal in the numerical experiment.
Methods and algorithms
The given problem can be considered as a system of two immiscible incompressible viscous fluids, the motion of each being described by the Navier-Stokes and continuity equations

ρ_i [∂u/∂t + (u · ∇)u] = −∇p + ∇ · [μ_i (∇u + (∇u)^T)] + ρ_i g,   (1)

∇ · u = 0,   (2)

supplemented by matching conditions on the interface, where e is the unit normal external to the liquid. In this study the wetting angle value is determined from the empirical model of the dynamic contact angle [11]. We introduce the indicator function

α = 1 in the liquid, α = 0 in the gas,   (3)

in this case having the meaning of the volume concentration of the liquid. By doing so we determine the density and viscosity functions as

ρ = α ρ_l + (1 − α) ρ_g,   μ = α μ_l + (1 − α) μ_g,   (4)

respectively, and rewrite the system (1), (2), considering (4), (3), as the single-fluid system

ρ [∂u/∂t + (u · ∇)u] = −∇p + ∇ · [μ (∇u + (∇u)^T)] + ρ g + F_σ,   (5)

∇ · u = 0,   (6)

∂α/∂t + (u · ∇)α = 0.   (7)

We pass to the modified pressure

p̃ = p − ρ (g · x).   (8)

Then the equation (5) is written as

ρ [∂u/∂t + (u · ∇)u] = −∇p̃ − (g · x)∇ρ + ∇ · [μ (∇u + (∇u)^T)] + F_σ.   (9)

The surface tension force, acting in a thin transition layer that in the limit is infinitely thin, can be converted to a volume force [12], F_σ = σ κ ∇α, where σ is the surface tension coefficient and κ = −∇ · (∇α/|∇α|) is the free surface curvature. The system (6), (9), (7) was numerically solved using the control volume method for the sampling of the original equations. In order to simplify the further stages we pass in the conventional manner to dimensionless variables; in this case only the written form of the momentum equation (9) changes, yielding equation (10). We then write an approximation of the equations (6), (10) in semi-discrete form. The numerical solution of incompressible fluid dynamics equations of the type (9), (6) requires the application of special methods to obtain velocity and pressure fields meeting the conditions of conservation and continuity at each time step. In this study the pressure-implicit procedure with splitting of operators proposed by Issa [13], called PISO, is used.
The discretization of the convective terms in space is performed using the second-order Van Leer scheme. The reconstruction of the fluxes on the cell faces is performed using a TVD scheme whose approximation is based on central differences, with the Van Leer limiter [14] applied as the limiter function. The resulting algebraic equation system is solved numerically using the conjugate gradient method with an incomplete Cholesky diagonal preconditioner.
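For reference, the Van Leer limiter cited above has the standard closed form φ(r) = (r + |r|)/(1 + |r|), where r is the ratio of consecutive solution gradients; a minimal sketch:

```python
import numpy as np

def van_leer_limiter(r):
    """Van Leer flux limiter phi(r) = (r + |r|) / (1 + |r|).

    phi = 0 for r <= 0 (local extremum, the scheme falls back to
    first-order upwind) and phi -> 2 as r -> infinity.
    """
    r = np.asarray(r, dtype=float)
    return (r + np.abs(r)) / (1.0 + np.abs(r))

print(van_leer_limiter([-1.0, 0.0, 0.5, 1.0, 4.0]))
# approximately [0.0, 0.0, 0.667, 1.0, 1.6]
```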
The equation (7) is solved at each time step after the solution of the system (9), (6). For this purpose it is more convenient to rewrite (7) in the conservative form with an interface-compression term,

∂α/∂t + ∇ · (α u) + ∇ · [c |u| (∇α/|∇α|) α (1 − α)] = 0.   (14)

When solving the equation (14), the approximation of the convective term is performed similarly to that described above, and the scalar coefficient c serves to control the artificial compression in the dispersion region to compensate the numerical diffusion effect; in this study the coefficient was assigned a fixed value.
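As an illustration of the interface-compression idea (a sketch under our own assumptions, not the authors' solver), the following one-dimensional Python fragment advects α with a first-order upwind flux plus the compression flux c |u| n α(1 − α), and compares the resulting interface width with and without compression.

```python
import numpy as np

def compression_step(alpha, u, c, dx, dt):
    """One explicit step of 1D VOF advection with interface compression."""
    a = alpha
    n = len(a)
    f = np.zeros(n + 1)                     # fluxes at the n+1 cell faces
    for i in range(1, n):
        aL, aR = a[i - 1], a[i]
        f_adv = u * (aL if u >= 0 else aR)  # first-order upwind advection
        nrm = np.sign(aR - aL)              # 1D interface "normal"
        a_face = 0.5 * (aL + aR)
        f_cmp = c * abs(u) * nrm * a_face * (1.0 - a_face)  # compression flux
        f[i] = f_adv + f_cmp
    f[0], f[n] = f[1], f[n - 1]             # zero-divergence boundary cells
    a_new = a - dt / dx * (f[1:] - f[:-1])
    # Clipping guards against over/undershoots at the cost of strict
    # conservation; a production solver would use a bounded scheme instead.
    return np.clip(a_new, 0.0, 1.0)

x = np.linspace(0.0, 1.0, 101)
alpha0 = 0.5 * (1.0 - np.tanh((x - 0.3) / 0.05))  # smooth initial interface
u, dx = 1.0, x[1] - x[0]
dt = 0.4 * dx                                      # Courant number 0.4
for c in (0.0, 1.0):
    a = alpha0.copy()
    for _ in range(50):
        a = compression_step(a, u, c, dx, dt)
    width = int(np.sum((a > 0.05) & (a < 0.95)))
    print(f"c = {c}: interface spread over {width} cells after 50 steps")
```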
Numerical simulation
This study investigates the low-frequency (85 Hz) oscillation of a water drop 5 µl in volume on a rigid surface, which is experimentally described in [6], including the analysis of the droplet profile and data on the time variation of the free surface height.
Mathematical simulation of the liquid motion under oscillations was performed using the volume-of-fluid method in two stages: first, the problem of liquid droplet stabilization on a horizontal rigid surface was solved; second, the vibration of the surface with the stabilized droplet on it was simulated. The droplet was brought to equilibrium considering the contact angle change; the stabilization of the droplet from a cylindrical column of liquid of given volume under gravity and surface tension was investigated. The droplet configuration was compared to the experimental one by profile height, three-phase contact diameter, and the real value of the wetting angle.
The low-frequency oscillation of a liquid drop on a rigid surface was simulated on a grid consisting of 364,140 hexahedral cells. Let us consider methods of accounting for the contact angle at the three-phase gas-liquid-surface contact line. The wetting angle is determined not only by the physical and chemical properties of the liquid but also by the surface material properties, and it is not constant even when the liquid and the surface are at rest. As the dynamic component of the contact angle during surface oscillations is nonzero, it is possible to estimate its effect both on the oscillating droplet profile and on the local droplet height. Fig. 2 shows diagrams of the droplet height change in time resulting from the vibration simulation considering the dynamic change of the wetting angle (curve 1, fig. 2) and using only the static contact angle model (curve 2, fig. 2). Figure 2. Droplet height variation diagrams: a) considering 1 - the dynamic wetting angle, 2 - the static contact angle; b) for mode 2: 1 - experiment [6], 2 - calculation.
The given curves are similar in qualitative behavior, but the amplitude difference at the peaks reaches up to 15%. Thus, using the static wetting angle alone lowers the droplet height amplitude. Naturally, as the oscillation frequency increases, the impact of the contact angle determination method increases. The free droplet surface oscillation forms on a surface vibrating at a frequency of 85 Hz (fig. 3) were determined; one circular nodal line on the free surface, corresponding to the second oscillation mode [6], was discovered. Figure 3. Droplet forms for the second mode: a) numerical; b) experimental [6].
As shown in fig. 3, the simulation with the chosen parameters reproduces the oscillation modes observed in the experiment, but the local values of the calculated phase interface motion (fig. 3a) differ from the experimental ones (fig. 3b); this results from the discovered differences of the equilibrium profiles and can be addressed using additional algorithms of free surface reconstruction. Let us consider the diagrams of the droplet height change on a vibrating surface (fig. 2b). The time dependencies of the droplet height obtained in the simulation match the experimental values qualitatively. However, the droplet oscillation amplitudes are significantly (up to 41%) lower than the experimental ones, due to the small amplitudes of the free surface motion (fig. 3).
To study the internal flow peculiarities, we generated instantaneous patterns of the droplet velocity vectors at various moments of time (fig. 4).
Conclusion
The studied problem of liquid droplet oscillation on a solid surface under low-frequency heave oscillations is one of the few that have an experimental description allowing numerical schemes and algorithms to be tested. Comparison of the numerical simulation results with the experimental data shows that the applied VOF method provides an appropriate description of the surface-to-droplet energy transfer processes. At the same time, the detected significant differences between the calculated and experimental drop oscillation amplitudes indicate a need for a more detailed account of both the surface forces and the dynamic contact angle range. Thus, the algorithms for determining the surface forces and the dynamic contact angle change at the air-liquid-surface triple point require additional improvement and testing.
Medium effects in the pion pole mechanism (γγ → π^0 → ν_R ν̄_L (ν_L ν̄_R)) of neutron star cooling
Nuclear medium effects in the neutrino cooling of neutron stars through the exotic reaction channel γγ → π^0 → ν_R ν̄_L (ν_L ν̄_R) are incorporated. Throughout the paper we discuss different possibilities: right-handed neutrinos, massive left-handed neutrinos, and standard massless left-handed neutrinos (the reaction is then allowed only with medium-modified vertices). It is demonstrated that multi-particle effects suppress the rate of this reaction channel by 6-7 orders of magnitude, which does not allow one to decrease the existing experimental upper limit on the corresponding π^0 νν̄ coupling. Other possibilities for the manifestation of the given reaction channel in different physical situations, e.g. in the quark color superconducting cores of some neutron stars, are also discussed. We demonstrate that in the color-flavor-locked superconducting phase for temperatures T < (0.1-10) MeV (depending on the effective pion mass and the decay width) the process is feasibly the most efficient neutrino cooling process, although the absolute value of the reaction rate is rather small.
Introduction
Many years ago Pontecorvo and Chiu and Morrison [1] suggested that the process γγ → νν might play an important role as a mechanism for stellar cooling. Gell-Mann [2] subsequently showed that this process is forbidden in a local (V-A) theory.
However, it can occur at the one-loop level which has been computed by Levine [3] for an intermediate-boson (V-A) theory, and the stellar energy loss rate through γγ → νν was found to be smaller than the rates for competing processes (pair annihilation e + e − → νν and photo-neutrino production γe → eνν). This result is not modified when the cross section of the above process is computed in the standard model, as was shown by Dicus [4]. Only for very peculiar neutrino coupling to photons or unnaturally large neutrino masses this reaction overwhelms the result of the standard model [5].
There is still another possibility proposed by Fischbach et al. [6], where the reaction γγ → νν could be significant. This is the case when the process is mediated by a pseudoscalar resonance and the latter decays into νν due to the existence of right-handed neutrinos or due to new interactions beyond the standard model.
It was assumed that in astrophysical conditions only the pion resonance could be important (the next in mass, the non-strange η resonance, is too heavy in standard conditions), and the process was termed the pion-pole mechanism. Thus the process which we will continue to discuss in this paper is γγ → π^0 → νν̄.
Of course, if the temperature is high enough and, on the other hand, the pion dispersion relation in matter allows for the quasiparticle spectrum branch, there appears a significant number of thermally equilibrated pion quasiparticles. Then the process π^0 → νν̄ may also be important. In this process the initial thermally equilibrated pion is on its mass shell, modified in the matter. In the process γγ → π^0 → νν̄ the initial reaction states contain no pion; the virtual pion only transfers the interaction from the thermally equilibrated photons in the γγ annihilation process to the produced νν̄. As we will see below, the process γγ → π^0 → νν̄ has an output of energy Q varying with the temperature as Q ∝ T^n, where the power n changes with the temperature, typically from n = 3 for rather high temperatures (T still much smaller than the pion mass m_π = 140 MeV) to n = 11 for low temperatures.
The process π^0 → νν̄, having an essentially larger phase space volume (one particle in the initial state), yields however an exponentially suppressed energy output, Q ∝ T^{3/2} e^{−m_π/T} at T < m_π, since the initial particle is massive, in contrast with the photons. Concentrating on the discussion of the rate of the γγ → π^0 → νν̄ reaction channel, we shall also compare it with the rate of the competing π^0 → νν̄ process and other relevant processes.
Temperatures of the order of 10-60 MeV are expected in the interiors of proto-neutron stars formed in supernova (SN) explosions during the early cooling phase of the proto-neutron star evolution, according to existing numerical simulations [7]. It is exactly in such a situation that the pion-pole mechanism is most effective (T ∼ m_π/a, a ∼ 2-50). Indeed, in Ref. [8] the mechanism was applied to the case of supernova SN1987A and was found to be a quite important process even if the pion partial production rate of neutrinos, Γ(π^0 → ν_R ν̄_L (ν_L ν̄_R)), is many orders of magnitude smaller than the presently accepted experimental upper limit. In the case of ν_L ν̄_R with the vacuum vertex, the process rate is proportional to the squared neutrino mass. The result of [8] was criticized by Raffelt and Seckel [9] based on the fact that the calculation of Ref. [8] used the vacuum value of the total pion width, Γ_π^vac, which is a tiny quantity, Γ_π^vac ≃ 0.58 · 10^{-7} m_π. They argued that in supernova cores pion states are damped mostly by nucleon absorption rather than by free decay, so that the width in the medium (Γ_π^med) is much larger than in vacuum and the pion-pole mechanism should be strongly suppressed. Besides, in the medium the process may proceed with massless left-handed neutrinos through intermediate nucleon particle-nucleon hole states, i.e. as the γγ → π^0 → nn^{-1} → ν_L ν̄_R reaction channel.
Without any doubt, the neutrino emission from the dense hadronic component of neutron stars is subject to strong modifications due to collective effects in nuclear matter [11], and it is interesting to know quantitatively how much the pion-pole mechanism is influenced by the properties of the dense medium and, if the process is strongly suppressed, what is the main effect causing this suppression.
In this work we compute the cooling rate due to neutrino emission through the pion-pole mechanism in the reaction γγ → π^0 → ν_R ν̄_L (or ν_L ν̄_R) for the conditions of proto-neutron stars, including the effects of the dense medium in the pion polarization operator (Section 2). Then we discuss other relevant reaction channels. In order to clarify their connection to the process γγ → π^0 → ν_R ν̄_L (or ν_L ν̄_R), we review in the Appendix the optical theorem formalism (see the discussion in [11]), which allows one to calculate the reaction rates consistently, including complicated medium effects. Continuing to study possible physical situations where the pion-pole cooling mechanism could be important, we consider in Section 3 a possible consequence of the pion-pole mechanism in the case of neutron stars with color superconducting quark cores. We argue that in the color-flavor-locked superconducting phase for temperatures T ≲ (0.1-10) MeV the process is feasibly the most efficient neutrino cooling process, although the absolute value of the reaction rate is rather small.
Then we draw our conclusion.
2 Emissivity of the process γγ → π^0 → νν̄ from the hadronic matter

The cross section of the process γγ → π^0 → ν_R ν̄_L (ν_L ν̄_R) in vacuum has the pion-pole (resonance) form

σ_π^vac(s) ∝ F^2(s) Γ(π^0 → γγ) Γ(π^0 → νν̄) |D_π^vac(s)|^2,   (1)

with the free pion propagator D_π^vac(s) = 1/(s − m_π^2 + i m_π Γ_π^vac), where F(s) is an unknown function of s ≡ (c.m. energy)^2, representing the product of the vertex functions for the off-shell processes γγ → π^0 and π^0 → νν̄, constrained by F(s = m_π^2) = 1. Following Ref. [6] we assume F(s) ≃ 1 off mass shell, which seems to be a reasonable approximation for the energies and momenta we are dealing with. Γ(π^0 → γγ) is the partial width of the pion decay into two photons. Here we assume that Γ(π^0 → γγ) is approximately equal to the total pion width (Γ_π^vac), for which we use the experimental value. The pion partial width into neutrinos, Γ(π^0 → νν̄), has the following experimental upper limit: Γ(π^0 → νν̄)/Γ_π^vac < 8.3 × 10^{-7} [12]. In (1) one recognizes the free pion propagator modulus squared |D_π^vac|^2 entering the squared matrix element of the reaction under consideration.
Strictly speaking, in application to the nuclear medium all the terms in (1) should be modified. E.g., the partial width Γ(π^0 → γγ) may increase with the temperature [13]. The pion partial width into neutrinos, Γ(π^0 → νν̄), is determined by the corresponding π^0-νν̄ coupling. The squared vertex is averaged over the neutrino-antineutrino phase-space volume, see Appendix. If we knew the explicit form of the coupling, we could explicitly calculate Γ(π^0 → νν̄) and its temperature dependence.
However, the main modification comes from the change of the pion propagator in the dense nuclear medium due to the pion pole. Therefore, below we consider only this modification. Then, one should replace in (1) the free pion propagator D_π^vac by the in-medium one,

D_π^R(ω, k) = 1/(ω^2 − k^2 − m_π^2 − Π_π0^R(ω, k, ρ, Y, T)),   (2)

where Π_π0^R(ω, k, ρ, Y, T) is the total retarded polarization operator of the neutral pion, dependent on the pion energy ω, momentum k, baryon (nucleon in our case) density ρ, isotopic composition Y = Z/(N + Z), and the temperature T.
If the photons are in thermal equilibrium, the energy loss rate of the process γγ → π^0 → ν_R ν̄_L (ν_L ν̄_R) occurring in the dense interior of the proto-neutron star, at which the energy is converted into neutrino pairs, is given by

Q = (1/2) ∫ [d^3k_1/(2π)^3] [d^3k_2/(2π)^3] f(ω_1) f(ω_2) (ω_1 + ω_2) σ_π^med v_rel,   (3)

where ω_1, ω_2 are the photon energies, k_1 and k_2 are their momenta, f(ω) = [exp(ω/k_B T) − 1]^{-1} are the thermal photon occupation numbers, θ is the angle between the photons, v_rel = 1 − cos θ is the relative-velocity factor, and σ_π^med is the medium-dependent cross section given by (1) with D_π^vac replaced by D_π^R. Eq. (3) is the straightforward generalization of the result of [6,8], obtained with the help of the replacement σ_π^vac → σ_π^med. The energy loss rate (3) can be presented in the form (4), (5) Q = Q_0(T) I(τ), with m_π = 140 MeV, T_9 = T/10^9 K, τ = k_B T/m_π, and I(τ) given by the dimensionless integral (6) over the variables x_1 = ω_1/k_B T, x_2 = ω_2/k_B T and u = 1 − cos θ (so that s/m_π^2 = 2x_1 x_2 u τ^2), containing the thermal factors and the squared in-medium pion propagator. Further, we assume that m_ν/k_B T ≪ 1 and then put the lower limit (κ) of the integral in the variable u equal to zero, which corresponds to using m_ν = 0 in all the phase space calculations. However, we take m_ν ≠ 0 into account when evaluating the Γ(π^0 → ν_L ν̄_R) width. For further convenience we assume that Re Π_π0^R and Im Π_π0^R are the real and imaginary parts of Π_π0^R(ω, k, ρ, Y, T) describing only the strong interaction processes; therefore we separated in (6) the value Γ_π^vac, which is due to the electromagnetic and the weak interaction. With I(τ) replaced by I_vac(τ) (putting Re Π_π0^R and Im Π_π0^R to zero) we reproduce the results of Refs. [6,8]. One easily finds the corresponding low-temperature asymptotic expression (7), which contains the Riemann zeta function ζ: ζ(5) ≃ 1.037, ζ(6) ≃ 1.017, ζ(7) ≃ 1.008. In order to get (7) we dropped Γ_π and Re Π^R and expanded (6) in the value 2x_1 x_2 u τ^2 ≪ 1. Since typical values are x_1 ∼ x_2 ∼ 6 in the resulting integral, the limiting expression is valid for τ ≪ 0.1. As we checked numerically, the limit (7) is actually reached only at τ ≲ 0.01 if Γ_π is as small as Γ_π^vac. Using Eq. (7) and the evaluation of Γ(π^0 → γγ), we obtain the small-temperature estimate (8).
When the temperature increases, the denominator approaches the pole.
Then, dividing and multiplying I(τ) by Γ_π and using the corresponding presentation of the δ-function, we roughly estimate the integral. From the latter estimate we recognize the resonant character of the rate: I^vac(τ ∼ 0.1) is ∼ 10⁹ times larger than I^vac(τ = 0) for Γ_π ∼ Γ^vac_π. For higher temperatures, assuming 2x₁x₂uτ² ≫ 1, we estimate I(τ), which produces Q^vac ≃ Q^med ≃ 2.6 · 10²⁹ (m_π⁻¹ Γ(π⁰ → νν̄)) T₉⁷ erg/(cm³ · s) (Eq. (9)). In the so-called "standard scenario" of neutron star cooling one considers the modified Urca process as the most efficient process. The emissivity of the modified Urca process is estimated [14] with the help of the free pion propagator as Q_MU ∼ 10²¹ T₉⁸ erg/(cm³ · s). Comparing it with (5), (9) we see that for T ≳ 10 MeV the process under consideration would be a much more efficient process than the modified Urca process, if the pion width and mass in the medium were not essentially changed compared to the vacuum values. However, in reality the width Γ^vac_π is replaced by a much larger in-medium value. Also the pion mass is modified by the polarization effect (thereby the pion pole begins to manifest itself at T ≳ m^eff_π/a, rather than at T ≳ m_π/a). Thus one may expect that, with the medium effects taken into account, the estimate (7) is not essentially modified, whereas the value (9) must be significantly suppressed, mainly due to the large in-medium width.
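The δ-function presentation invoked above can be verified numerically: for a Lorentzian kernel, Γ/((x − x₀)² + Γ²) → πδ(x − x₀) as Γ → 0, so the resonant integral itself scales as 1/Γ. A minimal sketch, with an arbitrary smooth weight g(x) standing in for the thermal phase-space factor (an illustrative assumption, not the actual integrand of Eq. (6)):

```python
import numpy as np

# Numerical check of the delta-function presentation:
# Gamma / ((x - x0)^2 + Gamma^2) -> pi * delta(x - x0) as Gamma -> 0,
# so an integral weighted by the Lorentzian grows as 1/Gamma.
# g(x) is an arbitrary smooth stand-in for the thermal phase-space
# factor (an illustrative assumption only).
x0 = 1.0
g = lambda x: x**2 * np.exp(-x)

x, dx = np.linspace(0.0, 10.0, 2_000_001, retstep=True)
for gamma in (1e-1, 1e-2, 1e-3):
    integral = np.sum(g(x) / ((x - x0) ** 2 + gamma**2)) * dx
    # Gamma * integral should approach pi * g(x0) in the narrow-width limit:
    print(f"Gamma={gamma:.0e}:  Gamma*I = {gamma * integral:.4f}   "
          f"pi*g(x0) = {np.pi * g(x0):.4f}")
```

The product Γ·I converges to πg(x₀), confirming that the rate behaves as 1/Γ_π, i.e., the resonant character noted above.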
Techniques for the description of collective effects in dense hadronic matter have been developed over the last decades [15,16]. E.g., the medium effects appearing in the pion propagator have been discussed in detail in Ref. [17] (see Appendix B of that work). For the temperatures and pion energies with which we are concerned, the main modification of the pion polarization operator is due to the density dependence rather than the temperature dependence, and it is thereby a reasonable approximation to put T = 0 in Π^R_π⁰(ω, k, ρ, Y, T). Depending on the values of the typical pion energy and momentum, different terms can be important in the pion polarization operator. At small pion energies ω/m_π ≪ 1 and for typical momenta k ∼ p_FN, where p_FN is the Fermi momentum of the nucleon, the nucleon particle-hole contribution is attractive and dominant, which results in the softening of the virtual pion mode with increasing density and leads to the possibility of pion condensation at ρ > ρ_c > ρ₀, cf. [15,16,17]. In our case ω > k, as follows from the reaction kinematics, and the nucleon particle-hole contribution is minor. Then the main terms in the pion polarization operator are the Δ-particle-nucleon-hole part and the regular part related to more complicated intermediate states. Thus Π^R_π⁰ is the sum of these two contributions, and the partial terms are given by [17], where ρ₀ is the saturation nuclear density, the form factor Γ²_πNΔ ≈ Γ²_πNN/β ≃ 1/β for the rather small momenta of our interest, and β ≃ 1 + 0.23k²/m_π² is an empirical factor taking into account a contribution of the high-lying nucleon resonances. The nucleon-Δ-isobar correlation factor Γ(g′_Δ) involves C ≃ 0.9 m_π² (ρ/ρ₀) Γ²_πNΔ and the effective mass m*_N(ρ) of the nucleon quasiparticle; the term γ₀k³ takes into account the Δ-isobar width, with an empirical value γ₀ ≃ 0.08 m_π⁻² Γ²_πNΔ. In the numerical evaluations below we for simplicity assume ω̃²_Δ ≫ ω², which is a reasonable approximation for the conditions we are dealing with.
The regular part of the pion polarization operator is yet more model dependent.
Its value is recovered with the help of the pionic-atom data, cf. [18], and a procedure of going off mass-shell. We present it in the following form, cf. [17]. The factor ξ, which we inserted in these expressions compared to those of [17], takes into account the asymmetry of the isotopic composition of the proto-neutron star matter. Near the pion mass-shell, with ξ = 4Y(1 − Y) we approximately describe the results given by the potentials I-III used in [18] for the pion atoms, and with ξ = 1 the results for the potential IV. For the value Y ≃ 0.4, typical for the initial stage of proto-neutron star cooling, in both mentioned cases one can put ξ ≃ 1.
The real and imaginary terms shown above are the ones which enter Eq. (6), being responsible for the density effects; there we put ξ ≃ 1. We also used the fact that the terms linear in ω entering ReΠ^R_π± for Y = 1/2 do not contribute to ReΠ^R_π⁰.
We computed the energy loss rate including the medium effects in the pion polarization operator. In Figure 1 we present the results for the ratio of the energy loss rate (Q^med) due to the pion-pole mechanism, calculated with medium effects taken into account in the pion polarization operator, to the one (Q^vac) computed with ReΠ^R = 0 and the vacuum pion decay width (cf. (8)), for three values of the nuclear matter density: ρ = (0.5; 1; 2)ρ₀. We used m*_N ≃ (0.9; 0.85; 0.7)m_N for those densities and put m_π ≃ 140 MeV. The ratio Q^med/Q^vac is depicted in Figure 1 as a function of temperature. This ratio does not depend on the unknown value of Γ(π⁰ → νν̄). We see that the nuclear matter effects decrease the energy output typically by six to seven orders of magnitude, depending on the temperature and the density, in agreement with the expectations of Raffelt and Seckel [9]. We also show in Table 1 the results for the dimensionless integral of Eq. (6).
Qualitatively, the reasons for the enormous decrease of the energy loss rate are related to the strong pion absorption in nuclear matter, as was predicted in Ref. [9]. The total pion width at the relevant pion energies and momenta grows to tens of MeV with density, which is orders of magnitude larger than the vacuum value Γ^vac_π. Another reason is the energy-momentum dependence of the real part of the pion polarization operator. This is also not a small change, because the mechanism depends entirely on the resonant behavior, and outside the resonance the contribution decreases sharply. We do not need to be at the top of the resonance, but we cannot be too far away either. Medium effects push the typical pion energy to a larger value (ReΠ^R(ω, k) > 0 at pion energies and momenta in the region of the pole).
The ratio Γ(π⁰ → νν̄)/Γ(π⁰ → γγ) in many models of elementary particles beyond the standard one is proportional to the neutrino mass (m_ν). In Ref. [8] a limit on the neutrino mass was obtained from the strong constraint on Γ(π⁰ → νν̄).
However, with medium effects included in the consideration, no astrophysical limit on m_ν better than the existing ones can be obtained. A nonzero neutrino mass induces other mechanisms much more efficient than the one generated by the pion resonance in this situation, and these mechanisms then provide tighter constraints on the neutrino masses (see, for instance, Ref. [19]).
Above we discussed only the role of the reaction channel γγ → π⁰ → νν̄. But there are other competing reaction channels. The relation between all these processes becomes clear if one applies the so-called optical theorem formalism for the calculation of the reaction rates; see the discussion in the Appendix.
First, if the pion can be described within the quasiparticle approximation in some region of its energy and momentum, one has a contribution of the process π⁰ → νν̄, whose rate is given by Eq. (17). Here we used the dispersion relation ω² = (m*_π)² + v²_π k² for the small momenta k ∼ k_B T typical in this reaction. The effective mass m*_π ∼ m_π and the velocity v_π < 1 are calculated according to Eqs. (11)-(15). The effects of the partial Γ(π⁰ → γγ) width are not present in this process.
Second, the presence of a finite width of the virtual pion, ImΠ_π⁰ ≠ 0, due to the strong interaction also means a finite contribution of the reaction π⁰_virt → νν̄ that needs neither γγ nor pion quasiparticle states, but relates to the corresponding nucleon states, cf. [10,11] (this is clearly seen after the cut of the in-medium pion Green function describing the propagation of the in-medium pion). The presence of the imaginary part of the nucleon-hole term of the pion propagator (with the full vertex) in the appropriate energy-momentum region would lead to a contribution of the processes N → Nνν̄ and NN → NNνν̄ going via the virtual pion, the latter coupling N with νν̄. The presence of the imaginary part of the regular term in the pion polarization operator corresponds to more involved multi-nucleon and multi-pion states.
Third, processes such as γγ → π⁰ → nn⁻¹ → ν_L ν̄_R (n⁻¹ is the neutron hole) with ordinary left-handed massless neutrinos are possible, leading to a substantially larger contribution to the emissivity than that related to the massive left-handed neutrinos in the reaction γγ → π⁰ → ν_L ν̄_R. The reaction channel γγ → π⁰ → nn⁻¹ → ν_L ν̄_R may also lead to a substantially larger contribution to the emissivity than that related to the right-handed neutrinos (depending on the value of Γ(π⁰ → ν_R ν̄_L)) in the reaction γγ → π⁰ → ν_R ν̄_L which we have considered. Besides, the process π⁰ → nn⁻¹ → ν_L ν̄_R with left-handed massless neutrinos is possible, proceeding via the thermal pion quasiparticle. The latter rate can be calculated in complete analogy with that computed for the massive pseudo-Goldstone (photon) mode in [20]. The processes involving the pion but going via the nn⁻¹-νν̄ coupling (with the usual V − A coupling of N to ν_L ν̄_R) exist only due to the medium modification of the vertices. Another possibility, which will be discussed below, is that due to the breaking of the Lorentz invariance in matter there might appear two pion decay coupling constants, the temporal one, f_T, and the spatial one, f_S [21,22]. Their finite difference would also lead to a contribution to the π⁰ coupling with the left-handed neutrinos. Although all these relevant processes are related to each other, they are characterized by quite different phase-space volumes, kinematics and/or vertices. Having selected the γγ → π⁰ → νν̄ channel among the others, we considered the enhancement of the rate due to the particular kinematics of this process (massless particles in the initial state). But, as we have shown, the presence of a sizable pion width in the hadronic matter significantly suppresses the rate.
Concluding this section, we once again stress that although the temperature dependence of the energy output in the process γγ → π⁰ → νν̄ is quite different from those of the other processes in which the πνν̄ coupling also enters, the absolute value of the rate proves to be strongly suppressed due to the presence of a sizable pion width in the dense and heated nucleon matter. Thus, due to the width effects, the process plays a minor role in the cooling of dense hadronic matter.
3 Emissivity of the process γγ → π⁰ → νν̄ from the color-flavor-locked phase

There is another interesting possibility in connection with the process under consideration. As has been recently shown, the interiors of the most massive neutron stars may contain dense quark cores which are high-temperature color superconductors with the critical temperature T_c ≃ 0.6Δ_q ≲ 50 MeV, where Δ_q is the pairing gap between colored quarks; see the review papers [23,24]. Also the possibility of self-bound strange quark stars, being diquark condensates, is not excluded, see [25]. The diquark condensates may exist in different phases. The most symmetric phase of dense quark matter is the so-called color-flavor-locked phase. This phase becomes energetically preferable in the large-density limit. The neutrino processes are significantly suppressed in this phase due to the presence of the large diquark gap and the absence of electrons [26,27]. On the other hand, it was shown, cf. [28,24], that this phase contains low-lying excitations with the pion, kaon and η, η′ quantum numbers (m*_π⁰, m*_K⁰, m*_K̄⁰, m*_η, m*_η′ are in the range (1-100) MeV; all in-medium masses are indicated by m*). The excitation spectra, as calculated in the mentioned works, contain no width effects; the i-meson Green function is given by D_i = 1/(ω² − k²/3 − m*_i²). However, in any case there is a width contribution at least from the process π⁰ → γγ. Then −ImΠ^R ≃ Γ(γγ → π⁰) ∼ Γ^vac(γγ → π⁰). Although more involved effects may also simulate the corresponding width terms, one may expect that the width effects are nevertheless rather suppressed due to the nature of the superconducting phase with the large gap. In this situation the pion-pole mechanism could become an efficient cooling mechanism. Other meson poles may also contribute essentially.
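To make the pole structure explicit, a small numerical sketch is useful: taking the quoted Green function D_i = 1/(ω² − k²/3 − m*_i²) and regulating the pole with a width term of order −ImΠ^R ∼ m*_π Γ^vac_π (our illustrative assumption; the text above only states −ImΠ^R ≃ Γ(γγ → π⁰)), the modulus squared develops an extremely sharp maximum at ω² = k²/3 + m*_π²:

```python
import numpy as np

# Pole of the in-medium meson Green function in the CFL phase,
# D = 1 / (omega^2 - k^2/3 - m*^2).  A small imaginary part
# -Im Pi^R ~ m* * Gamma_vac is added by hand to regulate the pole
# (illustrative assumption; the CFL width is argued to be strongly
# suppressed, of the order of the gamma-gamma decay width).
m_star = 10.0                 # effective pion mass, MeV (a case used below)
k = 5.0                       # pion momentum, MeV
gamma_vac = 7.7e-6            # vacuum pi0 width, MeV
im_pi = m_star * gamma_vac    # crude width term, MeV^2

omega_pole = np.sqrt(k**2 / 3.0 + m_star**2)

def green_sq(omega):
    re = omega**2 - k**2 / 3.0 - m_star**2
    return 1.0 / (re**2 + im_pi**2)

print(f"pole at omega = {omega_pole:.3f} MeV")
print(f"|D|^2 on pole vs. at 2*omega_pole: "
      f"{green_sq(omega_pole) / green_sq(2.0 * omega_pole):.2e}")
```

The many orders of magnitude between the on-pole and off-pole values illustrate why the pole mechanism can dominate when the width is as suppressed as in the CFL phase.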
For the process under consideration, the emissivity of the superconducting medium, Q^SC, is given by expressions similar to Eqs. (3) to (5). We take the Green function given above, supplemented by the width term, and substitute it into (1) instead of (2); the other quantities in (1) are assumed to be unchanged. Then the energy loss rate is presented as before, now with the extra factor (m_π/m*_π)⁴ and with the integral I(τ) replaced by its analogue in τ̃ = T/m*_π. Again we further take κ = 0. For τ̃ ≪ 0.1 we get Eq. (21), which does not differ essentially from the estimate given by Eq. (7); ζ(8) ≃ 1.004.
The value (21) is independent of m*_π for τ̃ ≪ 0.1, and Q^SC ∝ T¹¹(m*_π)⁻⁴, Q^SC/Q^vac ∼ (m_π/m*_π)⁴, cf. (8). For [m*²_π/(−10² ImΠ^R)]^(1/4) ≳ τ̃ ≳ 1/a we roughly estimate I^SC ∼ 0.1 [−m*⁻²_π ImΠ^R]⁻¹ τ̃⁻⁸ (e^(−1/(2τ̃)) |ln(2τ̃)| + 1) e^(−1/(2τ̃)), with a ∼ 10² for −ImΠ^R ∼ Γ^vac, and Q^SC ∝ T³(m*_π)⁶ e^(−m*/(2T))/(−ImΠ^R). For still higher temperatures I^SC ≃ 8/τ̃⁴ and Q^SC ∝ T⁷, being almost independent of m*_π: Q^SC ≃ 1.3 · 10²⁹ (m_π⁻¹ Γ(π⁰ → νν̄)) T₉⁷ erg/(cm³ · s). An interpolation estimate, roughly valid at all temperatures, produces Eq. (22). The emissivity Q^SC computed with this interpolation formula (Eq. (22)), and with the total pion width given by the vacuum one, is shown in Fig. 2. The value Γ(π⁰ → νν̄) is assumed to be equal to its experimental upper limit. This figure serves as a guide for the numerical calculation of Eq. (20), showing the very fast increase of the emissivity as we approach the pion-pole temperature and demonstrating a slow increase as we go to temperatures above the effective pion mass scale. The interpolation formula and the full numerical calculation of the emissivity agree with high accuracy for small T (compared to m*_π) and differ by a factor ≲ 2 for T ≈ m*_π. In Figs. 3 and 4 we show the numerically calculated emissivity of the process under consideration for the superconducting medium, Q^SC, in the temperature range from 5 to 50 MeV, for m*_π = 10 MeV and m*_π = 50 MeV, respectively. These curves were obtained assuming Γ(π⁰ → νν̄) equal to its experimental upper limit. In the given temperature interval (from 5 to 50 MeV) the emissivity curves are scaled up for m*_π = 50 MeV compared to m*_π = 10 MeV, in accordance with the above estimates of I^SC and the interpolation formula (22). The pole asymptotics manifest themselves even in the case when the width is rather suppressed (solid curves). This happens for T < (15 ÷ 20) MeV in the case m*_π = 10 MeV and in the whole temperature interval (from 5 to 50 MeV) in the case m*_π = 50 MeV. For m*_π = 10 MeV the solid, dashed and dotted curves reach the high-temperature asymptotics for T > 20 MeV. For m*_π = 50 MeV the solid curve reaches the high-temperature asymptotics for T > 50 MeV. The smaller the width, the more pronounced the pole mechanism. This is clearly seen if we compare Fig. 2 (the result of the interpolation formula computed for the vacuum width) with Figs. 3 and 4, where we assumed that more involved effects may simulate a width larger than the vacuum one.
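The regimes quoted above can be tabulated directly. The following sketch evaluates the intermediate (pole-dominated) estimate of I^SC and the high-temperature asymptotic I^SC ≃ 8/τ̃⁴; the width scale −ImΠ^R/m*² is set to a vacuum-like value and the switch-over between regimes is treated only schematically (both are assumptions for illustration):

```python
import numpy as np

# Asymptotic regimes of the dimensionless integral I_SC(tau), tau = T/m*_pi.
# Intermediate (pole-dominated) estimate as quoted in the text; the
# matching point between regimes is schematic, and the width scale
# -Im Pi^R / m*^2 is a vacuum-like value for m* = 10 MeV (assumption).
IM_PI_OVER_M2 = 7.7e-7    # ~ Gamma_vac / m* for m* = 10 MeV

def i_sc(tau):
    if tau >= 1.0:
        return 8.0 / tau**4                    # high-temperature asymptotic
    e = np.exp(-1.0 / (2.0 * tau))             # pole-dominated regime
    return (0.1 / IM_PI_OVER_M2) * tau**-8 * (e * abs(np.log(2.0 * tau)) + 1.0) * e

for tau in (0.05, 0.1, 0.2, 0.5, 1.0, 2.0):
    print(f"tau = {tau:4.2f}   I_SC ~ {i_sc(tau):.3e}")
```

The huge values in the intermediate regime reflect the 1/(−ImΠ^R) resonance enhancement; once the pole is thermally saturated, the rate settles onto the mild 8/τ̃⁴ behavior.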
Compared to the hadronic case, in the color-flavor-locked phase of the superconducting quark medium the effective pion mass is much smaller and the width is suppressed. In this case the resonance effect appears with a larger amplitude (due to the very small width) and at smaller temperatures. Thus we argue that for a star core in the color-flavor-locked phase at T < T^SC_opac, i.e., beyond the neutrino opacity regime, in the absence of any other efficient cooling reaction channels, the pion-pole mechanism could become an efficient cooling mechanism even if Γ(π⁰ → νν̄) is many orders of magnitude below the value determined by the present experimental upper limit. Here the value of T^SC_opac is different from that for the usual proto-neutron star, being determined by the condition that the neutrino mean free path in the quark core becomes comparable with the size of the core. In the neutrino opacity regime, T > T^SC_opac, the reaction π⁰ → νν̄ delays the neutrino transport from the quark core to the hadron shell. To come to these conclusions we used only that the effective pion mass in the CFL phase is essentially smaller than the vacuum one, that the pion width is small, and that the pion momentum is shifted. In some relevant temperature interval, for a width Γ_π ∼ Γ^vac_π and for a not too small effective pion mass, the rate Q^SC may even exceed the emissivity of the most efficient direct Urca process, Q_DU ∼ 10³⁹ (T/10 MeV)⁶ erg/(cm³ · s) for non-superfluid (hadronic) neutron star matter. Also, at very low temperatures the emissivity of the process decreases with temperature according to a power law (Q^SC ∝ T¹¹) rather than exponentially, as the other relevant processes in the CFL phase do.
Therefore, in spite of the low absolute value of the rate at such low temperatures, the process is the dominant one. Now we may compare the emissivity of the process γγ → π⁰ → νν̄ with the estimate (17) for the emissivity of the process π⁰ → νν̄ in quark matter discussed in [26,21,22]. The emissivity of the latter process is given by Eq. (17) with v²_π = 1/3 for the color-flavor-locked phase. We see that the ratio of (19) to (17) is larger than unity in a wide temperature interval of our interest: k_B T ≲ 7 MeV for m*_π ∼ 70 MeV and k_B T ≲ 0.2 MeV for m*_π ∼ 10 MeV. In this estimate we again used that the total pion width is almost exhausted by the γγ decay, Γ(γγ → π⁰) ≃ −ImΠ^R, and, as before, we took Γ(γγ → π⁰) ≃ Γ^vac_π. If the width Γ_π were larger as a consequence of the presence of some processes not considered up to now, then the pole contribution would be more suppressed (see the dotted, dashed and solid curves in Figs. 3 and 4), and the process γγ → π⁰ → νν̄ would be relevant only for low temperatures T ≲ (0.1 ÷ 1) MeV. Notice that the ratio of the reaction rates for γγ → π⁰ → νν̄ and π⁰ → νν̄ does not depend on the value of Γ(π⁰ → νν̄), and it always becomes larger than unity with decreasing temperature (however, the critical temperature at which the ratio reaches unity depends on the values of the parameters).
Refs. [21,22] found an interesting possibility that in the color-flavor-locked phase the process π⁰ → νν̄ is allowed also in the standard model with left-handed neutrinos, even if the neutrino mass is zero. The important point that was noticed is that the temporal and spatial components of the pion decay constant need not be the same in the dense medium, due to the violation of the Lorentz invariance. The calculation of [21] yields Eq. (23), where µ_q is the quark chemical potential. We see that for m*_π ∼ k_B T ∼ 10 MeV the value (23) is three to four orders of magnitude smaller than the experimental upper limit on Γ(π⁰ → νν̄) for the right-handed neutrinos. Therefore both possibilities should be studied.
Ref. [21] also evaluated the value Γ(γγ → π⁰) for the corresponding decay in the color-flavor-locked phase. This estimate differs only by a factor ∼ 1 from the value Γ(γγ → π⁰) ≃ Γ^vac_π which we used in our estimates.
Conclusion
In conclusion, (i) we evaluated the cooling rate of proto-neutron stars via the pion-pole mechanism, taking into account the nuclear medium effects, and we showed that with the width effects included this mechanism gives no constraints on the corresponding π⁰ → ν_R ν̄_L (ν_L ν̄_R) decay width. (ii) We also discussed the possible contribution of this mechanism to the cooling of different astrophysical systems, as is the case of neutron stars with dense quark cores. We found that the process γγ → π⁰ → νν̄ proves to be the most efficient process in the color-flavor-locked superconducting core at temperatures T ≲ (0.1 ÷ 10) MeV, depending on the effective pion mass and the pion decay width. Depending on what the future tells us about Γ(π⁰ → νν̄) and on the possibility of color superconductivity in neutron stars, we believe that these questions deserve a further, more detailed study (perhaps even a further study of the dependence on medium effects of all the terms appearing in the cross section of γγ → π⁰ → ν_R ν̄_L (ν_L ν̄_R)).
Acknowledgments
We would like to thank A. C. Aguilar for the help in the numerical procedures and
A Optical theorem formalism
In [29] and then in [30] (see [11] for a review) the optical theorem formalism was developed in terms of full non-equilibrium Green functions to calculate the reaction rates including finite particle widths and other in-medium effects. Applying this approach, e.g., to antineutrino-neutrino production, we can express the transition probability of a direct reaction, Eq. (24), in terms of the evolution operator S, where we presented explicitly the phase-space volume of the ν̄ν states; the lepton occupations of given spin are put equal to zero for ν and ν̄, which are supposed to be radiated. In graphical form, the general expression for the probability of neutrino and antineutrino production is given by Eq. (25), where n_F are the fermionic occupations (for equilibrium n_F = 1/[exp((ε − µ_F)/T) + 1]), and the cut eliminating the energy integral thus acquires a clear physical meaning. In this way one establishes the correspondence between closed diagrams and the usual Feynman amplitudes, although in the general case of finite fermion width the cut has only a symbolic meaning. The next advantage is that in the quasiparticle approximation any extra G^(−+)_F, since it is proportional to n_F, brings a small (T/ε_F)² factor to the emissivity of the process. The region where ImΣ^R is small and the quasiparticle approximation is valid for the fermion Green functions is usually much wider.
If one is interested only in the processes related to the νν̄ coupling with the π⁰, the hatched block in (25) is reduced to the exact D^(−+)_π⁰ Green function. This Green function satisfies the exact Dyson equation. Only if, in a specific region of the pion energies and momenta (ω and k), the pion Green function D^(−+)_π⁰ can be approximated by the δ-function may one, integrating over this region, use the quasiparticle approximation for the pion. In such a way one usually calculates the rate of the π⁰ → νν̄ process. Certainly, in other regions of pion energies and momenta the pion Green function contains the width relating to different channels of the pion decay. E.g., the polarization operator of the π⁰ contains the "−+" γγ loop. This term corresponds to the contribution D^(−−)_π⁰ G^(−+)_γ G^(+−)_γ D^(++)_π⁰. Within the quasiparticle approximation for the γ, one cuts the G^(−+)_γ G^(+−)_γ lines and obtains in this way the contribution of the γγ → π⁰ → νν̄ process which we discuss in this paper.
When considering right-handed neutrinos, we do not know the explicit expression for the π⁰ → νν̄ vertex. Several different expressions can be used. Therefore we express the result of the integration over the ν̄ν states in (24) via the phenomenological value of the width Γ(π⁰ → νν̄), for which there exists the experimental upper limit.
If we knew the coupling we could present an explicit calculation, as one usually does for the left-handed neutrinos.
Genomic and Pathological Characterization of Multiple Renal Cell Carcinoma Regions in Patient With Tuberous Sclerosis Complex: A Case Report
Tuberous sclerosis complex is a genetic disorder characterized by facial angiofibromas, intellectual disability, epilepsy, and tumor formation in multiple organs, including the kidney. Renal cell carcinoma occurs in 2%–4% of patients with tuberous sclerosis complex, often developing multiply and bilaterally. Renal cell carcinoma associated with this genetic disorder may include complex tumor heterogeneity caused by the spatially different mutational landscape. Herein, we report the case of a female patient with tuberous sclerosis complex who developed multiple renal tumors. A 44-year-old female patient with tuberous sclerosis complex developed three different histological types of tumor—angiomyolipoma, clear cell renal cell carcinoma, and papillary renal cell carcinoma—in the left kidney at first renal cell carcinoma recurrence. The papillary renal cell carcinoma was morphologically atypical, indicating that its occurrence was associated with the genetic disorder. Furthermore, whole-exome sequencing revealed distinct patterns of somatic mutation in the three tumor types, and the atypical papillary renal cell carcinoma possessed a different mutational landscape than that of typical papillary renal cell carcinomas. Our findings indicate that tumors associated with tuberous sclerosis complex may be diagnosed with careful pathological examination. Furthermore, somatic mutation profiles of these tumors revealed their unique features, providing important information for further understanding the mechanism of multiple tumor development in patients with tuberous sclerosis complex.
INTRODUCTION
Tuberous sclerosis complex (TSC) is a rare autosomal dominant genetic disorder with manifestations such as facial angiofibromas, intellectual disability, and epilepsy, occurring in 1 of every 6,000 births (1)(2)(3). This disorder is associated with mutations in TSC1 or TSC2; these genes encode proteins (hamartin and tuberin) that act as a complex involved in tumor suppression and regulation of the mammalian target of rapamycin (mTOR) signaling pathway.
Disorders affecting the mTOR pathway comprise clinical features indicating a predisposition to tumor development in multiple organs, including the kidney. Specifically, renal tumors are found in 70%-80% of patients with TSC (4). The three major types of renal manifestations occurring in these patients are angiomyolipoma (AML), renal cyst, and renal cell carcinoma (RCC). TSC-associated RCC occurs in 2%-4% of patients with TSC (5), an estimated incidence rate higher than that in the general population. Moreover, TSC-associated RCC often occurs in younger individuals, requiring close monitoring for recurrent RCC throughout their lifetime (5,6). TSC-associated RCC is also characterized by multiple occurrences in the same patient (7,8). This renal tumor occurs bilaterally in approximately 30% of cases and often comprises several types of morphology, including clear cell, papillary, and chromophobe RCC, as well as benign AML (5,7,9).
Herein, we describe a case of a patient with TSC who presented with three types of tumors (clear cell RCC, papillary RCC, and AML) in the same kidney. In the present study, we demonstrated that immunohistochemical analysis is an important tool to identify the occurrence of RCC associated with TSC, especially when the patient was not previously diagnosed with this genetic disorder. Moreover, we examined the somatic mutation profiles of the tumors, highlighting their unique features and mutational landscapes, which may contribute to understanding the mechanism involved in multiple tumor formation in patients with TSC.
CASE PRESENTATION
A 44-year-old Japanese woman was referred to our hospital for treatment of a recurrent tumor in the left kidney. Five years prior to this referral, the patient underwent right-kidney nephrectomy for RCC and received a histopathological diagnosis of clear cell RCC (pT1aN0M0) at another institution. Two years after this, computed tomography (CT) imaging identified three tumors in her left kidney; the patient underwent left-kidney partial nephrectomy for these tumors (Figures 1A-C). Histopathological examination determined that the tumors were AML, clear cell RCC (pT1a), and papillary RCC (pT1a) (Figures 1D-F). A periodic CT examination 3.5 years later revealed tumor recurrence in her left kidney.
Upon the initial visit to Osaka University Hospital, an abdominal CT scan showed a renal mass (diameter: 22 mm) with early enhancement in the left kidney (Figure 2A). Additional screening tests revealed the presence of lung cysts and calcifications along the wall of the left lateral ventricle of the brain (Figures 2B, C), leading to the suspicion of TSC. Moreover, physical examination revealed five major (ungual fibromas, shagreen patches, lymphangioleiomyomatosis, subependymal nodule, and angiomyolipoma) and one minor (dental enamel pits) TSC manifestations according to clinical and genetic diagnostic criteria (10). Combining these findings, we diagnosed the patient with recurrence of left-kidney RCC and with TSC.
Considering the high recurrence rate of TSC-associated RCC, the patient received CT-guided percutaneous cryoablation for the left-kidney recurrent tumor to maintain maximal renal function. Tumor biopsy performed after cryoablation identified the tumor as clear cell RCC by immunohistochemical staining. To evaluate kidney function, we calculated the estimated glomerular filtration rate (eGFR) before and 3 mo after cryoablation. The rate of kidney functional deterioration was 3.5%. The patient remained recurrence-free for 3 years without renal function deterioration.
Histopathological Features of Renal Cell Carcinoma
Upon the diagnosis of a second RCC recurrence, we retrospectively examined the three tumors that were identified at the first recurrence, considering that TSC-associated RCC has several unique features. We observed prominent papillary architecture lined by clear cells with delicate eosinophilic cytoplasmic thread-like strands that occasionally appeared more prominent and aggregated to form eosinophilic globules in the papillary RCC sample (Figures 3A, B). Immunohistochemical analysis revealed that CK7 and CD10 were positive, whereas succinate dehydrogenase subunit B (SDHB) and α-methylacyl-CoA racemase (AMACR) were negative (Figures 3C-F). These findings demonstrated that the characteristics of the papillary RCC in our patient were consistent with those of TSC-associated papillary RCC, which was recently reported as a new type of papillary tumor occurring in patients with TSC (11).
Somatic Mutations and Alterations in Cancer-Related Genes
To characterize the intra-tumoral genetic heterogeneity of this case, we performed whole-exome sequencing using genomic DNA extracted from the tumors surgically resected at the first recurrence. We obtained an average sequencing depth of 82.3× per base and identified 221 non-silent mutations and insertions/deletions (indels) (124-154 non-silent mutations per tumor, Additional Table 1). We found that 36.7% of these somatic mutations, including those in cancer driver genes such as PABPC1 and DICER1, which are common in parental clones of many cancer types, were shared among the three tumors (common mutations, Figure 4). Some mutations were uniquely observed in one or two tumors (unique mutations), which may have been acquired during individual tumor formation, contributing to the high intra-tumoral genetic heterogeneity. Interestingly, in our patient's papillary RCC sample, 37.1% of the common mutations and 25.5% of the unique mutations were not previously reported as non-silent mutations in the Cancer Genome Atlas database (Additional Figure 1). Regarding TSC1 and TSC2 mutations, the TSC-associated papillary RCC harbored a frameshift TSC1 mutation (c.2142del, p.Asn715fs), a pathogenic variant for patients with TSC reported in the ClinVar database. Conversely, TSC1 and TSC2 germline mutations were not found in our patient, implying that she may have a mosaic form of TSC.
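The shared-versus-unique classification described above reduces to set arithmetic over the per-tumor mutation calls. A minimal illustrative sketch (the mutation identifiers other than PABPC1, DICER1 and TSC1 p.Asn715fs, which are named above, are placeholders, not the study's actual call lists):

```python
# Classify somatic mutations as "common" (shared by all three tumors) or
# "unique" (observed in only one or two tumors).  The call sets below are
# illustrative placeholders, not the study's actual variant calls.
calls = {
    "AML":        {"PABPC1:p.X1", "DICER1:p.X2", "GENE_A:p.Y1"},
    "clear_cell": {"PABPC1:p.X1", "DICER1:p.X2", "GENE_B:p.Y2"},
    "papillary":  {"PABPC1:p.X1", "DICER1:p.X2", "TSC1:p.Asn715fs"},
}

common = set.intersection(*calls.values())   # candidate parental-clone mutations
all_muts = set.union(*calls.values())
unique = {tumor: muts - common for tumor, muts in calls.items()}

print(f"shared fraction: {len(common) / len(all_muts):.1%}")
for tumor, muts in sorted(unique.items()):
    print(f"{tumor}: mutations private to one or two tumors: {sorted(muts)}")
```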
DISCUSSION
The occurrence of RCC in patients with TSC has been recognized for several decades. Unlike typical RCC, TSC-associated RCC has several unique features, including early onset (around 40 years old), predominance in female patients, and multiple and bilateral tumors with distinct pathological characteristics (1,2,5,8). Therefore, because chronic kidney disease is a common cause of death in patients with TSC, physicians need to carefully determine therapeutic strategies for TSC-associated RCC to avoid renal function impairment (4). Herein, we described a case of TSC-associated RCC and identified distinct patterns of pathological findings and mutational landscapes among clear cell RCC, papillary RCC, and AML occurring in the same kidney, leading to several important implications. First, upon immunohistochemical analysis, we identified several TSC-associated papillary RCC characteristics that differed from typical papillary RCC, including prominent papillary architecture, abundant clear cell cytoplasm, uniformly deficient SDHB expression, and negative staining for AMACR (11). These findings strongly indicate the presence of TSC, especially in patients displaying fewer clinical features associated with this disorder. Considering that TSC-associated RCC may show multiple and bilateral recurrence, the timely recognition of this atypical form of RCC using immunohistochemical analysis may allow treatment with local therapy instead of radical nephrectomy, possibly avoiding the development of chronic kidney disease in these patients.
Second, we identified that each of the tumors occurring in the same kidney had unique somatic mutations, contributing to their different morphologies. So far, the genomic characterization of multifocal renal tumors in patients with TSC has not been well documented.
Considering that 10%-15% of patients with TSC have no mutation in TSC1 or TSC2, as in our case, the acquisition of somatic mutations may also lead to the occurrence of multiple renal tumors with distinct phenotypes in these patients. These findings may contribute to further understanding the various aspects of TSC-associated RCC, although more cases are needed to fully elucidate this phenomenon.
In conclusion, our case report indicates that immunohistochemical analysis is an important tool to diagnose TSC-associated papillary RCC. Moreover, our findings demonstrate that the accumulation of somatic mutation profiles is important to further understand the occurrence of TSC-associated RCC.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Institutional Review Board of Osaka University (approval number: 668-5). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
TY performed data analysis and drafted the article. TK planned the entire project, performed data analysis, and completed the article. MU planned, supervised the entire project and completed the article. NN provided the study design and the working hypothesis and completed the article. KK conducted experiments, performed data analysis, and completed the article. MK and EM conducted experiments and completed the article. KH, AK, TU, SF, HK, RI, NI, and KF conducted data analysis and provided scientific advice. All authors contributed to the article and approved the submitted version.
The effectiveness of a physiatrist-led acute hospital based postoperative hip fracture inpatient rehabilitation program: A single-center retrospective study
Background Postoperative hip fracture rehabilitation in Singapore has historically been carried out in both acute and community hospitals (CH). An increasing majority of patients with hip fractures now receive inpatient rehabilitation in CH, and it is often believed that acute hospital (AH)-based rehabilitation may be less cost-effective than its CH counterpart. Objective: This retrospective study aims to review the effectiveness of an AH-based hip fracture postoperative rehabilitation program. Methods This study retrospectively reviewed the database of postoperative hip fracture patients who underwent physiatrist-led AH-based inpatient rehabilitation from Jan 2010 to Dec 2016. The primary outcomes were the functional improvement assessed by the functional independence measure (FIM) and FIM efficiency. The secondary outcomes included the length of stay (LOS), successful discharge-to-home rate, mortality rate, and complication rate. Results A total of 293 cases were included in the study. After participation in the inpatient rehabilitation program, the mean total FIM increased from 83.9 ± 12.7 (mean ± SD) to 93.9 ± 16.2 (p < .001). The motor FIM increased from 47.1 ± 10.9 to 56.1 ± 10.1 (p < .001). 269 (91.8%) patients were successfully discharged home. Inpatient mortality was 0.3% (1/293). The complication rate during inpatient rehabilitation was 16.0%, with urinary tract infection being the most frequent complication (10.2%). The median LOS for inpatient rehabilitation was 19 days (IQR: 15, 28). Conclusions After completing a physiatrist-led postoperative hip fracture inpatient rehabilitation program in an acute hospital, patients demonstrated significant functional improvement (p < .0001). The inpatient rehabilitation program had a high discharge-to-home rate and low in-hospital mortality.
Introduction
Hip fracture is a disabling condition that is associated with significant morbidity and mortality. Hip fractures negatively affect patients' independence in activities of daily living (ADLs), leading to higher institutionalization rates and a substantial loss of healthy life expectancy. [1][2][3] Early integrated care after hip fracture surgery has been shown to improve clinical outcomes for patients with hip fractures. [1][2][3][4][5] The importance of postoperative rehabilitation has been emphasized through several studies, with the potential to maximize postoperative recovery, improve independence in ADLs, and enhance quality of life. 1,2,6 Various postoperative rehabilitation care models have been developed to improve clinical outcomes while maintaining cost-effectiveness. [6][7][8][9] However, few studies have evaluated how healthcare institutions can better strategize to enhance functional outcomes specifically after hip fracture surgery. [10][11][12] Similarly, in Singapore, various programmes have been developed to improve postoperative care. [13][14][15] Postoperative hip fracture rehabilitation in Singapore has historically been carried out in both acute (AH) and community hospitals (CH). With the development of new hip fracture pathways, an increasing majority of patients with hip fractures now receive inpatient rehabilitation in CH, and it is often believed that AH-based rehabilitation may be less cost-effective than its CH counterpart. However, the effectiveness of AH-based postoperative hip fracture rehabilitation in Singapore has been poorly evaluated. This retrospective study aims to review the effectiveness of an AH-based hip fracture postoperative rehabilitation program. We hypothesized that such a program is effective in reducing complications and improving functional outcomes in patients with hip fracture.
Study design and participants
A retrospective analysis was performed on patients who were admitted to the Department of Orthopedic Surgery (OTO) in an acute hospital with a diagnosis of hip fracture, completed hip fixation surgery, and were transferred to the Department of Rehabilitation Medicine (RMD) for inpatient rehabilitation from January 2010 to December 2016. The admission criteria for the inpatient rehabilitation program were as follows: (i) patients were admitted to OTO in this acute hospital with the primary diagnosis of hip fracture(s), and they underwent surgical fixation during the same admission; (ii) they were allowed full weight-bearing or partial weight-bearing after the surgery; (iii) they were medically stable and fit for inpatient rehabilitation based on the assessment of a physiatrist. The exclusion criteria were as follows: (i) patients with hip fractures were admitted to disciplines other than OTO; (ii) patients did not undergo surgical fixation; (iii) the weight-bearing status was non-weight-bearing after surgery; (iv) patients were medically unstable.
Demographic information such as age, gender, race, premorbid ADLs reported by the patient or family, diagnoses (types of hip fractures), types of surgical fixation, and weight-bearing status after surgery was collected. Preexisting comorbidities (hypertension, diabetes, previous stroke, ischemic heart disease, cancer, peripheral vascular disease, osteoporosis) were also collected. The study design was approved by the hospital's institutional review board (IRB). Data were manually extracted from the hospital's computer record system.
The inpatient rehabilitation program
All patients were assessed by ward therapists within 24 hours (24 h) after surgery. Patients who were allowed to weight-bear postoperatively were identified for early mobilization and rehabilitation, while patients with non-weight-bearing status were taught bedside exercises. Within 48 h after surgery, patients were reviewed by a physiatrist or a rehabilitation-trained advanced practice nurse (APN) to assess their suitability for the inpatient rehabilitation program. Once accepted, patients were transferred to a dedicated rehabilitation ward for the postoperative rehabilitation program.
The rehabilitation program was provided based on best practice guidelines and tailored to each patient's needs and tolerance. [16][17][18][19] This physiatrist-led team was a multidisciplinary team comprising rehabilitation nurses, physiotherapists, occupational therapists, medical social workers and dietitians. The medical team conducted a medical ward round daily to review issues encountered during therapy. They would then discuss with the therapists how to plan subsequent sessions, each comprising one-on-one therapy by therapists specializing in musculoskeletal disorders. Physiotherapy and occupational therapy were each provided for a minimum of 5 days per week. Reviews by dietitians were done regularly to assess each patient's nutritional status. Medical social workers evaluated and facilitated care arrangements as well as provided psychosocial support to these patients.
Baseline functional assessment was carried out by the rehabilitation team within the first 3 days after RMD transfer using the Functional Independence Measure (FIM). 20 The FIM scores were charted weekly to monitor rehabilitation progress and guide subsequent therapy plans. A weekly multidisciplinary team meeting was held to discuss each patient's rehabilitation goals, progress, nutritional status, and discharge plans so as to develop a holistic rehabilitation program personalized to their individual needs.
Outcome measures
The primary outcomes were (i) the total and motor FIM upon transfer to RMD and upon discharge from RMD; and (ii) the FIM efficiency (FIM gain divided by the length of stay). The secondary outcomes included the length of stay (LOS), successful discharge-to-home rate, mortality rate, and complication rate. Postoperative complications analyzed included urinary tract infection, pneumonia, deep vein thrombosis, wound infection, stroke, and acute myocardial infarction.
Statistical analysis
Descriptive analyses were used to summarize patient characteristics. Normality of the continuous variables was examined using the Shapiro-Wilk test and histograms. The mean (standard deviation, SD) was presented for normally distributed variables and the median (interquartile range, IQR) was presented for continuous variables with skewed distributions. Frequencies and percentages were used to summarize categorical data. The paired t-test was used to analyze whether there was a difference in the FIM upon transfer to RMD and upon discharge from RMD. The proportions of patients whose FIM change scores exceeded the minimal clinically important difference (MCID; defined as 22 for total FIM gain and 17 for motor FIM gain) were computed, 21 together with Wilson 95% confidence intervals. All analyses were done using R 3.4.2, and a two-sided p < 0.05 was used to declare statistical significance.
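The analyses were performed in R 3.4.2; for illustration, an equivalent sketch in Python of the paired t-test on transfer versus discharge FIM, the Wilson interval for the proportion exceeding the MCID, and the FIM-efficiency calculation (the input values below are simulated to mimic the reported summary statistics, not the study data):

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.proportion import proportion_confint

rng = np.random.default_rng(0)
n = 293

# Simulated FIM scores mimicking the reported summaries (83.9 +/- 12.7 on
# transfer, mean gain of about 10 points); placeholders, not the study data.
fim_transfer = rng.normal(83.9, 12.7, n)
fim_discharge = fim_transfer + rng.normal(10.0, 8.0, n)

t_stat, p_val = ttest_rel(fim_discharge, fim_transfer)   # paired t-test
print(f"paired t = {t_stat:.2f}, two-sided p = {p_val:.2g}")

# Wilson 95% CI for the proportion of patients whose total FIM gain
# exceeded the MCID of 22 points.
k = int(np.sum(fim_discharge - fim_transfer > 22))
lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
print(f"total FIM gain > MCID: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# FIM efficiency = FIM gain / length of stay (days); median LOS was 19 days.
fim_efficiency = (fim_discharge - fim_transfer) / 19.0
print(f"illustrative mean FIM efficiency at median LOS: {fim_efficiency.mean():.2f}/day")
```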
Baseline characteristics
From January 2010 to December 2016, a total of 293 hip fracture patients were enrolled into the postoperative inpatient rehabilitation program in this acute hospital. Patient demographics are summarized in Table 1.
Outcome measures
After participation in the inpatient rehabilitation program, the mean total FIM increased from 83.9 ± 12.7 (mean ± SD) to 93.9 ± 16.2 (p < .001). The motor FIM increased from 47.1 ± 10.9 to 56.1 ± 10.1 (p < .001). Based on a previous study in a stroke population, the minimal clinically important difference (MCID) for total FIM and motor FIM is 22 and 17, respectively. 21 The proportion of patients whose total FIM change exceeded the MCID was 10.9%, and the proportion of patients whose motor FIM change exceeded the MCID was 16.1%. The detailed results are displayed in Table 3.
Discussion
This is a retrospective study assessing the effectiveness of a physiatrist-led AH-based early integrated hip fracture inpatient rehabilitation program. This study revealed that the functional status of patients improved after participation in the program. High discharge-to-home and low in-hospital mortality rates were observed. In Singapore, some studies have evaluated hip fracture pathways and the effectiveness of CH-based rehabilitation programs. 15,22-24 Doshi HK et al. described an integrated care pathway model which involved timely admission, review, surgery, rehabilitation, and transfer (ARSRT), leading to positive clinical outcomes. 22 Tan AK et al. analyzed the effectiveness of CH-based rehabilitation. 15 However, few studies have evaluated hip fracture programs in the AH setting. There are a few learning points derived from this study. Firstly, the rehabilitation program described in our study was provided by a physiatrist-led multidisciplinary team in the rehabilitation ward of an acute hospital. Multidisciplinary team care for hip fracture patients has been shown to improve clinical outcomes in previous studies. 6,25,26 A randomized controlled trial showed that early multidisciplinary daily geriatric care reduces in-hospital mortality and medical complications in elderly patients with hip fractures. 25 To successfully develop and implement a hip fracture rehabilitation program, strong physician leadership is necessary. A previous study done in Japan showed that the participation of board-certified physiatrists is associated with good rehabilitation outcomes in patients with hip fractures. 20 In this physiatrist-led study, significant functional improvement was achieved in patients with hip fractures. Having a physiatrist as the team leader enables the most updated exercise guidelines to be administered and tailored to the individual's needs, and facilitates more effective communication with the therapists.
Secondly, this program emphasized early initiation of rehabilitation, intensive rehabilitation, and continuity of care. The input of a physiatrist or a rehabilitation-trained APN was obtained within 48 h after surgery, and the rehabilitation program commenced as soon as patients were medically fit. Continuous care was ensured throughout the rehabilitation journey, as the patients continued to receive input from RMD after transfer to inpatient rehabilitation. Although the study by Doshi HK et al. also emphasized early rehabilitation, their rehabilitation program was conducted by therapists without any input from a physiatrist. The input from physiatrists is important in terms of adherence to exercise guidelines and tailoring to patients' needs.
Lastly, serial functional assessment with the FIM allowed for dynamic monitoring and feedback to guide the planning of an individualized rehabilitation program. Various outcome measures used in hip fracture have been discussed in previous literature. 27 Among these instruments, the FIM is a well-recognized functional assessment tool to assess activity and participation, and it has been used as the outcome measurement in previous hip fracture studies. 20,28-31 By using the FIM, a widely accepted outcome measure, the study could evaluate the effectiveness of a rehabilitation program. The trend of the FIM score was reviewed for every patient, and the rehabilitation program would be further modified based on the feedback from the FIM improvement. Of note, although the MCID of FIM scores has been established in stroke populations, it has not yet been established in hip fracture populations. 21 Hence, the proportion of patients whose FIM change exceeded the MCID needs to be interpreted prudently.
There are several limitations to this study design. Firstly, there was no control group to provide a direct comparison with patients who received inpatient rehabilitation in the CH setting. Although published local data on hip fracture rehabilitation in CH showed a longer LOS than in our study, there could be selection bias, as the study populations could differ. Without a control group in this study, it is not possible to exclude the possibility that the significant functional gain resulted from natural recovery and usual care. Secondly, this study did not evaluate the cost-effectiveness of conducting rehabilitation programs in AH. One concern about keeping patients in AH for rehabilitation is cost-effectiveness. Without a cost-effectiveness analysis, it would be difficult to make comparisons with rehabilitation in community hospital settings. Thirdly, the data only covered the patients who were transferred to RMD for inpatient rehabilitation, but not those who were potentially eligible. Data on the actual duration of each therapy session were not captured. Furthermore, the number of cases with dementia or cognitive impairment was not collected, and there was no post-discharge follow-up of the study group. These are important factors for evaluating rehabilitation programs.
Conclusion
After completing a physiatrist-led postoperative hip fracture inpatient rehabilitation program in an acute hospital, patients demonstrated significant functional improvement (p < .0001). The inpatient rehabilitation program had a high discharge-to-home rate and low in-hospital mortality.
Author contributions
CY was involved in study design, protocol development, gaining ethical approval. RYP and XHY were involved in data collection and data analysis. CJ wrote the first draft of the manuscript. BCW reviewed and edited the manuscript. All authors reviewed and approved the final version of the manuscript.
Availability of data
The datasets generated and/or analysed during the current study are available from the corresponding author.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical approval
The study design was approved by the institutional review board (IRB).
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Informed Consent
This is a retrospective study and informed consent was not taken.
Development of new HTS-SQUID and HTS current sensor for HTS-SQUID beam current monitor
Two years ago, a prototype of a highly sensitive beam current monitor with a high-temperature superconducting (HTS) SQUID, an HTS current sensor and an HTS magnetic shield, that is, an HTS-SQUID monitor, was installed in the beam transport line of the RIKEN ring cyclotron (RRC). As a result, the beam intensity of a sub-µA beam was successfully measured by the prototype HTS-SQUID monitor. In fact, the intensity of a sub-µA ⁴⁰Ar¹⁵⁺ (63 MeV/u) beam was successfully measured with a 500 nA resolution. However, the current resolution of the prototype HTS-SQUID monitor is not sufficient to measure the current of a uranium beam, which is accelerated in a new radioactive isotope (RI) beam facility called the RI Beam Factory (RIBF). A minimum current resolution of 1 nA is required for the measurement of the uranium beam. Therefore, we are developing a new HTS-SQUID monitor so as to improve the current resolution. This new monitor consists of three parts, the HTS SQUID, an HTS current sensor and an HTS magnetic shield, and these parts have been separately developed this year. The high-permeability core that is installed in the two input coils of the HTS-SQUID is an extremely important part of this new HTS-SQUID monitor. A 50-fold improvement in gain was successfully realized using the high-permeability core compared with that obtained without the high-permeability core. Another key factor is the substrate of the HTS current sensor. A MgO ceramic tube was used for the substrate of the HTS current sensor in the prototype HTS-SQUID monitor. However, it was difficult to form the bridge circuit using the MgO ceramic substrate in the new HTS-SQUID monitor, because the bridge circuit that magnetically connects the HTS current sensor and the HTS-SQUID has to be three-dimensional. To solve this problem, silver (Ag) of 99.9% purity was adopted for the substrates of the HTS current sensor in the new HTS-SQUID monitor. The surfaces of the substrates were then coated with a thin layer (70 µm) of Bi2Sr2Ca1Cu2Ox (Bi 2212), which is an HTS material. We report the results of this development.
Introduction
The RIBF project to accelerate all elements from hydrogen to uranium up to an energy of 440 MeV/u for light ions and 350 MeV/u for very heavy ions started in April 1997 [1]. Figure 1 shows a schematic bird's-eye view of the RIBF facility. The research activities in the RIBF project are based on the heavy-ion accelerator complex, which consists of one linac and four
ring cyclotrons, i.e., a variable-frequency linac (RILAC), the RIKEN ring cyclotron (RRC), a fixed-frequency ring cyclotron (fRC), an intermediate-stage ring cyclotron (IRC) and a superconducting ring cyclotron (SRC). Energetic heavy-ion beams are converted into intense RI beams via the projectile fragmentation of stable ions or the in-flight fission of uranium ions using a superconducting isotope separator, BigRIPS [2]. The combination of these accelerators and BigRIPS will greatly expand our knowledge of the nuclear world into the presently inaccessible region of the nuclear chart. We succeeded in accelerating a uranium beam to 345 MeV/u in March 2007, and ¹²⁵Pd, a new RI, was discovered in July 2007.
During beam commissioning, it is essential to keep the beam transmission efficiency as high as possible, because the production of the RI beam requires an intense primary beam, and activation produced by beam loss should be avoided. In this facility, Faraday cups are used to evaluate the beam transmission efficiency. When an accelerated particle hits the surface of a Faraday cup, secondary electrons are always generated. If these electrons leave the insulated cup area, the reading of the beam current will be wrong by the number of lost electrons. Thus, preventing the escape of secondary electrons from the cup is very important for measuring the beam current precisely. Usually, this can be done by applying a high voltage close to the entrance of the cup. However, since the electric field on the beam axis is lower than that at the edge, it is impossible to completely prevent the escape of the high-energy secondary electrons that are produced by high-energy heavy-ion beams such as uranium beams. To resolve this technical issue, we have developed an HTS-SQUID monitor at RIKEN [3,4,5]. As a result, a beam intensity of 10 µA ⁴⁰Ar¹⁵⁺ (63 MeV/u) was successfully measured with a 500 nA resolution by the prototype HTS-SQUID monitor, shown in Figure 2, where a 1 µA beam produced a magnetic flux of 6.5×10⁻⁶ Φ₀ at the input coil of the HTS-SQUID [6]. Because a current resolution more than two orders of magnitude better (1 nA) is required for the measurement of the fainter heavy-ion beams generated in the RIBF project, we have developed new devices to improve the sensitivity.
New HTS current sensor and HTS magnetic shields
This year, both a new HTS current sensor and new HTS magnetic shields have been developed. Their schematic drawing is shown in Figure 3, and a photograph of the Ag substrates used in the new HTS-SQUID monitor is shown in Figure 4. A MgO ceramic tube was used for the substrate of the HTS current sensor in the prototype HTS-SQUID monitor. However, it was difficult to form the bridge circuit using the MgO ceramic substrate in the new HTS-SQUID monitor, because the bridge circuit that magnetically connects the HTS current sensor and the HTS-SQUID has to be three-dimensional. To solve this problem, silver (Ag) of 99.9% purity was adopted for the substrates of the HTS current sensor in the new HTS-SQUID monitor. Before fabricating the HTS current sensor and HTS magnetic shields, small samples were produced to compare the characteristics of the HTS material between Bi 2223 coated on the MgO substrate and Bi 2212 coated on an Ag substrate. Using an electron-probe x-ray microanalyzer (EPMA) at RIKEN, it was clearly observed that the surface of the HTS material Bi 2212 was smooth and that it adhered more strongly to the Ag substrate than Bi 2223. The Ag tube used as the current sensor was coated with a thin layer of Bi 2212 on both the inner and outer walls of the tube. While a beam passes through the tube, a shielding current produced by the Meissner effect flows in the opposite direction along the wall, so as to screen the magnetic field generated by the beam (Figure 5(a)). Because the outer surface is designed to have a bridge circuit, the current is concentrated in the bridge circuit and forms an azimuthal magnetic field Φ. The HTS-SQUID is located close to the bridge circuit and can detect the azimuthal magnetic field. Figure 5(b) shows a close-up view of the improved bridge circuit. The high-permeability material is placed in the hole in the bridge (c), and an HTS-SQUID with a high-permeability core is placed on the bridge circuit (d). Finally, both materials are fixed using a high-permeability cylinder (e). The magnetic field generated by the beam is thus completely surrounded by the high-permeability materials. The HTS magnetic shields, which operate on the basis of the Meissner effect, consist of coaxial magnetic shields, a cylindrical magnetic shield and also the current sensor itself. The current sensor plays an important role not only as a current detector but also as magnetic shielding. Thus, the SQUID is almost completely surrounded by the HTS magnetic shields, which strongly shield it from environmental magnetic noise. In the fabrication process, we fabricated the following parts: (1) two inner cylinders, two outer cylinders and two disks for the substrates of the two coaxial magnetic shields, and (2) another cylinder and the bridge circuit for the substrate of the current sensor. After the fabrication, we welded both the inner and outer cylinders to the disks, and the bridge circuit to the other cylinder, by electron-beam welding. The measured accuracy of the parts after the electron-beam welding was within ±100 µm. All substrates were coated with a thin layer of Bi 2212. (Figure 5. Schematic drawing of the improved bridge circuit of the current sensor, showing the input coils (Ib) and the high-permeability (H-P) cylinder. While a beam passes through the tube, a shielding current produced by the Meissner effect flows in the opposite direction along the wall, so as to screen the magnetic field generated by the beam. Using this improved current sensor, the magnetic field generated by the beam is completely surrounded by the high-permeability materials.) Figure 6(a) shows the Ag substrate used for the current sensor and (b) the substrate coated with the thin layer of Bi 2212. The current sensor and cylindrical magnetic shield were fabricated without any difficulties. However, some pinholes of 0.5 mm diameter were formed after coating the HTS material on the substrates of the coaxial magnetic shields. Also, Ag crystals were discovered in the center of the pinholes using an optical microscope. To prevent the formation of pinholes, we attempted several methods, such as grinding the surface of the Ag substrates and changing the baking temperature and thickness of Bi 2212. Even though the coating and etching processes were repeated 7 times using the same Ag substrates under various conditions, we could not prevent the formation of pinholes. The reason why the pinholes formed is thought to be that the disks for the substrates of the coaxial magnetic shields were fabricated using a rolling mill. Thus, the substrates were refabricated by casting the silver. By adopting this method, we could successfully coat Bi 2212 on the substrates of the coaxial magnetic shields without forming pinholes.
Improvement of sensitivity using new HTS-SQUID and high-permeability core
We developed a new HTS-SQUID and a high-permeability core that is installed in the two input coils of the HTS-SQUID to improve sensitivity. The core is composed of 80% Ni, together with Mo, Re and Fe. The measured inductance of the core, using a 20-turn coil, was 128 µH at liquid-nitrogen temperature and 202 µH at room temperature. From these values, the calculated relative permeability is 2529 at liquid-nitrogen temperature and 3991 at room temperature. The core is a very important part of the new current sensor of the HTS-SQUID monitor. A test in which a current wire was used to simulate a beam current showed a 50-fold improvement in gain, because the newly installed high-permeability core and the HTS-SQUID improved the transfer coupling efficiency of the magnetic field induced by the beam current.
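The quoted permeabilities can be recovered from the measured inductances with the usual coil formula L = μ₀μᵣN²(A/l). The sketch below is a minimal illustration; the geometry factor A/l is an assumption on our part (it is not given in the text), chosen so that the quoted inductances reproduce the quoted permeabilities.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability [H/m]

def relative_permeability(inductance_h, turns, geometry_factor_m):
    """Invert L = mu_0 * mu_r * N^2 * (A/l) for mu_r.

    geometry_factor_m is the core cross-section over magnetic path
    length, A/l, in metres -- an assumed value, not quoted in the text.
    """
    return inductance_h / (MU_0 * turns**2 * geometry_factor_m)

# Assumed geometry factor A/l ~ 1.0e-4 m, chosen to reproduce the
# quoted values; the real core geometry is not given in the paper.
A_OVER_L = 1.007e-4
N_TURNS = 20

print(relative_permeability(128e-6, N_TURNS, A_OVER_L))  # ~2529 (77 K)
print(relative_permeability(202e-6, N_TURNS, A_OVER_L))  # ~3991 (room temp.)
```

Note that the ratio of the two permeabilities equals the ratio of the inductances (202/128 ≈ 1.58), independently of the assumed geometry.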
Figure 7 shows the measured signal of the new HTS-SQUID with the high-permeability core. The sensitivity obtained was 1 V/µA, which is 50-fold higher than that obtained without the high-permeability core (0.02 V/µA). (Figure 7. Measured signal of the new HTS-SQUID with a high-permeability core; new SQUID: 1 V/µA, prototype: 0.02 V/µA.)
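To put the 50-fold gain in context, a back-of-the-envelope scaling (an assumption on our part: that the current resolution improves inversely with gain, at fixed noise) relates the prototype's 500 nA resolution to the expected resolution of the new monitor:

```python
# Hypothetical scaling estimate: resolution ~ 1/gain at fixed noise.
prototype_resolution_na = 500.0   # quoted for the prototype monitor
gain_improvement = 1.0 / 0.02     # 1 V/uA vs 0.02 V/uA -> 50x

expected_resolution_na = prototype_resolution_na / gain_improvement
print(f"{expected_resolution_na:.0f} nA")  # ~10 nA, still above the 1 nA goal
```

Under this (optimistic) assumption, the core alone closes most, but not all, of the gap to the 1 nA requirement for uranium beams.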
Conclusions and outlook
In this study, we have developed a new HTS-SQUID and a high-permeability core that is installed in the two input coils of the HTS-SQUID to improve the sensitivity. These are extremely important parts in the new current sensor of the HTS-SQUID monitor. A test using a current wire to simulate a beam current showed a 50-fold improvement in gain. Furthermore, both a new HTS current sensor and new HTS magnetic shields have been developed. A MgO ceramic tube was used for the substrate of the HTS current sensor in the prototype HTS-SQUID monitor.
However, it was difficult to form the bridge circuit using the MgO ceramic substrate in the new HTS-SQUID monitor, because the bridge circuit that magnetically connects the HTS current sensor and the HTS-SQUID has to be three-dimensional. To solve this problem, silver (Ag) of 99.9% purity was adopted for the substrates of the HTS current sensor in the new HTS-SQUID monitor. Then the surfaces of the substrates were coated with a thin layer (70 µm) of Bi₂-Sr₂-Ca₁-Cu₂-Oₓ (Bi 2212), an HTS material.
Cosmic Strings as Emitters of Extremely High Energy Neutrinos
We study massive particle radiation from cosmic string kinks, and its observability in extremely high energy neutrinos. In particular, we consider the emission of moduli --- weakly coupled scalar particles predicted in supersymmetric theories --- from the kinks of cosmic string loops. Since kinks move at the speed of light on strings, moduli are emitted with large Lorentz factors, and eventually decay into many pions and neutrinos via hadronic cascades. The produced neutrino flux has energy $E \gtrsim 10^{11} \rm{GeV}$, and is affected by oscillations and absorption (resonant and non-resonant). It is observable at upcoming neutrino telescopes such as JEM-EUSO, and the radio telescopes LOFAR and SKA, for a range of values of the string tension, and of the mass and coupling constant of the moduli.
I. INTRODUCTION
Theories with spontaneous symmetry breaking usually have topologically non-trivial vacuum configurations. Depending on the topology of the vacuum after the symmetry breaking, stable relics called topological defects -such as monopoles, strings or domain walls -could be formed in the early universe [1]. Strings can form if the vacuum manifold is not simply connected. Although monopoles and domain walls are generally problematic for cosmology, cosmic strings are compatible with the observed universe, provided that their tension is not too large (Sec. II; see, e.g., Refs. [2][3][4][5][6] for reviews). Cosmic strings are predicted in grand unified theories (GUTs) and superstring theory, and their existence can be revealed through their effects on the cosmic microwave background (CMB), large scale structure and 21 cm line observations, and -more directly -by detecting their radiation, such as gravitational waves and cosmic rays.
Since cosmic strings have GUT- or superstring-scale energy densities in their core, they can be significant sources of ultra high energy (E ≳ 10¹¹ GeV) cosmic rays [7][8][9][10][11][12][13], either as isolated objects, or possibly in combination with other topological defects, as in monopole-string bound states [14][15][16][17][18]. Among the cosmic rays, neutrinos are especially interesting. Their weak coupling to matter makes them extremely penetrating, so they are the only form of radiation (together with gravitational waves) that can reach us from very early cosmological times, namely, all the way from redshift z ∼ 200 (see Sec. IV B). Moreover, in the spectral region of interest, E ≳ 10¹¹ GeV, the neutrino sky is very quiet, since this region is beyond the range of neutrinos from even the most extreme hadron accelerators (gamma ray bursts, supernova remnants, active galactic nuclei, etc.). Therefore, even a low-statistics neutrino signal beyond this energy would constitute a clean indication of a fundamentally different mechanism at play, such as a top-down scenario involving strings or other topological defects. Experimentally, the technologies to detect ultra high energy neutrinos are mature: they look for radio or acoustic signals produced by the neutrinos as they propagate in air, water/ice, or rock. After the successful experiences of ANITA [19], FORTE [20], RICE [21] and NuMoon [22], the latter using radio waves from the lunar regolith via the so-called Askaryan effect [23], a new generation of experiments is being planned that can probe neutrinos from cosmic strings with unprecedented sensitivity. Of these, the space-based fluorescent-light telescope JEM-EUSO [24], and the radio telescopes LOFAR [25] and SKA [26], seem especially promising. One of the distinguishing features of cosmic strings as cosmic ray emitters is that they can produce bursts from localized features called cusps and kinks (Sec. II), where ultrarelativistic velocities are reached. The radiation from cusps and kinks is very efficient, whereas the emission from cusp/kink-free string segments is exponentially suppressed. This enhanced emission has been studied in connection with gravitational waves [27][28][29], and electromagnetic radiation [27,28,30,31] like gamma ray bursts [32][33][34] and radio transients [35][36][37], as well as neutrino bursts [10].
Among the several scenarios considered, there are a few that predict cosmic ray and neutrino fluxes at an observable level, e.g., Refs. [10][11][12][13]. One of these, Ref. [13], involves the decay of moduli, massive scalar fields that arise in supersymmetric and superstring theories and that can have various masses and couplings to matter. Moduli with coupling stronger than gravity are fairly natural [38][39][40][41][42] and relatively unconstrained, due to their very short lifetimes [43], compared to gravitationally coupled ones [44][45][46]. By decaying into hadrons, the moduli eventually generate a neutrino flux. In Ref. [13] the emission of such moduli from string cusps, and the corresponding neutrino flux, were discussed.
In this paper, we elaborate on the theme of moduli-mediated neutrino production from strings, and study modulus emission from kinks. We show that the emission from kinks is very efficient, and is the dominant energy-loss mechanism for cosmic string loops over a wide range of the parameters. We calculate the neutrino flux expected at Earth after a number of propagation effects, mainly absorption due to resonant (Z⁰-resonance channel) and non-resonant neutrino-neutrino scattering. We find that the flux might be observable at near-future surveys, JEM-EUSO, LOFAR and SKA, depending on the parameters.
The structure of the paper is as follows. After discussing some generalities on strings and kinks in Sec. II, the modulus emission from a cosmic string kink is calculated in Sec. III. In Sec. IV, we discuss the decay of moduli, the properties of the hadronic cascade initiated by their decay into gluons, and propagation of extremely high energy neutrinos in the universe. In Sec. V, estimates are given for the kink event rate, the neutrino flux, and its detectability by the existing and future neutrino detectors. We also discuss the constraint from high energy gamma ray observations. Finally, in Sec. VI, we give our conclusions.
II. COSMIC STRINGS
Much of the phenomenology of a cosmic string depends on its tension (or mass per unit length), µ. It is often expressed in Planck units, as Gµ, where G is the Newton's constant. Several cosmological and astrophysical observations place upper limits on Gµ; we briefly review them here.
Although their contribution to the density perturbations is small, strings can still have effects on early structure formation [64], early reionization due to early structure formation [65,66], the formation of dark matter clumps [67], and might yield a detectable signal in 21 cm measurements [68][69][70][71][72]. Cosmic strings also produce gravitational waves [73] in a wide range of frequencies, both as localized bursts and as a stochastic background, which can be detected by LIGO, eLISA and pulsar timing array projects [29,74,75,76,77]. The most stringent bound comes from the pulsar timing measurements, which put an upper bound on the long-wavelength stochastic gravitational wave background, h²Ω_GW ≲ 5.6×10⁻⁹, yielding the constraint Gμ ≲ 4×10⁻⁹ [74]. However, this upper bound is obtained by ignoring the kinetic energy of the cosmic string loops, and by assuming that cosmic strings only decay by emitting gravitational waves. Thus, the pulsar timing bound is expected to be somewhat relaxed by taking these effects into account.
Cosmic string loops can emit moduli efficiently in the early universe when the length of the loop is of the order of the Compton wavelength of the emitted particle [44]. If moduli are gravitationally coupled to cosmic strings, very stringent cosmological constraints can be put on the string tension, Gµ, and the mass of the modulus, m [44][45][46]. On the other hand, if their coupling is stronger than gravitational strength, modulus radiation becomes the dominant energy loss mechanism for the loops, and the lifetime of moduli becomes a lot shorter. These relax the cosmological constraints on moduli significantly [43]. In this paper, we shall adopt the parameter space consistent with all the constraints mentioned above.
Cosmic strings are born as smooth objects, but afterwards they undergo crossings and self-crossings, which lead to truncations and successive reconnections. Every crossing produces a kink on the string after reconnection. The result of such processes is a population of string loops with a few kinks [78]. Kinks are discontinuities in the vector tangent to the worldsheet characterizing the string motion, and gravitational and particle radiation is very efficient at kinks, yielding waveforms with power-law behavior in the momenta of the emitted particles [27,29]. There are also transient features on loops called cusps, where a part of the string doubles back on itself and momentarily reaches the speed of light. Cusps also produce radiation in bursts, with waveforms that have a similar power-law behavior. On the other hand, radiation from cosmic string loops with no cusps or kinks is exponentially suppressed, leaving the kink and cusp radiation as an interesting window on the observable effects from strings. In the next section, we shall study massive particle radiation from cosmic string kinks.
III. MASSIVE PARTICLE RADIATION FROM KINKS
The free part of the action consists of the Nambu-Goto term for a string of tension μ and the massive-scalar-field term for the modulus of mass m, where g is the determinant of the spacetime metric g_{μν} and γ is the determinant of the induced metric on the worldsheet X^μ(σ,τ), given by γ_{ab} = g_{μν} X^μ_{,a} X^ν_{,b}. The interaction Lagrangian for the modulus field and the string has the form given in Ref. [13], where α is the modulus coupling constant, μ is the string tension and m_p is the Planck mass. Ignoring back-reaction effects, the equation of motion for the worldsheet in the flat background g_{μν} = η_{μν} = diag(−1,1,1,1), and in the conformal gauge, where σ⁰ = τ and σ¹ = σ, is the wave equation Ẍ^μ − X″^μ = 0, with the gauge conditions Ẋ·X′ = 0 and Ẋ² + X′² = 1. The general solution can be written in terms of the right- and left-moving waves as X^μ(σ,τ) = [X₊^μ(σ₊) + X₋^μ(σ₋)]/2 (Eq. (4)), where σ± ≡ σ ± τ, and the gauge conditions (Eq. (5)) now constrain X′±, where the prime refers to the derivative with respect to the corresponding light-cone coordinate σ±. The total power of particle radiation is P = Σ_n P_n (Eq. (6)), where the power spectrum P_n can be calculated from Eq. (7) [44,79]. Here G = m_p⁻² is Newton's constant, α is the modulus coupling constant, k is the momentum of the emitted particle, ω_n = 4πn/L = √(k² + m²) is the energy, L is the loop length, and T(k, ω_n) is the Fourier transform of the trace of the energy-momentum tensor of the cosmic string loop (Eq. (8)). Using Eq. (4) and the light-cone coordinates σ±, Eq. (8) can be factorized as in Eq. (9), in terms of phase integrals with phases Φ±. The integral in Eq. (9) is exponentially suppressed for a smooth loop of cosmic string of length L ≫ 1/m [44]. However, the phase becomes stationary if the string has cusps (saddle points on the worldsheet where the derivative of the phase vanishes) or kinks (points where the vector tangent to the worldsheet has a discontinuity). The cusp case has been studied for massive particle emission in Ref. [13]. At a cusp both Φ± have saddle points, hence their derivatives with respect to the corresponding light-cone coordinates vanish. For a kink, on the other hand, either Φ₊ or Φ₋ has a saddle point, and the other has a discontinuity. In what follows we assume that Φ₊ has a saddle point and Φ₋ has a discontinuity. Then, taking the kink to be at σ± = 0, the worldsheet can be expanded about σ± = 0 as in Eqs. (10) and (11). Using the gauge conditions (5), one can constrain the vectors n₁, n₂ and the expansion coefficient X⁽¹⁾₊. The curvature of the string can be approximated as |X⁽²⁾₊| ∼ 2π/L if the string is not too wiggly. Using the expansions (10) and (11) and the gauge conditions (5), the phases Φ₊ and Φ₋ can be obtained as in Eqs. (12) and (13), where s₁, s₂ are constants of order 1, |k| ≡ k, and we assumed that k ∥ X⁽¹⁾₊. It can be shown [29][30][31] that when moduli are emitted at a small angle, rather than parallel to the direction of X₊ at the saddle point, the expansion (10) still applies provided that the angle satisfies the bound θ ≲ θ_k of Eq. (14); otherwise the power is exponentially suppressed. The term in the integrand of Eq. (9) can be found as in Eq. (15), where c and c′ are constants of order 1, which we take as 1 in what follows. Using Eq. (15), to leading order we obtain a factorized expression (Eqs. (16) and (17)) in terms of the integrals I₊ and I₋. These integrals can be written explicitly by using Eqs. (12) and (13), and after a change of variables one obtains [13] an expression in terms of u ≡ Lk(ω_n/k − 1)^{3/2} (up to a numerical prefactor). The imaginary part of the integral vanishes, and the real part is given in terms of the modified Bessel function of order 1/3, K_{1/3}(u) (Eq. (21)). The function K_{1/3}(u) dies out exponentially at large u, and it can be approximated as a power law in the limit u ≪ 1, K_{1/3}(u) ≈ u^{−1/3}.
This limit corresponds to k ≫ m, and in this regime we can write u ≈ Lm³/16k². Then we obtain the power-law form of I₊ (Eq. (21)), valid when u ≲ 1, i.e., k ≳ k_c, with k_c ∼ m√(mL)/4 (Eq. (22)). For smaller values of k, I₊ is exponentially suppressed, so for practical purposes we are only interested in the above regime. Using Eqs. (13) and (17), the integral I₋ can be written similarly, which results in Eq. (24), where the sharpness ψ of a kink is defined in Eq. (25). Using Eqs. (21) and (24), we find the power spectrum from Eq. (7) (Eq. (26)). Integrating over the solid angle gives a factor Ω_k ∼ 2πθ_k (Eq. (27)), where θ_k, given by Eq. (14), is used. The total power can then be obtained as in Eq. (28), where we used the cutoff k_max of Eq. (29) as the upper limit of the integral over momenta [12,80], and the lower limit k_min ∼ k_c from Eq. (22). Note that for typical values of the modulus mass m and the string tension μ, the logarithmic factor is about 20. Then we can simply write the total power in the compact form of Eq. (30), where we define ᾱ ≡ √ψ α. The number of particles emitted from a kink with momenta in the interval (k, k+dk) can be found from Eq. (26) (Eq. (31)). In addition to moduli, cosmic string loops also produce gravitational radiation, with power P_g = Γ_g Gμ², where Γ_g ∼ 50 [2]. It is convenient to write the total power as P = ΓGμ² (Eq. (33)), with Γ₅₀ ≡ Γ/50 defined in Eq. (34). The dominant energy-loss mechanism for loops determines the lifetime of a loop, τ ∼ L/(ΓGμ) (Eq. (35)). Then, the minimum loop size that survives at cosmic time t is L_min ∼ ΓGμt (Eq. (36)). In the next section, we shall discuss the decay of the moduli produced from cosmic string kinks into neutrinos via hadronic cascades, and the propagation of these neutrinos in the universe.
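The small-argument power law quoted above is easy to verify numerically; the sketch below compares K_{1/3}(u) with u^{−1/3} (up to an O(1) constant, which the text drops) using SciPy.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function K_nu

u = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 5.0])
k13 = kv(1.0 / 3.0, u)

# For u << 1, K_{1/3}(u) ~ (Gamma(1/3)/2) * (2/u)^(1/3), i.e. ~ u^(-1/3);
# for u >> 1 it dies out exponentially, ~ sqrt(pi/(2u)) * exp(-u).
for ui, ki in zip(u, k13):
    print(f"u={ui:8.1e}  K_1/3={ki:10.4e}  K_1/3 * u^(1/3)={ki * ui**(1/3):8.4f}")
# The product K_{1/3}(u) * u^(1/3) approaches a constant (~1.69) as u -> 0,
# confirming the u^(-1/3) behavior used in the text.
```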
IV. PARTICLE DECAY AND PROPAGATION
For simplicity, throughout this paper we assume a matter-dominated flat universe model, which lets us carry out the calculations analytically. We assume a vanishing cosmological constant, Λ = 0, and a total density parameter Ω_m + Ω_r = 1 with matter and radiation components. We use the following values of the cosmological parameters: age of the universe t₀ = 4.4×10¹⁷ s, time of radiation-matter equality t_eq = 2.4×10¹² s, and 1 + z_eq = 3200 [62]. The scale factor in the radiation and matter eras is given by a_r ∝ t^{1/2} and a_m ∝ t^{2/3}, respectively. Using a/a₀ = 1/(1+z), with a₀ ≡ a(t₀), the cosmic time can be written in terms of redshift as t = t₀(1+z_eq)^{1/2}(1+z)⁻² in the radiation era, and t = t₀(1+z)^{−3/2} in the matter era.
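These redshift-time relations are used repeatedly below; a minimal helper implementing them, with the paper's values of t₀, t_eq and z_eq, is:

```python
T0 = 4.4e17      # age of the universe [s]
Z_EQ = 3200 - 1  # 1 + z_eq = 3200

def cosmic_time(z):
    """t(z) in seconds for the matter/radiation split used in the text:
    t = t0 (1+z)^(-3/2) in the matter era,
    t = t0 (1+z_eq)^(1/2) (1+z)^(-2) in the radiation era."""
    if z <= Z_EQ:                                     # matter era
        return T0 * (1 + z) ** -1.5
    return T0 * (1 + Z_EQ) ** 0.5 * (1 + z) ** -2.0   # radiation era

print(cosmic_time(0))      # ~4.4e17 s (today)
print(cosmic_time(200))    # epoch of the neutrino horizon, ~1.5e14 s
print(cosmic_time(Z_EQ))   # ~2.4e12 s, recovering t_eq as a consistency check
```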
A. Modulus decay
The decay channel for moduli with the largest branching ratio is the decay into gauge bosons, through an interaction of the form (37) [39], coupling the modulus field φ to a gauge field of field strength F_{μν}. For the gauge bosons in the standard model, the modulus lifetime is estimated accordingly (Eq. (38)). Since most of the moduli are emitted from kinks with momenta k ∼ m√(mL)/4 [because of the decreasing power law of Eq. (31)], their lifetime is boosted by a Lorentz factor γ ∼ √(mL)/4. For the fiducial values of the parameters, the Lorentz factor of a modulus emitted at redshift z that survives to the present epoch is given by Eq. (39), where we have used the fact that loops of size L_min, given by Eq. (36), yield the dominant contribution to the observable events; the factor of (1+z) in the denominator takes into account the redshifting of the energy of the moduli emitted at epoch z. Thus, the ratio of the lifetime of a modulus emitted at redshift z ≪ z_eq and decaying at redshift z_d to the cosmic time at epoch z_d follows from Eq. (40). Note that Γ ≳ 50 from Eq. (34), and z_d ≃ z. Hence, moduli decay in the same epoch, z_d ≃ z, as they are produced. Therefore, we assume that all the moduli decay before they reach the Earth.
The most efficient channel for neutrino production from modulus decays is the decay into gauge bosons. In particular, gluons decaying into hadrons produce neutrinos with the largest multiplicity [10,13]. The interaction of a modulus with a gluon field is of the form (37), and the hadronic cascade from these gluons produces numerous pions of either sign, which eventually decay into neutrinos and antineutrinos. For both, we expect a flavor composition in the ratio ν µ :ν e :ν τ = 2 : 1 : 0, from the pion decay chain.
The number of neutrinos per unit energy can be found by using the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) method. Monte Carlo simulations for the hadronic decay of a very massive particle show a power-law behavior in energy, E^{−n} with n = 1.9 [81]. For simplicity, we approximate the index as n ≈ 2. Then, the fragmentation function has the form [10,13] given in Eq. (41), where k/(1+z) and E are the modulus and neutrino energies at the present epoch, respectively. Here E_min < E < E_max [13], where E_min is given by Eq. (42) and E_max ∼ 0.1k. We take ε_GeV ≡ ε/(1 GeV) ∼ 1 [13]. Since the neutrino spectrum has the form E^{−2}, most of the neutrinos will have energy E ∼ E_min. This introduces a lower bound on the redshift, below which no neutrinos are produced with a given energy E ≲ E_min. For our estimates, we are interested in energies E ≳ 10¹¹ GeV, corresponding to the minimum redshift z_min in the matter era, given by Eq. (43), where E₁₁ ≡ E/(10¹¹ GeV). Since the maximum redshift from which neutrinos can propagate to us is set by the neutrino horizon z_ν ∼ 200 (see Sec. IV B), requiring z_min ≲ z_ν yields the constraint on the parameters given in Eq. (44).
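Because the spectrum falls as E⁻² between E_min and E_max, the neutrinos indeed pile up near E_min; a short sketch makes the point quantitatively (the window below is an illustrative, hypothetical choice, not the paper's fiducial one):

```python
def fraction_below(e_cut, e_min, e_max):
    """Fraction of neutrinos below e_cut for dN/dE ∝ E^-2 on [e_min, e_max].
    Integrating E^-2 gives N(<E) ∝ (1/e_min - 1/E)."""
    norm = 1.0 / e_min - 1.0 / e_max
    return (1.0 / e_min - 1.0 / e_cut) / norm

# Illustrative (hypothetical) window spanning two decades:
e_min, e_max = 1e11, 1e13  # GeV
for factor in (2, 10, 100):
    frac = fraction_below(factor * e_min, e_min, e_max)
    print(f"fraction below {factor}*E_min: {frac:.3f}")
# ~0.51 of all neutrinos lie below 2*E_min and ~0.91 below 10*E_min:
# the flux is dominated by energies E ~ E_min, as stated in the text.
```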
B. Neutrino propagation
The neutrino flux at Earth is affected by a number of propagation effects: the redshift of energy, flavor oscillation, quantum decoherence and absorption. The redshift of energy will be included as we carry out the flux calculation in the next sections; the other effects, instead, warrant a separate discussion, which is the subject of this section.
The oscillations of very high energy neutrinos have been discussed in detail (see, e.g., [82]). Oscillations in vacuum are a good approximation, as the refraction potentials due to the intergalactic gas and to the cosmological relic neutrino background (which is assumed to be CP-symmetric here) are negligible [83]. For the large propagation distances we consider, the flavor conversion probabilities are energy independent, as the energydependent oscillatory terms average out [84]. For our predicted initial flavor composition, ν µ :ν e :ν τ = 2 : 1 : 0 (Sec. IV A), the effect of oscillations is to equilibrate the flavors [82], therefore the composition at Earth should be ν µ :ν e :ν τ = 1 : 1 : 1, for both neutrinos and antineutrinos.
A neutrino oscillates as long as its wavepacket remains a coherent superposition of mass eigenstates. Depending on the size of the produced wavepacket, decoherence can occur as the neutrino propagates, due to the different propagation velocities of the mass states. Dedicated analyses [84,85] have shown that neutrinos of the energies of interest here remain coherent over cosmological distances, therefore we do not consider decoherence effects.
Absorption effects are largely dominated by scattering on the relic cosmological background [86,87], with negligible contribution from other background species. In first approximation, absorption can be modeled as a simple disappearance of the neutrino flux; secondary neutrinos generated by scattering are degraded in energy and therefore they are negligible compared to primary flux.
The survival probability for the primary neutrinos, of observed energy E and production redshift z, is defined in Eq. (45) [86][87][88][89], with the optical depth for the relic neutrino background given by Eqs. (46) and (47) in the matter era. Here n_ν(z) = 56(1+z)³ cm⁻³ is the number density of relic neutrinos in each of the six species (neutrinos and antineutrinos of each flavor), and σ_νν(E,z) is the neutrino-neutrino cross section, evaluated at the production energy E(1+z) and summed over all the neutrino species in the background. For the energies of interest here, and at leading order, this cross section is the same for neutrinos and antineutrinos, and is practically flavor-independent, being given by the sum over background species of σ(ν_α + ν_β → any) + σ(ν_α + ν̄_β → any) (Eq. (48)).
In the limit of massless neutrinos, m_ν ≈ 0, the Z⁰-resonance effects can be ignored and the maximum cross section, given by Eq. (49), is attained at E ≳ 10¹¹ GeV [13]; here N ∼ 10-15, G_F = 1.17×10⁻⁵ GeV⁻², and m_W ≃ 80.39 GeV. Using Eq. (49) in Eq. (46), and requiring τ_ν = 1 for absorption, the neutrino horizon (the maximum redshift from which neutrinos with energy E can propagate to us) is given by Eq. (50) [86,87] for energies E ≳ 10¹¹ GeV. In this regime, P(E,z) can be approximated as a step function (Eq. (51)), which becomes handy when estimating the neutrino flux analytically.
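As a cross-check of z_ν ∼ 200, one can integrate the optical depth in the matter era using the relations above; for a constant cross section this gives τ(z) = 56 σ c t₀ [(1+z)^{3/2} − 1]. The value of σ in the sketch below is an assumption on our part, chosen at the few ×10⁻³⁴ cm² level so that τ = 1 is reached near z ∼ 200, consistent with the horizon quoted in the text:

```python
C_CM_S = 2.998e10      # speed of light [cm/s]
T0 = 4.4e17            # age of the universe [s]
N0_PER_SPECIES = 56.0  # relic neutrino density today [cm^-3] per species

def optical_depth(z, sigma_cm2):
    """Matter-era optical depth for a constant summed cross section:
    tau = 56 * sigma * c * t0 * [(1+z)^(3/2) - 1]."""
    return N0_PER_SPECIES * sigma_cm2 * C_CM_S * T0 * ((1 + z) ** 1.5 - 1)

SIGMA = 4.8e-34  # cm^2, assumed illustrative value (not from the paper)

# Scan for the horizon, tau(z_nu) = 1:
z = 1.0
while optical_depth(z, SIGMA) < 1.0:
    z += 1.0
print(f"z_nu ~ {z:.0f}")  # ~200 for this choice of sigma
```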
If the Z⁰ resonance is realized in the annihilation channel ν_α + ν̄_α → any at a redshift z_res along the neutrino path, a pronounced dip in the neutrino spectrum is expected at the resonance energy, due to the strong enhancement of the cross section [84,87,88,89,90,91], where s ≃ 2m_ν E(1+z_res) if the background neutrinos are not relativistic. (Figure 1 caption: the interval of Eq. (54) is only indicative, due to thermal effects influencing the resonance. The horizontal shaded area refers to the interval of neutrino masses where the neutrino mass spectrum is strongly degenerate. We also mark the value m_ν ≃ 0.05 eV, which is the highest mass expected for a non-degenerate (hierarchical) mass spectrum.) The effect of the resonance is especially transparent in this case [87,88]; we discuss it here in its essentials. Considering their momentum, p_ν(z) ≃ 6.104×10⁻⁴(1+z) eV, the cosmological neutrinos are non-relativistic today for masses exceeding ∼10⁻³ eV, and throughout the interval of redshift of interest, z ≲ z_ν, if m_j ≳ p_ν(z_ν) ≃ 0.1 eV. From the data of oscillation experiments (see, e.g., [92]) it is known that, above this value, the neutrino mass spectrum becomes degenerate: m₁ ≃ m₂ ≃ m₃. Therefore, we can reason in terms of a single neutrino mass value, m_ν, and take m_ν = 0.3 eV as reference. The degenerate case is optimal for the observability of the resonance effect, because the dip in the spectrum occurs at the same energy for all neutrinos and has a sharp shape. Furthermore, it is located in the region of the spectrum, ∼10¹¹-10¹³ GeV, where experiments have good sensitivity [84] (see Fig. 1).
For a neutrino of energy E at Earth, the Z⁰ resonance is realized at redshift z_res when s = 2m_ν E(1+z_res) = M_Z², with M_Z ≃ 91.19 GeV the mass of the Z⁰ boson. It follows that the flux of neutrinos of observed energy E = (6.9×10¹⁰ to 1.4×10¹³) GeV × (0.3 eV/m_ν) is affected by the resonance between z = z_ν and the present epoch (see Fig. 1), and therefore should be strongly suppressed compared to the flux at energies outside this interval, where the smaller, non-resonant absorption cross section is at play. Following the detailed discussion in Ref. [89], we calculated P(z,E) and used it to obtain the neutrino flux expected at Earth from all sources at all redshifts. This flux is calculated by convolving the flux per unit production redshift with the probability P(z,E); it exhibits the characteristic suppression dip in the interval given in Eq. (54), as expected (see Sec. V C).
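The resonance condition 2m_ν E(1+z_res) = M_Z² fixes the affected energy window; the sketch below reproduces the quoted interval for the reference mass m_ν = 0.3 eV.

```python
M_Z_GEV = 91.19          # Z boson mass [GeV]
Z_NU = 200               # neutrino horizon

def resonant_window_gev(m_nu_ev):
    """Observed-energy interval affected by the Z0 resonance:
    2 * m_nu * E * (1 + z_res) = M_Z^2, for 0 <= z_res <= z_nu."""
    m_nu_gev = m_nu_ev * 1e-9
    e_res_today = M_Z_GEV**2 / (2.0 * m_nu_gev)   # resonance at z_res = 0
    return e_res_today / (1 + Z_NU), e_res_today  # (z_res = z_nu, z_res = 0)

lo, hi = resonant_window_gev(0.3)
print(f"{lo:.1e} - {hi:.1e} GeV")  # ~6.9e10 - 1.4e13 GeV, matching Eq. (54)
```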
The absorption pattern is more complicated if the neutrino mass spectrum is not degenerate, i.e., m₁ ≲ m₂ ≪ m₃ ≃ 0.05 eV (or m₃ ≪ m₁ ≲ m₂ ≃ 0.05 eV). For this configuration the probability P(z,E) has three distinct dips of resonant suppression at separate resonance energies [84,89], corresponding to the three masses. These dips are broadened, in energy, by the integration over the production redshift and, most importantly, by thermal effects, which are important in this range of neutrino masses [84,89,91]. We postpone a discussion of these effects to a forthcoming publication [93].
V. NEUTRINO FLUX AND DETECTION
As kinks move along a loop of cosmic string, they emit particles in a fan-like pattern, and scan a ribbon of solid angle Ω ∼ 2πθ_k [see Eq. (14)]. One can thus visualize the radiation from a kink as the beam of a lighthouse sweeping past. An observer who happens to lie within the beam direction sees the particles as a burst event, provided that the flux is detectable. In this section, we make order-of-magnitude estimates of the burst event rate and the neutrino flux, and compare them with the sensitivities of existing and future neutrino experiments.
A. Loop distribution
The distribution of cosmic string loops has been studied both analytically [94][95][96][97][98] and in simulations [99][100][101][102][103][104][105][106][107]. Although there seems to be a consensus on the distribution of subhorizon-size large loops, there is still no good understanding of the small-loop distribution. In what follows, we use the results from the latest simulation with the largest dynamical range to date for the evolution of the cosmic string network [107], where it has been confirmed that large loops form with size βt, where β ∼ 0.1 and t is the cosmic time at which the loop is chopped off the network of long cosmic strings.
The density of long strings is ρ ∼ ζμ/t², with ζ ∼ 16. Using this framework, we can estimate the number density of loops of length (L, L+dL) that are formed in the radiation era and still survive in the matter era as n(L,t)dL ∼ p⁻¹ ζ (βt_eq)^{1/2} t⁻² L^{−5/2} dL (Eq. (55)), where ΓGμt ≲ L ≲ βt_eq and p is the reconnection probability. There are also loops formed in the matter era; however, we have verified that their number density is negligible compared to that of the loops surviving from the radiation era, given by Eq. (55). The dependence of the loop density on the reconnection probability has not been resolved yet; however, the loop density is expected to increase for decreasing reconnection probability, as discussed in Refs. [29,108]. For ordinary cosmic strings, p = 1, while 10⁻³ ≲ p ≲ 1 has been estimated for cosmic F- and D-strings [109]. Note that the most numerous loops have size of order L_min ∼ ΓGμt. As we shall see in Sec. V, those will give the dominant contribution to the observable effects, such as the diffuse neutrino flux.
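The loop distribution (55) is steep enough that integrals over L are dominated by the smallest surviving loops, L ∼ L_min ∼ ΓGμt. A minimal numerical illustration follows; the values Gμ = 10⁻¹⁰, Γ = 50, p = 1, β = 0.1 and ζ = 16 are assumptions for the sketch (with c = 1, so lengths are measured in seconds):

```python
ZETA, BETA, P_RECONNECT = 16.0, 0.1, 1.0
T0, T_EQ = 4.4e17, 2.4e12          # age of the universe, t_eq [s]
G_MU, GAMMA = 1e-10, 50.0          # assumed illustrative values

def loop_density(L, t):
    """n(L, t) ~ p^-1 * zeta * (beta*t_eq)^(1/2) * t^-2 * L^(-5/2),
    valid for Gamma*G_mu*t <~ L <~ beta*t_eq (lengths in seconds, c = 1)."""
    return (ZETA / P_RECONNECT) * (BETA * T_EQ) ** 0.5 * t**-2 * L**-2.5

L_min, L_max = GAMMA * G_MU * T0, BETA * T_EQ
# The L^(-5/2) spectrum integrates to (2/3)(L_min^(-3/2) - L_max^(-3/2)):
prefactor = (ZETA / P_RECONNECT) * (BETA * T_EQ) ** 0.5 * T0**-2
n_total = prefactor * (2.0 / 3.0) * (L_min**-1.5 - L_max**-1.5)
n_lower_only = prefactor * (2.0 / 3.0) * L_min**-1.5

print(f"L_min = {L_min:.1e} s, n(L_min) = {loop_density(L_min, T0):.2e}")
print(f"total: {n_total:.3e}, lower-limit term alone: {n_lower_only:.3e}")
# The two agree to ~0.1%: the loop population is dominated by L ~ L_min.
```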
When a loop of size βt_eq is formed, it will decay within its lifetime, τ ∼ L/(ΓGμ) [see Eq. (35)]. This loop can survive until an epoch z < z_eq provided that the corresponding condition on Gμ and Γ is satisfied, where μ₋₈ ≡ Gμ/10⁻⁸, Γ₅₀ is given by Eq. (34), and we used t = t₀(1+z)^{−3/2}. Hence, we can conclude that, even for the maximum Gμ allowed by the current bounds, loops can survive all the way to the very recent epochs from which we can get observable effects, unless Γ is too large.
B. Burst rate
The number of kink bursts per unit time can be estimated as in Eq. (58) [10,13], where n(L,t)dL/(L/2) is the frequency of kink events per physical volume per unit loop length, L/2 is the oscillation period of a loop of length L, dΩ/4π ∼ θ_k/2 is the probability that an observer lies within the solid angle of kink radiation, and dV(z) is the physical volume in the interval (z, z+dz) in the matter era, given by Eq. (59). To find the total burst rate, we integrate Eq. (58) over L and z. The integral over L is dominated by its lower limit L_min, given by Eq. (36), and the integral over redshift is dominated by its upper limit z_ν ∼ 200. Numerically, we obtain the total event rate Ṅ (Eq. (60)); recall that Γ₅₀ is given by Eq. (34). Since experiments run for a few years, the event rate should be at least ∼1 per year to get observable events. Requiring Ṅ ≳ 1 yr⁻¹ yields the constraint on the parameters given in Eq. (61), where Ṅ_yr ≡ Ṅ/(1 yr⁻¹).
C. Neutrino flux
The diffuse neutrino flux is obtained using the flux from a single kink on a loop, and summing over all the loops in a volume constrained by the neutrino horizon z_ν ∼ 200. It can be estimated from Eq. (62), where r(z), given by Eq. (63), is the physical distance to the source at redshift z in the matter era, dṄ is the kink event rate defined by Eq. (58), and Ω_k is the solid angle into which moduli are emitted, given by Eq. (27). Here ξ(E,z) is the fragmentation function given by Eq. (41), dN(k) is the number of moduli emitted from a kink with momenta in the interval (k, k+dk), given by Eq. (31), and P(E,z) is the survival probability of neutrinos defined in Eq. (45). Putting everything together, we obtain Eq. (64). Note that the integral over k gives a logarithmic factor ln(k_max/k_min) ∼ ln[(μ^{1/2}/m)^{3/2}] ∼ 20 from Eqs. (22) and (29), and the integral over L is dominated by its lower bound L_min, given by Eq. (36). The integral over z can be done numerically. However, it is useful to consider the limiting case m_ν ≈ 0, where we can ignore the Z-resonance effects and carry out the integral over redshift analytically. Using the approximate form of P(E,z) given by Eq. (51) in Eq. (64), the neutrino flux can be calculated as in Eq. (65). Using z_min from Eq. (43), we obtain the predicted diffuse neutrino flux in the m_ν ≈ 0 limit, Eq. (66). Taking into account the neutrino mass in the survival probability P(E,z), and fixing m_ν = 0.3 eV and the reconnection probability p = 1, we evaluate Eq. (64) numerically. In Fig. 2 we show the predicted flux for a few different values of the parameters Gμ and ᾱ, together with the detectability limits of the current and future neutrino experiments.
D. Diffuse gamma ray background constraint
As moduli decay via hadronic cascades, the pions from this process also decay into photons and electrons. These high-energy photons and electrons interact with the CMB photons and the extragalactic background light, producing an electromagnetic cascade whose energy density is constrained by measurements of the diffuse gamma-ray background [110]. The strongest upper bound on the cascade energy density comes from the highest-energy end of the observed spectrum. The most recent data from Fermi-LAT observations reach E ∼ 100 GeV [111]. Cascade photons with energy E ≳ E_abs will be strongly absorbed due to interaction with the CMB photons, where E_abs due to pair production can be estimated as E_abs ∼ 5.6×10⁵/(1+z) GeV (Eq. (67)), with ε_CMB = 2.35×10⁻⁴ eV and m_e = 0.511 MeV. This implies that a cascade photon of energy above E_cas ∼ 100(1+z) GeV is efficiently absorbed at redshifts z ≳ z_cas ∼ 70 (Eq. (68)). The electromagnetic energy density of the cascade from cosmic string kinks is given by Eq. (69) [10,13], where f_π is the fraction of energy transferred to pions in the hadronic cascade initiated by a modulus decay, 1/2 is the fraction of energy transferred to electron-positron pairs and photons from the pion decays, and dP(k) is the power emitted from a kink, given by Eq. (26). Integrating over L, k and z [similarly to the diffuse flux in Eq. (64)], with the integral over z extending up to z_cas ∼ 70, we obtain Eq. (70), where Γ₅₀ is given by Eq. (34). The maximum value of ω_cas allowed by the Fermi-LAT diffuse gamma-ray data is ω_cas^max ∼ 5.8×10⁻⁷ eV/cm³ [112]. Therefore, ω_cas ≲ ω_cas^max is satisfied for the parameter range of Eq. (71). Note that ω_cas ≳ ω_cas^max is not strictly ruled out. This bound only constrains the observed highest-energy diffuse gamma-ray photons, at E_γ ∼ 100 GeV, that originate at very large redshifts z ≳ 70. The constraints on the energy density of cascade photons produced at redshifts larger than z_cas ∼ 70 are much weaker, since such photons are more efficiently absorbed. Besides, the radiation from kinks is not homogeneous, but confined to a narrow ribbon of width 2πθ_k ≪ 1. Unless the cosmic magnetic fields are strong enough, the beamed electromagnetic radiation from cosmic string kinks might not diffuse efficiently; hence, the constraint might be relaxed significantly. Nevertheless, the examples given in Fig. 2 respect the cascade upper bound of Eq. (71).
E. Neutrino bursts from individual kinks
Before closing, we comment briefly on the possibility of identifying the neutrino emission of individual kinks, i.e., bursts, rather than the diffuse flux. The signature of a burst would be two or more time-coincident events in a detector. Time coincidence at arrival is expected for neutrinos from a single burst, because the emission occurs on a very short, practically vanishing, time scale [32,36] for the fiducial values of the parameters, where k_min and γ are given by Eqs. (22) and (39), respectively. The time lag due to the spread in neutrino velocities is negligible for the Lorentz factors of interest here, γ ≳ 10¹¹ [see Eq. (42)].
The fluence of neutrinos with energy above E, from a kink on a cosmic string loop, can be estimated as in Eq. (73) [10,13]. Using Eqs. (27), (31), (41), (45) and (63), and integrating over momenta, yields Eq. (74), where we take the loop length to be L_min, given by Eq. (36), since these loops are the most numerous, as discussed in Sec. V C; hence it is most likely to get a burst from such loops.
We can now estimate how many neutrinos might be detected at a detector of effective area A_det, where the nucleon mass is m_N ∼ 1 GeV, M is the target mass, and σ_νN ∼ 10⁻³¹ cm² is the neutrino-nucleon cross section. The reference cross section is from recent calculations at E ∼ 10¹² GeV (see, e.g., [113][114][115]) and is a reasonable approximation for higher energies as well, due to the slow rise of σ_νN at these energies (less than ∝ E^{1/2}). A typical value of the target mass for neutrino detection is M ∼ 10²¹ g, which applies to JEM-EUSO in its nadir mode at energy E ∼ 10¹¹ GeV [24]. We model a best-case scenario by choosing the closest distance to the source, z ∼ z_min (neutrinos can only come from z_min < z < z_ν), and the regime ᾱ² ≳ 50 (where Γ ∼ ᾱ²), for which the number of emitted neutrinos is larger.
The number of events in a detector due to a burst can be estimated as in Eq. (76), where A₁₆ ≡ A_det/(10¹⁶ cm²). The fact that N_ν ≳ 1 means that, for our parameters of reference, the identification of a burst by time coincidence of multiple events is possible in principle, although in practice instrumental backgrounds might be an obstacle.
Requiring N_ν ≳ 1 implies a minimum value of Gμ (Eq. (77)). This has to be combined with the maximum value imposed by the condition z_min ≲ z_ν ∼ 200 [see Eq. (44)]. Note also that a detector's capability to see bursts depends on its energy sensitivity: for most of the parameter space, the neutrino emission is concentrated above the JEM-EUSO peak sensitivity, E ∼ 10¹¹ GeV, and therefore detection at JEM-EUSO might be hard. However, LOFAR and SKA are expected to surpass the JEM-EUSO sensitivity at higher energies (see Fig. 2), and therefore are more promising burst detectors. The detection of a burst would be an important signature of cosmic string kinks or cusps (see Refs. [10,13] for bursts from cusps), complementary to a possible diffuse flux observation. It would also help break the degeneracy between the two parameters, ᾱ and Gμ, since the detected number of neutrinos from a burst, Eq. (76), and the diffuse flux, Eq. (66), have different dependences on the parameters. Besides, even if only single neutrinos are detected, the rate of events, Eq. (60), can be used to help distinguish cosmic strings as the source, and to break the degeneracy of the parameters.
VI. SUMMARY AND DISCUSSION
Cosmic string loops form as a result of the reconnection of long strings and the self-intersection of large loops. Kinks arise naturally as a result of these processes. We studied how kinks can radiate moduli, particles that arise in supersymmetric models of particle physics and that can have various masses and couplings to matter. The decay of moduli into pions via hadronic cascades produces a flux of neutrinos, which can be observable depending on the parameters.
Specifically, we considered the string tension Gμ, the modulus coupling constant α, and the modulus mass m as free parameters, and showed that neutrinos with energies E ≳ 10¹¹ GeV can easily be produced by cosmic string loops via this mechanism, with the flux given in Eq. (78), in units of GeV cm⁻² s⁻¹ sr⁻¹.
The hadronic cascade stops producing pions at a modulus rest-frame energy of order ∼1 GeV. In the rest frame of the loop, this energy is boosted by the Lorentz factor γ, so that the minimum observed energy of the neutrinos is given by Eq. (79). The neutrino flux is shown in Fig. 2 for representative sets of parameters; the termination of the flux at E_min appears clearly. The figure also gives the flux sensitivity of various experiments, showing that the predicted flux is within reach of the next generation of neutrino detectors, such as JEM-EUSO, LOFAR and SKA. A distinctive feature of radiation from cosmic string kinks is that particles are emitted in a fan-like pattern, confined to a narrow ribbon; hence bursts from individual kinks can possibly be identified by timing and directional coincidence. In Eq. (76), we estimated the number of neutrinos emitted by a kink, and the corresponding number of events in a detector of a given effective area. We found that, for the fiducial values of the parameters used in our analysis, multiple neutrinos can be seen in the field of view of the detector.
If ultra high energy neutrinos are observed at future experiments, what would it be possible to learn? Top-down mechanisms would offer natural explanations, and, among those, cosmic strings would be a favored candidate. Even in the framework of cosmic strings, however, data analyses will necessarily be model-dependent, and various models would have to be considered. Our scenario involving moduli is one possibility among many, and other intermediate states leading to neutrino production are possible, e.g., modulus emission from string cusps [13] and heavy scalar particle emission from cusps of superconducting strings [10]. Another possible generation mechanism of extremely high energy neutrinos could be the emission of KK modes from cusps and kinks of cosmic F- and D-strings. The emission of KK modes of gravitons from cusps was studied in Refs. [116,117], and various cosmological constraints have been put on the cosmic superstring tension. Depending on the parameters, observable neutrino fluxes might be produced by this mechanism as well.
A discrimination between different models will require the combination of complementary data, probably the detection of gravitational wave/electromagnetic counterparts of neutrino signals [118,119]. The identification of point-like sources of extremely energetic neutrinos (bursts) would favor cosmic string kinks or cusps as sources, a hypothesis that would be substantiated further by the observation of accompanying gravitational wave and/or gamma ray bursts. To distinguish between kinks and cusps could be possible since the event rate is larger for kinks for the given values of the parameters.
In addition to a possible discovery of topological defects, detecting a flux of ultra high energy neutrinos might reveal new pieces of the still incomplete puzzle of neutrino physics. Most interestingly, if the data show a Z 0 resonance dip, we might gather information on the neutrino mass and have another, perhaps more direct, evidence of the existence of the cosmological relic neutrinos. The information on the neutrino mass might be especially important if at least one neutrino is light enough to evade a direct mass measurement in the laboratory.
It is important to consider, however, that the extraction of any information from data would be complicated by many theoretical uncertainties. Let us comment on the uncertainties and simplifying assumptions of our calculation. First of all, we worked in a flat matter-dominated universe, and ignored the recent accelerated-expansion period of the universe, whose effect can be at most about a factor of a few in our final estimates. We also approximated the neutrino fragmentation function for the moduli decays as dN/dE ∝ E⁻ⁿ, and used n = 2, whereas the numerical calculations yield n ≈ 1.9 [81]. In our estimates we take into account the reconnection probability p. For cosmic strings of superstring theory, namely F- and D-strings [109], p ≪ 1, whereas for ordinary field theory strings p = 1. The flux, the event rate and the chance of getting neutrino bursts are expected to be enhanced for cosmic superstrings with p ≪ 1, compared to ordinary cosmic strings. We ignored the back reaction of modulus emission from kinks on the evolution of kinks. Since the total power from a kink is only logarithmically divergent [see Eq. (28)], the effect of radiation is expected to smooth out the sharpness of a kink slowly. Finally, our treatment of the neutrino absorption due to resonant scattering on the neutrino background is limited to relatively large masses, m_ν ≳ 0.1 eV, for which thermal effects on the background are negligible. The generalization to include these effects is forthcoming [93].
Hyperbaric Oxygen Alleviates Secondary Brain Injury After Trauma Through Inhibition of TLR4/NF-κB Signaling Pathway
Background The aim of this study was to investigate the efficacy of hyperbaric oxygen in secondary brain injury after trauma and its mechanism in a rat model. Material/Methods A rat model of TBI was constructed using the modified Feeney’s free-fall method, and 60 SD rats were randomly divided into three groups – the sham group, the untreated traumatic brain injury (TBI) group, and the hyperbaric oxygen-treated TBI group. The neurological function of the rats was evaluated 12 and 24 hours after TBI modeling; the expression levels of TLR4, IκB, p65, and cleaved caspase-3 in the peri-trauma cortex were determined by Western blot; levels of TNF-α, IL-6, and IL-1β were determined by ELISA; and apoptosis of the neurons was evaluated by TUNEL assay 24 hours after TBI modeling. Results Hyperbaric oxygen therapy significantly inhibited the activation of the TLR4/NF-κB signaling pathway, reduced the expression of cleaved caspase-3, TNF-α, IL-6 and IL-1β (P<0.05), reduced apoptosis of the neurons and improved the neurological function of the rats (P<0.05). Conclusions Hyperbaric oxygen therapy protects the neurons after traumatic injury, possibly through inhibition of the TLR4/NF-κB signaling pathway.
Background
With the rapid development of the economy and society, the incidence of TBI in China is rising year by year, and its mortality and morbidity rates remain high, making it a great burden for patients' families, both mentally and economically [1]. It has been shown that TBI can be divided into two phases: the initial injury caused by mechanical force, which is inevitable, and the secondary injury within hours or days after the initial injury, caused by the inflammatory response, oxidative stress, calcium overload, and a series of other pathological processes; the secondary injury is the major target of current interventions [2]. Previous studies have demonstrated the effectiveness of hyperbaric oxygen therapy in treating secondary brain damage after trauma [3], but the mechanism has not been fully clarified. In this study, we investigated the efficacy of hyperbaric oxygen for secondary brain injury after trauma, with a focus on the TLR4/NF-κB signaling pathway, in an effort to clarify the mechanism of its protective effect and to provide guidance for the safer and more efficient clinical use of hyperbaric oxygen.
Grouping of experimental animals
Sixty healthy adult male SD rats were randomly divided into 3 groups: the sham group, the untreated TBI group, and the hyperbaric oxygen-treated TBI group. A rat model of TBI was constructed using the modified Feeney's free-fall method. The rats were anesthetized using chloral hydrate at 4 mg/kg and fixed on a bracket. After skin preparation, a 5 mm opening was made with an orthopedic drill at 3 mm to the right of the coronal suture and 3 mm behind the sagittal suture, keeping the dura intact. Then, a 40 g object was dropped from a height of 15 cm and crashed vertically onto the exposed dura to make a 3 mm deep, 4 mm diameter lesion. The sham group was only drilled, not injured by the falling object. All experimental programs and operating procedures in this study were approved by the experimental animal ethics committee. All rats had free access to food and water.
Hyperbaric oxygen therapy
Hyperbaric oxygen therapy was performed 2 h after TBI, as previously reported [4]. The rats were placed into the animal chamber, which was purged with pure oxygen for 10 min to ensure that the oxygen fraction in the chamber was >95%. The pressure was then steadily increased to 0.12 MPa and maintained for 60 min. Next, the pressure was steadily decreased to normal pressure over 20 min. Hyperbaric oxygen therapy was performed twice with a 10 h interval. The rats' behavior in the high-pressure chamber was closely monitored. The sham control group and untreated TBI group were also placed in the same chambers and were subjected to the same experimental procedures, only without the hyperbaric oxygen treatment.
Western blot analyses
At 24 h after injury, the rats were anesthetized, and 100 ml of normal saline was infused through the cardiac apex. The tissue around the trauma was resected and stored at -80°C for later use. Nuclear and cytoplasmic proteins were extracted using a kit as instructed by the manufacturer. Protein concentration was determined by Bradford assay, and 1/4 volume of 5× loading buffer was added, followed by a boiling water bath for 20 min for denaturation. A sample containing 35 μg of protein was loaded, separated by electrophoresis, transferred to a membrane, blocked, and incubated with antibodies to TLR4, IκB, p65, cleaved caspase-3, GAPDH, or H3 (1:200, purchased from Santa Cruz, USA) at 4°C on a shaker overnight, then washed, incubated with the corresponding HRP-conjugated secondary antibody, and washed again. Finally, ECL solution was added to reveal the bands, whose gray values were analyzed with ImageJ software.
ELISA to determine TNF-α, IL-6 and IL-1β concentrations
ELISA assay kits for TNF-α, IL-6 and IL-1β were all purchased from Unitech Biotechnology, and experiments were conducted according to the manufacturer's instructions.
TUNEL assay to determine cell apoptosis
Paraffin-embedded brain tissue was cut into 4 μm sections and analyzed by the TUNEL assay to determine apoptosis of neurons. The TUNEL assay kit was purchased from Roche, USA, and the experiment was conducted strictly according to the manufacturer's instructions. Peri-trauma tissue was examined under a light microscope, and 10 random fields at 400× magnification were evaluated for the percentage of TUNEL-positive cells.
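The per-animal apoptotic index from such counts is simply the mean percentage of TUNEL-positive cells across the 10 fields; a minimal sketch with hypothetical counts:

```python
import statistics

def tunel_index(field_counts):
    """field_counts: list of (positive_cells, total_cells) per 400x field."""
    percentages = [100.0 * pos / total for pos, total in field_counts]
    return statistics.mean(percentages), statistics.stdev(percentages)

# Hypothetical counts for one TBI-group animal, 10 random fields
fields = [(18, 120), (22, 135), (15, 110), (20, 128), (17, 119),
          (25, 140), (19, 125), (16, 112), (21, 130), (23, 138)]
mean_pct, sd_pct = tunel_index(fields)
print(f"TUNEL-positive: {mean_pct:.1f} ± {sd_pct:.1f} %")
```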
Neurological function evaluation of the rats after TBI
The neurological function of the rats was evaluated using the Neurological Severity Score (NSS) [5], which assesses motor function, sensory function, balance, physiological reflexes, and abnormal movements. For each of the 18 test items, the inability to complete a task or the lack of a corresponding response was scored as 1 point. Total scores of 13-18 points are regarded as severe injury, 7-12 points as moderate injury, and 1-6 points as mild injury.
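Because each of the 18 NSS items contributes 0 or 1 point, severity grading reduces to a threshold lookup. The following sketch encodes the scoring and grading logic described above (the item definitions themselves follow reference [5]):

```python
def nss_total(item_failures):
    """item_failures: 18 booleans, True where the task was failed (1 point each)."""
    assert len(item_failures) == 18
    return sum(item_failures)

def nss_severity(score):
    if 13 <= score <= 18:
        return "severe"
    if 7 <= score <= 12:
        return "moderate"
    if 1 <= score <= 6:
        return "mild"
    return "no deficit"  # score of 0

print(nss_severity(nss_total([True] * 9 + [False] * 9)))  # -> "moderate"
```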
Statistical analysis
SPSS 15.0 software was used for statistical analysis, and data are presented as means ± standard deviation (x̄±s). Neurological function scores were compared by the Kruskal-Wallis test, and comparisons among multiple groups were performed using one-way analysis of variance (ANOVA).
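The same analysis is straightforward to reproduce with open tooling; the study used SPSS 15.0, but an equivalent sketch in Python with SciPy, using hypothetical group data, would be:

```python
from scipy import stats

# Hypothetical measurements for the three groups (e.g., cytokine levels)
sham = [10.1, 9.8, 10.5, 9.9, 10.2]
tbi = [18.4, 19.1, 17.8, 18.9, 19.5]
tbi_hbo = [13.2, 12.8, 13.9, 13.1, 12.5]

# One-way ANOVA for continuous cytokine-type data
f_stat, p_anova = stats.f_oneway(sham, tbi, tbi_hbo)

# Kruskal-Wallis for ordinal neurological scores
nss_sham, nss_tbi, nss_hbo = [0, 1, 0, 1, 0], [14, 13, 15, 12, 14], [9, 8, 10, 9, 7]
h_stat, p_kw = stats.kruskal(nss_sham, nss_tbi, nss_hbo)

print(f"ANOVA p={p_anova:.4f}, Kruskal-Wallis p={p_kw:.4f}")
```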
Results
The effect of hyperbaric oxygen on the expression of TNF-α, IL-6 and IL-1β
Compared to the sham group, the levels of TNF-α, IL-6 and IL-1β in the untreated TBI group were significantly increased (P<0.05), and hyperbaric oxygen therapy significantly reduced the levels of these inflammatory cytokines (P<0.05).
The effect of hyperbaric oxygen on apoptosis of rat neurons after TBI
Apoptotic neurons with condensed nuclei were stained brown in the TUNEL assay (Figure 3), while normal cells were large, round, and unstained. In the sham group, few apoptotic neurons were observed; in the untreated TBI group, significantly more apoptotic neurons were observed (Figure 3, Table 2, P<0.05); and after hyperbaric oxygen therapy, the number of peri-trauma apoptotic neurons was significantly reduced (P<0.05).
Discussion
Numerous studies have shown that hyperbaric oxygen can increase tissue oxygen concentration, extend the oxygen diffusion distance, and correct acidosis, exerting a neuroprotective effect through a variety of mechanisms. This study showed that hyperbaric oxygen can significantly reduce the rate of neuronal apoptosis after traumatic brain injury and significantly improve neurological function in rats, which is consistent with previous studies. Recent studies have also found that hyperbaric oxygen can suppress the inflammatory response after traumatic brain injury to exert neuroprotective effects. This study focused on the inflammatory response mediated by the TLR4/NF-κB signaling pathway. TLR4 is an important member of the TLR family, a group of type I transmembrane molecules consisting of an extracellular segment, a transmembrane segment, and an intracellular TIR segment. It plays an important role in hypoxic-ischemic brain injury, cerebral hemorrhage, spinal cord injury, and other acute injuries of the central nervous system [6-8].
High expression of TLR4 has been observed in the tissue around brain trauma in both animal models and clinical studies [9], while TLR4-knockout mice showed significantly alleviated secondary brain injury after brain trauma [10,11], and a specific TLR4 inhibitor also significantly alleviated brain damage, suggesting that therapies targeting TLR4 have great potential to improve the prognosis of TBI. In this study, TLR4 protein expression significantly increased after TBI, and hyperbaric oxygen therapy significantly inhibited TLR4 expression, reduced apoptosis of neurons after TBI, and improved the neurological function of the rats, suggesting that down-regulation of TLR4 is an important part of the neuroprotective effect of hyperbaric oxygen therapy. When activated by ligands released during TBI, TLR4 interacts with the downstream adaptor myeloid differentiation factor 88 and activates IKK, which phosphorylates IκB and leads to its degradation. NF-κB p65 protein is then released from IκB and translocates from the cytoplasm to the nucleus, where it interacts with the promoters of its target genes, regulating the expression of a series of inflammatory cytokines. NF-κB is the master regulator of a series of inflammatory cytokines and is significantly elevated in the tissue surrounding brain trauma, and specific NF-κB inhibitors significantly reduced the expression of inflammatory cytokines after TBI and relieved secondary brain injury [12]. In addition, recent studies have shown that hyperbaric oxygen therapy significantly inhibits microglial activation and inflammatory cytokine production in the central nervous system [13]. In this study, we found that p65 expression was significantly increased in peri-trauma tissue after TBI, which is consistent with previous studies, and that hyperbaric oxygen therapy significantly inhibited p65 expression in the nucleus. Taken together, we speculate that inhibition of NF-κB signaling may be the basic mechanism of the neuroprotective effect of hyperbaric oxygen therapy.
Conclusions
Hyperbaric oxygen therapy significantly reduces apoptosis in peri-trauma tissue after TBI, inhibits the expression of inflammatory cytokines, and significantly improves the neurological function of rats after TBI. These effects are closely related to inhibition of the TLR4/NF-κB signaling pathway.
Male Reproduction in Spinal Muscular Atrophy (SMA) and the Potential Impact of Oral Survival of Motor Neuron 2 (SMN2) Pre-mRNA Splicing Modifiers
Spinal muscular atrophy (SMA) is a neuromuscular disease caused by deletions or mutations in the survival of motor neuron 1 (SMN1) gene resulting in reduced levels of SMN protein. SMN protein is produced by cells throughout the body, and evidence suggests that low SMN protein can have systemic implications, including in male reproductive organs. However, a paucity of research exists on this important topic. This article will discuss findings from non-clinical studies on the role of SMN in the male reproductive system; additionally, real-world observational reports of individuals with SMA will be examined. Furthermore, we will review the non-clinical reproductive findings of risdiplam, a small-molecule SMN2 splicing modifier approved for the treatment of SMA, which has widespread distribution in both the central nervous system and peripheral organs. Specifically, the available non-clinical evidence of the effect of risdiplam on male reproductive organs and spermatogenesis is examined. Lastly, the article will highlight available capabilities to assess male fertility as well as the advanced reproductive technologies utilized to treat male infertility. This article demonstrates the need for further research to better understand the impacts of SMA on male fertility and reproduction.
INTRODUCTION
Spinal muscular atrophy (SMA) is a progressive neuromuscular disease that affects individuals across a broad age range and spectrum of disease severity [1]. SMA is caused by reduced levels of survival of motor neuron (SMN) protein due to deletions and/or loss of function mutations in the SMN1 gene [1,2]. Most humans carry a second gene, SMN2, which has a single-base substitution that can cause exclusion of exon 7 during splicing, leading to the production of SMNΔ7 [3], a shortened version of SMN protein which is unstable and rapidly degrades [4]. Approved disease-modifying therapies (DMTs) for SMA aim to increase the level of SMN protein by either increasing the amount of functional SMN protein produced by SMN2 or delivering a copy of SMN1 [5-7].
SMN protein is ubiquitously produced by human cells, and reduced levels of SMN protein throughout the body are thought to play a vital role in the disease pathophysiology of SMA [8,9]. While the effect of reduced SMN protein on motor neurons is well established, SMA is considered to be a systemic disease with widespread implications [10,11]. Non-neuromuscular phenotypes have been observed in individuals with SMA, including those specific to the cardiovascular, gastrointestinal, metabolic, and reproductive systems [11,12]. It is important to note that high levels of SMN protein are produced in the male reproductive system of mice [13-15] and humans [16,17].
With more patients living longer because of the availability of DMTs for SMA, the number of individuals considering family-building options will likely increase [18]. How SMA may affect fertility, particularly in male individuals, is not well understood, nor has it been thoroughly investigated. A variety of genetic and acquired variables, as well as lifestyle and environmental risk factors, may impact male fertility [19]. A recent meta-analysis reported the worldwide prevalence of infertility among individuals of reproductive potential in the general population as ranging from 12.6 to 17.5% [20]. The limited data on the fertility status of men with SMA make it difficult to understand the impact that DMTs may have on the male reproductive system.
The authors assess the available published literature on the potential effects of SMA on male fertility, the non-clinical effects of selected oral SMN2 splicing modifiers on male fertility, and the tools and technologies available for fertility assessment and treatment.
Compliance with Ethics Guidelines
This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.
Experimental Studies
Although the relationship between SMA and male fertility in humans is not well established, animal models suggest that SMA does negatively impact male reproductive competency. In experimental mouse models that exhibit a deficiency in SMN protein, a broad range of negative reproductive consequences has been observed [13,14,21]. In the most severe animal models, low SMN expression was linked to impairments in male reproductive organ development (reduced testis size), defective sperm maturation, degenerated seminiferous tubules, and a reduction in sperm count [15]. Reduced male fecundity was observed, with only two of eight male SMA mice able to successfully sire a litter, compared with all wild-type males mated [15]. It has been suggested that the high expression of SMN protein in the testis reflects its critical role in the development and maintenance of male germ cells, in particular spermatogonia and the initial stages of sperm development [14,15].
Experiments conducted in a mouse model of SMA in which mice carry a knockout of the smn gene but express a human SMN2 transgene reported higher expression of full-length SMN2 mRNA in testis compared with other tissues, indicating there may be a specific mechanism facilitating the inclusion of exon 7 in SMN2 mRNA in the testis [15,22]; however, this has not been confirmed in men with SMA. The presence of a specific mechanism for SMN2 splicing in testes might indicate that the SMN2 gene, and the SMN protein produced from it, has a unique and pivotal role in human spermatogenesis. The SMN2 gene is evolutionarily restricted to humans and shows high expression in the testes [16,17].
In humans, an in vitro analysis of pluripotent stem cells derived from men with the most severe form of infertility, azoospermia (i.e., a sperm count of zero), found low expression of SMN1 compared with stem cells from healthy controls [21], identifying a link between SMN1 expression and azoospermia. Expression of SMN protein in pluripotent stem cells derived from patients with azoospermia resulted in upregulation of germ-cell markers and induced differentiation into primordial germ cell-like cells [21]. This in vitro human study provides additional evidence that SMN protein plays an important role in human spermatogenesis.
Observational Studies
An analysis of insurance claims in the US healthcare system identified a higher prevalence of testicular hypofunction and male infertility in individuals with adult-onset SMA compared with matched controls [23]. Lipnick et al. reported that males living with SMA who were diagnosed between the ages of 21 and 65 years (N = 196) (suggestive of milder disease) were more likely to have had a diagnosis of testicular hypofunction (odds ratio = 2.4, p = 0.02) or male infertility (odds ratio = 5.1, p = 0.01) [23]. This analysis was based upon insurance claims covering a period (January 2008 to October 2015) primarily before the approval of DMTs.
In a recent retrospective study by Ribault et al. from the Lyon Neuromuscular Disease Center, patients with SMA were followed across a period covering the availability of DMTs (2012-2022) [24]. Among seven adult males who attempted to undergo sperm cryopreservation prior to initiating risdiplam therapy, three (43%) had a complete absence of sperm in the ejaculate, a condition termed azoospermia. This prevalence of azoospermia (43%) far exceeds the incidence of 1% in the male general population and the rate of 10% observed in men with infertility [25].
Two real-world observational reports of individuals with SMA have reported high rates of cryptorchidism (undescended testes) in patients with Type 1 or 2 SMA [26,27]. Brener et al. assessed 27 male patients with Types 1-3 SMA, reporting bilateral cryptorchidism in 60% of patients with Type 1 SMA (n = 10) and 30% of patients with Type 2 SMA (n = 10), with a mean age at diagnosis of cryptorchidism of 6.4 ± 4.4 (range 1.2-14.3) years [26]. All patients with Type 3 SMA (n = 7) had descended testes. In a study of patients with Type 1 SMA, Bach et al. reported bilateral cryptorchidism in 52% (13 of 25) of male patients whose testes were examined, with unilateral cryptorchidism observed in an additional two male patients [27]. The prevalence of cryptorchidism in males with SMA in observational studies is significantly higher than the incidence (2-4%) reported for full-term males in the general population [28].
Testicular descent during embryonic development requires the intra-abdominal muscles to provide adequate pressure to facilitate the migration of the testes from the abdomen into the scrotum [28,29]. It is suspected that weakness in the intra-abdominal muscles could inhibit or prevent the testes from properly descending, leading to cryptorchidism. As muscle weakness is more profound in severe forms of SMA, the risk of cryptorchidism is significantly higher in males with early-onset SMA.
Effects of Cryptorchidism on Fertility
Properly descended testes, maintained in the cooler environment of the scrotum (~4 °C below body temperature), are necessary for optimal spermatogenesis [28,29]. Elevated testicular temperatures are associated with male infertility and impaired semen quality [28,29]. In patients with cryptorchidism, the undescended testes remain near or at body temperature. If the testes do not descend by 6 months of age, surgical intervention to bring the testis into the scrotum is ideally recommended within 12-18 months [30]. Without surgical correction, children with bilateral cryptorchidism will become permanently sterile. Therefore, in children with uncorrected bilateral cryptorchidism who have become sterile, the impact of any SMA DMT on male fertility may not be relevant. Cryptorchidism is also associated with a significantly increased risk (3.7-7.5 times higher) of testicular cancer [31], and surgical placement of the undescended testis in the scrotum is often performed to preserve testosterone production as well as to provide access for the ongoing physical examinations required for long-term cancer screening. As men with SMA are expected to live longer with DMTs, it is imperative that urological surveillance and monitoring of undescended male gonads for testicular cancer with advanced imaging be undertaken.
Summary of SMA and Male Fertility
These studies collectively demonstrate the critical role of SMN protein in the male reproductive system. In animal studies, low levels of SMN protein in the testis had a detrimental effect on the development of the male reproductive organs, reduced fertility, and diminished the spermatogonial germ-cell population [13-15,21].
In a study of healthcare usage, significantly higher rates of infertility and testicular hypofunction were noted in men with adult-onset SMA, in whom symptoms are usually milder than in infantile-onset SMA. Furthermore, in a small cohort of seven adult men with SMA who attempted sperm cryopreservation prior to initiating risdiplam therapy, 43% had no sperm in the ejaculate. In more severe forms of SMA, there is also a markedly increased prevalence of cryptorchidism. Collectively, these findings demonstrate the potential for a direct and negative impact of SMA on the male reproductive system and indicate that men with SMA are at higher risk for reduced fertility.
Risdiplam and Male Fertility
Prior to the development of DMTs, treatment of SMA was focused primarily on the management of disease symptoms driven by the loss of motor neurons and on improvements in the standard of care [32]. Risdiplam is one of three DMTs available for SMA and is approved for the treatment of pediatric and adult patients with SMA [5,33]. Risdiplam is an oral pre-mRNA splicing modifier that promotes the inclusion of exon 7 in SMN2 mRNA to produce stable SMN protein [34,35]. As a small molecule, risdiplam was specifically designed to distribute evenly throughout the body, including the central nervous system. This leads to increased levels of functional SMN protein in tissues throughout the body [36], i.e., in the central nervous system and elsewhere, including the testes. Another small molecule with a similar mechanism of action, known as RG7800, was evaluated in patients with SMA while risdiplam was still in preclinical development [37]. In trials of healthy adults and patients with SMA, RG7800 increased SMN protein levels; however, studies in patients were put on hold because of safety findings in animal toxicology studies [38]. Due to improvements in drug metabolism (a suitable half-life and wide tissue distribution), improved in vitro potency on SMN2 splicing, and a favorable preclinical safety profile, risdiplam was selected for subsequent clinical development [35,39]. Risdiplam shows high selectivity for two binding sites in exon 7 of SMN2 pre-mRNA, namely exonic splicing enhancer 2 and the 5' splice site [39,40]. The combination of binding to exonic splicing enhancer 2 and the 5' splice site gives risdiplam high selectivity for SMN2 pre-mRNA [40].
During the drug development process, non-clinical toxicologic findings were reported in the male germ cells of animals exposed to risdiplam and RG7800. Because animals do not have the SMN2 gene, an analysis of risdiplam, RG7800, and related SMN2 gene splicing compounds identified splicing events in genes other than SMN2 [35]. In a few genes, exon inclusion events similar to those seen with SMN2 in human cells were observed in the mRNA transcripts at drug concentrations relevant for the male germ-cell target organ and other tissues with toxicologically relevant findings in animal studies. Although it remains difficult to attribute such tissue-specific effects to any single off-target, it was plausible to focus on Forkhead Box M1 (FOXM1) and MAP kinase-activating death domain protein (MADD) among the affected genes, as the toxicologically relevant features were seen exclusively in proliferating and/or self-renewing tissues [35,39]. Of the affected genes and their known exon inclusion variants, only FOXM1 and MADD are associated with regulation of the cell cycle and apoptosis [35,39]. FOXM1 is of particular relevance to male fertility due to its expression pattern in male reproductive tissues and during a specific stage of spermatogenesis [41,42].
Effects of SMN2 Splicing Modifiers on Secondary Splice Targets
The FOXM1 gene produces several splice variants, which are present in both animals and humans (see Table 1). These include a transcriptionally inactive FOXM1a variant containing exon A2 and the transcriptionally active variants FOXM1b and FOXM1c, which lack exon A2 [43-45]. FOXM1 is a transcription factor that regulates genes controlling the G1/S transition of the cell cycle. Thus, the FOXM1b/c isoforms promote the cell cycle, whereas the FOXM1a variant results in cell-cycle arrest. Depending on the stage of the cell cycle, interference with FOXM1 or these splice variants can impact mitosis and meiosis. Accordingly, changes in FOXM1 splice variants are expected to interfere with spermatogenesis. An additional splice target, MADD, is likely responsible for the induction of apoptosis, observed as cytoplasmic vacuoles, through increased expression of a pro-apoptotic splice variant known as IG20 [35,46].
Small-molecule SMN2 pre-mRNA splicing modifiers can interact with the FOXM1 mRNA transcript by upregulating the FOXM1a variant via exon inclusion, with concomitant downregulation of the FOXM1b/c variants. In vitro experiments with SMN2 splicing modifiers using mouse cell lines and human cells derived from patients with SMA showed evidence of an increased frequency of micronucleated cells and an increase in apoptosis (manifested as cells with large cytoplasmic vacuoles, potentially due to incomplete apoptotic processes), which likely indicated changes in the expression of FOXM1 and MADD splice variants [35]. Experiments using cells derived from patients with SMA demonstrated splicing impacts in the mRNA transcripts of secondary splice targets in human genes after treatment with SMN2 splicing modifiers [35].
In non-clinical in vivo experiments in rats and mice with both RG7800 and risdiplam, similar concomitant up- and downregulation of the different FOXM1 splice variants was observed, which caused mitotic arrest and appeared as micronucleated cells [35]. In monkey studies with RG7800, FOXM1b/c was downregulated in a single monkey at the highest dose; however, no effects were observed at lower doses [47]. In another study, FOXM1b/c was downregulated in five of seven monkeys after treatment with RG7800, and FOXM1a was upregulated in the remaining two monkeys [47]. These changes in the expression of splice variants of this cell-cycle gene likely resulted in the damage to sperm-producing tissues of rats and monkeys seen in non-clinical experiments with RG7800 and risdiplam [47], with stage-specific and reversible effects as outlined below.
Effects of SMN2 Splicing Modifiers on Spermatogenesis
SMN2 splicing modifiers can impact spermatogenesis, the process by which new sperm are produced in the seminiferous tubules of the testes [48,49]. Spermatogenesis can be divided into three stages: the mitotic stage, in which spermatogonia (2n) undergo mitosis to maintain the stem cell pool and differentiate into primary spermatocytes; the meiotic stage (which begins at puberty), in which primary spermatocytes (2n) divide into secondary spermatocytes (n); and spermiogenesis, in which mature sperm are produced [48]. SMN1 is critical for the preservation of spermatogonia and paramount for prepubertal and postpubertal germ-cell survival. In contrast, FOXM1 is highly expressed in spermatocytes during meiosis 1, a process which occurs only after puberty, in which primary spermatocytes differentiate into secondary spermatocytes (Fig. 1). Interactions of SMN2 splicing modifiers with FOXM1 during meiosis 1 can interrupt the development of mature sperm and thus negatively impact male fertility [50].
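The stage-specific logic above can be summarized in a small lookup table: SMN is critical in spermatogonia (the mitotic stage), whereas FOXM1 peaks in primary spermatocytes at meiosis 1, the stage at which splicing modifiers are proposed to act. The following Python sketch encodes that summary for illustration only.

```python
# Stage -> (ploidy, key factor highly expressed, proposed impact of splicing modifiers)
SPERMATOGENESIS = {
    "spermatogonia (mitotic stage)":       ("2n", "SMN",   "no damage observed"),
    "primary spermatocytes (meiosis 1)":   ("2n", "FOXM1", "stage-specific degeneration"),
    "secondary spermatocytes (meiosis 2)": ("n",  "-",     "downstream of meiosis-1 arrest"),
    "spermatids/sperm (spermiogenesis)":   ("n",  "-",     "reduced output, reversible"),
}

for stage, (ploidy, factor, impact) in SPERMATOGENESIS.items():
    print(f"{stage:38s} ploidy={ploidy:3s} key factor={factor:6s} impact={impact}")
```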
Evidence of Stage-specific Degeneration During Spermatogenesis
A staging study of monkeys exposed to RG7800 was conducted to determine whether treatment affected specific cells and/or specific stages of spermatogenesis [35]. Microscopic analysis of testicular tissue samples showed evidence of stage-specific degeneration in the seminiferous tubules and the presence of micronucleated cells and cytoplasmic vacuoles, but mature sperm cells were still observed (Fig. 2) [47]. These micronucleated cells (and vacuoles) were similar to those described in the in vitro experiments reported by Ratni et al. [35]. Testicular degeneration was specific to germ-cell maturation, and the arrest occurred during stages of spermatogenesis after meiosis 1 (postpuberty), with no impact on spermatogonia observed [35,47]. More specifically, germ-cell degeneration was observed almost exclusively in spermatocytes during meiosis 1, a stage when FOXM1 is highly expressed, and in tissue where alternative splicing of FOXM1 was observed [47].
Evidence of Reversibility of SMN2 Splicing Modifiers
The impact of RG7800 and risdiplam on male reproductive tissues was further investigated in non-clinical studies in which animals were exposed to the drug and then assessed after a drug-free recovery period. In non-clinical studies of rats, after a 28- to 56-day recovery period, full reversibility of germ-cell degeneration was observed in half of the rats assessed following exposure to either risdiplam or RG7800 [47]. Consistent with the reversibility of germ-cell degeneration, male rats exposed to risdiplam for 13 weeks and allowed to recover for 8 weeks were able to breed successfully and did not show impaired fertility when paired with untreated females [47]. Monkeys dosed with RG7800 and then allowed to recover for at least 55 days did not show any impaired testicular findings and did not demonstrate alternative splicing of FOXM1 and MADD [47]. No testicular degeneration was observed in a longer-term study in which monkeys were exposed to RG7800 for 39 weeks and allowed to recover for 22 weeks [47]. No testicular findings were reported in any studies involving immature prepubertal monkeys [47].
The recovery following drug withdrawal occurs over a short timescale relative to the length of the spermatogenic cycle (56 days in rats [51]; 42 days in monkeys [52]). This contrasts with the long-lasting, or at times irreversible, effects on fertility observed following exposure to cytotoxic agents used in chemotherapy or radiotherapy [53]. The short recovery period observed with SMN2 splicing modifiers is consistent with the proposed stage-specific mechanism of action, which implies minimal or no damage to the primary germ-cell population of spermatogonia. Cytotoxic agents such as chemotherapy or radiotherapy usually damage all stages of spermatogenesis, including spermatogonia, and thus cause long-term or permanent damage that can reduce sperm counts, often to azoospermic levels [54]. The time course of damage from cytotoxic agents depends on the cell types impacted, with spermatogonial stem cells being the most sensitive; their loss results in the most severe and long-lasting damage [54]. It is important to note that damage to spermatogonia was not evident in any of the non-clinical studies of SMN2 splicing modifiers reported by Mueller et al., including studies in prepubertal monkeys [47].
Fig. 1 Proposed mechanism of off-target effects of SMN2 splicing modifiers on spermatogenesis. Spermatogenesis takes place over three stages: the mitotic stage, in which spermatogonia (2n) undergo mitosis to maintain the stem cell pool and differentiate into primary spermatocytes; the meiotic stage (which begins during puberty), in which primary spermatocytes (2n) divide into secondary spermatocytes (n); and spermiogenesis, in which mature sperm are produced. SMN protein is highly expressed in spermatogonia and is essential for normal spermatogenesis. SMN2 pre-mRNA splicing modifiers are proposed to impact spermatocytes during meiosis 1 as a result of splicing changes in FOXM1. FOXM1 Forkhead Box M1, SMN survival of motor neuron
Summary of Findings from Non-clinical Animal Studies
These findings provide evidence that the effects of oral SMN2 pre-mRNA splicing modifiers were reversible in non-clinical animal experiments. As SMN2 is absent in animals, the effects are attributed to off-target splicing events. The stage-specific nature of the damage to spermatocytes in meiosis 1, with no evidence of damage to spermatogonia, suggests that drug exposure did not impact the spermatogonial stem cell line. Following cessation of drug exposure, the impact on FOXM1 splicing is removed, and spermatogonial division and sperm maturation appear to resume unimpaired.
Treatment and Technological Advancements
With the availability of multiple therapies for SMA, many patients are now living longer and have considered or will consider family-building opportunities. As the effects of SMA disease progression and SMN2 splicing modifiers are not fully elucidated, it has been recommended that male patients consider options to screen for and preserve fertility. It is also important to highlight that infertility in the general population has been estimated at up to 17.5% worldwide [20]. This section provides an overview of several advancements, recommendations, and options to assess and optimize fertility, particularly in male patients who may be on treatment or have experienced infertility.
Infertility is defined as the failure to achieve pregnancy after 12 months of regular unprotected sexual intercourse [55] and can be caused by a variety of factors in either the male or female reproductive system. The American Urological Association/American Society for Reproductive Medicine guidelines for infertility recommend evaluating both partners concurrently [56]. For males, the evaluation incorporates both a male reproductive history and a semen analysis that measures key parameters including semen volume and sperm concentration, motility, and morphology. This initial assessment should guide the physician in determining baseline male fertility status, considering additional testing if needed, and providing the most appropriate and targeted therapy.
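As an illustration of how such a semen analysis might be screened programmatically, the sketch below compares measured parameters against lower reference limits. The limits used are the commonly cited WHO 5th edition (2010) values, which are an assumption of this sketch and are not taken from this article or the cited guidelines.

```python
# Assumed lower reference limits (WHO 5th edition, 2010); not taken from this article.
WHO_LOWER_LIMITS = {
    "volume_ml": 1.5,
    "concentration_million_per_ml": 15.0,
    "total_motility_percent": 40.0,
    "normal_morphology_percent": 4.0,
}

def flag_below_reference(sample):
    """Return the parameters of a semen analysis falling below the assumed limits."""
    return {k: v for k, v in sample.items()
            if k in WHO_LOWER_LIMITS and v < WHO_LOWER_LIMITS[k]}

# Hypothetical analysis result for one patient
sample = {"volume_ml": 2.0, "concentration_million_per_ml": 9.0,
          "total_motility_percent": 35.0, "normal_morphology_percent": 5.0}
print(flag_below_reference(sample))
# -> {'concentration_million_per_ml': 9.0, 'total_motility_percent': 35.0}
```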
Men with male factor infertility now have many options for assessing and managing their fertility [56]. Historically, a standard semen analysis required the patient to attend a fertility clinic and produce an ejaculate onsite. For many men this was psychologically challenging and created a barrier to, and delay of, a proper fertility evaluation [57]. Coupled with financial constraints and a lack of adequate access to fertility care, this can make navigating infertility challenging for couples. Male infertility has traditionally been overlooked and often stigmatized, which may be a barrier to individuals accessing available resources [58,59]. In addition, people with disabilities may experience greater difficulties in accessing reproductive services; specifically, for men with SMA, upper limb weakness may limit their ability to produce an ejaculate.
Fortunately, recent advances in fertility telehealth, home male fertility testing, and sperm cryopreservation techniques have emerged, and their utilization and acceptance have accelerated dramatically. It should be noted that there are wide differences in the availability of and access to services, both within and between countries. These advances include convenient technologies to test semen parameters at home, which can provide a preliminary general assessment of sperm concentration, as well as newer technologies that provide real-time video and determine motile sperm concentration [60]. Home-collected samples can also be maintained in optimized containers buffered with media, enabling them to be shipped to a centralized andrology laboratory for a complete semen analysis and possible cryopreservation [61]. These diagnostic advances have been complemented by the broad acceptance and proven benefit of fertility telehealth services. These developments are uniquely applicable to men with SMA, for whom mobility and access to fertility care are relevant and emerging issues.
For male patients who find it difficult to produce a semen sample, particularly men experiencing physical weakness, techniques such as vibratory stimulation can be utilized [62]. For patients who are unable to ejaculate, or whose semen analysis shows no viable or detectable sperm, numerous surgical sperm retrieval techniques can be used to extract mature sperm, which can then be utilized in conjunction with in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI) [63].
When appropriate, sperm cryopreservation is considered the gold standard for male fertility preservation [64], and options exist for semen samples to be collected at home and then transported to centralized facilities. Cryopreservation of multiple vials, as well as a post-thaw test of the sample, is recommended at the time of initial cryopreservation to establish confidence that the sperm will survive future freeze-thaw cycles. Testing for communicable diseases is performed at the time of cryopreservation to enable the sperm to be used in the future with assisted reproductive technologies (ARTs). A discussion or meeting between the patient and a fertility specialist is recommended to cover fertility preservation options and implications, as well as to review the current quality and viability of the cryopreserved semen samples.
ARTs are the mainstay of treatment and have been successfully utilized to treat male factor infertility. These include intrauterine insemination and IVF with ICSI, which can markedly improve the chances of successful conception. ICSI has become the gold standard for treating male factor infertility during IVF, as it offers the profound benefit of requiring only a single viable sperm cell, which is injected directly under a microscope into each retrieved oocyte [65].
Depending on the resources available to patients and healthcare professionals, a pre-pregnancy genetic carrier screening panel for both partners, together with genetic counseling, has been recommended for couples with SMA considering pregnancy [66]. If both partners have mutations in SMN1 and IVF is utilized, the embryos created can subsequently undergo preimplantation genetic testing for monogenic/single-gene defects (PGT-M). Embryos that are homozygous or compound heterozygous for SMN1 mutations, and thus at risk of developing SMA, can be screened out [67]. PGT-M for single-gene disorders is limited by the very small amount of DNA obtained in an embryo biopsy, in which the usual techniques for analysis of the SMA gene are not possible. As a result, PGT-M for SMA requires the creation of a custom genetic test using linkage analysis prior to beginning IVF treatment. To establish informative linkage markers, DNA from both the sperm and oocyte contributors and their biological parents (or offspring, if any) is typically required. It is now also more common for couples to consider pre-emptively undergoing IVF/ICSI, PGT testing, and embryo cryopreservation to optimize current and future family-building opportunities [68,69].
DISCUSSION
There are robust animal data highlighting the importance of SMN protein for normal spermatogenesis, male reproductive organ development, and male fertility [13-15,21]. Evidence from animal models of SMA demonstrates abnormalities in male reproduction affecting both sperm development and fertility [15]. There is limited information on how SMA may impact the male reproductive system in humans. Higher rates of male infertility have been reported in healthcare claims from patients with SMA [23], and a higher prevalence of azoospermia has also been observed among men with SMA [24]. Markedly higher rates of cryptorchidism have been observed in males with Type 1 and Type 2 SMA, which, when bilateral and not surgically corrected, results in future sterility [26].
Effects of SMN2 splicing modifiers on male sperm cell production were identified in non-clinical toxicologic studies in animals [35,47]. Impacts on spermatogenesis, isolated to postpubertal sperm maturation and differentiation, are proposed to result from off-target effects of oral SMN2 splicing modifiers, primarily via the cell-cycle-controlling gene FOXM1, during spermatogenesis. This insult appears to be specific to the postpubertal maturation event of meiosis 1 and does not appear to impair the spermatogonial germ-cell line [47]. Consistent with the proposed mechanism of action, these effects were reversible following cessation of exposure to SMN2 splicing modifiers. Furthermore, in non-clinical experiments, exposure to SMN2 splicing modifiers did not result in a complete arrest of sperm production. Further research is needed to explore whether there is a benefit from restoring and/or increasing the levels of SMN protein in testicular tissue by SMN2 splicing modifiers that reach the male testis.
Implications for Human Patients
In the absence of clinical data from human patients, the effects of SMN2 splicing modifiers on fertility in male patients are not fully understood. Due to the conserved nature of these secondary splice targets between species, the testicular impacts and their reversible nature observed in the animal studies would be expected to translate to humans [47]. It is important to note that these effects of risdiplam are relevant to post-pubescent males with SMA, as the observed impairments in spermatogenesis are specific to postpubertal meiosis 1. Although these data suggest normal fertility function might be restored within 4 months (encompassing the length of the human sperm cycle, drug transit time, and six half-lives of the drug), the US Prescribing Information and European Summary of Product Characteristics recommend that male patients with SMA who desire to father a child may consider sperm preservation before commencing risdiplam treatment [5,33]. Physicians should be aware of these recommendations and refer to the preceding section for a discussion of procedures that can assess or preserve sperm in patients with SMA. Although the full impact of SMN2 splicing modifiers on the male reproductive system in humans is unknown, at the time of this writing the authors are aware of three reports of men with SMA who have conceived while treated with risdiplam.
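The approximately 4-month figure can be reconstructed from its stated components. The durations in the sketch below (a ~74-day human spermatogenic cycle, ~2 weeks of epididymal transit, and a drug elimination half-life on the order of 2 days) are assumptions for illustration and are not given in this article or the product labels.

```python
# All durations in days; values are illustrative assumptions, not label claims.
SPERM_CYCLE = 74         # approximate length of one human spermatogenic cycle
EPIDIDYMAL_TRANSIT = 14  # approximate post-testicular sperm transit time
HALF_LIFE = 2            # assumed drug elimination half-life (order of magnitude)

washout = 6 * HALF_LIFE  # six half-lives -> >98% of the drug eliminated
total_days = SPERM_CYCLE + EPIDIDYMAL_TRANSIT + washout
print(f"~{total_days} days (~{total_days / 30.4:.1f} months)")
# -> ~100 days (~3.3 months), consistent with the ~4-month window once rounded up
```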
CONCLUSIONS
Current understanding of how SMN protein impacts the male reproductive system in individuals with SMA is limited by the lack of research into this aspect of the disease; however, robust animal data demonstrate the importance of SMN protein in spermatogenesis and testicular function. Evidence on the impact of SMA on male fertility in humans comes primarily from observational reports that did not seek to identify the underlying mechanisms. Yet some additional credible mechanistic and experimental evidence from human male germ cells in vitro supports a significant role for SMN protein in human spermatogenesis.
Male patients of reproductive age should be counseled about the potential effects of treatment and may consider sperm preservation prior to starting treatment or after a sufficient treatment-free period of 4 months [33]. With increased access to male fertility testing via telehealth and home-collection capabilities, men with SMA have more options to explore their fertility status. Significant advances in assisted reproductive technologies such as IVF, ICSI, and PGT-M will enable many men with SMA and male factor infertility to achieve their hope of parenthood.
Further research is needed to better understand the effects of SMA on the human male reproductive system and male fertility and the potential impact resulting from available DMTs.
Medical Writing and Editorial Assistance.
Medical writing support was provided by Jack Curran, PhD, of Nucleus Global, an Inizio Company, and was funded by F. Hoffmann-La Roche Ltd, Basel, Switzerland, in accordance with Good Publication Practice (GPP) 2022 guidelines (http://www.ismpp.org/gpp-2022).
Author Contributions. Natan Bar-Chama, Bakri Elsheikh, Channa Hewamadduma, Carol Jean Guittari, Ksenija Gorni, and Lutz Mueller were involved in the concept for the article, participated in manuscript development and writing, and approved the final version for submission.
Funding. The Rapid Service Fee was sponsored by F. Hoffmann-La Roche Ltd, Basel, Switzerland.
Ethical Approval. This article is based on previously conducted studies and does not contain any new studies with human participants or animals performed by any of the authors.
Declarations
Conflict of Interests. Natan Bar-Chama is the recipient of an Investigator-Initiated Study with Genentech, Inc. and F. Hoffmann-La Roche Ltd and has served on a Medical Advisory Board for WINFertility. Bakri Elsheikh received research funding from Alexion Pharmaceuticals, Avidity, Biogen, Genentech, Inc., NMD Pharma, and Pharnext, and served as a consultant for Argenx, Biogen, and Genentech, Inc. Channa Hewamadduma has received speaker and advisory honoraria from Biogen and Roche. His clinical studies are supported by the NIHR BRC Neuroscience centre grant. He conducts patient-reported outcome measure-based natural history studies in SMA via Adult SMA REACH UK, funded via Biogen and Roche. Carol Jean Guittari is an employee of Genentech, Inc. and a shareholder of F. Hoffmann-La Roche Ltd. Ksenija Gorni and Lutz Mueller are employees and shareholders of F. Hoffmann-La Roche Ltd.
Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc/4.0/.
Fig. 2 Histology of testicular tissue in a non-clinical monkey study. a A schematic of the process of spermatogenesis within the seminiferous tubules. Image adapted from Allais-Bonnet A and Pailhoux E (2014) Role of the prion protein family in the gonads. Front. Cell Dev. Biol. 2:56. b Testis sample from a control animal; note the layer of spermatocytes (arrows). c Testis tissue from a cynomolgus monkey dosed at 6 mg/kg/day of RG7800; the seminiferous tubules show an insult to the specific spermatocyte layer
Table 1 Impact of key secondary splice targets
Gene | Splice variant | Function | Effect
FOXM1 | FOXM1a | Transcriptionally inactive | Cell-cycle arrest [43-45]
FOXM1 | FOXM1b | Transcriptional activator | Promotes cell cycle activity [45]
FOXM1 | FOXM1c | Transcriptional activator | Promotes cell cycle activity [45]
MADD | IG20 | Pro-apoptotic | Induction of apoptosis [35,46]
FOXM1 Forkhead Box M1, MADD MAP kinase-activating death domain protein
Probiotic-based strategies for therapeutic and prophylactic use against multiple gastrointestinal diseases
Probiotic bacteria offer a number of potential health benefits when administered in sufficient amounts, including reducing the number of harmful organisms in the intestine, producing antimicrobial substances, and stimulating the body's immune response. However, precisely elucidating the probiotic effect of a specific bacterium has been challenging due to the complexity of the gut's microbial ecosystem and a lack of definitive means for its characterization. This review provides an overview of widely used and recently described probiotics, their impact on the human gut microflora, their use as preventative treatments of disease, the human and animal models used to demonstrate efficacy, and the potential use of probiotics in gastrointestinal diseases associated with antibiotic administration.
Microbial Ecology of the Human Gastrointestinal Tract
The human intestinal microbiota is a complex ecosystem with considerable impact on human health and well-being, contributing to maturation of the immune system and providing a direct barrier against pathogen colonization (Doré and Corthier, 2010). It consists of bacteria, archaea, some protozoa, anaerobic fungi, and various bacteriophages and viruses, and it has been estimated that more than 1000 species of microbes inhabit the human intestine (Tuohy et al., 2012). The presence of such a large number of microbes (up to 5 × 10¹¹ bacterial cells per gram of intestinal contents) suggests strong regulatory effects on the human host, and recent findings suggest that the gut microbiota can have a considerable impact on both our weight and mood (Duca et al., 2014; Naseribafrouei et al., 2014). The composition and function of human microbial populations associated with various body sites have been studied with the help of metagenomic tools as part of two recent initiatives: the NIH Human Microbiome Project (HMP) and the European Metagenomics of the Human Intestine (metaHIT) project (NIH HMP Working Group et al., 2009; Dusko Ehrlich and MetaHIT Consortium, 2011). These massive molecular approaches have already revealed the presence of three different clusters, or enterotypes, each of which corresponds to one of the three most abundant genera of the human intestine: Bacteroides, Prevotella, and Ruminococcus (Arumugam et al., 2013).
Bacteria that initially colonize the large gut of an infant are facultative anaerobes, such as Escherichia coli and Streptococcus sp. These species metabolize oxygen in the gut, thereby creating anaerobic conditions. Subsequent colonization largely depends on diet and environmental factors (i.e., sanitary conditions). After the gastrointestinal microflora is fully formed, its composition includes such genera as Bacteroides, Bifidobacterium, Eubacterium, Clostridium, Lactobacillus, Fusobacterium, and various Gram-positive cocci (Fooks et al., 1999; Wallace et al., 2011).
Within the gastrointestinal tract (GIT), the microbiota provides various functions, such as digestion of essential nutrients and maturation of intestinal epithelial cells. Studies in mice have shown a number of significant effects of the microbiota on the host: in ex-germ-free, reconventionalized mice, the intestinal epithelium was thicker, enterocyte turnover kinetics were faster, short-chain fatty acids were produced at significantly higher concentrations, and a normal level of immunological activity was present, compared with germ-free animals (Aureli et al., 2011). Microbes also have the ability to affect physiologic parameters, exerting systemic effects on blood lipids, generally influencing the immune system, and inhibiting harmful bacteria (Mikelsaar, 2011). Pathogen inhibition by the human intestinal microbiota may provide significant health benefits, acting as a natural barrier against pathogen exposure in the GIT (Wallace et al., 2011). Factors such as food contamination by pathogens, as well as the high load of antibiotics in soil and animal feed, can influence the microbial ecology of the human GIT (Sapkota et al., 2007). Using molecular genetic tools, it has been shown that antibiotics can induce significant alterations in the dominant colonic microbiota that are not detectable using bacteriological (culture-based) techniques, with effects lasting for up to 2 months (Mangin et al., 1999). Several more specific disorders involve disruption of the human microflora ecology: acute gastroenteritis, Clostridium difficile infection (CDI), necrotising enterocolitis in neonates, irritable bowel syndrome, and Helicobacter pylori infection (Kotzampassi and Giamarellos-Bourboulis, 2012). Probiotics are currently being examined as potential treatments for these disorders.
Probiotic Bacteria
According to the popularized definition by the Food and Agriculture Organization/World Health Organization, and as grammatically modified by Hill et al. (2014), probiotics are defined as "Live microorganisms that, when administered in adequate amounts, confer a health benefit on the host" (FAO/WHO, 2001). The most common probiotics include representatives of lactobacilli, enterococci, bifidobacteria, and yeasts (Table 1). In addition, bacterial mixtures may be used to achieve the complex beneficial effect of probiotics (Caballero-Franco et al., 2007).
Presumed health benefits of probiotics include reducing harmful organisms in the intestine, producing antimicrobial factors, and stimulating the body's immune response (Collado et al., 2007; Foligné et al., 2010; Konieczna et al., 2013). Some of the beneficial effects of probiotics (e.g., lowering of cholesterol levels) are yet to be substantiated by well-controlled clinical trials. However, a growing number of studies provide data on the effects of probiotic bacteria on the human immune system and on the microflora of the GIT (Holzapfel and Schillinger, 2002; Foligne et al., 2007; Verdú et al., 2009; Wen et al., 2012). Increasingly, reports are emerging of the human/animal microbiome playing a central role in other key aspects of health, including beneficial impacts on the treatment of metabolic disorders such as obesity and type 2 diabetes, improvement of bowel function in patients with colorectal cancer, potential cognitive and mood-enhancing benefits, and antidepressant and anxiolytic (anti-anxiety) activity (Desbonnet et al., 2008; Bravo et al., 2011; DiBaise et al., 2012; Lee et al., 2014a; Owen et al., 2014). The latter anxiolytic effect has even led to the emergence of the new term, psychobiotic, coined by Dinan et al. (2013) as a "live organism that, when ingested in adequate amounts, produces a health benefit in patients suffering from psychiatric illness." Products containing probiotic bacteria generally include supplements and foods. Live probiotics are commonly available in fermented dairy products and probiotic-fortified foods. These bacteria are added to numerous foods and beverages, ranging from yogurts to breakfast cereals. There are also tablets, capsules, powders, and sachets containing probiotics in freeze-dried form. Functional foods, defined as food preparations with various health-related properties, often include bacterial strains with declared probiotic properties (Turroni et al., 2011). Scientific interest in probiotics is growing exponentially: a search for published papers featuring the keyword "probiotic" in the NIH PubMed database revealed 7265 articles for the period from 2000 to 2010, 953 of them clinical trials. Within the following 5 years (up to May 20th, 2015), the frequency of publication doubled, with 7979 papers published, including 778 clinical trials.
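Counts like these are reproducible through NCBI's E-utilities; below is a minimal sketch using Biopython's Entrez module (an email address is required by NCBI, and current counts will differ from the 2015 figures quoted above as the database is continually updated).

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # required by NCBI; placeholder address

def pubmed_count(term, mindate, maxdate):
    """Return the number of PubMed records matching term in a publication-date range."""
    handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                            mindate=mindate, maxdate=maxdate, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

print(pubmed_count("probiotic", "2000", "2010"))
print(pubmed_count("probiotic AND Clinical Trial[Publication Type]", "2000", "2010"))
```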
Lactic Acid Bacteria
Lactic acid bacteria (LAB) are Gram-positive, non-spore-forming cocci, coccobacilli, or rods, which generally have a non-respiratory (fermentative) metabolism and lack true catalase. Unlike bifidobacteria, which are active in the lower parts of the colon, lactobacilli are prevalent in the upper GIT (Turroni et al., 2011). This group is also a normal member of the human microflora, found in the oral cavity, the small intestine, and the vaginal epithelium, where it is thought to play beneficial roles (Gomes and Malcata, 1999). Among their beneficial effects, lactobacilli can improve digestion, absorption, and availability of nutrients (Wallace et al., 2011). Furthermore, LAB are capable of hydrolyzing compounds that limit the bioavailability of minerals, such as tannin and phytate, due to tannin acylhydrolase and phytase activities (Turpin et al., 2010). In addition, it has been shown that some lactobacilli strains can enhance mineral absorption in Caco-2 cells and improve the nutritional status of the host by producing B-group vitamins. More recently, the role of lactobacilli in energy homeostasis, particularly in obese patients, has attracted increased interest (Guo et al., 2010; Mikelsaar, 2011). A further potential positive impact of LAB is their ability to inhibit or kill H. pylori, which is now regarded as the major cause of gastritis and peptic ulcers and is a risk factor for gastric malignancy (Hamilton-Miller, 2003). In addition, both Lactobacillus sp. and Bifidobacterium sp. strains can reduce the side effects of H. pylori eradication therapy (Canducci et al., 2002). Pediococci also belong to the LAB group and are utilized in industrial fermentations of foods and silage (Raccach, 2014). Pediocin-producing Pediococcus sp. strains are of potential interest for food safety (Raccach, 2014), with three species potentially possessing probiotic properties: Pediococcus pentosaceus, P. parvulus, and P. acidilactici. Osmanagaoglu et al. (2010) comprehensively studied the potential of a human P. pentosaceus isolate for probiotic use, and reported that the strain produced an anti-Listerial bacteriocin, had excellent autoaggregation characteristics, and was also able to co-aggregate with Salmonella enterica serotype Typhimurium and enterotoxigenic Escherichia coli (Osmanagaoglu et al., 2010). Antagonistic activity against Listeria monocytogenes has also been discovered in P. acidilactici (Guerra and Pastrana, 2002). Clinical trials employing another Pediococcus sp. strain revealed that the administration of P. parvulus decreased serum cholesterol levels and increased counts of fecal Bifidobacterium sp. (Mårtensson et al., 2005).
Another group of LAB promoted as probiotics are enterococci, which reportedly help in the maintenance of normal intestinal microflora and stimulate the immune system (Bhardwaj et al., 2008). Studies of potential probiotic properties of E. faecium showed its efficacy in reducing the recovery period of acute diarrhea (Benyacoub et al., 2003). Another study by Pieniz et al. (2014) showed that E. durans possessed antimicrobial activity and antioxidant ability and was resistant to simulated gastric juice and bile salts. Though enterococci have probiotic potential, they are considered opportunistic pathogens for humans as they might cause nosocomial infection and are also known to possess resistance to vancomycin (Tambyah et al., 2004). Due to these controversial properties, the use of enterococci as probiotics remains under debate.
Bifidobacteria
Bifidobacteria are major constituents of the GIT microbiota of animals and humans. They are Gram-positive, non-motile, anaerobic, saccharolytic bacteria (Gomes and Malcata, 1999). In the gut environment, bifidobacteria have a commensal relationship with their hosts and contribute to host nutrition by utilizing complex carbohydrates, which are important sources of carbon and energy but are not degraded in the stomach or intestine (Biavati, 1994). These substances include plant-derived dietary fiber and diet-related carbohydrates, such as starch, galactan, sucrose, amylopectin, and pullulan (Ventura et al., 2007, 2012). The capacity of bifidobacteria to metabolize non-digestible host dietary carbohydrates (prebiotics) can be used for selective stimulation of certain strains colonizing the intestinal tract. Bifidobacteria used as probiotics include strains belonging to the species Bifidobacterium lactis, B. bifidum, B. animalis, B. thermophilum, B. breve, B. longum, B. infantis, and B. adolescentis (Table 1). These bacteria have been shown to inhibit the adherence of enterotoxigenic E. coli, enteropathogenic E. coli, and C. difficile to intestinal epithelial cells, an important trait for the use of these bacteria as probiotics (Tsai et al., 2008). Additional beneficial effects of bifidobacterial strains include the prevention or alleviation of infectious diarrhea and the improvement of inflammatory bowel disease symptoms (Sanz, 2007). Bifidobacteria have also been shown to modulate the host's immune response against other indigenous microflora (e.g., B. adolescentis down-regulates humoral immunity to Bacteroides thetaiotaomicron; Scharek et al., 2000). Some bifidobacterial strains suppress H. pylori-induced genes in human epithelial cells (Shirasawa et al., 2010), while other Bifidobacterium sp. cells and culture supernatants exert inhibitory effects against Streptococcus mutans and Streptococcus sobrinus, important etiological agents in human dental caries (Lee et al., 2011).
Yeasts
Saccharomyces boulardii is one of the best-studied probiotic species, with a long history of successful use in the treatment of multiple gastrointestinal disorders. The administration of this probiotic in lyophilized form was found effective in cases of diarrhea, decreasing the duration of the disease regardless of its cause (McFarland, 2007; Dinleyici et al., 2012; Shan et al., 2013). It has also been reported that S. boulardii prevented and treated relapses of inflammatory bowel disease, including moderate cases of ulcerative colitis (Guslandi et al., 2000, 2003; Choi et al., 2011). Interesting results have also been reported by Lim et al. (2015), suggesting that yeasts can enhance the growth of other probiotics under acidic conditions: Saccharomyces cerevisiae EC-1118 was found to significantly enhance the viability of the probiotic strain Lactobacillus rhamnosus HN001 at pH 2.5-4.0. The use of S. boulardii to reduce relapse of C. difficile infection is still under debate due to controversial results of clinical trials (Flatley et al., 2015). Among other yeast species, Torulaspora delbrueckii, Debaryomyces hansenii, Yarrowia lipolytica, Kluyveromyces lactis, Kluyveromyces marxianus, and Kluyveromyces lodderae have shown strong antagonistic effects against pathogenic bacteria and high acid tolerance (Kumura et al., 2004; Psani and Kotzekidou, 2006; Chen et al., 2010). Despite an excellent record of safe use, yeasts may still cause localized infections in immunocompromised patients (Thygesen et al., 2012).
Akkermansia muciniphila
Another recently described microorganism with possible probiotic potential is Akkermansia muciniphila, a mucin-degrading bacterium that resides within intestinal mucus layers (Derrien et al., 2004). According to several studies, obese patients have significantly lower amounts of this bacterium in their GIT (Collado et al., 2008; Karlsson et al., 2012). The genome sequence of A. muciniphila suggests that this bacterium can metabolize a variety of complex carbohydrates, as well as synthesize multiple amino acids, vitamins, and cofactors (van Passel et al., 2011). Its influence on metabolic processes in the GIT has not been fully investigated; however, it has already been shown that this bacterium may be a potential treatment for type II diabetes. Shin et al. (2014) showed that oral administration of A. muciniphila to mice induced Foxp3 regulatory T cells in the visceral adipose tissue, which attenuated adipose tissue inflammation. Based on these results, it has been suggested that pharmacological manipulation of the gut microbiota in favor of A. muciniphila might be beneficial in the treatment of diabetes.
Faecalibacterium prausnitzii and Other Clostridia
Another bacterium that has been demonstrated to have a considerable impact on human gastrointestinal microbiota is Faecalibacterium prausnitzii of the Clostridium sp. cluster IV. This microorganism accounts for 5-15% of the total fecal microbiota, making it one of the most abundant butyrate-producing bacteria in the GIT (Hold et al., 2003; Flint et al., 2012). Since butyrate is a primary energy source for intestinal epithelial cells, it is essential for the maintenance of epithelial barrier integrity. Multiple beneficial effects of butyrate for health also include reduction of cancer progression, protection against pathogens, and stimulation of the immune system (Macfarlane and Macfarlane, 2011). A reduction in F. prausnitzii counts in fecal and biopsy samples has been observed in multiple studies of inflammatory bowel disease (especially ileal Crohn's disease and ulcerative colitis), suggesting that the presence of this species is important for normal GIT function (Wang et al., 2007; Swidsinski et al., 2008; Andoh et al., 2012). The first gnotobiotic rodent model with F. prausnitzii showed that it could influence gut physiology through the production of mucus O-glycans, thereby affecting the quality and quantity of produced mucus (Wrzosek et al., 2013). Though F. prausnitzii dysbiosis might be an important marker in the development of disease, routine diagnostic tools have not been developed, mainly due to the extreme sensitivity of this species to oxygen.
Other bacteria of the class Clostridia might also find use as potential probiotics, since they are highly abundant in the human GIT microbiota and may play an important role in metabolism and immune system function. Atarashi et al. (2013) showed that a mixture of 17 Clostridium sp. strains, belonging to clusters IV, XIV, and XVIII, was able to suppress experimental colitis in mice through induction of interleukin-10-producing regulatory T cells. A similar mechanism of colitis suppression, via IL-10 production by induced macrophages, was observed using strain C. butyricum MIYAIRI 588 (Hayashi et al., 2013). According to another recent study, when mixed with B. infantis, C. butyricum was effective in the treatment of experimentally induced antibiotic-associated diarrhea in mice, and the beneficial effect of the mixture was superior to that of single strains (Ling et al., 2015). However, though clostridia have potential for use as probiotics, there is still not enough evidence to support their medical efficacy and safety for humans.
Use of Probiotics in Prevention and Treatment of Antibiotic-Associated Diseases
Although most antibiotics are generally safe, some have the potential to cause life-threatening side effects. Antimicrobial side effects are adverse drug reactions involving one or more organ systems. Moreover, even a short-term course of antibiotics may have a long-term negative impact on the normal human gut microbiota (Jernberg et al., 2010). The most commonly used classes of antibiotics include penicillins, cephalosporins, aminoglycosides, fluoroquinolones, macrolides, and tetracyclines; each of these classes can cause its own specific side-effects (Cunha, 2001). In fact, most traditionally used antibiotics are able to cause health problems in the GIT, commonly related to disturbances in microflora composition caused by the survival and spread of resistant strains. For instance, penicillins, which are known for having the least frequent and least severe side effects, may cause diarrhea, nausea, vomiting, and upset stomach. Fluoroquinolones are also considered relatively safe, but may similarly induce nausea, vomiting, diarrhea, and abdominal pain (Bertino and Fish, 2000). Side-effects of macrolides include GIT-associated nausea, vomiting, and diarrhea, whereas adverse effects of the tetracyclines depend on the concentration of the antibiotic in the affected organ. Their common side-effects include cramps or burning of the stomach, diarrhea, and a sore mouth or tongue (Rubinstein, 2001). Research in this field is ongoing and has already provided evidence for the efficacy of probiotics in preventing health problems that emerge as a result of antibiotic use. Examples of such diseases are antibiotic-associated diarrhea (AAD) and C. difficile-associated diarrhea (CDAD; pseudomembranous colitis).
Antibiotic-associated diarrhea is defined as "otherwise unexplained diarrhea that occurs in association with the administration of antibiotics" (Friedman, 2012). However, mild cases of C. difficile infection are sometimes also considered a cause of AAD (Kelly et al., 1994). The disease is one of the most frequent side effects of antibiotic use, affecting 5-39% of patients depending on the type of antibiotic (e.g., certain β-lactam antibiotics are more likely to cause diarrheal side-effects than cephalosporins), and is associated with increased length and cost of hospitalization (Videlock and Cremonini, 2012). Several mechanisms of antibiotic action on humans can result in AAD. These include osmotic diarrhea, caused by suppression of anaerobic bacteria and a reduction in carbohydrate metabolism; disruption of the protective effect of commensal bacteria; and reduction of colonic mucosal resistance to pathogenic opportunistic bacteria. Full restoration of the normal gut microbiota may take several weeks or even months (Friedman, 2012; Kaier, 2012).
Many studies have assessed the efficacy of probiotics in the treatment of AAD and have provided data supporting the use of both single-strain and mixed-strain probiotics for diarrhea treatment (Surawicz, 2003; Szajewska et al., 2006; McFarland, 2009). A meta-analysis by Hempel et al. (2012) identified 82 studies that provided evidence of probiotic efficacy in the treatment of AAD. Microorganisms used in these studies included the genera Lactobacillus, Bifidobacterium, Saccharomyces, Streptococcus, Enterococcus, and Bacillus. According to Friedman (2012), several mechanisms of action of probiotics contribute to the prevention and treatment of diarrhea: enhancing mucosal barrier function by secreting mucins, increasing tight junctions in epithelial cells, providing colonization resistance, producing bacteriocins, increasing production of secretory IgA, producing a balanced T-helper cell response, and increasing production of IL-10 and transforming growth factor beta. Collectively, these factors contribute to the restoration of a normal gastrointestinal balance following damage by antibiotics (Friedman, 2012).
Clostridium difficile-associated diarrhea, or pseudomembranous colitis, is an inflammation of the intestinal walls caused by toxins produced by C. difficile. CDAD is one of the most common hospital-acquired infections and is a frequent cause of morbidity and mortality among elderly hospitalized patients. Complications include shock, need for colectomy, toxic megacolon, and, in severe cases, perforation of the colon wall. C. difficile colonizes the GIT after the alteration of the normal gut flora by antibiotic therapy (Bergogne-Bérézin, 2000; Ndegwa and Nkansah, 2008). Extremely high rates of CDAD were reported in Quebec from 2002 to 2005, totaling 14,000 cases (a 4.5-fold increased incidence compared with 1991), with evidence suggesting the emergence of a highly virulent strain of C. difficile (Pepin et al., 2004, 2005). Several studies have shown that probiotics aid in the prevention and treatment of CDAD. Gao et al. (2010) reported a lower risk of disease occurrence after intake of a preparation based on two Lactobacillus strains. S. boulardii has also been successfully used for the treatment of CDAD (McFarland et al., 1994). However, a large multi-center study is needed to build sufficient evidence in support of probiotic use as a treatment for C. difficile-associated infections.
Problems Associated with Transfer of Antibiotic Resistance Determinants
Many probiotic strains have naturally acquired resistance toward one or several antimicrobial agents (Table 2). Though intrinsic resistance of probiotic bacteria to certain antibiotics might offer benefits for their use in the prevention and treatment of AAD, the issue of possible transfer of resistance determinants has been raised (Pflughoeft and Versalovic, 2012), particularly for strains that carry plasmids. Courvalin (2006) specified two distinct types of acquired antibiotic resistance in bacteria: (i) initially non-transferred resistance that occurred as a result of one or several mutations in indigenous gene(s), and (ii) transferred resistance, acquired from a different organism by horizontal gene transfer. Antibiotic resistance (both intrinsic and acquired) can occur as a result of three major mechanisms: (i) altering outer- and/or inner-membrane permeability and transport activity, which leads to lower accumulation of the antibiotic within the cell, (ii) using enzymes to detoxify the antibiotic, and (iii) modifying the antibiotic target site (Guardabassi and Courvalin, 2006). The gene responsible for acquisition of antibiotic resistance often resides on a plasmid or transposon, which can be easily transferred (Bennett, 2008). In fact, transposon-mediated transfer of genetic material between species was recently described as the most frequent mechanism contributing to the spread of antibiotic resistance in bacteria (Wozniak and Waldor, 2010).
Multiple studies have already shown that antibiotic resistance can be transferred between different bacterial species that reside in the human GIT. For instance, it has been reported that both Lactococcus lactis and Streptococcus thermophilus are able to transfer erythromycin resistance [the erm(B) gene, located on a plasmid] to L. monocytogenes under in vitro conditions (Toomey et al., 2009). Another study provided evidence of in vivo transfer of ampicillin resistance between two strains of E. coli co-residing in the human gut: it was demonstrated that a plasmid carrying a β-lactamase gene had been transferred from an ampicillin-resistant E. coli strain to an initially susceptible strain (Karami et al., 2007). Devirgiliis et al. (2009) reported the transfer of a tet(M) gene (tetracycline resistance; located on the broad-host-range Tn916 transposon) from L. paracasei to E. faecalis in vitro. In another set of experiments, the erythromycin-resistance plasmid pLFE1 of L. plantarum strain M345 was successfully transferred to five different species: L. rhamnosus, Lc. lactis, Listeria innocua, E. faecalis, and L. monocytogenes (Feld et al., 2009). These and other examples raise a safety concern; strains to be used as probiotics should be carefully selected, and only those free of transferable antibiotic-resistance determinants ought to be considered safe (Radulovic et al., 2012).
In Vitro and In Vivo Systems Used to Study Probiotic Effects
Novel probiotic-based strategies for therapeutic and prophylactic use against multiple GIT diseases are gaining popularity worldwide. Their effectiveness has been predicted by numerous animal model studies and proven by extensive research involving humans. However, the initial step in confirming probiotic effects is the extensive characterization of a bacterial strain to be used as a probiotic, which is usually performed under in vitro conditions by studying bacterial acid resistance, bile resistance, carbon source utilization, and aggregative properties, or ex vivo by testing the ability to adhere to mammalian cells (Kotikalapudi et al., 2010; Wood et al., 2012). Similarly, probiotic delivery methods, such as lyophilization or encapsulation, are also tested for their protective potential in vitro under simulated gastric conditions (Klemmer et al., 2011; Wood et al., 2012; Khan et al., 2013; Wang et al., 2014, 2015a). The most popular materials used for encapsulation of bacteria are alginate, carrageenans, and gums, since they are easy to process, resistant to low pH and freezing, and generally recognized as safe (Gbassi and Vandamme, 2012). We recently reported the efficient delivery of B. adolescentis, encapsulated for this purpose in an alginate-pea protein protective matrix, into the lower gut of rats (Varankovich et al., 2015).
Apart from basic synthetic gastric juice solutions (low pH, 37 °C), more complex systems have been developed, such as SHIME (Simulator of the Human Intestinal Microbial Ecosystem), designed to simulate different parts of the human GIT (Cook et al., 2012). Probiotic strains and methods for their delivery, preselected in vitro, are subsequently tested in animal models.
Traditionally used animal models include mice and rats. Larger animals like rabbits, dogs, and pigs are generally considered to have more features in common with the physiology and microflora of the human GIT (Kararli, 1995). However, rodents are cheap, standardized, and have short life-cycles; hence their extensive use in large-scale research. Investigation of probiotic effects on animal microflora may be approached by: (i) examining the quantitative and qualitative characteristics of the bacterial microflora in animals using cultivation and/or molecular biology techniques, such as real-time polymerase chain reaction (qPCR), next-generation sequencing (NGS), and fluorescence in situ hybridization (FISH), or (ii) evaluating treatment efficiency indirectly by using it to cure an artificially induced disease; a minimal qPCR example is sketched below.
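For the quantitative route in (i), qPCR readouts are commonly reduced with the comparative-Ct (2^-ΔΔCt) method. The sketch below assumes roughly 100% amplification efficiency, and all Ct values and group labels are invented for illustration:

```python
# Fold change of a target taxon relative to a reference gene, comparing
# probiotic-treated vs. control animals (2^-ddCt method).
# All Ct values below are hypothetical.
ct_target_treated, ct_ref_treated = 22.1, 17.3
ct_target_control, ct_ref_control = 24.6, 17.4

d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
d_ct_control = ct_target_control - ct_ref_control
dd_ct = d_ct_treated - d_ct_control                 # treated relative to control
fold_change = 2 ** (-dd_ct)                         # assumes ~100% PCR efficiency
print(f"fold change: {fold_change:.2f}")            # ~5.3x higher in treated group
```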
Distribution of specific species of microorganisms is still being studied in healthy humans and compared with those of patients with various gastrointestinal diseases. Perturbations of microbiota, even in case of alterations in numbers of a single species (i.e., A. muciniphila), might be a cause (and an indicator) of the development of disease (Karlsson et al., 2012). In this case, probiotic treatment might be useful in restoring microbiota balance in the gut. An interesting example of quantitative/qualitative analysis of animal gut microbiota after probiotic administration can be found in the study by Wang et al. (2015b): 454 pyrosequencing of fecal bacterial 16S rRNA genes in obese vs. lean mice showed that the probiotic strains shifted the overall structure of the gut microbiota of obese animals toward that of lean mice fed a normal diet, with significant changes observed in 83 operational taxonomic units. Due to complicated analyses required to understand specific mechanisms of disease development, as well as the mode of action of a certain probiotic microorganism, the use of disease models is generally more widespread.
Rodent Models of GIT Diseases
Generally, in order to establish a disease model, mice are infected with the pathogen or irritant either once or continuously (Pawlowski et al., 2010; Bhinder et al., 2013). Subsequently, animals are treated with probiotics with concomitant monitoring of disease symptoms and evaluation of changes in the gut microflora. Following this approach, Verdú et al. (2008) infected mice with H. pylori for 4-6 months to investigate the effect of probiotic therapy on upper gastrointestinal dysfunction induced by chronic H. pylori infection. The authors reported that with probiotic treatment, delayed gastric emptying in mice normalized significantly faster post-eradication compared to control groups, in which the dysfunction was observed for 2 months after pathogen administration was ceased. Mice and rats have also been used to evaluate the efficacy of probiotics for the treatment of Salmonella and E. coli O157:H7 infections (Asahara et al., 2001, 2004), inflammatory bowel disease (Shiba et al., 2003), and immune suppression (Lollo et al., 2012). Asahara et al. (2001) showed that intestinal growth and subsequent extra-intestinal translocation of orally-infected Salmonella typhimurium in mice were inhibited during administration of probiotic B. breve. Later, the same group reported that B. breve was also effective in protecting mice against Shiga toxin-producing E. coli O157:H7 (Asahara et al., 2004). Extrapolation of results achieved in animal studies and in vitro experiments to humans remains a difficult challenge. Many factors, such as differences in the physiology and microflora composition of the respective gastrointestinal systems, must be considered before interpreting the outcome.
The majority of in vivo experiments investigating the effects of probiotics on pathogenic bacterial populations use gnotobiotic mice (usually with human microflora systems in their GIT; Bernet-Camard et al., 1997; Aiba et al., 1998; Gill et al., 2001; Pawlowski et al., 2010). For instance, in a study by Shiba et al. (2003), probiotic B. infantis 1222 was found to significantly suppress the systemic antibody response raised by Bacteroides vulgatus, a representative pathogenic Bacteroides species, in a gnotobiotic mouse model of inflammatory bowel disease. The use of conventional mice as a model for investigating human diseases is more problematic due to significant differences between animal and human gut microflora. Nevertheless, it is possible to use murine-specific organisms as models for the study of human pathogens. For instance, Ge et al. (2001) used H. hepaticus infection as an animal model for examining the pathogenesis of gastrointestinal diseases in humans caused by H. pylori. More recently, Bhinder et al. (2013) described the Citrobacter rodentium mouse model for the study of pathogen and host contributions during infectious colitis. C. rodentium is a murine-specific bacterial pathogen, closely related to enteropathogenic and enterohaemorrhagic strains of E. coli (Borenstein et al., 2008). Several C. rodentium infection studies involving mouse models have shown probiotics to reduce the severity of symptoms and prevent death caused by the pathogenic agent (Chen et al., 2005; Gareau et al., 2010; Mackos et al., 2013). Chen et al. (2005) successfully treated C. rodentium-induced murine colitis with probiotic L. acidophilus. Gareau et al. (2010) similarly reported that L. rhamnosus, combined with L. helveticus, was effective in the prevention and treatment of the same disease state in mice. Later, another group showed that L. reuteri was able to attenuate the severity of murine colitis caused by C. rodentium (Mackos et al., 2013). Further investigation of host-pathogen and probiotic-pathogen interactions will likely provide better insight into the treatment of C. rodentium infection in mice, and possibly E. coli infections in humans. However, confirmation of probiotic benefits and possible side effects will ultimately require human trials.
Human Clinical Trials
Human studies generally take the form of randomized clinical trials involving participants with some type of intestinal disorder. After assessment of eligibility and recruitment, participants are given either probiotic treatment or a placebo as a control.

TABLE 3 | Selected clinical trials of probiotics against gastrointestinal disorders (partially reconstructed; cells lost in extraction are left blank).

Probiotic | Condition | Patients (n) | Outcome | Reference
Lactobacillus GG | Antibiotic-associated diarrhea in children | 188 | Significant reduction of the incidence of antibiotic-associated diarrhea in children treated with oral antibiotics for common childhood infections. | Vanderhoof et al. (1999)
Lactobacillus GG | Antibiotic-associated diarrhea in children | 167 | The treatment effect on the incidence of diarrhea (95% confidence interval) was −11% (−21 to 0%). | Arvola et al. (1999)
B. bifidum | Irritable bowel syndrome | 122 | Overall responder rates (decrease in symptom severity) were 57% in the treatment group, but only 21% in the placebo group (P = 0.0001). |
VSL#3* | Pouchitis | 40 | Three patients (15%) in the treatment group had relapses of the disease within the 9-month follow-up period, compared with 20 (100%) in the placebo group (P < 0.001). |
VSL#3* | Pouchitis | 40 | Two of the 20 patients (10%) in the treatment group had an episode of acute pouchitis, compared with 8 of the 20 patients (40%) treated with placebo (log-rank test, z = 2.273; P < 0.05). | Gionchetti et al. (2003)
VSL#3* | Ulcerative colitis | 34 | Treatment of patients with mild to moderate stages of disease, not responding to conventional therapy, resulted in a combined induction of remission/response rate of 77% with no adverse events. |
Saccharomyces boulardii | Clostridium difficile-associated diarrhea (CDD) | 124 | The efficacy of probiotic was significant in patients with recurrent CDD (recurrence rate 34.6%, compared with 64.7% on placebo; p = 0.04), but not in patients with initial CDD (recurrence rate 19.3%, compared with 24.2% on placebo; p = 0.86). | McFarland et al. (1994)
Saccharomyces boulardii | Clostridium difficile-associated diarrhea (CDD) | 168 | A significant decrease in recurrence of CDD was observed only in patients treated with high-dose vancomycin (2 g/day) and probiotic (16.7%), compared with those who received high-dose vancomycin and placebo (50%; p = 0.05). | Surawicz et al.
Enterococcus faecium SF68 | Acute diarrhea | 211 | The mean (±SD) duration of diarrhea was 1.69 (0.6) days in patients given probiotic, compared with 2.81 (0.9) days in those given placebo. | Buydens and Debeuckelaere (1996)
Enterococcus faecium SF68 | Antibiotic-associated diarrhea | 123 | The probiotic was shown to be effective in reducing the incidence of antibiotic-associated diarrhea (AAD) in comparison with placebo (8.7% compared with 27.2%, respectively). |
Results of these experiments have provided enough evidence to consider probiotics an efficient treatment for multiple GIT-associated diseases, such as acute gastroenteritis (Huang et al., 2002), irritable bowel syndrome (Nikfar et al., 2008), and necrotizing enterocolitis (Alfaleh and Anabrees, 2014). Some trials showing the efficacy of bacteria of interest in the treatment of specific gastrointestinal disorders are listed in Table 3. In one recent trial aimed at assessing the efficiency of S. cerevisiae in the treatment of irritable bowel syndrome, 179 adults diagnosed with this condition were randomized to receive once-daily 500 mg of S. cerevisiae or placebo for 8 weeks. Cardinal symptoms (abdominal pain/discomfort, bloating/distension, bowel movement difficulty) were recorded daily after a 2-week run-in period. The results showed that abdominal pain/discomfort scores were significantly reduced during probiotic intake (Pineton de Chambrun et al., 2015). A major trial involving 362 participants was conducted by Whorwell et al. (2006) to study the effect of B. infantis on symptoms of irritable bowel syndrome: probiotic administration led to improvements in the majority of symptoms by more than 20%, compared to placebo. Another human clinical trial proved the efficacy of Lactobacillus GG in the treatment of H. pylori infection: daily administration of the probiotic led to a significant reduction in disease symptoms (diarrhea, nausea, and taste disturbances; Armuzzi et al., 2001). In general, data from multiple lines of research involving humans suggest that probiotic bacteria suppress gastrointestinal pathogens by simple competition (prevailing in numbers) and by producing antibacterial factors (bacteriocins and small organic molecules, such as fatty acids). Though more detail on the mechanisms of action of probiotics on the gut microbiota is needed, the large body of evidence already collected supports their beneficial role in the prevention and treatment of various GIT diseases in humans.
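Trial outcomes like those above are often easier to compare after converting the reported event rates into standard effect sizes. A minimal sketch using the E. faecium SF68 AAD rates quoted in Table 3 (8.7% with probiotic vs. 27.2% with placebo):

```python
# Convert reported AAD incidence rates into common effect-size measures.
p_treated, p_placebo = 0.087, 0.272

relative_risk = p_treated / p_placebo        # ~0.32: risk cut to about a third
abs_risk_reduction = p_placebo - p_treated   # ~0.185 (18.5 percentage points)
nnt = 1.0 / abs_risk_reduction               # ~5.4: treat ~6 patients to prevent 1 case

print(f"RR = {relative_risk:.2f}, ARR = {abs_risk_reduction:.3f}, NNT = {nnt:.1f}")
```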
Conclusion
Many strains of the genera Lactobacillus and Bifidobacterium, as well as some enterococci and yeasts, have been shown to possess probiotic properties with potential for the prophylaxis and treatment of a range of gastrointestinal disorders. The effectiveness of probiotic bacteria in the treatment of these conditions is supported by many clinical trials involving patients of all ages, with probiotic organisms chosen based on laboratory research trials. Notably, most of the work in the probiotic field has been conducted in vitro, as this is an essential step in the investigation of bacterial growth, metabolite production, and the ability to form biofilms, compete with pathogens, co-aggregate, and produce antimicrobials. All of these characteristics are important factors for the identification of potential probiotic strains that possess desirable properties along with the ability to establish themselves in the human gut. Intrinsic antibiotic resistance and transferability of genetic determinants are two additional factors to account for at the initial stage of a probiotic study. Novel putative probiotic species, such as A. muciniphila, are yet to be tested in both animal and human trials; however, the results achieved to date suggest that they might be beneficial in the treatment or diagnosis of GIT diseases.
Stability in Einstein-Scalar Gravity with a Logarithmic Branch
We investigate the non-perturbative stability of asymptotically anti-de Sitter gravity coupled to tachyonic scalar fields with mass saturating the Breitenlohner-Freedman bound. Such "designer gravity" theories admit a large class of boundary conditions at asymptotic infinity. At this mass, the asymptotic behavior of the scalar field develops a logarithmic branch, and previous attempts at proving a minimum energy theorem failed due to a large radius divergence in the spinor charge. In this paper, we finally resolve this issue and derive a lower bound on the conserved energy. Just as for masses slightly above the BF bound, a given scalar potential can admit two possible branches of the corresponding superpotential, one analytic and one non-analytic. The key point again is that existence of the non-analytic branch is necessary for the energy bound to hold. We discuss several AdS/CFT applications of this result, including the use of double-trace deformations to induce spontaneous symmetry breaking.
I. INTRODUCTION
The bulk side of the AdS/CFT correspondence [1-3] consists of gravity coupled to various matter fields. In particular, supergravity compactifications relevant to AdS/CFT [4-6] often contain tachyonic scalar fields with masses at or slightly above the Breitenlohner-Freedman (BF) bound [7]. In some cases, the bulk theory can be consistently truncated so that the matter content is just scalar fields [8,9].
Such theories of AdS_{d+1} gravity coupled to scalar fields near the BF bound (sometimes called "designer gravity" [10]) are known to admit a large class of boundary conditions, which can be defined in terms of an arbitrary function W. The scalar fields have slower fall-off than allowed by the standard asymptotically AdS boundary conditions of [11], but nevertheless, the conserved charges have been shown to be finite and well-defined once back-reaction effects are taken into account [12-14]. This paper is concerned with the conditions under which the total conserved energy is bounded from below (for other interesting applications, see e.g., [15-24]).
The derivation of the energy bound proceeds by following a Witten-Nester style argument using a spinor charge [25,26]. For the standard or "Dirichlet" scalar boundary conditions (i.e., when the leading, slower fall-off term in the asymptotic expansion is turned off), it was proven several decades ago that the energy is positive if the scalar potential is generated by a superpotential [27-29]. More recently, this proof was extended to the more general slow fall-off designer gravity boundary conditions in [14,30] (based on [31,32]), where it was shown that the theory is stable if W is bounded from below and the scalar potential admits a certain type of superpotential. This minimum energy theorem was then further strengthened to allow stability even in some cases when W is unbounded from below, so long as the full effective potential V (defined below) has a global minimum [33]. This result finally proved a conjecture about stable ground states in designer gravity that was originally given in [10].
However, the stability conjecture of [10] was never proven in the special case where the BF mass bound is saturated. This case requires separate treatment, as the asymptotic behavior of the scalar field develops a logarithmic branch. While the theory is known to be stable if the logarithmic branch is turned off, previous attempts at proving a minimum energy theorem for more general boundary conditions failed due to a logarithmic large radius divergence in the spinor charge [14]. In this paper, we resolve this issue and derive a minimum energy bound, which agrees with the conjecture of [10]. Once again, the main subtlety involves the existence of a suitable superpotential for a given scalar potential.
It is a general principle of AdS/CFT that deformations of the CFT correspond to modifications of the AdS boundary conditions. For designer gravity theories with a field theory dual, the boundary conditions given by the function W are related to the addition of a multi-trace potential term ∫ d^d x W(O) to the CFT action [34-36], where O is the operator dual to the bulk scalar. The effective potential V(O) is simply the effective lagrangian of the CFT restricted to constant values of O (with all other fields and currents turned off). (See, for example, [37,38] for discussion of multi-trace deformations and stability from the dual field theory perspective.) The interesting point about the result of [33] is that by adding an unbounded potential term to the CFT, it is possible to destabilize the AdS vacuum but still have a stable ground state, leading to spontaneous symmetry breaking. In [39], relevant double-trace deformations were used to create a novel type of holographic superconductor, which, in contrast to previous constructions [40], can exist without a net charge density (see also [41,42]).
When the bulk scalar saturates the BF bound, the dual operator has dimension d/2 in both the standard Dirichlet and alternate Neumann theories, and therefore a double-trace term ∫ d^d x O² is classically marginal. As first pointed out in [34], double-trace deformations of the Dirichlet theory lead to a logarithmic running of the coupling. The deformation can be asymptotically free with an infrared Landau pole, or marginally irrelevant with a UV Landau pole, depending on the sign of the coupling.
In the alternate Neumann theory, the double-trace coupling is marginally irrelevant, in the sense that it diverges logarithmically in the UV. At zero temperature and with planar symmetry, we will show below that the AdS vacuum is always unstable to the true ground state with ⟨O⟩ ≠ 0, independent of the double-trace coupling. At sufficiently high temperature, however, the system returns to the symmetry-preserving state, and we demonstrate the existence of a superconducting phase transition at the critical temperature. The energy in the asymptotically Poincaré AdS case is always bounded from below by the energy of the zero-temperature ordered state, which corresponds to the global minimum of the effective potential, verifying our expectation from previous designer gravity work. To summarize the main result of this paper, we prove an explicit energy bound (1.1) in the Neumann theory, where α = ⟨O⟩ is the coefficient of the logarithmic term in the asymptotic expansion of the scalar field, and C is a constant which we will specify later. The term in square brackets in (1.1) turns out to be the (zero-temperature) effective potential V(⟨O⟩) in the large α limit (and is exactly V in the planar case).
This paper is organized as follows. In section II, we give a more detailed introduction to designer gravity and review previous work on minimum energy theorems in these theories. In section III, we find a new branch of superpotential solutions in the case where the BF bound is saturated. We show that this superpotential (if it exists globally) cures the divergent spinor charge encountered in [14] and we derive a lower bound on the energy. Section IV focuses on planar, boost-invariant solutions, which turn out to saturate the bound. We argue in section V that these "fake supergravity" solutions correspond to a certain limit of spherical solitons, which leads to a proof of the stability conjecture of [10]. Several AdS/CFT applications of this result are investigated in section VI, including the generalization to finite temperature. These results refer to deformations of the Neumann theory, so in section VII we briefly examine some of the corresponding issues for deformations of the Dirichlet theory.
We close with a discussion of our results in section VIII.
II. DESIGNER GRAVITY REVIEW
In this section, we briefly review the important features of designer gravity theories. We focus in particular on the stability conjecture of [10] and we describe previous efforts to prove this conjecture.
We consider asymptotically AdS_{d+1} gravity (d ≥ 3) coupled to a tachyonic scalar field with action (2.1), where we have set 8πG = 1. Near φ = 0, we assume that the scalar potential V(φ) takes the form (2.2), where ℓ_AdS is the AdS radius. It will be convenient to work in units where ℓ_AdS = 1. Unless stated otherwise, we consider only even potentials for simplicity, though our results easily generalize to non-even potentials [Footnote 2: The generalization is given by constructing the critical superpotential (as described below) for both φ > 0 and φ < 0, which will provide two different values of the constant C in (1.1). The bound is then simply (1.1) with C = max(C_>, C_<).]. In designer gravity theories, we restrict to scalar masses in the range (2.3), where the Breitenlohner-Freedman bound for perturbative stability [7] is given by (2.4).

We are interested in metrics which asymptotically approach [14,31,32] the metric of exact AdS spacetime in global coordinates (2.5), where dΩ²_{d−1} is the metric on the sphere S^{d−1}. For most masses in the range (2.3), the scalar field behaves near the AdS boundary (r → ∞) as in (2.6), and the coefficients α, β do not depend on the radial coordinate r. For m² = m²_BF, the roots (2.7) are degenerate and the solution has the asymptotic behavior

φ = (α log r)/r^{d/2} + β/r^{d/2} + … .   (2.8)

Note that in global AdS we use the radius of the boundary S^{d−1} to define the scale of the logarithm. This means that one should interpret log r = log(r/R_{S^{d−1}}), and R_{S^{d−1}} = ℓ_AdS = 1 in our units. [Footnote 3: See [13,14] for discussion of additional cases where logarithmic branches may arise. In general, this can occur when λ₊/λ₋ = n, where n is an integer. The present work is concerned with the case n = 1. See also [43].]
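For orientation, the fall-offs and roots being referred to take the standard form (with ℓ_AdS = 1)

$$
\phi \;=\; \frac{\alpha}{r^{\lambda_-}} + \frac{\beta}{r^{\lambda_+}} + \cdots,
\qquad
\lambda_\pm \;=\; \frac{d}{2} \pm \sqrt{\frac{d^2}{4} + m^2},
\qquad
m^2 \;\geq\; m^2_{BF} = -\frac{d^2}{4},
$$

so that at m² = m²_BF the two roots coincide, λ₊ = λ₋ = d/2, and the second independent solution is the logarithmic mode displayed in (2.8).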
In the mass range (2.3), both the α and β modes are normalizable, but in order to have well-defined evolution we must impose a boundary condition at the AdS boundary. For example, the standard Dirichlet boundary condition is to fix α = 0. Alternatively, one could choose the Neumann boundary condition β = 0. More generally, it is sufficient to fix a functional relation between α and β, which we express as

β ≡ dW/dα ,   (2.9)
for some arbitrary smooth function W(α). Note that a general boundary condition W will break the asymptotic AdS symmetry, but conformal invariance is preserved by the scale-invariant choice (2.10)-(2.11) for some arbitrary constant k. It is worth noting that for m² ≠ m²_BF the Neumann theory W(α) = 0 preserves the conformal symmetry. However, this is not true for m² = m²_BF, since the Neumann boundary condition does not include the logarithmic term in (2.11). (Dirichlet boundary conditions α = 0 of course always preserve the conformal symmetry.)

Solitons are nonsingular, static, spherically symmetric solutions of the bulk gravity theory.
We expect the minimum energy ground state of a designer gravity theory to be given by one of these solitons [10,32]. For every choice of φ at the origin, the solutions to the equations of motion behave as in (2.6) or (2.8) for some (constant) values of α, β. By scanning different values of φ(0), we map out a curve in the (α, β) plane (Footnote 4), which we call β_0(α). The solitons consistent with our boundary conditions are then given by the intersection points of β_0(α) with dW/dα. In terms of the function W_0 defined in (2.12) and the effective potential V defined in (2.13), it was shown in [10] that extrema of V (denoted α = α_*) correspond to solitons satisfying our boundary conditions, and further that the value of V(α_*) gives the total energy of the soliton (up to overall volume normalization).
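Concretely, with conventions fixed by the sign noted in section V, the definitions referred to here take the form

$$
W_0(\alpha) \;\equiv\; -\int_0^{\alpha} \beta_0(\tilde\alpha)\, d\tilde\alpha ,
\qquad
\mathcal{V}(\alpha) \;\equiv\; W(\alpha) + W_0(\alpha),
$$

so that V′(α_*) = 0 is precisely the intersection condition β_0(α_*) = dW/dα evaluated at α_*.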
The above statements translate simply to the field theory side. The bulk scalar is dual to an operator O of conformal dimension Δ = λ₋ (which becomes Δ = d/2 when m² = m²_BF). Our boundary conditions (2.9) correspond to a deformation of the Neumann theory (Footnote 5) by
adding a term ∫ d^d x W(O) to the action (2.14). The function V is simply the effective potential for the operator, which is minus the effective action restricted to constant field configurations (see e.g., [37]), eq. (2.15). Every soliton corresponds to an extremum of V with ⟨O⟩ = α_*, and the energy of the state is V(α_*) (up to overall volume normalization). Based on this interpretation, it was conjectured in [10] that the theory would be stable if V admits a global minimum. We now briefly review previous work on proving this conjecture.

[Footnote 4: For certain scalar potentials, this curve may not be single-valued, so it does not define a function β_0(α). For example, the known supergravity truncations containing scalars at the BF bound (see e.g., [17,23]) appear to exhibit this behavior. We will generally not consider such cases in this work, though we do make some further comments in section VIII.]

[Footnote 5: An equally valid boundary condition would be to choose α = α(β), which would correspond to a deformation of the Dirichlet theory α = 0, Δ = λ₊. This case will be addressed further in section VII, so for now we restrict our discussion to deformations of the Neumann theory.]
A. Stability for General Scalar Mass
We first assume m² ≠ m²_BF. The minimum energy bound is derived following a Witten-Nester style proof [25-29], which makes use of the spinor charge (2.16), where C = ∂Σ is a surface at spatial infinity that bounds a spacelike surface Σ. The covariant derivative appearing in the charge is given by (2.17), where the "Witten spinor" ψ is required to satisfy a spatial Dirac equation γ^i ∇_i ψ = 0 and to asymptotically approach a Killing spinor of exact AdS (see e.g., [30]). Using standard manipulations, it can be shown that Q ≥ 0 if the "superpotential" P satisfies the first-order relation (2.18); the two branches of solutions with the appropriate small φ behavior are denoted P_±, eq. (2.19). Using the perturbative solutions for small φ, it was shown in [30] that the charge takes the form (2.20), where the integrals are over the unit sphere S^{d−1}. Here E is the total conserved energy, whose explicit form can be found in [14]. If we use the P_+ superpotential to construct the spinor charge, the last term in (2.20) diverges, and we do not obtain a bound on the energy (except of course in the Dirichlet theory, α = 0). If instead we use the P_- superpotential, the divergent terms cancel and we obtain the bound (2.21). So the energy is bounded from below if W has a global minimum and the scalar potential can be generated by a real P_- -type superpotential that exists for all φ. Note, however, that this result is slightly weaker than the original conjecture of [10]. Furthermore, the linearized analysis of [44] suggested that designer gravity theories could be stable even in some cases where W is not bounded from below.
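Schematically, the structure of the argument is the standard Witten-Nester one; writing the charge in Nester form with a superpotential-modified connection,

$$
Q \;=\; \oint_{C} dS_{\mu\nu}\, \bar\psi\, \Gamma^{\mu\nu\rho}\, \hat\nabla_\rho \psi ,
\qquad
\hat\nabla_\mu \psi \;=\; \nabla_\mu \psi + c\, P(\phi)\, \gamma_\mu \psi ,
$$

where the constant c is convention-dependent and fixed by matching to (2.18). Gauss's law then converts Q into a bulk integral which, for ψ solving the Witten condition γ^i ∇̂_i ψ = 0, is a sum of manifestly non-negative squares precisely when V is generated by P, giving Q ≥ 0.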
The result (2.21) was eventually generalized in [33] by noting that the P_- branch can be extended to a one-parameter family of solutions P_s [37,45], defined for any s. Note that the P_+ solution is isolated from the family P_s and does not have an analogous generalization. Once again, P_s is required to be real and exist globally, and generally this holds for all s above some critical value s_c > 0. Hence, (2.21) becomes the one-parameter family of bounds (2.23). Using scaling arguments, one can show [33] that for large α the right hand side approaches the effective potential, eq. (2.24), so that if V = W + W_0 is bounded, the right hand side of (2.23) is bounded. This proved the conjecture stated above. The result confirms that it is possible for the theory to be stable even in some cases where W is not bounded from below.
B. Stability at the BF Bound
We now review previous results for m² = m²_BF. In this case, the superpotential branches (2.19) apparently degenerate to a single solution, eq. (2.25). (The reason for using the P_+ notation here will be clear shortly.) Repeating the calculation of the spinor charge produced the expression (2.26) for Q [14]. Once again, the explicit expression for the (finite) conserved energy is given in [14]. When the logarithmic mode is turned off, the energy is positive [31]. However, for α ≠ 0, the expression for the spinor charge diverges in r, so this result did not yield an energy bound. Note that this is exactly what happened for the P_+ branch of superpotentials when m² ≠ m²_BF. Based on this, we might expect that at the BF bound, there also exists a second branch of solutions (analogous to (2.22)) for which the spinor charge would be finite in r. We will now show that this is indeed the case.
III. THE SUPERPOTENTIAL
In this section, we find a second branch of superpotential solutions at the BF bound, which is analogous to the P − branch discussed above. We show that constructing the spinor charge using this type of superpotential does in fact lead to a minimum energy theorem.
We wish to find a new solution to (2.18) which will cancel the divergent term that appears in the spinor charge (2.26). It is straightforward to check that α² log r ∼ φ²/log φ for large r. With this motivation, we consider general superpotentials whose small φ expansion includes terms of the form φ²/log φ. The existence of this second branch of superpotential solutions for m² = m²_BF was previously noted in [37], though not all relevant terms in the small φ expansion were given.
Substituting this expansion for P and (2.2) for V into (2.18), we find that the only (p₁ ≠ 0) solution which simultaneously cancels the O(φ²) and O(φ²/log φ) terms arises precisely when m² = −d²/4, i.e., the BF bound is saturated. Continuing to higher order terms in the expansion, we arrive at the solution (3.3). The parameter s is not fixed by the relation (2.18), and the coefficients of all higher order terms are given in terms of s.
We can now repeat the calculation of the spinor charge using the new small φ solution; the higher order terms fall off fast enough at infinity that they do not contribute to the spinor charge. Assuming global existence of the superpotential (see below), the energy bound (3.4) follows, and in particular the contribution of the (log φ)² term is finite. In appendix A, we derive this result again using a different method in which we take the limit m² → m²_BF. Recall that for m² ≠ m²_BF, the minimum energy result failed when using the P_+ type superpotential (2.19), due to a large r divergence in the spinor charge. Further, the P_+ branch is isolated from the one-parameter family of solutions (2.22), which does lead to a minimum energy theorem. We now see that the situation when the BF bound is saturated is quite similar. The original solution (2.25) is isolated from the new one-parameter family of superpotential solutions (3.3) and does not yield a bound on the energy. Hence, one may think of (2.25) as "P_+ type." Meanwhile, the solution (3.3) should be considered "P_- type," which is consistent with the fact that this superpotential does produce a minimum energy theorem.
A. The Critical Superpotential
The solution (3.3) is always valid perturbatively near φ = 0. However, the derivation of the energy bound requires the existence of a real superpotential for all φ. In general, the full solution to (2.18) can only be found numerically. For this purpose it is convenient to rewrite (2.18) by solving for P′(φ), which involves a square root. The resulting first-order equation can be solved by integrating out from φ = 0 and matching to (3.3) for small φ. The solution fails to exist if the quantity under the square root becomes negative. Similar to [33], we expect P_s to exist globally for all s above some critical value s_c. In all cases studied, this is indeed the behavior we find. Therefore, the strongest energy bound is (3.4) with s = s_c.
Note that unlike the case away from the BF bound, the sign of s_c is not important to the stability of the Neumann theory, as the α² term is dominated by the positive α² log α term at large α.
For example, consider the simple potential (3.6). We wish to determine whether or not P_s(φ) exists globally as we vary the parameter s. For s < 0.35, we find that there is some φ at which P′_s becomes imaginary, so a global real solution does not exist. For s ≥ 0.35, we find that P_s(φ) exists globally. Numerical solutions for various values of s are plotted in Figure 1.
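The numerical search for s_c can be organized as a simple scan-and-bisect on s. The sketch below only illustrates the algorithm: the coefficients c1, c2 standing in for the precise relation (2.18), the toy quartic potential, and the small-φ initialization p_small (standing in for the expansion (3.3)) are all hypothetical placeholders.

```python
import numpy as np

d = 3
c1, c2 = 1.0, d / (d - 1.0)   # hypothetical stand-ins for the coefficients in (2.18)

def V(phi):
    # toy even potential: cosmological term plus a mass term saturating the BF bound
    m2 = -d**2 / 4.0
    return -d * (d - 1) / 2.0 + 0.5 * m2 * phi**2 - 0.05 * phi**4

def p_small(phi0, s):
    # hypothetical stand-in for the small-phi expansion (3.3): the family
    # parameter s enters a subleading term suppressed by 1/log(phi)
    return np.sqrt(d * (d - 1) / (2.0 * c2)) + phi0**2 * (d / 8.0 - s / np.log(phi0))

def exists_globally(s, phi_max=10.0, h=1e-3):
    """Integrate P'(phi) = sqrt((V + c2*P^2)/c1) outward from small phi;
    the solution fails to exist if the radicand ever goes negative."""
    phi, P = 1e-3, p_small(1e-3, s)
    while phi < phi_max:
        radicand = (V(phi) + c2 * P**2) / c1
        if radicand < 0.0:
            return False
        P += h * np.sqrt(radicand)   # forward Euler is enough for a sketch
        phi += h
    return True

# coarse scan in s, then bisect across the first change of behavior
s_grid = np.linspace(-2.0, 2.0, 17)
flags = [exists_globally(s) for s in s_grid]
pairs = [(a, b) for a, b, fa, fb in zip(s_grid, s_grid[1:], flags, flags[1:]) if fa != fb]
if pairs:
    lo, hi = pairs[0]
    f_lo = exists_globally(lo)
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if exists_globally(mid) == f_lo:
            lo = mid
        else:
            hi = mid
    print("estimated critical s:", 0.5 * (lo + hi))
else:
    print("no transition found in the scanned range; widen the s grid")
```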
IV. FAKE SUPERGRAVITY
In this section, we analyze a class of planar domain walls in Einstein-scalar gravity.
As explained below, these solutions turn out to be related to the static, spherical solitons referred to in the stability conjecture of [10].
Following [33], we consider boost-invariant planar solutions of the form (4.1). When the potential can be derived from a superpotential, it follows [46,47] that the second-order field equations reduce to first-order flow equations driven by P, eqs. (4.2). Hence, in these "fake supergravity" theories, the asymptotic behavior of the scalar field is determined by the small φ behavior of the superpotential.
If we insert the P_+ superpotential (2.25) into these equations of motion, we find a solution in which the logarithmic mode is turned off (α = 0). As noted above, with this superpotential the minimum energy result only holds if α = 0.
We can instead use one of the generic P s superpotentials to generate our domain wall, but it was shown in [33] that any superpotential which is not the critical P c leads to a naked singularity. Further, the critical superpotential domain wall corresponds to the large α limit of scalar solitons (section V) and the zero temperature limit of planar black holes with hair (section VI).
When we use the P_c superpotential (3.3), we obtain a domain wall in which the logarithmic mode is present, and β(α) takes the scale invariant form β(α) = kα − (2/d) α log α, which is expected due to the scale invariance of the equations of motion (4.2). We also obtain the relation (4.6) used below.
The energy density for such planar solutions is given by (4.7); using (4.6), we find that these solutions saturate the bound (3.4).
V. SCALAR SOLITONS
To relate the bound (3.4) to the conjecture of [10], we now analyze the behavior of static, spherical solitons when m² = m²_BF. For a static, spherically symmetric ansatz with metric functions h(r), χ(r) and scalar φ(r), the equations of motion are solved subject to regularity at the origin, which requires h(0) = 1 and h′(0) = φ′(0) = χ′(0) = 0.
For small α, one can show analytically that the soliton curve β_0(α) is linear in α, with a slope given in terms of Euler's constant γ and the digamma function ψ(z); the range of parameters for which this slope is positive agrees with the linearized stability analysis of [44].
For non-perturbative stability, we need the full nonlinear solution, which can be found numerically. For example, the soliton curve β_0(α) in the simple case (3.6) is plotted in Figure 2.
Following the arguments of [33], we expect the large α limit to turn global solitons into the boost-invariant P_c domain wall of section IV. Thus for large α, the soliton curve should take the scale invariant form β_0(α) = c_0 α − (2/d) α log α for some constant c_0. This implies the corresponding large α behavior of W_0(α). Because of the overall negative sign in the definition (2.12), the sign of the logarithmic term is opposite to that of (2.11), so W_0(α) does not take the scale invariant form (in contrast to the case m² ≠ m²_BF). Note, however, that the coefficient of the logarithmic term matches the one that appears in (3.4). Also, since the logarithmic term dominates at large α, this sign ensures that W_0 is bounded from below.
It follows from this and (3.4) that when V = W + W_0 has a global minimum, the energy is bounded from below. This proves the conjecture of [10] in the case where the BF bound is saturated.
VI. POINCARÉ ADS AND FINITE TEMPERATURE
In this section, we examine stability in asymptotically Poincaré AdS and further discuss our results from the dual field theory perspective. We generalize to finite temperature and demonstrate the existence of a phase transition at the critical temperature.
Explicitly, we are interested in static, plane-symmetric solutions of the form (6.1). Since we are not in global AdS, there is no longer a natural scale in the theory, and so we will always define the logarithmic terms at a cutoff scale Λ. Note that this cutoff scale appears whenever we had a logarithm in global AdS, so as to make the argument dimensionless. For example, the scale transformation acts as

r → cr, α → c^{d/2} α, β → c^{d/2} (β − α log c),   (6.2)

and is unbroken under the boundary conditions (6.3). In the gauge g_ii = r², the metric behaves asymptotically as in (6.4). The energy of these static solutions with hair is again given by (4.7). For zero temperature configurations, we know the solution will be the P_c fake supergravity domain wall, and therefore the planar soliton curve β_0(α) takes exactly the scale-invariant form (6.3) with some k = k_0 determined by V(φ). We can easily integrate this to find W_0; we plot β_0 and W_0 in Figure 3.
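Carrying out that integration with the scale-invariant form of the soliton curve (suppressing the cutoff scale Λ inside the logarithm),

$$
\beta_0(\alpha) = k_0\,\alpha - \frac{2}{d}\,\alpha\log\alpha
\;\;\Longrightarrow\;\;
W_0(\alpha) = -\int_0^{\alpha} \beta_0(\tilde\alpha)\,d\tilde\alpha
= -\frac{k_0}{2}\,\alpha^2 + \frac{1}{d}\,\alpha^2\log\alpha - \frac{\alpha^2}{2d},
$$

whose positive α² log α term dominates at large α, consistent with W_0 being bounded from below as noted in section V.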
In the deformed Neumann theory, the zero-temperature Green's function for the scalar on an AdS background has a pole in the upper half plane, signaling an instability of the AdS vacuum.
The full non-linear effective potential has a global minimum at a nonzero value of α, regardless of the value of k_0 or f! Thus, for double-trace deformations, the AdS vacuum is always unstable, but there is still a stable ground state. As usual, the precise nature of this ground state depends on the full nonlinear structure of V(φ), as it will correspond to a fake supergravity domain wall.
B. Finite Temperature
The instability of the AdS vacuum described in the previous section occurs at zero temperature, and it persists at low temperature. However, heating the system up enough will lift the instability. As in [39], we can identify the critical temperature by looking for a static normalizable mode of the scalar field in the background of AdS-Schwarzschild. This locates the temperature at which the zero-momentum quasinormal mode moves from the upper half to the lower half of the complex plane, which is precisely T_c. The static linearized wave equation for δφ = φ(r) on the AdS-Schwarzschild background can be solved in closed form; the solution which is smooth on the horizon is given in terms of Q_ν(z), the Legendre function of the second kind. Expanding this solution at large r and imposing the boundary condition yields the critical temperature (6.14). This is the location of a second order phase transition. We can calculate the behavior of the order parameter and the full off-shell potential by constructing numerical solutions (see Figure 4). The system behaves much like the case away from the BF bound studied in [39], because the system near T_c is governed by the temperature and not the order parameter.
We confirm by calculating V that the second order phase transition is not masked by a first order transition.
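The numerical construction referred to here amounts to a standard shooting problem: integrate the static scalar equation from the horizon to the boundary and read off the coefficients α, β of the (α log r + β)/r^{d/2} fall-off. The sketch below does this for the linearized mode on a planar AdS-Schwarzschild background with d = 3 and unit horizon radius; the final matching of β/α to the boundary condition (6.3) via the scaling (6.2), which determines T_c, is left schematic, and all numerical choices are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

d = 3                              # boundary dimension; illustrative choice
m2 = -d**2 / 4.0                   # mass saturating the BF bound (ell_AdS = 1)
rh = 1.0                           # horizon radius; T = d*rh/(4*pi) for this brane

def f(r):                          # planar AdS-Schwarzschild blackening factor
    return r**2 - rh**d / r**(d - 2)

def fp(r):
    return 2.0 * r + (d - 2) * rh**d / r**(d - 1)

def rhs(r, y):
    phi, dphi = y
    # static, zero-momentum, linearized scalar equation on this background
    return [dphi, -(fp(r) / f(r) + (d - 1) / r) * dphi + m2 * phi / f(r)]

# horizon regularity fixes phi'(rh) in terms of phi(rh); start just outside
eps = 1e-4
phi_h = 1.0                        # overall scale is free (linear equation)
dphi_h = m2 * phi_h / fp(rh)
r0 = rh * (1.0 + eps)
sol = solve_ivp(rhs, [r0, 2000.0], [phi_h + dphi_h * rh * eps, dphi_h],
                rtol=1e-10, atol=1e-12, dense_output=True)

# at the BF bound, phi ~ (alpha*log r + beta)/r^{d/2}; fit at two large radii
r1, r2 = 500.0, 1500.0
u1 = sol.sol(r1)[0] * r1**(d / 2.0)
u2 = sol.sol(r2)[0] * r2**(d / 2.0)
alpha = (u2 - u1) / np.log(r2 / r1)
beta = u1 - alpha * np.log(r1)
print("beta/alpha at rh = 1:", beta / alpha)
# rescaling rh via (6.2) shifts beta/alpha by -log c, so solving
# beta/alpha = (boundary-condition value) fixes rh and hence T_c = d*rh/(4*pi)
```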
VII. DEFORMING THE DIRICHLET THEORY
The previous sections have studied deformations of the Neumann theory β = 0. In this section, we discuss some analogous results for deformations of the Dirichlet theory α = 0.
In (2.9), we chose a scalar boundary condition β = β(α), but it is equally valid to take instead α = α(β), which we again express in terms of an arbitrary function W(β), eq. (7.1). This boundary condition corresponds to a deformation of the Dirichlet theory by a term W(O), where the operator O has conformal dimension Δ = λ₊ (or Δ = d/2 at the BF bound).
Denoting the soliton curve as α_0(β), the effective potential is defined in analogy with (2.13). Once again, this satisfies the properties that extrema of V give solutions satisfying our boundary conditions and that the value of V at an extremum is the energy of the corresponding soliton.
Focusing on the plane-symmetric solutions, the energy satisfies a bound analogous to (3.4), given in (7.4). Note that since there is no term on the right hand side of (7.4) corresponding to W_0(β), the spinor charge calculation does not seem to produce an energy bound in terms of V(β).
Nevertheless, in what follows we shall assume that the designer gravity conjecture holds, so that stability of the theory is still determined by the effective potential V.
To calculate W_0(β) we must invert (6.3), which is not one-to-one. We therefore find the solution in four branches, expressed in terms of w_n(z), the generalized Lambert function, defined as the n-th solution to z = w e^w.
These four branches are shown in Figure 5. The n = 0 branches dominate as β → ∞, so using the asymptotics of w_0, we find the large β behavior of W_0(β). Note that even though W_0 is unbounded from below, this does not imply that the α = 0 theory is unstable, as in that case all we require is the existence of a P_+ to prove positivity of the energy.
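The branch structure here is exactly that of the Lambert W function available in standard libraries; a quick check of the defining relation on the two real branches (the numerical value of z is arbitrary):

```python
import numpy as np
from scipy.special import lambertw

z = -0.2          # real solutions of z = w*exp(w) exist for -1/e <= z < 0
for n in (0, -1):                 # the two branches that are real here
    w = lambertw(z, k=n)
    print(f"branch n={n}: w={w.real:.6f}, w*exp(w)={(w*np.exp(w)).real:.6f}")
```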
Let us now consider double-trace deformations of the Dirichlet theory, α = −f β (Footnote 10), corresponding to a classically marginal O² deformation. As pointed out in [34], f runs logarithmically with scale, and is marginally irrelevant (relevant) for f > 0 (f < 0), running to a Landau pole at the scale μ = Λe^{1/f_0}, which is above (below) the UV scale Λ. As shown in [42], the Green's function for φ on an AdS background in the deformed theory has a pole at ω = iΛe^{1/f}, a width corresponding to the Landau pole scale. As we turn on a positive f, the pole comes in from +i∞, confirming that it is a marginally irrelevant deformation. Turning on a negative f brings a pole out of the origin, implying an infrared instability from the marginal deformation. To study the non-perturbative stability of the theory, we look at the effective potential V(β). Using our result above for W_0 in the large β limit, we obtain the large β form of V, eq. (7.11), and therefore for positive f there is a global minimum at a scale well above our UV scale Λ, as expected from an irrelevant deformation. For negative f, the effective potential V has no global minimum. This suggests that the theory is unstable and has no minimum energy state. This is demonstrated in a plot of V for both signs of f in Figure 6.

[Footnote 10: Note that our definition of α differs by a sign from that of [34], but our definition of f is the same.]
This implies that, unless we turn on higher-order multi-trace terms, there is no endpoint to the IR instability. This is what we should expect, as we have turned on a negative-sign deformation which is not stabilized by any terms in the undeformed effective potential W_0.
We suspect that the resulting dynamics will be similar to what was found in [21], where a destabilizing marginal perturbation led to a big crunch in the bulk. It was important in constructing these explicit time-dependent big crunch solutions that the boundary conditions preserved the full conformal symmetry. In our system, where the BF bound is saturated, the linear boundary conditions α = −f β are not scale-invariant, but we still expect that there will be a big crunch. Indeed, whenever V is unbounded from below, we suspect that the system will have a big crunch instability.
VIII. DISCUSSION
In this work, we have proved a minimum energy theorem for asymptotically AdS spacetimes and showed that there is a phase transition at the critical temperature (6.14). We also explained the endpoint of the instability due to a positive double-trace deformation in the Dirichlet theory. We conjecture that the theory with a negative double-trace deformation is unstable, in that it has no minimum energy state.
Witten's original spinorial proof of the positive energy theorem was motivated by the idea that any supersymmetric theory must be stable, since the Hamiltonian can then be expressed as a square of the supercharge. It is important to note, however, that supersymmetry is not necessary for the derivation of the energy bound (3.4). The superpotential used to construct the spinor charge does not have to be the "actual" superpotential that appears in supersymmetry transformations; we only require that P(φ) is a real, global solution to (2.18). In any case, the superpotential (3.3) is non-analytic and thus would not arise in a supersymmetric theory. Furthermore, the boundary conditions that we consider do not preserve supersymmetry, and in particular, there are no supersymmetric multi-trace deformations when the BF bound is saturated [48].
In some of the consistent supersymmetric truncations which include scalar fields saturating the BF bound, the soliton curve is not single-valued as a function of β or α. In the known examples of consistent AdS5 × S5 truncations, we find that the spherical soliton solution sometimes tends towards α = const. as β → ∞. In the planar limit, this becomes a domain wall solution with α = 0, β = const. These domain walls are actually 1/2 BPS, and the ten-dimensional description involves smearing the D3-branes in the transverse directions [9]. These domain walls are honest supergravity domain walls, in that the P = P+ which generates their solution in the manner of (4.2) is the analytic superpotential in the bulk supersymmetry algebra. However, we find that in cases where the potential is not an even function of φ, the soliton with φ(r = 0) → +∞ approaches the 1/2 BPS domain wall, but the solution with φ(r = 0) → −∞ approaches a fake supergravity solution corresponding to a Pc with sc = 0.
As an example, we present the nontrivial (α, β) curve for the SO(2) × SO(4) scalar of [9] in Figure 7. This means that despite not being able to construct a simple W0(α), we can still find a critical superpotential (by gluing together the critical superpotential for φ < 0 and the analytic superpotential for φ > 0) and therefore prove an explicit energy bound. It would be very interesting to study the implications of this for deformations of the N = 4 theory.
The stability conjecture of [10] in fact contained a second part: the soliton associated with the global minimum of V is the minimum energy solution. Note that this does not automatically follow from (2.23) or (3.4), since the terms on the right-hand side of the inequality only approach V for large α. Hence, a proof that the ground state is the minimum energy soliton is still an open issue (it has been shown in [31] that the minimum energy solution must be static). As mentioned above, there are additional cases where the scalar field has a logarithmic branch near the AdS boundary, and so it might be worthwhile to study stability for these other so-called "resonant" theories (stability in the case n = λ+/λ− = 3 was partially addressed in [14]). It would also be interesting to further understand energy bounds in the Dirichlet theory, as in this case the result from the spinor charge calculation (7.4) did not lead to an expression related to the effective potential V(β). Finally, it is important to point out that ∆Γ = W is only true at leading order in 1/N, and it would be interesting to understand how non-planar corrections modify this story.

APPENDIX

Let us define 2ε = (λ+ − λ−) and consider the limit ε → 0. In [14] it was noted that the identification

α → α̂/(2ε),  β → β̂ − α̂/(2ε),  2εW → Ŵ − α̂²/(4ε)  (A3)
correctly transforms the expression for the conserved energy, E → Ê. Applying this transformation to the expression for the spinor charge in [14] leads to a divergent term of the form α̂²/ε, so this does not yield an energy bound, in agreement with the statement in Section II B.
We can instead try to take the limit of the modified expression for the energy bound. Applying the transformation (A3) to this expression and reparametrizing in terms of some constant ŝ, the energy bound takes the form (A5). The O(1) term in (A5) cancels the original ε⁻¹ divergence, but introduces a new term diverging as log ε. This logarithmic divergence is canceled by the O(ε log ε) term in (A5), while the O(ε) term gives a finite contribution. Dropping the hat notation, this exactly reproduces the result (3.4).
We can also attempt to take the limit of the generalized superpotential from [33]. A reasonable guess [33] is to consider solutions of the form P = (d − 1)/2 + p0 φ² + p1 φ² log φ + ⋯. However, it is straightforward to check that expansions of this form do not represent a well-defined series solution near φ = 0 (unless p1 = 0). Furthermore, as noted in the text above, a new correction of the form φ² log φ is not of the right form to cancel the divergence that appeared in the spinor charge calculation of [14]. Thus, taking the ε → 0 limit is a bit more subtle.
We begin by considering masses near the BF bound, with ε ≪ 1 but non-zero. Solutions to (2.18) then follow a general pattern depending on some arbitrary parameters. Substituting this into the cn given above, expanding for small ε, and collecting terms, we find a series of the form (1 + 2ζ + 3ζ² + 4ζ³ + ⋯), where ζ ≡ φ^{2ε}/(d/2 − ε). Treating φ as small, so that ζ ≪ 1, we sum the series in ζ to obtain 1/(1 − ζ)². At this point it is safe to take ε → 0, with the result expressed in terms of p0, which is given in (3.2). Finally, rewriting in terms of the original variables reproduces (3.3).
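For completeness, the resummation used in the last step is just the derivative of the geometric series, a standard identity spelled out here rather than quoted from the paper:

```latex
\sum_{n=0}^{\infty} (n+1)\,\zeta^{\,n}
  \;=\; \frac{d}{d\zeta}\sum_{n=0}^{\infty} \zeta^{\,n+1}
  \;=\; \frac{d}{d\zeta}\!\left(\frac{\zeta}{1-\zeta}\right)
  \;=\; \frac{1}{(1-\zeta)^{2}},
\qquad |\zeta| < 1 .
```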
Bacterial Pathogens Involved in Bovine Mastitis and Their Antibiotic Resistance Patterns in the Adamawa Region of Cameroon
Data on the sensitivity patterns of bacteria are scarce in sub-Saharan Africa, especially in Cameroon. This paper reports the prevalence of bovine mastitis, the major bacterial pathogens associated with the disease, and their antimicrobial profiles in the Adamawa Region of Cameroon. It was conducted to investigate the sensitivity pattern of bacteria isolated from mastitis cases, which could be helpful in the application of appropriate therapeutic measures. For this study, 224 lactating cows were examined. A high average prevalence of subclinical mastitis (59.8%) was recorded as compared to clinical mastitis (3.6%; χ² = 163.7, P = 10⁻⁴). Out of the 135 clinical and subclinical mastitis cases recorded, bacteria were cultured from 115 milk samples (85.2%, n=135). In all, 14 different bacterial pathogens were isolated, including: coagulase negative Staphylococci (27.5%), Staphylococcus aureus (23.3%), Escherichia coli (11.3%), Streptococcus agalactiae (7.1%), Streptococcus dysgalactiae (4.2%), Enterococcus faecalis (2.8%), Klebsiella pneumoniae (2.8%), Enterobacter aerogenes (2.1%), Pseudomonas aeruginosa (2.1%), Corynebacterium spp. (1.4%), Proteus spp. (1.4%), Brucella spp. (1.4%), Mycoplasma spp. (0.7%), and Mycobacterium spp. (0.7%). A major variation in the sensitivity of the isolated bacteria to 14 different antibiotics was noticed. Overall, the sensitivity tests revealed that Enrofloxacin, Gentamicin, and to a lesser extent Oxacillin and Amoxicillin/Clavulanic acid, were the most efficacious. The study makes a significant contribution to the epidemiology of bovine mastitis and helps reduce the lack of knowledge about the antibiotic resistance patterns of the major mastitis pathogens in Cameroon. The application of these antibiotics could be beneficial in resolving cases of bovine mastitis in dairy herds.
Introduction
Mastitis, an inflammatory mammary gland condition, is the most common, troublesome and most expensive disease of dairy ruminants worldwide, as it is responsible for heavy economic losses in terms of reduction in milk yield, profit margins, and quality of milk and milk products [1-4]. Although physical and chemical injuries may cause inflammation of the mammary gland, infections, most often caused by bacteria or other microorganisms (fungi, viruses, algae), are the primary cause of mastitis [5]. Thus, based on etiopathological investigations, it is usually classified as subclinical, acute, subacute, chronic or gangrenous [6,7].
The causative organisms are well adapted to survive in the mammary glands and in most cases, establish mild subclinical infection of long duration during which pathogens of public health significance might be shed into milk from the infected quarters [8]. Furthermore, mastitis is associated with a number of zoonotic diseases including Tuberculosis, Brucellosis, Campylobacteriosis and streptococcal sore throat in which milk acts as a vehicle of infection [7,9]. Public hazards associated with the consumption of antibiotic contaminated milk and products cause allergic responses, changes in intestinal flora and development of antibiotic resistant pathogenic bacteria [10,11].
The dairy industry in Cameroon is rudimentary [12], and mastitis is becoming a significant constraint on its development. Gram-positive and Gram-negative bacteria are involved as major pathogens causing mastitis worldwide, such as Staphylococcus aureus, Escherichia coli, Streptococcus spp., and Klebsiella spp. [13]. S. aureus and E. coli are the most commonly isolated pathogens from clinical mastitis [14]. Staphylococcus spp. is a major pathogen causing various forms of subclinical and clinical mastitis in cattle [15]. Coagulase negative staphylococci remain the most frequently isolated pathogens from subclinical mastitis in dairy cows [14].
An important aspect in the appropriate control of infectious diseases is the identification of the causative agents. Antimicrobial therapy directed against the infectious agents causing mastitis is usually recommended [16]. The indiscriminate use of antimicrobial drugs without testing in vitro sensitivity, as commonly practiced in the country, may be considered the primary cause of lack of success in treatment. Transmission of resistant pathogens to humans via bulk milk with subclinical mastitis is of major public health interest [17]. In addition, the risk to human health from Mycobacterium avium subsp. paratuberculosis [18], Mycobacterium bovis (the causal agent of tuberculous mastitis), and other milk-borne zoonoses is of great concern, particularly in developing countries where there is an increase in the consumption of untreated milk [19]. Therefore, it is important to investigate the sensitivity pattern of the different bacteria isolated from mastitis, as well as to apply the appropriate therapeutic measures. Such data are very scarce in sub-Saharan Africa, especially in Cameroon.
In this context, this study was carried out to identify the causative bacterial agents of bovine mastitis in the Adamawa Region of Cameroon, as well as to evaluate their antibiotic susceptibility profiles. The investigation also attempts to provide epidemiological data, which are key to the formulation of antimicrobial therapeutic measures against bovine mastitis in the country.
Study design and sampling population
In this study, 224 lactating cows from 16 different smallholder dairy farms located in the Adamawa Region of Cameroon were examined to determine the prevalence of mastitis, and to identify the major bacterial pathogens associated with the disease and their antimicrobial patterns. The cows enrolled were randomly chosen from farms practicing the semi-intensive husbandry system and included 64 Holstein-Friesian, 50 Adamawa Gudali hybrid, 32 Adamawa Gudali, 34 White Fulani, 24 Red Fulani, and 20 Banyo Gudali cows. Of the total number of cows sampled, 103 were less than or equal to 5 years of age and 121 were more than 5 years of age.
Detection of mastitis
To determine clinical and subclinical mastitis in the lactating cows, clinical examination of the udder was performed [7,20]. Screening was done using the California mastitis test (CMT) (ImmuCell® CMT, Portland, USA) as previously described [12,20].
Collection of milk samples
Before milk collection from the CMT positive animals, the teats of the udders were wiped thoroughly with 70% ethyl alcohol, with particular attention to the teat orifice. The first streams of milk were discarded and sterile test tubes were used in collecting the milk in a strictly aseptic manner. Approximately 10 ml of milk were collected per cow. The samples were delivered to the microbiology laboratory in an ice-cooled box within 4 hours and processed immediately for the isolation, characterization and identification of bacteria.
Direct microscopy
The milk samples were centrifuged, and the obtained pellet was smeared on a slide and then stained. Gram and Ziehl-Neelsen stains were used routinely [20].
Bacteriological culture
The bacteriological culture was carried out following standard microbiological techniques and procedures for the diagnosis of bovine mastitis infection [20]. Briefly, a loopful of milk was streaked on 7% sheep blood agar plates, which were checked for growth after 24, 48 and up to 72 hours to rule out slow-growing microorganisms. A sample was considered negative if there was no growth after 72 hours. Suspected bacteria were sub-cultured onto different selective/differential bacteriological media and incubated at 37°C for 24 hours. Pure cultures were achieved as per the procedures described by [21,22].
Colony morphology, hemolytic characteristics, Gram staining, catalase test, motility test, triple sugar iron reaction, CAMP test, IM-ViC (Indole, Methyl red, Voges-Proskauer, Citrate), coagulase and cytochrome oxidase tests were conducted to identify the isolates according to the procedures adopted by Quinn et al. [20]. Furthermore, biochemical identifications by commercial kits were carried out (Integral System Enterobacteria, Integral System Staphylococci, Integral System Streptococci, Liofilchem®, Abruzzo, Italy).
Standard specific culturing techniques were applied in the suspected cases of Paratuberculosis, Tuberculosis, Brucellosis and CBPP (Contagious Bovine Pleuropneumonia) for the isolation of Mycobacterium spp., Brucella spp., and Mycoplasma spp., respectively.
In all, 12 of the isolated bacterial species were subjected to antimicrobial susceptibility testing, the exceptions being the Mycoplasma and Mycobacterium species. Brucella species were tested for antimicrobial susceptibility using five antimicrobial agents [Enrofloxacin (5 µg), Streptomycin (10 µg), Gentamicin (30 µg), Doxycycline (30 µg), Oxytetracycline (30 µg)]. Based on their susceptibility to antimicrobials, the bacteria were categorized into three groups: sensitive, intermediate and resistant. For statistical analysis, the intermediate group was considered resistant.
The interpretation on susceptibility was done according to the guidelines of Clinical and Laboratory Standard Institute [23].
Statistical analysis
The qualitative data were analyzed using the statistical software STATA version 13 (STATA Corporation, College Station, Texas, USA). Univariate analyses on prevalence percentages were performed. Statistical differences were calculated by the Chi-square test, and P-values less than 0.05 were considered statistically significant.
Subclinical and clinical mastitis were most represented in the bovine population aged less than or equal to 5 years, at 69.9% and 5.8%, respectively (n=103). In relation to age, a significant difference was observed only for subclinical mastitis (69.9% vs 51.2%, n=121; χ² = 8.06, P = 0.0045). In relation to the farm, the prevalence rate ranged from 25.0% to 81.8% for subclinical mastitis and from 0% to 9.1% for clinical mastitis. The farms with the highest prevalence rates for subclinical mastitis also showed the highest prevalence rates for clinical mastitis.
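The age comparison above can be reproduced with a standard chi-square test of independence. A minimal sketch in Python, assuming the integer counts implied by the reported percentages (72/103 positive in the younger group, 62/121 in the older group) and no continuity correction; the counts are our reconstruction, not raw data from the study:

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table of subclinical mastitis by age group,
# reconstructed from the reported percentages (69.9% of 103; 51.2% of 121).
table = [[72, 31],   # cows <= 5 years: positive, negative
         [62, 59]]   # cows >  5 years: positive, negative

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")  # chi2 = 8.06, P = 0.0045
```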
Bacteria isolates
From the 135 clinical and subclinical mastitis cases recorded, bacteria were successfully cultured from 115 milk samples (85.2%, n=135). One hundred and four samples (77.0%, n=135) grew pure cultures. Eleven samples (8.1%, n=135) had mixed growth, of which one isolate per sample was considered for further analyses based on a medical/veterinary importance judgment, taking into consideration the morphology of the colonies. Twelve samples presented no growth (8.9%, n=135); four samples (3.0%, n=135) were contaminated with manure at the site of collection and hence were discarded; and fungi grew in four other samples (3.0%, n=135), so these were not included in the analyses. Mastitis of viral origin or uncultivable bacterial species may be responsible for the negative cultures.
In all, 14 different bacterial pathogens were isolated.
Antimicrobial susceptibility testing
The in-vitro antimicrobial susceptibility assays showed high resistance patterns (Figure 1).
Discussion
In most sub-Saharan countries, including Cameroon, sub-clinical mastitis receives little or no attention and efforts are focused on the treatment of clinical cases, although high productive and economic losses could come from sub-clinical mastitis. In the present study, there was an overwhelming predominance of sub-clinical mastitis (59.8%) compared to clinical mastitis (3.6%). Our findings are similar to those of many studies [12,24]. In the current study, fourteen different bacterial pathogens were isolated from milk samples collected from 135 mastitis cows. The isolated bacteria were coagulase negative Staphylococci (27.5%), Staphylococcus aureus (23.3%), Escherichia coli (11.3%), Streptococcus agalactiae (7.1%), Streptococcus dysgalactiae (4.2%), Enterococcus faecalis (2.8%), Klebsiella pneumoniae (2.8%), Enterobacter aerogenes (2.1%), Pseudomonas aeruginosa (2.1%), Corynebacterium spp. (1.4%), Proteus spp. (1.4%), Brucella spp. (1.4%), Mycoplasma spp. (0.7%), and Mycobacterium spp. (0.7%). The study showed that Staphylococcus spp., Escherichia coli, and Streptococcus spp. are the major causes of mastitis in the Adamawa Region of Cameroon. This finding is in agreement with those of many studies carried out in many parts of the world [7,25-28].
The in vitro antibiotic susceptibility testing of twelve different types of bacterial isolates against 14 different antibiotics (Enrofloxacin, Amoxicillin, Streptomycin, Erythromycin, Ampicillin, Gentamicin, Doxycycline, Oxytetracycline, Penicillin G, Trimethoprim/sulphamethoxazole, Neomycin, Amoxicillin/Clavulanic acid, Ceftiofur, and Oxacillin) showed that the most effective drugs against the isolated pathogens were, in order, Enrofloxacin, Gentamicin, and to a lesser extent Oxacillin and Amoxicillin/Clavulanic acid, while resistance of most of the isolates to the other antibiotics was noticed. The variation in the sensitivity to common antibiotics could be the result of the extensive and indiscriminate use of these drugs in the treatment of udder infections.
In the past two decades, a significant increase in antimicrobial resistance has been observed among Gram-positive bacteria, including multidrug-resistant staphylococci and penicillin-resistant streptococci, and among Gram-negative bacteria, including the emergence and spread of resistance in Enterobacteriaceae. Klebsiella pneumoniae and Enterobacter spp. infections now involve strains not susceptible to third-generation cephalosporins. Such resistance in K. pneumoniae to third-generation cephalosporins is typically caused by the acquisition of plasmids containing genes that encode extended-spectrum β-lactamases (ESBLs), and these plasmids often carry other resistance genes as well. ESBL-producing K. pneumoniae and Escherichia coli are now relatively common in healthcare settings and often exhibit multidrug resistance. ESBL-producing Enterobacteriaceae have now emerged in the community as well [29].
In the current study, bacteria of the family Enterobacteriaceae recorded 100% resistance to beta-lactams. Moreover, Enterobacter aerogenes showed over 66.7% resistance to third-generation cephalosporins. Resistance of Enterobacter spp. to third-generation cephalosporins is most typically caused by overproduction of AmpC β-lactamases, and treatment with third-generation cephalosporins may select for AmpC-overproducing mutants. Some Enterobacter cloacae strains are now both ESBL and AmpC producers, conferring resistance to both third- and fourth-generation cephalosporins [30].
Fluoroquinolone resistance in Enterobacteriaceae was 17.4% (n=23). Quinolone resistance in Enterobacteriaceae is usually the result of chromosomal mutations leading to alterations in target enzymes or drug accumulation. More recently, however, plasmid-mediated quinolone resistance has been reported in K. pneumoniae and E. coli, associated with acquisition of the qnr gene [30].
Oxacillin-resistant Staphylococcus aureus (MRSA) represents an important problem worldwide, and its prevalence may vary significantly in human and veterinary medicine. Most MRSA isolates show resistance to virtually all Beta-lactams by production of penicillinase and a low-affinity penicillin-binding protein (PBP) called PBP 2a [31].
Since its detection in Papua New Guinea and Australia, penicillin resistance in Streptococcus spp. has been reported worldwide [32]. In the present study, the penicillin resistance rate observed for the Streptococci isolates was 43.7% (n=16), lower than for other beta-lactams, in particular Ampicillin (81.2%, n=16; χ² = 4.8, P = 0.0285).
Further investigations will be needed to study beta-lactamase production by the Gram-negative isolates, and Oxacillin/Methicillin resistance in the Staphylococcus genus.
In summary, among the different bacteria isolated from sub-clinical and clinical mastitis cases in this study, Staphylococci were the most common, followed by Streptococcus species and Escherichia coli. Thus, for effective treatment of bovine mastitis, medicinal formulations should contain antibiotics with a good inhibition spectrum against most species of bacteria. In this context, it is interesting to note that Enrofloxacin especially, and to a lesser extent Gentamicin, Oxacillin and Amoxicillin/Clavulanic acid, showed the highest sensitivity among almost all of the bacterial isolates in this study and should be considered among the antibiotics of choice for effective treatment of bovine mastitis in the study area to yield the best possible results. Other studies [33,34] have shown a similar susceptibility pattern regarding the use of Fluoroquinolones against bovine mastitis pathogens.
Finally, due to logistical reasons, we were unable to perform the antibiotic susceptibility test for the isolated Mycobacterium and Mycoplasma species. Nevertheless, this will be carried out in subsequent studies once the situation has been resolved.
In conclusion, potential drug-resistant pathogens in an otherwise normal dairy herd may be a serious concern for public health. The current findings call for further studies with the isolated strains of bacteria. This study revealed the existence of alarming levels of resistance of Staphylococcus spp., Gram-negative bacteria and, to a lesser extent, Streptococcus spp. to commonly used antimicrobial agents. The results suggest a possible development of resistance from prolonged and indiscriminate usage of some antimicrobials. Thus, it is very important to implement the systematic application of in vitro antibiotic susceptibility testing prior to the use of antibiotics in both the treatment and prevention of intra-mammary infections.
Successful initial tofacitinib treatment for acute severe ulcerative colitis with steroid resistance: a case series
Background The standard therapy for acute severe ulcerative colitis (ASUC) is intravenous corticosteroids; however, 30% of ulcerative colitis (UC) patients do not recover with corticosteroids alone. Few studies have reported the efficacy and safety of tofacitinib for ASUC with steroid resistance. We report a case series of successful first-line treatment consisting of tofacitinib (20 mg/day) administered to ASUC patients with steroid resistance. Methods Patients diagnosed with ASUC at our institution between October 2018 and February 2020 were retrospectively evaluated. They were administered a high dose of tofacitinib (20 mg) after showing no response to steroid therapy in a dose of 1-1.5 mg/kg/day. Results Eight patients with ASUC, 4 (50%) men, median age 47.1 (range 19-65) years, were included. Four patients were newly diagnosed, and the median UC duration was 4 (range 0-20) years. Six of the 8 patients were able to avoid colectomy. One patient (patient 2) had no response; however, remission was achieved after switching from tofacitinib to infliximab. One patient (patient 6) with no response to tofacitinib underwent total colectomy. Only one patient (patient 4) experienced an adverse event, local herpes zoster, treated with acyclovir without tofacitinib discontinuation. Conclusions Clinical remission without serious adverse events can be achieved with high probability and colectomy can be avoided by first administering high-dose tofacitinib to steroid-resistant ASUC patients. Tofacitinib may be one of the first-line treatment options for steroid-resistant ASUC.
Introduction
Acute severe ulcerative colitis (ASUC), defined using the Truelove and Witts criteria [1], is an emergent condition. Here we report a case series of successful first-line treatment comprising high-dose tofacitinib (20 mg/day) administered to ASUC patients with steroid resistance.
Patients and methods
Patients diagnosed with ASUC at our institution between October 2018 and February 2020 were retrospectively evaluated. They were administered tofacitinib (20 mg/day) after showing no response to steroid therapy at a dose of 1-1.5 mg/kg/day. The steroid dose for the patients in this case series was 40-50 mg/day. The case series also includes one thiopurine user (patient 1). The patients were screened for cardiovascular or thrombotic problems before tofacitinib treatment. All underwent laboratory examination, stool testing for Clostridioides difficile, and endoscopic biopsies for cytomegalovirus. Tofacitinib undergoes 70% hepatic and 30% renal metabolism; therefore, patients with hepatic or renal dysfunction require a reduced dose. In this case series, no patient required a dose reduction.
The UC Disease Activity Index (UCDAI) [12] and Mayo scoring system [13] were used to determine the severity of the patients' general condition. Defecation frequency, rectal bleeding, mucosal appearance on colonoscopy, physician's rating of disease activity, gastrointestinal symptoms, adverse events, and drug changes were recorded. Adverse events were evaluated using the Common Terminology Criteria for Adverse Events (CTCAE) version 4.03. A clinical response was defined as an improvement of 3 points or more in the UCDAI and Mayo scores. Clinical remission was defined as a UCDAI score and Mayo score of 2 points or less.
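For clarity, the endpoint definitions above amount to a simple rule on the scores; a minimal sketch in Python (the function name and example values are illustrative, not taken from the patient records):

```python
def classify_outcome(baseline_score: int, followup_score: int) -> str:
    """Classify UC disease activity using the definitions in this case series.

    Scores are UCDAI or Mayo values (both scales run 0-12).
    Remission: follow-up score of 2 points or less.
    Response: improvement of 3 points or more from baseline.
    """
    if followup_score <= 2:
        return "clinical remission"
    if baseline_score - followup_score >= 3:
        return "clinical response"
    return "no response"

print(classify_outcome(12, 2))    # clinical remission
print(classify_outcome(12, 8))    # clinical response
print(classify_outcome(12, 11))   # no response
```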
The authors declare that the patients described in this case presentation have given their written consent for their personal or clinical details to be published in this study, along with any identifying images. This study was approved by the appropriate ethics committee (details blinded for peer review). This research was carried out in accordance with the Declaration of Helsinki.
Results
This report included a total of 8 patients with ASUC, 4 (50%) men, with a median age of 47.1 (range 19-65) years. Four patients were newly diagnosed with UC, and the median UC duration was 4 (range 0-20) years. All patients were bio-naïve before starting tofacitinib. Table 1 summarizes the baseline characteristics and laboratory data of the 8 patients. Relevant clinical data were retrospectively evaluated from the patients' electronic medical records. A clinical response was observed in 6 of the 8 patients before they experienced remission. Six patients were able to avoid colectomy. One patient (patient 2) had no response; however, remission was achieved after switching from tofacitinib to infliximab. One patient (patient 6) with no response to tofacitinib underwent total colectomy. When we used tofacitinib during induction and the follow-up phase, only one patient (patient 4) experienced a major adverse event (local herpes zoster).

Patient 1 was a 60-year-old man admitted to the hospital on February 1, 2020. His previous treatment was azathioprine because of intolerance to 5-aminosalicylic acid (5-ASA). The test results showed C-reactive protein (CRP) 19.3 mg/dL, albumin (Alb) 1.8 g/dL, and hemoglobin (Hb) 8.4 g/dL. His UCDAI score and Mayo score were 12. He underwent colonoscopy (Fig. 1A) and was administered tofacitinib after steroid resistance was observed. A clinical response was observed after approximately 3 days of treatment. He was discharged from the hospital after clinical remission on March 3, 2020 (Fig. 1B).
Patient 2 was a 19-year-old woman admitted to the hospital on May 16, 2020. She had not been treated previously and was experiencing her first onset of acute UC. The test results showed CRP 7.1 mg/dL, Alb 2.0 g/dL, and Hb 4.7 g/dL. Her UCDAI score and Mayo score were 12. She was given tofacitinib after steroid resistance was observed. She showed no clinical response to tofacitinib, but after switching from tofacitinib to infliximab (5 mg/kg) remission was achieved in 5 days. She had an excellent response to infliximab and was discharged from the hospital in clinical remission on June 25, 2020.
Patient 3 was a 53-year-old woman admitted to the hospital on October 24, 2018. Her previous treatment was 5-ASA. The test results showed CRP 7 mg/dL, Alb 2.7 g/dL, and Hb 8.6 g/dL. Her UCDAI score and Mayo score were 12. She underwent colonoscopy ( Fig. 2A) and was administered tofacitinib after steroid resistance was observed. A clinical response was observed after approximately 2 days of treatment. She was discharged from the hospital after clinical remission on December 14, 2018 (Fig. 2B).
Patient 4 was a 52-year-old woman admitted to the hospital on August 1, 2019. Her previous treatment was 5-ASA. The test results showed CRP 24.1 mg/dL, Alb 1.3 g/dL, and Hb 6.3 g/dL. Her UCDAI score and Mayo score were 12. She underwent colonoscopy (Fig. 3A) and was administered tofacitinib after steroid resistance was observed. A clinical response was observed after approximately 4 days of treatment. She developed local herpes zoster, against which she had never been vaccinated. However, she was cured with an antiviral drug combination, without stopping tofacitinib. She was discharged from the hospital after clinical remission on September 18, 2019 (Fig. 3B).
Patient 5 was a 65-year-old man admitted to the hospital on August 1, 2019. He had not received previous treatment and was experiencing his first onset of acute UC. He was referred to the hospital by his former physician because of severe bloody stools. The test results showed CRP 3 mg/dL, Alb 2.7 g/dL, and Hb 9.3 g/dL. His UCDAI score and Mayo score were 12. He underwent colonoscopy (Fig. 4A) and was given tofacitinib after steroid resistance was observed. A clinical response was observed after 3 days of treatment. He was discharged from the hospital after clinical remission on December 8, 2019 (Fig. 4B).
Patient 6 was a 20-year-old man admitted to the hospital on November 25, 2019. He had not received previous treatment because of intolerance to 5-ASA. The test results showed CRP 24.1 mg/dL, Alb 1.3 g/dL, and Hb 6.3 g/dL. His UCDAI score and Mayo score were 12. He was given tofacitinib after steroid resistance was observed. He showed no clinical response to tofacitinib, so after 5 days he was switched from tofacitinib to infliximab; however, there was no clinical response to infliximab either, and remission was not achieved. Finally, he underwent total colectomy. He was discharged from the hospital after clinical remission on December 25, 2019.

Patient 7 was a 65-year-old man admitted to the hospital on November 18, 2020. He had not received treatment previously and was experiencing his first onset of acute UC. He was referred to the hospital by his former physician because of severe bloody stools. The test results showed CRP 15.8 mg/dL, Alb 1.4 g/dL, and Hb 8.7 g/dL. His UCDAI score and Mayo score were 12. He was given tofacitinib after steroid resistance was observed. A clinical response was observed after approximately 3 days of treatment. He was discharged from the hospital after clinical remission on December 26, 2020.
Patient 8 was a 51-year-old man admitted to the hospital on January 30, 2021. He had not been given any previous treatment because of intolerance to 5-ASA. The test results showed CRP 5.6 mg/dL, Alb 1.4 g/dL, and Hb 9.6 g/dL. His UCDAI score and Mayo score were 12. He underwent colonoscopy (Fig. 5A) and was given tofacitinib after steroid resistance was observed. A clinical response was observed after approximately 5 days of treatment. He was discharged from the hospital after clinical remission on February 27, 2021 (Fig. 5B).
Discussion
There is no consensus regarding which agents, including anti-tumor necrosis factor (TNF)-α antibodies and calcineurin inhibitors, should be administered first to treat ASUC. Retrospective studies and case reports have shown that tofacitinib is useful for hospitalized patients who have received previous treatment for ASUC [14-16]. This study describes patients with steroid resistance who were first administered tofacitinib for ASUC. In the international phase 3 OCTAVE Induction 1 and 2 trials, tofacitinib achieved a clinical response rate of 57.6% and a remission rate of 17.6% at 8 weeks in patients with a relatively severe UC background (Mayo score 8±1.7) [7]. Therefore, tofacitinib is one of the possible treatments for ASUC, and these results are not so different from those of anti-TNF agents. All but one of the patients in this case series were able to avoid total colectomy. Infliximab and calcineurin inhibitors have been reported as possible treatments for ASUC, but they have a long half-life and require more time before their efficacy can be judged [4,17]. Because tofacitinib has a very short half-life of 3.2 h, its efficacy can be judged more quickly than that of infliximab or calcineurin inhibitors. The short half-life of tofacitinib is beneficial because a response can be observed only 3-5 days after its administration. During this case series, we could determine whether to continue tofacitinib or switch from tofacitinib to infliximab or a calcineurin inhibitor after only 3-5 days of observation (Fig. 6).
The concomitant use of tofacitinib with other immunosuppressive therapeutic agents (thiopurine preparation, calcineurin inhibitor, anti-TNF-α antibody) is contraindicated. Therefore, we think it is reasonable to first consider tofacitinib (half-life 3.2 h) for ASUC before attempting treatment with infliximab (half-life 8.1 days) or a calcineurin inhibitor (half-life 34 h).
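A rough calculation makes the half-life argument concrete. A minimal sketch assuming simple first-order elimination (an idealization; the quoted half-lives are the only inputs taken from the text):

```python
import math

# Half-lives quoted above: tofacitinib 3.2 h, calcineurin inhibitor 34 h,
# infliximab 8.1 days (= 194.4 h).
HALF_LIVES_H = {"tofacitinib": 3.2, "calcineurin inhibitor": 34.0, "infliximab": 8.1 * 24}

def hours_to_fraction(half_life_h: float, fraction: float) -> float:
    """Hours until the remaining drug falls to `fraction` of the dose."""
    return half_life_h * math.log2(1.0 / fraction)

for drug, t_half in HALF_LIVES_H.items():
    days = hours_to_fraction(t_half, 0.05) / 24.0
    print(f"{drug}: ~{days:.1f} days to reach 5% of the dose")
# tofacitinib: ~0.6 days; calcineurin inhibitor: ~6.1 days; infliximab: ~35.0 days
```

This is why a tofacitinib non-responder can be switched within days, whereas washing out an anti-TNF agent before starting tofacitinib takes weeks.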
In our previous experience, a UC patient with steroid-resistant idiopathic thrombocytopenic purpura experienced an exacerbation of the condition. The first administration of tofacitinib to this patient achieved clinical remission, and remission of both the UC and the idiopathic thrombocytopenic purpura was maintained for more than 1 year. Whole-transcriptomic sequencing was performed for this patient on biopsies of inflamed rectal mucosa taken before and after JAK inhibitor administration, which suggested that JAK inhibitors and anti-TNF-α antibodies act through distinct molecular signatures [19]. This would indicate a complementary relationship between JAK inhibitors and the anti-TNF-α antibody. As in Patient 2, although tofacitinib did not have any effect, infliximab was effective. However, more cases need to be investigated to prove this complementary relationship.
Regarding side effects, tofacitinib can induce severe lymphocytosis, anemia, herpes zoster infection, increased serum lipid levels, or thrombosis. Although the patients in our case series did not have serious adverse events, patient 4 developed local herpes zoster because there was no time to administer a vaccine. However, infliximab and calcineurin inhibitors entail a similar possibility of exacerbating herpes zoster. Additionally, patients with a risk of thrombophilia were treated with heparin antithrombotic therapy to protect against a thrombotic event.
This study had some limitations. First, the sample size was small. Second, this was a single-center, retrospective case series. Although only 1% of UC cases are ASUC [20], 4 of the 8 ASUC patients in this case series (50%) were experiencing their first onset of acute UC and 2 (25%) were intolerant to 5-ASA. These results nevertheless provide an interesting patient background.
In addition, according to OECD health data, the length of hospital stay in Japan (34.7 days) tends to be longer than in Europe (UK, 8.7 days) or the United States (6.4 days). Since our ASUC patients included cases of the fulminant type, the average hospitalization period was 4 weeks or more. The longest hospital stay was 7 weeks. Although the number of ASUC cases is very small, further multicenter studies must be performed to confirm the safety and efficacy of tofacitinib for its treatment. Initial administration of tofacitinib to steroid-resistant patients with ASUC made it possible to avoid total colectomy in 6 of 8 patients in this case series. If tofacitinib is started first, its effect can be judged within a short period of time (3-5 days), and anti-TNF can then be safely administered without overlap. It should be noted that, if anti-TNF is administered first, it takes time to wash out before tofacitinib can be started.
In conclusion, a high rate of clinical remission can be achieved and colectomy can be avoided by first administering tofacitinib to steroid-resistant ASUC patients. Evaluation of tofacitinib for steroid-resistant ASUC and the efficacy associated with sequential drug transition needs to be demonstrated in multicenter studies in the near future.
Summary Box
What is already known:

• Only 1% of ulcerative colitis (UC) cases are acute severe UC (ASUC)
• The standard therapy for ASUC is intravenous corticosteroids; however, 30% of UC patients will not recover with corticosteroids alone
• Infliximab and calcineurin inhibitors have been reported as possible treatments for ASUC, but they have a long half-life and require more time before their efficacy can be judged
• Tofacitinib directly inhibits signaling of an important subset of proinflammatory cytokines

What the new findings are:

• Tofacitinib is one of the possible first-line treatments for ASUC with steroid resistance
• All but one of our patients were able to avoid total colectomy in this case series
• The very short half-life of tofacitinib is beneficial, because a response can be observed only 3-5 days after its administration in a real clinical case series
• A high rate of clinical remission can be achieved without severe adverse events, and colectomy can be avoided, by initial administration of high-dose tofacitinib
Low Molecular Weight Chitosan-Coated PLGA Nanoparticles for Pulmonary Delivery of Tobramycin for Cystic Fibrosis
(1) Background: Poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs) loaded with tobramycin were prepared using a solvent-evaporation method. (2) Methods: The NPs were coated with low molecular weight chitosan (LMWC) to enhance the mucoadhesiveness of the PLGA NPs. The following w/w ratios of tobramycin to LMWC were prepared: Control (0:0.5), F0 (1:0), F0.25 (1:0.25), F0.5 (1:0.5), and F1 (1:1). (3) Results: The results showed that the size of the particles increased from 220.7 nm to 575.77 nm as the concentration of LMWC used in the formulation increased. The surface charge was also affected by the amount of LMWC: uncoated PLGA nanoparticles had negative charges (−2.8 mV), while coated PLGA NPs had positive charges (+33.47 to +50.13 mV). SEM confirmed the size and the spherical, homogeneous morphology of the NPs. Coating the NPs with LMWC enhanced their mucoadhesive properties and sustained the tobramycin release over two days. Finally, all NPs had antimicrobial activity that increased as the amount of LMWC increased. (4) Conclusions: The formulation of mucoadhesive, controlled-release tobramycin-LMWC-PLGA nanoparticles for the treatment of P. aeruginosa in cystic fibrosis patients is possible, and their properties can be controlled by adjusting the concentration of LMWC.
Introduction
Cystic fibrosis (CF) is a life-threatening chronic disease in which the patient's lungs secrete a highly viscous mucus that impairs mucociliary clearance. This mucus acts as a medium that supports bacterial infections such as Staphylococcus aureus, Haemophilus influenzae, and Pseudomonas aeruginosa. Inflammation and infection of the lungs cause injury and structural changes, including the stimulation of the release of neutrophil chemoattractants from epithelial cells and neutrophils. Further, neutrophil breakdown leads to increased viscosity of the mucus [1]. All of these conditions significantly impact the success of antibiotic treatment in CF patients [2]. In recent years, inhaled antibiotics have received greater attention, especially in the treatment of pulmonary infections related to CF. Inhaled antibiotics deliver high drug concentrations directly to the site of infection, which reduces side effects and improves the therapeutic potential of the antibiotic against microorganisms such as Pseudomonas aeruginosa [3,4].
Tobramycin is one of the important antibiotics used for lung infections that develop secondary to CF and are caused by P. aeruginosa. Aerosolized tobramycin has been used clinically to reduce the systemic toxicity of tobramycin (nephrotoxicity and ototoxicity) and enhance its concentrations in the lungs. The new dry powder inhalers (DPIs) such as Tobramycin Inhalation Powder™ (TIP) showed similar efficacy to the nebulized inhalation solution [5,6]. These formulations have the advantages of improved patient convenience, portability, and reduced treatment time. However, these formulations face some problems such as the limited penetration of the drug through the thick mucosa of CF patients [3]. Therefore, novel strategies for improving the delivery and the deep penetration of tobramycin through the mucosa, such as the use of nanoparticles (NPs), could enhance the overall therapy outcomes [7,8].
Recently, it has been reported that polymeric NPs have the potential to penetrate mucus and overcome the steric inhibition that results from the dense mucin fiber meshes [9]. Furthermore, controlling NP surface properties such as charge and degree of lipophilicity could reduce the unfavorable chemical properties of the free molecule [10]. NPs cross the mucosal epithelium better than microspheres, since both the microfold (M) cells overlying the mucosa-associated lymphoid tissue (MALT) and the epithelial cells are involved in the transport of NPs [11,12]. Therefore, the use of nanoparticles seems very beneficial for antibiotic inhalation.
Biodegradable polymeric NPs may control the drug level at the infection site, which is expected to enhance drug efficacy, decrease the number of doses administered, and reduce side effects. NPs prepared from poly(lactic-co-glycolic acid) (PLGA) are reported to be safe for the lung and do not induce lung tissue damage. Furthermore, in vitro cytotoxicity studies have shown that PLGA has no manifest toxicity against healthy lung macrophages or CF bronchial cells [13].
PLGA NPs have been used to control the delivery of antibiotics in several ways. They have been used in the treatment of Mycobacterium tuberculosis, P. aeruginosa, Staphylococcus aureus, and Escherichia coli infections through different routes of administration [8,14-16]. The major challenge in using PLGA NPs is penetrating the thick mucin barrier to reach the infected cells of the lung and interact with the defective environment. For that reason, other polymers have been used to modify PLGA NPs to improve their effectiveness, to enhance their deposition and retention in the lungs, and to prevent their exhalation [3].
Bioadhesive polymers can improve the effectiveness of a therapy by increasing the residence time of the formulation in the lungs. Among the carbohydrates generally used in the pharmaceutical field, chitosan, a copolymer of glucosamine and N-acetylglucosamine, has a well-known bioadhesive nature: it establishes electrostatic interactions with the sialic acid groups of mucins in the mucus layer. In addition, it has been demonstrated that chitosan can enhance the absorption of hydrophilic molecules by promoting a structural reorganization of the tight junction-associated proteins [17,18].
In this study, the aim was to design and develop a pulmonary mucoadhesive nanoparticulate system for tobramycin and to demonstrate its antimicrobial efficacy. A further major aim was to enhance the mucoadhesion of these NPs to mucin. The surface properties (charge) and the bulk properties (size and entrapment efficiency) were evaluated in relation to the formulation variables. Finally, the antimicrobial activity of the developed NPs against P. aeruginosa was investigated.
Preparation of PLGA and LMWC-PLGA NPs
The NPs were prepared according to the method described by Bodmeier et al. with some modifications [19]. Briefly, 100 mg of PLGA was dissolved in 10 mL of dichloromethane (DCM) to prepare a 1% solution. In a separate beaker, 80 mg of PVA (0.4% w/v) and 200 mg of tobramycin (1% w/v) were dissolved in 20 mL of deionized water. The organic phase was then dropped into the aqueous phase under sonication for 3 min (amplitude 40%, pulse on 30 s, pulse off 5 s) using an ultrasonic processor (Sonics Vibra-Cell, SON-1 VCX130, probe number 422-17, Newtown, CT, USA). The formed oil-in-water (o/w) emulsion was mixed with 20 mL of 0.5% PVA and stirred with a magnetic stirrer for 2 h to remove the excess DCM. The nanoparticles were separated by centrifugation (Thermo Scientific, Darmstadt, Germany) at 10,000 rpm for 30 min and then washed three times with deionized water. Finally, the sample was freeze-dried (Telstar, Spain) for 48 h at −80 °C to obtain the NPs [20]. For the formulation of the LMWC-PLGA NPs, the same procedure was followed, but specific amounts of LMWC were added to the aqueous phase. In the end, five formulations were prepared and studied. One of these formulations did not contain chitosan, but was loaded with tobramycin, and was called F0 [21]. Another three formulations were prepared using 100 mg of PLGA (1%) dissolved in 10 mL of DCM, with 80 mg of PVA (0.4% w/v), 200 mg of tobramycin (1% w/v), and varying concentrations of LMWC dissolved in 20 mL of deionized water: F0.25 contained 0.25% w/v LMWC in the aqueous phase, F0.5 contained 0.5% w/v, and F1 contained 1% w/v. Finally, one more formulation was prepared using 0.5% w/v LMWC in the aqueous phase but without tobramycin; this was called the Control. The compositions of all the formulations are given in Table 1.
Characterization of PLGA and LMWC-PLGA Nanoparticles
The mean particle size (PS), polydispersity index (PDI) and zeta potential (ZP) of the NPs were determined using a Zetasizer Nano ZS90 instrument (Malvern Instruments, Malvern, UK) at 25 °C using dynamic light scattering (DLS). The zeta potentials were determined by placing diluted samples of the NPs in deionized water at 25 °C in clear disposable zeta cells. Based on the Smoluchowski equation, the electrophoretic mobility between the electrodes was converted to a zeta potential. All measurements were carried out in triplicate (n = 3).
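The Smoluchowski conversion performed internally by the instrument is straightforward to reproduce; a minimal sketch in Python, where the solvent constants for water at 25 °C are our assumed values, not settings reported in this study:

```python
# Smoluchowski equation: zeta = eta * mu / epsilon, with epsilon = eps_0 * eps_r.
EPS_0 = 8.854e-12       # vacuum permittivity, F/m
EPS_R_WATER = 78.5      # relative permittivity of water at 25 C (assumed)
ETA_WATER = 0.89e-3     # viscosity of water at 25 C, Pa*s (assumed)

def smoluchowski_zeta_mV(mobility_m2_per_Vs: float) -> float:
    """Zeta potential (mV) from electrophoretic mobility (m^2 V^-1 s^-1)."""
    zeta_V = ETA_WATER * mobility_m2_per_Vs / (EPS_0 * EPS_R_WATER)
    return zeta_V * 1e3

# Example: a mobility of 2.6e-8 m^2/(V s) corresponds to roughly +33 mV.
print(f"{smoluchowski_zeta_mV(2.6e-8):.1f} mV")
```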
The morphologies of the PLGA and LMWC-PLGA NPs were explored using scanning electron microscopy (SEM) (Thermo Scientific, Darmstadt, Germany), and the effect of the surface modification caused by LMWC was investigated. The samples were coated with a carbon film prior to analysis and then examined under the microscope.
The Fourier-transform infrared (FT-IR) spectra of the PLGA and LMWC-PLGA NPs were compared to study the interaction between PLGA and LMWC. A Shimadzu IR spectrophotometer (Shimadzu, Kyoto, Japan) with a high-performance diamond single-bounce ATR accessory (wavenumber 400-4000 cm−1, resolution 4 cm−1, 64 scans per spectrum) was used to record the spectra.
Drug Entrapment Efficiency and Loading Capacity
The drug entrapment efficiency (EE) and loading capacity (LC) were determined. The EE was determined from the free amount of tobramycin that was not encapsulated in the NPs, relative to the total amount of tobramycin used in each formulation. During the formulation of the NPs, the supernatant resulting from centrifugation was analyzed for free tobramycin using an HPLC-UV method [22]. The encapsulated amount of tobramycin was calculated by subtracting the free amount of tobramycin from the total amount in the dispersion (n = 3). The EE was calculated according to the following equation:

EE (%) = [(total tobramycin − free tobramycin in supernatant) / total tobramycin] × 100

The LC was determined by dividing the amount of tobramycin entrapped in the nanoparticles by the total sample weight:

LC (%) = (entrapped tobramycin / total sample weight) × 100

Tobramycin was measured using HPLC-UV (Shimadzu, Japan) according to Russ et al. with some modifications. A C18 column (5 µm, 4.6 × 250 mm) was used at 25 °C, and the λmax was set at 365 nm. The mobile phase was prepared by dissolving 2.0 g of tris(hydroxymethyl)aminomethane in 800 mL of water, after which 20 mL of 1 N sulfuric acid was added. The solution was then added to 1200 mL of acetonitrile. The flow rate was 1.0 mL/min. The samples were derivatized as follows: 400 µL of each sample was added to 1 mL of 2,4-dinitrofluorobenzene reagent (10 mg/mL in alcohol) and 1 mL of tris(hydroxymethyl)aminomethane reagent (15 mg/mL in water/dimethylsulfoxide, 20/80 v/v) in a 5.0 mL volumetric flask. The flasks were shaken, covered, and placed in an oven at 60 ± 2 °C for 50 min. After that, the flasks were removed and allowed to stand for 10 min at room temperature. Then, the samples were diluted with acetonitrile up to 5.0 mL.
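As a worked illustration of the two formulas above, here is a minimal sketch in Python; the numeric inputs are illustrative placeholders, not measured values from this study:

```python
def entrapment_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    """EE (%) = (total drug - free drug in supernatant) / total drug x 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def loading_capacity(entrapped_drug_mg: float, sample_mass_mg: float) -> float:
    """LC (%) = entrapped drug / total sample weight x 100."""
    return entrapped_drug_mg / sample_mass_mg * 100.0

# Illustrative numbers: 200 mg tobramycin used, 80 mg found free in the
# supernatant, 400 mg of freeze-dried nanoparticles recovered.
ee = entrapment_efficiency(200.0, 80.0)     # 60.0 %
lc = loading_capacity(200.0 - 80.0, 400.0)  # 30.0 %
print(f"EE = {ee:.1f} %, LC = {lc:.1f} %")
```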
Investigation of the Mucoadhesive Properties of PLGA and LMWC-PLGA Nanoparticles
The mucoadhesive properties of LMWC-PLGA NPs were evaluated by measuring the changes in the zeta potential (ZP) of the NPs when interacting with negatively charged mucin. A mucin stock suspension was prepared by adding mucin powder type III to a Tris buffer (pH 6.8) at a concentration of 1% w/w.
The mucin suspension was stirred overnight at 37 °C; it was then homogenized by ultrasonication at 40% amplitude for 3 min and centrifuged at 4000 rpm for 20 min. After that, the NPs were incubated at 37 °C with the mucin suspension. The ZP was measured at the beginning of the experiment and after 1, 2, 3, and 4 h of incubation. The alteration of the ZP of the NPs was used as an indicator that the NPs had interacted with the mucin [23,24]. The ZP was measured as explained in Section 2.3.
In Vitro Drug Release
To determine the in vitro release of the drug, tobramycin-loaded nanoparticles were dispersed in 2 mL of phosphate buffer solution (pH 7.4). The suspension was then placed in a cellulose dialysis bag (molecular weight cutoff 12-14 kDa) (Spectra/Por, Rancho Dominguez, CA, USA). The dialysis bags were immersed in tubes containing 8 mL of the phosphate buffer solution as the dissolution medium. The tubes were then transferred to a 37.0 °C water bath shaking at 100 rpm [25]. At allocated time intervals, 5 mL of the dissolution medium was withdrawn and replaced with fresh medium. The tobramycin concentration in each sample was determined by the same HPLC-UV procedure used for the determination of the EE.
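Because 5 mL of the dissolution medium is withdrawn and replaced at each time point, the cumulative release should be corrected for the drug removed with earlier samples. A minimal sketch in Python of this standard correction; the concentrations shown are illustrative, not data from this study:

```python
def cumulative_release_mg(concentrations_mg_per_ml, v_medium_ml=8.0, v_sample_ml=5.0):
    """Cumulative drug released (mg) at each sampling time.

    Each measured concentration reflects the medium after earlier withdrawals,
    so the drug carried away with previous 5 mL samples is added back.
    """
    cumulative, removed = [], 0.0
    for c in concentrations_mg_per_ml:
        cumulative.append(c * v_medium_ml + removed)
        removed += c * v_sample_ml
    return cumulative

# HPLC-measured concentrations (mg/mL) at successive time points (illustrative):
print(cumulative_release_mg([0.05, 0.09, 0.12]))  # [0.40, 0.97, 1.66]
```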
Antimicrobial Activity of Tobramycin Nanoparticles
The Minimal Inhibitory Concentration (MIC) was measured using P. aeruginosa (PA01). P. aeruginosa was grown in LB overnight at 37 °C with an agitation rate of 100 min−1, then diluted to an optical density (OD550) equivalent to 1 × 10⁷ cfu/mL. Aliquots of 100 µL of the OD550-adjusted overnight culture were added in triplicate to each well of a 96-well microtiter plate. Each well also contained 100 µL of varying concentrations of tobramycin (free tobramycin or the equivalent amount of tobramycin in F0, F0.25, F0.5 and F1). The weight of the nanoparticles used in preparing these concentrations was determined based on the EE of each formulation. The concentrations were prepared using the serial dilution technique; the plates were then incubated for 24 h at 37 °C in an orbital incubator (JSR Shaking Incubator, Gongju, Korea). A negative control consisting of uninoculated broth was also included in triplicate on each plate. The MIC was determined as the lowest concentration at which no growth was visually observed in the inoculated wells.
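The two-fold serial dilution and the MIC read-out follow a simple scheme; a minimal sketch in Python, with an illustrative starting concentration and growth pattern (not data from this study):

```python
def serial_dilution(start_conc: float, n_wells: int):
    """Two-fold serial dilution series across a row of wells."""
    return [start_conc / 2**i for i in range(n_wells)]

def read_mic(concentrations, growth_observed):
    """MIC = lowest concentration with no visible growth (None if all wells grew)."""
    clear = [c for c, grew in zip(concentrations, growth_observed) if not grew]
    return min(clear) if clear else None

concs = serial_dilution(64.0, 8)   # 64, 32, 16, 8, 4, 2, 1, 0.5 (e.g., ug/mL)
growth = [False, False, False, True, True, True, True, True]
print(read_mic(concs, growth))     # 16.0
```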
Preparation of Biofilms Using the Calgary Biofilm Device/MBEC Assay
The biofilm of the Pseudomonas bacteria was grown using the Calgary Biofilm Device (commercially available as the MBEC Assay™ for Physiology & Genetics (P & G); Innovotech Inc., Edmonton, AB, Canada) according to the method previously described by Ceri et al. [26]. This device consists of a 96-well plate with a lid bearing polycarbonate (PC) pegs, which protrude into each well containing bacterial culture, allowing the growth of 96 identical biofilms per device. An overnight culture of bacteria was adjusted to an optical density (OD550) equivalent to 1 × 10⁷ cfu/mL. Each well of the Calgary Biofilm Device was inoculated with a 150 µL aliquot of the standardized bacterial suspension. The lid carrying the pegs was placed carefully into the wells, and the device was incubated at 37 °C in an orbital incubator for 48 h in a humidified compartment to allow the formation of biofilms. After the first 24 h of incubation, the bacterial inoculum was replaced by fresh growth medium. Following 48 h of incubation, the pegs were placed in a fresh 96-well rinse plate (each well containing 200 µL of fresh growth medium) and gently rinsed to remove any planktonic or loosely attached bacteria before exposure to the tobramycin formulations F0, F0.25, F0.5 and F1.
The minimum biofilm eradication concentrations (MBECs) were determined in triplicate [26]. Briefly, the 48-h-old biofilms were challenged with a range of concentrations of the tobramycin formulations F0, F0.25, F0.5 and F1 for 24 h at 37 °C in a gyrorotary incubator (Binder, Tuttlingen, Germany). After this 24-h challenge, the pegs were gently rinsed three times in phosphate-buffered saline (PBS), placed in a second 96-well plate (recovery plate) containing 200 µL of fresh growth medium per well, and sonicated for 10 min. Following sonication, the lid carrying the pegs was discarded, and the recovery plates were incubated for 24 h at 37 °C in the orbital incubator. The MBEC was determined as the lowest antibiotic concentration that prevented the regrowth of the bacteria from the treated biofilm. A negative control was also included.
Results
PLGA nanoparticles were prepared using the emulsion-solvent evaporation method. The surfaces of the NPs were modified with LMWC in order to enhance their mucoadhesive properties.
Characterization of PLGA and LMWC-PLGA Nanoparticles
The mean particle size and the zeta potential of the NPs prepared with and without LMWC are presented in Figure 1. The PLGA nanoparticles prepared without tobramycin (Control) had the smallest size among all the NPs under study (187 ± 6.19 nm). When tobramycin was loaded, the size of the particles increased. Further, coating the PLGA NPs with LMWC increased the particle size in direct relation to the LMWC concentration: nanoparticles prepared using 50, 100, and 200 mg of LMWC had average sizes of 309.57 ± 1.12, 451.8 ± 7.19, and 575.77 ± 2.67 nm, respectively, whereas the uncoated NPs had a size of 220.7 ± 1.77 nm.
The mean diameter measured by Zetasizer analysis of the NPs was confirmed by SEM. The nanoparticles showed a spherical morphology, as shown in Figure 2. Coating the NPs with LMWC affected the surface charge of the NPs. The PLGA nanoparticles had a negative zeta potential (−2.8 ± 0.1 mV), while the LMWC-PLGA nanoparticles had positive charges. Nanoparticles prepared using 50, 100, and 200 mg of LMWC gave particles with average charges of +34.0 ± 1.9, +50.1 ± 6.5, and +33.47 ± 1.0 mV, respectively. The surface charge was found to have no direct relation to the concentration of LMWC, as shown in Table 2.
Although the changes in the surface charges when the NPs were coated with LMWC provided proof of a successful coating, the coating was further confirmed using FT-IR studies. The IR spectra of the F0 and F0.5 nanoparticles are shown in Figure 3, which compares the FTIR spectra of the uncoated PLGA NPs (F0) and the LMWC-PLGA NPs (F0.5) in reference to the chitosan spectrum. A characteristic band was observed at 3447 cm⁻¹, related to the stretching of the -NH2 and -OH groups in the LMWC (Figure 3). A band corresponding to amine stretching at 1110 cm⁻¹ was also seen in the infrared spectrum of the native LMWC (Figure 3).
The characteristic peaks of LMWC that were observed in F0.5 but not F0 clearly proved that LMWC was deposited on the surface of the PLGA nanoparticles during the coating process. From the PLGA NPs (F0 in Figure 3), the C-H stretching in the methyl groups at 1454 cm⁻¹, the C=O at 1740 cm⁻¹, the C-H stretching vibrations at 2995 and 2945 cm⁻¹, and the OH stretching at approximately 3500 cm⁻¹ can be noticed. The C=O peak was also present in the spectrum of the LMWC-coated PLGA nanoparticles (F0.5 in Figure 3), arising from the PLGA core of the coated nanoparticles. These results are in agreement with other studies conducted with similar nanoparticles [27,28].
Drug Entrapment Efficiency
In this study, the EE% of tobramycin in the NPs was very high. The EE of tobramycin in the NPs ranged from 83.74% (167.48 mg) to 88.47% (176.94 mg), as shown in Table 2. The coating with LMWC did not show any obvious effect on the EE of tobramycin.
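The reported EE figures are internally consistent with a 200 mg initial drug load; note that the 200 mg value is inferred from the numbers themselves, not stated in this section. A quick check:

```python
# Back-of-the-envelope check of the EE values in Table 2. The 200 mg
# initial tobramycin load is inferred from the figures, not stated here.
initial_drug_mg = 200.0                       # assumed total drug added
for entrapped_mg in (167.48, 176.94):
    ee_pct = entrapped_mg / initial_drug_mg * 100
    print(f"{entrapped_mg} mg entrapped -> EE = {ee_pct:.2f}%")
# -> 83.74% and 88.47%, matching the reported range
```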
Mucoadhesive Properties of PLGA and LMWC-PLGA Nanoparticles
In order to evaluate the interaction between LMWC-PLGA NPs and mucin, zeta potential measurements of the mucin-NPs' dispersions were conducted, and the results are presented in Figure 4. At the beginning of the experiment, PLGA nanoparticles showed negative zeta potentials while all LMWC-PLGA NPs had positive zeta potentials.
As the incubation time passed, the potentials of the LMWC-PLGA NPs decreased slightly due to their interaction with mucin, but still recorded high positive values in comparison to PLGA NPs.
In Vitro Drug Release
The release of tobramycin in vitro from the different nanoparticle formulations is shown in Figure 5. The pure drug was completely available in the solution (99%) after 30 min. It is evident that the composition of the NPs affected the release of tobramycin in vitro. All NPs showed an initial burst release within the first 2 h, followed by a relatively slow release of the drug. The release of tobramycin from the coated NPs was slower in comparison to the uncoated NPs. After two days, the uncoated PLGA nanoparticles released 86.82 ± 2.3% of the entrapped drug, while the LMWC-PLGA NPs released 71.81 ± 3.1, 65.52 ± 1.8, and 59.53 ± 2.0% for F0.25, F0.5 and F1, respectively.
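Release profiles of this kind are often summarized with the Korsmeyer-Peppas model, Mt/Minf = k·t^n. The sketch below is not the authors' analysis: the time points and release fractions are invented, and only the first 60% of release is fitted, the usual validity range of the model.

```python
# Illustrative Korsmeyer-Peppas fit (log-log linear regression) to a
# hypothetical cumulative-release profile; not the authors' analysis.
import numpy as np

t = np.array([1, 2, 6, 12, 24, 48], dtype=float)            # hours
released = np.array([0.28, 0.38, 0.49, 0.57, 0.66, 0.72])   # fraction released

# log(Mt/Minf) = log(k) + n*log(t); fit only points with <= 60% release.
mask = released <= 0.60
n, log_k = np.polyfit(np.log(t[mask]), np.log(released[mask]), 1)
print(f"n = {n:.2f}, k = {np.exp(log_k):.2f}")
# For spheres, n near 0.43 suggests Fickian diffusion; larger n indicates
# anomalous transport, as would be expected with a swelling chitosan layer.
```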
Antimicrobial Activity of Tobramycin Nanoparticles
The antimicrobial activity of tobramycin, as a raw material or loaded in the different NPs prepared in this study, was tested against a planktonic culture of P. aeruginosa (PA01). The MIC value for each formulation was measured. The results showed that tobramycin alone and all four formulas (F0, F0.25, F0.5, and F1) inhibited bacterial growth, whereas the control NPs (not loaded with tobramycin) did not show any bacterial inhibition. Tobramycin alone had an MIC value of 1 µg/mL, while F0, F0.25, F0.5, and F1 had MIC values of 128.15, 32.25, 4.95, and 2.9 µg/mL, respectively. The results are summarized in Table 3.

Effect of Tobramycin PLGA NPs on P. aeruginosa Biofilms

The P. aeruginosa (PA01) biofilms grown on the Calgary Biofilm Device were challenged with tobramycin, F0, F0.25, F0.5, and F1. As shown in Table 3, tobramycin had an MBEC value of 7.8 µg/mL, compared to 512, 250, 15.6, and 125 µg/mL for F0, F0.25, F0.5, and F1, respectively.
Discussion
The presence or absence of LMWC in the formulation, and its concentration, affected the PLGA NPs' sizes and surface charges. Chitosan is a hydrophilic polymer that swells when dispersed in water, and the water viscosity increases as the chitosan concentration increases [29,30]. The greater increase in particle size when the NPs were coated with LMWC may be related to its effect on the viscosity of the liquid layer adjacent to the NPs. As the LMWC concentration increases, this layer is expected to enlarge and become more viscous, which may explain the increment in the size of the surface-modified PLGA particles. Another reason that may explain this increment in size is the larger amount of LMWC deposited on the surface of the PLGA nanoparticles as the concentration of LMWC increased [23,31].
The PLGA nanoparticles had a negative zeta potential because of the free carboxyl groups of PLGA on the surface of the NPs. On the other hand, the LMWC-PLGA nanoparticles had high positive charges due to the new amine functional groups on the NPs' surfaces introduced by the LMWC [24]. Although the size of the NPs increased as the amount of LMWC used in the formulation increased, the zeta potential did not show the same direct relation to the amount of LMWC. It has been shown that the charge density on the surface of chitosan nanoparticles depends on the nanoparticle size and the amount of chitosan used in the preparation [32,33]. It is expected that as the amount of chitosan used in the preparation of the NPs increases, the number of free amine groups will also increase, leading to higher zeta potentials. On the other hand, the effect of the particle size on the zeta potential is more complicated. In general, the smaller the particles of a given sample, the greater the total surface area per unit weight, but at the same time, the individual particles themselves have smaller surface areas [34]. Therefore, it was observed that as the sizes of the chitosan NPs increased, the zeta potentials increased until a maximum value was reached, after which the charge began to decrease again. Finally, the high positive surface charge of the LMWC-PLGA NPs is expected to prevent aggregation of the particles [30,35].
Electrostatic interaction was suggested as the mechanism underlying the mucoadhesive interaction of the LMWC with the mucin. The zeta potential decreased after incubation with mucin in all LMWC-coated nanoparticles. The interaction between the sialic acid groups of the mucin (negatively charged) and the surface layer of LMWC (positively charged) on the LMWC-PLGA NPs was expected to decrease the zeta potential. After four hours of incubation, the surface charges of the LMWC-PLGA NPs were almost unchanged, which may indicate the stability of the electrostatic interaction between the LMWC-coated nanoparticles and the mucin. On the other hand, the uncoated PLGA NPs (F0) showed a stable negative charge, indicating that there was no interaction with the mucin [36,37].
All the nanoparticle formulations showed a biphasic release, with an initial burst release of the drug followed by a sustained release. This biphasic drug-release behavior has been reported in other works on PLGA NPs, where the high burst release at the beginning was related to free or weakly bound drug in the nanoparticles [38-40]. It is well known that PLGA NPs degrade through the hydrolysis of the ester linkages between their lactic and glycolic acid oligomers in an aqueous medium, which then causes the drug to be released [38]. On the other hand, LMWC swells in water and forms a hydrogel layer that controls drug diffusion. Coating the NPs with LMWC slowed down the drug release, and this reduction was related to the amount of LMWC used in the formulation, indicating that the LMWC layer on the nanoparticle surface serves as an additional barrier against drug diffusion [10].
All the formulas in this study were found to inhibit bacterial growth except the control formula, which was not loaded with any drug. There was a clear difference in the MIC between the four formulations: F1 had the lowest MIC, with a value of 2.9 µg/mL, and F0 had the highest, with a value of 128.15 µg/mL. The MIC values may be related to different variables such as the particle size or charge. The fact that the control NPs did not show any antimicrobial activity confirms that the antimicrobial activity was elicited by tobramycin and not by any other ingredient in the formulation. Nonetheless, there was a clear relationship between the amount of LMWC used in the formulation and the NPs' antimicrobial activity: in general, increasing the amount of LMWC in the formulations enhanced the NPs' microbial inhibition. This effect may stem from the ability of the coated NPs to adhere to the microbial membrane better than the uncoated NPs, allowing the released tobramycin to reach the microbes faster and at higher concentrations.
Tobramycin alone exhibited the lowest MBEC, and the closest MBEC value was achieved by F0.5, which was double the value of tobramycin alone. Generally, the encapsulation of tobramycin into nanoparticles resulted in higher MBEC values, which may be attributed to the gradual release of the drug from the NPs, as shown by the drug release study [41,42]. However, the ability of the NPs to control the drug release over a long time is expected to improve the overall efficacy of the formulation in comparison to tobramycin alone [43,44].
Conclusions
Modified, mucoadhesive tobramycin nanoparticles targeting P. aeruginosa for the treatment of cystic fibrosis were prepared successfully in this work. The prepared formulations could improve patient compliance due to their prolonged action, which would be beneficial in reducing the overall frequency of dosing and minimizing the side effects. Coating the NPs with LMWC enhanced their mucoadhesion and sustained the drug release. The concentration of chitosan used in the formulation is important in determining the physicochemical properties, the release profile and the antimicrobial activity of the formulation.
The Misapplication of Democracy and the Plight of the Individual in the Drama of Henrik Ibsen
Let me begin by stating the obvious. It is now common knowledge in Henrik Ibsen scholarship that the Norwegian playwright had a very uneasy relationship with politicians. The dramatist took delight in satirizing the pomposity and hypocritical practices of politicians and other public officials through the use of a flowery rhetorical style characteristic of platform politicians. A close reading of critical writings on Ibsen's major plays with a political agenda reveals that most reputed commentators conclude that the author directs his criticism against the democratic form of government. Some critics are even of the opinion that Ibsen in his works favours aristocracy as an alternative to democracy. What is intriguing about the claims of these critics, however, is that they do not actually take the time to define what democracy as a form of government is all about before illustrating how the dramatist writes against it in his plays. The central concern of this paper, therefore, is to demonstrate from a new historicist standpoint that Ibsen in his drama does not completely condemn democracy as a form of government, except when it comes to the application of some democratic principles which are hostile to the welfare of the individual.
Introduction
According to New Historicists, works of art can be read as subversive discourses offering a critique of the prevailing socio-political and economic ideologies of the society in which they were written. In his essay titled "The Poetics of Culture," as quoted by Kreiswirth M. and Michael Groden [1], Louis Montrose posits that New Historicism "is concerned with how a literary work offers a genuinely radical critique of authority, or how a text articulates views that threaten political orthodoxy." In other words, "a willingness to explore the political potential of writing is a distinguishing mark of new historicism" (535). The quickening force behind the writing of this paper is to critically examine how conservative middle-class politicians misapply some democratic principles in ways that inhibit the progress and self-fulfillment of the individual. This study will focus on Ibsen's A Doll's House [2], The Pillars of Society [3], An Enemy of the People [4] and Rosmersholm [5]. Before proceeding with our discussion, it is important to determine, however briefly, what democracy is all about. This will enable us to clearly fathom the extent to which the plays are subversive discourses against the political practices of the period and society in which they were written. According to Cherif Bassiouni et al. [6]:
Democracy aims essentially to preserve and promote the dignity and fundamental rights of the individual, to achieve social justice, and to foster the economic and social development of the community… The online Wikipedia dictionary defines democracy as a political form of government in which governing power emanates from the people, either by means of elected representatives or through a referendum. This means that in a democratic system, government is the servant of the people and is answerable to them, since the power it wields comes directly from the people. This corroborates Abraham Lincoln's definition, which states that democracy is government of the people, by the people and for the people. In order to be labeled a modern democracy, a country needs to fulfill some basic requirements, and these requirements need not only be written down in the constitution but must be upheld and implemented in everyday life by the governing authorities and politicians. Some of the basic principles of democracy include: the guarantee of basic human rights, majority rule, and the separation of powers, that is, the executive, parliament and the judiciary. A democratic country must also promote freedom of speech and opinion, equality of all citizens before the law, religious liberty, and a general and equal right to vote in competitive, free, transparent and fair elections. Individual freedom should be guaranteed so that citizens can vote in their personal interest. There should also, above all, be good governance. By good governance we mean that focus should be placed on the general welfare of the entire community, and that there should be an absence of corruption, one of the cankerworms that has eaten deep into the fabric of modern society. Albert Sama [7] in his book entitled Nation Building, Governance and Human Rights intimates that "for governance to have positive results, it must have the following traits: participation, rule of law, transparency, equity, inclusiveness, consensus orientation, responsiveness, efficiency and accountability" (92). Sama further notes that "the test of good governance in a country is made manifest through its results… the vibrancy of the civil society, and job creation by both the state and individuals."
Critics who argue that Ibsen is an anti-democrat par excellence may certainly have to convince us that the playwright in his works is against all the lofty principles of democracy highlighted above. Put differently, critics who take Ibsen for a complete anti-democrat do not adequately substantiate or convincingly prove their point. For example, Mordecia Roshwald [8] has noted that "Ibsen's attack on democracy is clearly exaggerated and vulnerable" (227). In the words of G. K. Chesterton [9], "the playwright made no disguise for his passionate hatred of democracy" (222). In his article titled "Henrik Ibsen: Anti-Democrat and Individualist," K. Balzersen [10] opines that "the primary anti-democratic contribution of Ibsen is arguably An Enemy of the People" (5). Balzersen goes further to note that Ibsen in the play advocates aristocracy in the place of democracy. Like Balzersen, George Bernard Shaw [11] in The Quintessence of Ibsenism concludes that the playwright in An Enemy of the People is against democracy. Shaw notes that in the past, men used to submit "to kings and consoled themselves by making it an act of faith that the king was always right… in the same way, we who have to submit to the majorities make it blasphemy against democracy to deny that the majority is always right, although as Ibsen says, it is a lie." To Martin Esslin [12], Ibsen in An Enemy of the People advocates aristocracy as an alternative to democracy. Didachos Afuh Mbeng [13] argues that Ibsen in the plays under study is an ambivalent writer who presents both the democratic and aristocratic forms of government without necessarily being in support of either of them. Without any intention of condemning all the aforementioned critical views, this study seeks to demonstrate that the playwright was only a partial anti-democrat, especially as far as individual liberty and the democratic principle of majority rule were concerned. The dramatist was concerned more with showing how middle-class conservative politicians misused democracy for their personal and political aggrandizement. A critical reading of his plays reveals that democracy was a mechanism functioning only on paper in the middle-class society of his day. In the words of Harold Bloom [14], Ibsen was writing at a time "when society professing liberalism in name only, distanced itself from the ideals of liberty and equity" (28).
Political Conservatism, the Majority and the Individual
To begin with, Dr Stockman, the protagonist of An Enemy of the People, is a victim of the hypocritical practices of self-seeking conservative politicians. The physician discovers that the town's water system, managed by his brother, Peter Stockman (the city mayor), is polluted. It is with excitement that he goes to his brother, the leading politician of the town, to discuss the matter so that something can be done urgently to rescue the situation before an epidemic breaks out as a result of the poor sanitary conditions. But Peter Stockman, who thinks only in monetary terms, estimates that it will be too costly for him to undertake any repair works on the baths. Moreover, the repair works could last as long as two years, which for Peter would be a great loss. Peter argues that closing down the baths for two years will certainly deter tourists from visiting the town, thereby leading to a decrease in the yearly income of his municipality.
After failing to reach any consensus with his brother, the doctor decides to inform the general public through the local newspaper, The People's Daily Messenger. The newspapermen promise to stand by him in his fight for the truth that is badly needed for the cleansing of the society. The newspaper's editor-in-chief, Hovstad, reassures Doctor Stockman that the time has come for him to "break up that ring of pig-headed reactionaries who hold all the powers." Hovstad further states that he cannot let such a unique opportunity slip away. In his words, "the myth of the infallibility of the ruling class has to be shattered. It has to be rooted out, like any other superstition" (Act 2, 307). At first, everything seems to be in the doctor's favour. He naively thinks that with the journalists by his side, he is probably going to carry the day. But Peter Stockman, who doubles as Mayor and chairman of the board of directors in charge of water, does everything in his power to frustrate all his plans.
After his unsuccessful attempt to get the support of the local newspapermen, Doctor Stockman plans to give a public lecture during which he will inform popular opinion, that is, the entire community, about his discovery at the baths. Nobody offers him a venue for the meeting except his friend, the sea captain Horster, who is himself little concerned with the daily affairs of the community, since most of his time is spent working on the high seas. Although it is Dr Stockman who has taken the initiative to organize the meeting, the newspaper publisher Aslaksen, together with the masses mobilized by the Mayor, takes over control of the proceedings. Aslaksen is appointed chairman of the meeting to stop Dr Stockman from saying anything about the town's baths. The doctor, however, struggles to utter a few incoherent statements in anger when he realizes that everybody in the hall is against him. After a protest and some disturbing noise from the crowd, the doctor continues: "Well… enemies of truth and freedom in our society" (Act 4: 530). After listening to the doctor's cynical and distracted statements, the masses enjoin him to be more explicit by telling them who exactly "the most dangerous enemies are." The doctor further says: "Yes, you can be sure that I will name them, because that is exactly the discovery I made yesterday. The most dangerous enemy of truth and freedom among us is the compact majority. Yes, the damned, compact Liberal majority; that is it, now you know it" (Act 4: 530). As soon as Doctor Stockman makes these statements, the crowd, spearheaded by publisher Aslaksen, utters cat-calls obliging the physician to withdraw his pronouncements. But the fearless doctor responds: "Never, Mr. Aslaksen; it is the great majority in our society that robs me of my freedom, and that wants to forbid me from telling the truth."
When Mayor Peter Stockman is given the floor, he attacks his brother, Doctor Stockman, in a highly rhetorical speech, for trying to tarnish the image of the town. He says that Doctor Stockman "wants to prove that the administration blundered in constructing the springs." The politician further shamelessly states, like his counterpart Consul Bernick in The Pillars of Society: "Now all you have got to do is ask yourself a simple question: has anyone of us the right, the democratic right as they call it, to speak of minor flaws in the springs, to exaggerate the most picayune faults… (Cries of no! no!) and to attempt to publish these defamations for the whole world to see? We live and die on what the outside world thinks of us" (Act 4: 536).
The mayor then goes on to enjoin his fellow countrymen to join forces with him and fight against what he calls "a common enemy." Doctor Stockman is unanimously declared "an enemy of the people." He is stoned and fired from his position as the town baths' physician. A campaign is launched for no one to use him as their personal doctor. His only friend, Captain Horster, equally gets fired for letting the doctor use his residence for the meeting. Even the doctor's daughter, a High School teacher, is dismissed from work. The doctor's innocent sons, Ejlif and Morten, are also sent away from school for a few days. The fact that the boys are thrown out of school as a multiplier effect of their father's unwanted quest for truth shows how even children's rights were violated in the hypocritical middle-class Norwegian society by the conservative party men in power.
Note should be taken here of Ibsen's realistic use of political discourse, as seen in the utterances of Mayor Peter Stockman. His language is reminiscent of that of conservative platform politicians who say one thing and mean another. He talks of the citizens' "democratic rights" to freedom of speech when, in essence, he is interested in stopping Doctor Stockman from speaking the truth in public. He deprives his brother of the democratic right to freedom of speech. Peter is bent on putting up window dressing for "the outside world" to see. "Minor flaws in the springs" must be covered up in order not to scare potential customers away.
Ibsen shows his indignation against the democratic principle of majority rule in the words of Doctor Stockman: "I am against the age-old lie that the majority is always right… the majority never has truth on its side, I say. This is one of these society lies that a free-thinking man must revolt against… well, well, you can shout me down, but you cannot reply. The majority has might on its side, sadly, but it is not in the right. I and the other few individuals are in the right" (Act 4: 532). The doctor further adds that "the majority is never right until it does right." Doctor Stockman is convinced that what he is saying is the truth, no matter the general outcry against him by the manipulated and ignorant masses. He is sincere in his convictions and stands by them against all odds. Democracy is projected here as the dictatorship of the majority. Ibsen's message in An Enemy of the People is that democracy hardly considers the opinion of the individual, so long as the majority always carries the day through the vote. This unfortunately means that even if an individual has all it takes to rescue a situation from getting worse, as is the case with Doctor Stockman in the play at issue, he may never be given the opportunity or listened to, since the democratic majority will always have a crushing effect on "he who stands most alone." This is exactly the fate that befalls Doctor Stockman in An Enemy of the People. Under the corrupting influence of the all-powerful Mayor, Doctor Stockman is ironically rejected and stigmatized by all the members of the community whose welfare he struggles to protect. This is the sad reality we notice all around our contemporary societies, wherein the good intentions of some members of civil society are often frustrated by politicians in the so-called majority democratic political parties. The playwright is often quoted to have said, "I do not believe in political measures nor have much confidence in the altruism and good will of those in power."
Socio-Economic Corruption and the Individual
Another middle-class conservative and influential personality who, like Peter Stockman in An Enemy of the People, mismanages the democratic principles of good governance and respect for human rights is Consul Bernick in The Pillars of Society. Bernick is the leading politician and financial magnate of the town. His family is regarded, like that of the Rosmers in Rosmersholm, as the model family in the community. Bernick is a capitalist who sacrifices everything for his economic purposes. Ibsen, like Karl Marx, was an enemy of capitalism because of its exploitation of the masses. While agreeing with Marx that capitalism worked to the detriment of the workers, Ibsen, unlike Marx, did not advocate a class war wherein labourers unite and, through a violent revolt, overthrow their masters. What Ibsen seems to suggest in his plays is mutual dialogue between masters and workers in finding solutions to common problems.
Ibsen nonetheless makes clear his hatred of capitalism in the hypocritical and corrupt commercial practices of Consul Bernick and his colleagues in The Pillars of Society. Bernick and his local business partners, such as Rumel, Vigeland and Sandstad, are generally regarded as pillars on which the society stands for its welfare. Consul Bernick, for instance, co-runs a shipyard with American capitalists, and they care very little about the welfare of workers. In Act Two, the Consul quarrels with Aune, the shipwright, simply because the latter is unwilling to repair "The Indian Girl," a ship owned by the Americans with whom he manages the shipyard. He fears being blamed by his foreign counterparts. He is interested in protecting his public image, especially in the eyes of his foreign partners. Local capitalists like Bernick exploit their fellow countrymen in order to satisfy their foreign business partners.
The Consul quarrels with the shipwright for the second time on issues related to the arbitrary replacement of workers with new machines.
Bernick-Yes, for your own limited circle, for the working class. Oh, I know all about your political agitations. You make speeches, you stir people up. But when a chance for tangible progress turns up, as now with the machines, you won't collaborate. You are afraid.

Aune-Yes, I am certainly afraid, Mr Bernick. I am afraid for all the people the machines rob of their bread. You often speak, sir, of considering the community, but I think the community has its duties too; its inventions should not be set to work before the community has educated a generation that can use them.

Bernick-You read and think too much, Aune. You get no good from it. That is what makes you discontented with your position.

Aune-It is not that, sir, but I cannot bear to see one man after another discharged and losing his livelihood because of these machines (Act 3, 56).

This dialogue demonstrates that capitalists like Bernick do not take into consideration the negative effects of the introduction of machines on the individual worker. All they see is the benefit that will accrue from such ventures. Bernick is a capitalist who uses rhetoric to keep the workers subservient to him. He blames Aune for reading too much, stating that he gets no good from such intellectual activity. He would want Aune to remain blind and humble about his subordinate position in the shipyard so that he can continue to exploit him. Bernick, like Mayor Peter Stockman in An Enemy of the People, tells the shipwright that the individual must always be ready to subject himself to the community or the majority: "Well, if nothing else can be done, the lesser must give way to the greater; when all is said, the individual must be sacrificed to the majority. That is the only answer I can give you, and that is the way things work in this world. But you will not do anything else, because you don't want to prove the superiority of machines over hand work" (Act 1, 58). Consul Bernick, as a matter of fact, sacrifices his love, family and self to his commercial aims. The individual, he says, must be ready to sacrifice himself to the maximization of profit. Bernick discharges his workers indiscriminately without providing them with any alternative means of sustaining a livelihood. Capitalists like Bernick therefore abuse the natural democratic rights of their workers to share in the work which makes wise use of the earth's material resources. By dismissing them from work indiscriminately, Bernick leaves them with no other means to support themselves and their immediate families.
Nils Krogstad in A Doll's House is yet another middle-class government official whose corrupt practices have a negative effect on the welfare of other individuals. Krogstad in the play doubles as a banker and a lawyer. Through him, Ibsen satirizes professional lawyers who, instead of promoting justice in the democratic society in which they live, are ironically the first to go against the law. Doctor Rank describes Krogstad as "a moral invalid." Talking to Kristine about Krogstad, he says:
"I don't know if you have in your neck of the woods a type of person who scuttles about breathlessly sniffing out hints of moral corruption, and then maneuvers his victims into some sort of key position where he can keep an eye on them. It is the healthy that are left out in the cold these days in society" (Act 2, 467).

This excerpt is a clear indication of the fact that Ibsen is a socio-realistic writer who devotes his art to the criticism of contemporary societal vices. Krogstad is a state agent whose character, as Doctor Rank puts it, "is rotten to the root." His corrupt practices are reminiscent of twenty-first-century societies wherein people who occupy privileged positions more often than not manipulate the administrative machinery so as to procure lucrative positions or jobs for their family members and friends, even if they have no qualification for the jobs. After securing such unmerited positions for their unqualified and incompetent relatives, the godfathers keep watch over their submissive servants in matters of remuneration. Here, as highlighted earlier in this paper, the freedom of the individual is infringed upon. Those whom Krogstad manipulates into key positions are obliged to work following his dictates, given that any deviation from his directives may cost such individuals their undeserved jobs. Krogstad's corrupt practices are damaging because no society can progress if those who possess the qualities, qualifications and potential to work are left "in the cold" whereas the unskilled and unqualified are given pride of place. This is unfortunately the sad reality of many modern societies, wherein gaining employment in certain public and private services depends on who you know and which tribe or ethnic group you come from. Such discriminatory practices work against the democratic principles of equality, justice and freedom of all human beings. As a result, people no longer have trust in themselves, which kills creativity and even confidence in their leaders, who are often the first to violate with impunity the laws that are voted and promulgated.
Conclusion
We set out at the beginning of this inquiry to investigate the thesis that Ibsen was not completely against democracy, as some of his renowned critics uphold. The playwright was rather indignant only about the poor application of democracy by those who ran the affairs of the state. Janet Garton [15] quotes Ibsen's letter of January 3rd, 1882 to his friend Georg Brandes, in which the playwright expressed his worry about the state of affairs back in his home country on the advent of democracy. He regretted the fact that democracy had done very little to enhance the individual's freedom. Democracy, as he observed, was tailored to suit only the "party line." The playwright noticed that his fellow countrymen were still largely "narrow-minded" at a time when democracy, the most popular form of government, had been introduced in their country. He was, as Edwin Slosson [16] puts it, "a disillusioned democrat" (253). He expected his native brothers and sisters to rise above their previous provincialism and move forward with the fast-changing times.
Ibsen succinctly gives a positive view of democracy in the character of Rosmer in Rosmersholm. This is evident in the following conversation between Rosmer and Doctor Kroll.
Dr Kroll-Rosmer, I will never get over this. (Looks sadly at him.) Oh, that even you could give yourself to the forces of decadence and corruption that are undermining our miserable country.

Rosmer-It is the forces of liberation I want to give myself to.

Dr Kroll-Oh, I know all about that; that is what they call it, both the pied pipers and the tools that get led astray. But do you really think there is any liberation to be found in the spiritual position that is filtering through our whole society?

Rosmer-I am not committed to the spirit that destroys, nor to any faction. I want to bring together people from all sides, as many as I can reach, as honestly as I can. I want to live and use all my vital energies towards that one end: the creation of true democracy in this land.

Dr Kroll-Don't we have democracy enough? For my part, I think we are still on our way down into the muck and mire, where only the lowest of the low can thrive.

Rosmer-Exactly why I want democracy to assume its rightful role.

Dr Kroll-What role?

Rosmer-To elevate all our people into noblemen.

Dr Kroll-Through what means?

Rosmer-By liberating their minds and tempering their wills.

Dr Kroll-You are a dreamer, Rosmer. Will you liberate them and temper them? (Act 1, 518-519)

The preceding dialogue clearly demonstrates that Ibsen was not completely against the democratic form of government, as many of his critics argue. He was rather interested in castigating the shortcomings of egoistic politicians whose failure to properly apply the human-friendly principles of democracy worked against the individual's quest for freedom and self-realization. The playwright did not merely criticize the failings of those who wield political power; he also suggested what democratic leaders should do to make the most popular form of government in modern society more workable. The great task of democracy, Ibsen says in the words of Rosmer, is "to elevate our people into noblemen." It is only through great men and women who are endowed with the utmost reasonable freedom as individuals that the realization of the lofty principles of democracy is possible. By liberating people's minds from conventional practices and "tempering their wills," as Rosmer puts it, democracy will itself be liberated from petty party politics.
Inhibition of EED activity enhances cell survival of female germline stem cells and improves oocyte production during oogenesis in vitro
Ovarian organoids, based on female germline stem cells (FGSCs), are nowadays widely applied in reproductive medicine screening and in exploring the potential mechanisms of mammalian oogenesis. However, key issues still urgently need to be resolved in ovarian organoid technology, one of which is to establish a culture system that effectively expands FGSCs in vitro while maintaining the unipotency of FGSCs to differentiate into oocytes. Here, FGSCs were treated with EED226 and examined for proliferation and differentiation in vitro. According to the results, EED226 specifically increased FGSC survival by decreasing the enrichment of H3K27me3 on the Oct4 promoter and exon, enhancing OCT4 expression, and inhibiting P53 and P63 expression. Notably, we also found that EED226-treated FGSCs differentiated into more oocytes during oogenesis in vitro, and the resultant oocytes maintained a low level of P63 versus control at early-stage development. These results demonstrated that inhibition of EED activity promotes the survival of FGSCs and markedly inhibits their apoptosis during in vitro differentiation. Based on our findings, we propose an effective strategy for culturing FGSCs and obtaining oocytes in vitro, which provides a new vision for oogenesis in vitro.
Introduction
EED, SUZ12, EZH1/2 and RbBP4/7 are the four main subunits of Polycomb repressive complex 2 (PRC2), which regulates the enrichment of di- and tri-methylation of histone H3 Lys27 (H3K27me2/3) [1]. Although EZH2 is the central catalytic domain of PRC2 [2], EED is required for physically binding H3K27me3 via five tandemly repeated WD motifs, and hence exerts an important function in PRC2 assembly [3,4]. Inactivation of either EED [5] or EZH2 [6] severely compromises the core function of PRC2 and further causes loss of H3K27me3. Indeed, PRC2 regulates the stability of gene expression in vivo by promoting or blocking cell differentiation, fine-tuning cell fate decisions and guiding cell differentiation throughout the shift from pluripotency to differentiation [7].
In the fetal ovary, the gain of H3K27me3 is initially identified in primordial germ cells (PGCs) at E10.5 and persists at its peak level until E15.5. EED and EZH2 were all detected in the nuclei of E11.5 and E12.5 PGCs and continued to be detectable until E15.5 [8]. In addition, H3K27me3, EED and EZH2 were likewise abundant in the growing oocytes of postnatal mouse ovaries [9]. H3K27me3 was shown to be elevated at stage-specific genes relevant to meiotic development during mammalian spermatogenesis [10]. The conditional deletion of EED in male germ cells results in complete male infertility [10]. Female EED-deletion mice, on the other hand, had normal fertility and generated pups with considerable overgrowth [11]. By contrast, growing oocytes with conditional knockout of EZH2 retained normal reproductive characteristics, although the pups were born underweight [12].
The functional study of female germline stem cells (FGSCs) has significant implications for our comprehension of oogenesis [13]. At present, enhancing the efficiency of FGSC proliferation is a major focus of ongoing research. A previous study reported that GDNF [14] significantly contributes to FGSC self-renewal, creating opportunities for mammalian gametogenesis in vitro [15]. Recently, Grosswendt et al. [16] reported that PRC2 plays a vital role in early mouse embryonic development, in which zygotic EED deficiency leads to embryonic lethality in mice by impairing gastrulation. However, EED mutants have substantially more PGCs, indicating that PRC2 dominates the restriction of the early germline lineage. This further suggests that inhibition of PRC2 may promote FGSC proliferation and survival.
In the present experiment, we systematically examined the function of PRC2 in FGSC proliferation and differentiation in vitro. Our results indicated that FGSCs with EED226 treatment maintained a significantly higher survival during in vitro cultivation, which has shed new light on FGSC culturing strategy.
Inhibition of the PRC2 function could boost FGSC proliferation
Firstly, FGSCs were identified by the expression of germline-specific (AP-2γ, BLIMP1, STELLA and VASA) and pluripotency (OCT4) genes. Immunofluorescence analysis showed positivity for nuclear AP-2γ, BLIMP1, STELLA and OCT4, and cytoplasmic VASA proteins in the isolated FGSCs (figure 1a). Then, to probe the role of PRC2 in FGSC self-renewal, we evaluated the effect of small-molecule (EED226 and GSK343) treatment on FGSC growth via a colony formation assay (figure 1b). The results demonstrated that EED226 significantly enhanced colony formation and cell growth in a dose-dependent manner (1, 5 µM) (figure 1c). However, at a higher EED226 concentration (10 µM), the number of clones was markedly decreased compared with the clonal efficiency of the other groups. Of particular note, there was no significant difference in the number of FGSC clones with GSK343 treatment (figure 1d). To further confirm this effect, we conducted a time-gradient assay to evaluate the proliferative effect of the inhibitors at various concentrations by CCK8 detection. The results indicated that the OD values of FGSCs with 5 µM EED226 treatment markedly increased relative to controls at 24 h and 48 h (figure 1e). Likewise, the CCK8 assay showed that GSK343 had no significant effect on FGSC proliferation (figure 1f). An EdU incorporation assay was further performed to validate the facilitating effect of EED226 on FGSC proliferation. Significantly more EdU incorporation was induced by the EED226 treatment compared with untreated controls (electronic supplementary material, figure S1). Taken together, the above results demonstrate that 5 µM EED226 was able to enhance FGSC proliferation.
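As a rough illustration of how such dose comparisons are typically evaluated, the snippet below runs two-sample t-tests on hypothetical CCK8 OD readings; the values are invented for the sketch and are not the study's data.

```python
# Hypothetical 48 h CCK8 OD readings per EED226 dose; values are invented.
import numpy as np
from scipy import stats

od = {
    0:  [0.52, 0.55, 0.50],   # untreated control
    1:  [0.61, 0.63, 0.60],
    5:  [0.78, 0.81, 0.76],   # strongest proliferation in this sketch
    10: [0.49, 0.47, 0.51],
}

control = od[0]
for dose in (1, 5, 10):
    t_stat, p = stats.ttest_ind(od[dose], control)
    print(f"{dose} uM: mean OD = {np.mean(od[dose]):.2f}, p = {p:.3f}")
```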
Considering that EED226 and GSK343 are specific inhibitors of EED and EZH2, respectively, we next examined the expression of EED and EZH2 in FGSCs after treatment with EED226 or GSK343 via RT-qPCR and western blot. The results indicated no significant differences in EED and EZH2 expression after the addition of the inhibitors (figure 1g-i). Correspondingly, to examine the effect of the inhibitors on the enrichment of H3K27me2/3, we compared the levels of H3K27me2/3 in FGSCs with and without inhibitor treatment. There were no obvious changes in the level of H3K27me2 regardless of the addition of EED226 or GSK343 (figure 1j). Likewise, H3K27me3 levels did not significantly change when FGSCs were exposed to GSK343, but the level of H3K27me3 was markedly reduced after EED226 treatment (figure 1j). Based on the above results, EED226 potently binds EED in vitro and inhibits the catalytic capability of PRC2, thereby decreasing the H3K27me3 level and promoting FGSC proliferation.
Inhibition of the EED activity promotes the expression of OCT4 and inhibits the expression of P53 and P63
To investigate the transcriptional effects of EED inhibition, we examined the expression of survival-associated genes in FGSCs. The expression of Oct4 was significantly enhanced with EED226 treatment. Also, the expression of P53 and P63, as indicators of apoptosis, was obviously downregulated in comparison with the control group (figure 2a, p < 0.01), suggesting that the EED226-mediated epigenetic dynamics of H3K27me3 might be correlated with the transcriptional variation of these genes. We further verified this interaction via ChIP-qPCR analysis. It turned out that H3K27me3 occupancy at the Oct4 promoter and exon region decreased after EED226 treatment (figure 2b, p < 0.01), which corresponded with its expression level. As for P53 and P63, we did not detect any variation of H3K27me3 modification at the promoter and exon regions. These findings suggested that EED226 regulates H3K27me3 enrichment on the Oct4 promoter and exon, and that Oct4 may be the core mediator of the effect of EED226. To verify this deduction, we employed an RNA interference strategy to generate Oct4-knockdown FGSCs. First, we examined the proliferation of FGSCs with Oct4 knocked down by colony formation and CCK8 assays in the presence of EED226. The results indicated that the survival ability of FGSCs was significantly reduced with Oct4 knocked down in comparison with the control group (figure 2c, p < 0.01; figure 2d, p < 0.001). Besides, RT-qPCR and western blot results confirmed that the expression level of OCT4 was decreased (figure 2e,f, p < 0.01), accompanied by a concurrent elevation of P53 and P63 expression. These results indicated that the effect of EED226 on FGSC survival depends on the expression of OCT4, and also suggested a negative regulatory relationship between OCT4 and the expression of P53 and P63.
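ChIP-qPCR occupancy of this kind is commonly quantified as percent of input. The fragment below is a generic, hedged sketch: the Ct values and the 1% input fraction are illustrative assumptions, not values taken from the study.

```python
# Generic percent-input calculation for ChIP-qPCR; all Ct values and the
# 1% input fraction are illustrative assumptions.
import math

def percent_input(ct_ip: float, ct_input: float,
                  input_fraction: float = 0.01) -> float:
    """Percent input = 100 * 2^(dilution-adjusted input Ct - IP Ct)."""
    ct_input_adj = ct_input - math.log2(1 / input_fraction)
    return 100 * 2 ** (ct_input_adj - ct_ip)

samples = {
    "Oct4 promoter, control": (24.1, 22.0),   # (IP Ct, input Ct)
    "Oct4 promoter, EED226":  (26.3, 22.1),   # higher IP Ct -> less H3K27me3
}
for label, (ct_ip, ct_in) in samples.items():
    print(f"{label}: {percent_input(ct_ip, ct_in):.3f}% of input")
```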
Inhibition of EED function does not affect the differentiation capacity of FGSCs in vitro
In the above experiments, the regulatory role of EED in FGSC proliferation was established. Next, we systematically characterized the germline competence of FGSCs after EED inhibition via reconstituted ovaries (rOvaries), which formed follicle structures in both groups (figure 3b). Subsequently, we counted the number of GFP+ oocytes isolated from rOvaries in both groups. On average, 172.67 ± 12.67 GFP+ oocytes were formed per rOvary in the EED226 group, significantly more than in the control (144.33 ± 7.10, p < 0.05) (figure 3c,d). According to these results, a larger number of oocytes was produced in the EED226 group under the identical culture conditions employed, which may be related to the high survival efficiency of FGSCs induced by EED226. Next, individual follicles were subjected to IVG culture. Following 11 days of culture, primary oocytes grew into germinal vesicle oocytes in both groups (figure 3e). After maturation, the percentage of in-vitro-generated MII oocytes was 32.67 ± 2.24% in the EED226 group, with no significant difference versus the control group (electronic supplementary material, figure S2). Mating experiments were used to assess the fertility of progeny derived from the ovarian organoids. When mated with normal males or females of proven fertility, the adults derived from EED226 organoid oocytes produced litter sizes similar to the control group (electronic supplementary material, table S4). These results showed that ovarian organoids based on the EED226 system were capable of generating fertile offspring.
Inhibition of the function of EED did not affect the process of meiosis
Meiotic recombination is a highly complex process required for oogenesis. We analysed prophase I progression in rOvaries, including the leptotene, zygotene, pachytene and diplotene stages. Oocytes from rOvaries exhibited the four meiosis markers on IVD days 3 to 9, indicating that FGSCs were induced to enter meiosis by IVD day 3 (figure 4a). In addition, we compared the proportions of prophase I stages in rOvaries from the two FGSC sources. Statistical analysis showed no significant difference between the EED226 and control groups, suggesting that their abilities to enter meiosis were equivalent (figure 4b,c). As observed by the distribution of γH2AX on the pachytene chromosomes, persistent double-strand breaks were seen in the in-vitro-generated oocytes. Of note, by the pachytene stage, nearly 43.02 ± 3.10% and 33.57 ± 6.61% of oocytes in the EED226 and control groups displayed asynapsis to some extent, a percentage considerably greater than that typically observed in vivo (about 10%) (figure 4d,e). The high degree of asynapsis in the EED226 group was understandable, given that P63 and P53 are thought to play roles in a conserved mechanism for controlling meiotic integrity [17], and given the observation that P63 and P53 showed low expression in FGSCs with EED226 treatment. The expression of P53 and P63 was then examined at different time points (IVD 7, 14 and 21 d) by IHC. P63 and P53 are specifically and highly expressed in the oocyte nucleus and cytoplasm, respectively. Quantitative analysis indicated that the intensity of P53 was low positive or negative from IVD 7 d to 21 d in both the EED226 and control groups (figure 5a). Meanwhile, in the control group, the intensity of P63 expression began to increase from IVD 7 d. After IVD 14 and 21 d, the expression level of P63 remained high in the oocytes of primordial, primary and early secondary-like follicles. However, the intensity of P63 was lower in the EED226 group in comparison with the control (figure 5b).
Discussion
A previous study revealed that disruption of EED, the core PRC2 subunit, significantly elevates the population of PGCs [16]. Based on this, we speculated that functional inhibition of PRC2 might contribute to the proliferation of FGSCs in vitro.
Here, the potential effects of EED226 and GSK343, inhibitors of EED and EZH2 respectively, on the in vitro proliferation and developmental competence of FGSCs were analysed for the first time. Our results show that 5 µM EED226 effectively promoted cell survival by downregulating the occupancy of H3K27me3 on Oct4 gene regions, thereby increasing the expression of OCT4. Moreover, we found that FGSCs treated with EED226 could develop into oogonia that entered meiosis and successfully differentiated into functional oocytes in vitro. Once germ cell fate is established at E7.5, transcriptional regulation during development is governed by the dynamics of H3K27me3 enrichment [18]. H3K27me3 is enriched at developmental gene promoter regions in PGCs [1,19], whereas germline-specific genes, such as Dazl, Dppa3 and Vasa, are enriched only for H3K4me3 [20]. In FGSCs, by contrast, developmental genes (e.g. Oct4) are occupied by H3K27me3 [21]. A tight link between OCT4 and H3K27me3 has been widely demonstrated [22,23]. For example, pluripotency genes, including Oct4, were increased in EED-knockout mESCs [24]. In the present experiment, OCT4 expression was elevated upon artificial PRC2 suppression, indicating that OCT4 expression is regulated by PRC2 in FGSCs. These results further verify the above conclusion.
OCT4 is extensively expressed in ESCs and PGCs; however, it is expressed at low levels upon differentiation during mouse embryonic development [25]. In female PGCs, OCT4 is repressed at the onset of meiotic prophase I (E14.5) and re-expressed during the growth phase of oocytes after birth [26]. A previous study demonstrated that OCT4 deletion in PGCs resulted in apoptosis of early germ cells [27]. In addition, another study revealed that inhibition of Otx2, a repressor of OCT4, could promote OCT4 expression and further increase the generation of PGCLCs derived from ESCs [28]. Together, these results indicate that OCT4 is required for germ cell survival and proliferation. Similarly, OCT4 is also ubiquitously expressed in FGSCs. In multiple species, accumulating evidence supports the existence of FGSCs in neonatal and adult ovaries [29-32]. FGSCs derived from either neonatal or adult mouse ovaries can differentiate to form functional oocytes after transplantation into mouse ovaries or construction of an ovarian organoid model [13,15,33]. While both populations of FGSCs express OCT4, they differ in its expression pattern: OCT4 appears to be expressed with nuclear localization in FGSCs derived from neonatal ovaries [29,33], and weakly, with cytoplasmic localization, in FGSCs derived from adult ovaries [34,35]. In this study, we used FGSCs derived from neonatal ovaries as the experimental subject to explore the potential effects of PRC2. The results indicated that inhibition of PRC2 promoted OCT4 expression, identifying OCT4 as a crucial determinant that regulates the survival and proliferation of FGSCs. Notably, FGSCs with higher expression of OCT4 retained normal germline capacity. We reasoned that this phenomenon might be attributed to several factors. First, it is generally accepted that pluripotency is regulated by a complex, interconnected signalling network that is cooperatively maintained by several core pluripotency factors [36]. In the present study, our findings indicated that OCT4 expression was increased by inhibition of PRC2, whereas other pluripotency factors, including Nanog, Sox2 and Esrrb, were not significantly affected. Moreover, while OCT4, as a maternally inherited factor, has typically been detected in mature oocytes, its primary function is in the maintenance of germ cell proliferation and survival rather than as a classical germline determinant [37]. Thus, we consider that elevation of OCT4 alone is not sufficient to alter germline capacity. The activation of P53 and P63 has an important impact on various developmental processes, such as DNA damage repair, cell differentiation, apoptosis and proliferation [38]. Abnormal elevation of activated P53 causes a complete loss of fetal germ cells during mouse embryogenesis [39,40]. Additionally, the absence of P63 effectively blocks the apoptosis caused by ionizing radiation in PGCs [41]. In this study, we found that the expression levels of P53 and P63 were significantly decreased by elevated OCT4 expression after inhibition of EED. By contrast, when OCT4 was knocked down by RNAi, P53 and P63 showed a significant elevation. This suggests that OCT4 negatively regulates P53 and P63 expression.
Studies have indicated that OCT4 plays an important role in enhancing reprogramming efficiency and maintaining the multi-/pluripotency of ESCs [42] and iPSCs [43] by suppressing the expression of P53. However, there is no consensus on the association between OCT4 and P63. Our results suggest that OCT4 inhibits P53 and P63 expression in FGSCs, which in turn enhances cell survival, although the comprehensive regulatory mechanism requires further study.
In germ cell development, FOA is a conserved phenomenon in vivo [44] and in vitro [33], occurring during progression through the meiotic prophase I stages and the formation of primordial follicles. FOA, a quality-control system for oocyte selection, serves an important role in the establishment of the mammalian ovarian reserve [45]. The quality-control system of the oocyte is essential for the stability of genetic inheritance: any oocytes with DNA damage are removed by programmed cell death (PCD) [46,47]. During this process, a significant drop in the number of oocytes has previously been reported between E15.5 and E18.5, followed by a lesser loss of oocytes between E18.5 and a few days after birth [48-50]. P63, the principal member of the P53 family, is thought to play a role in a conserved mechanism for controlling female germline integrity [51,52]. In mice and humans, P63 expression begins in late pachytene-stage oocytes and peaks in diplotene oocytes. P63-null mice exhibited a high survival rate of oocytes [51] as well as abnormal enrichment of γ-H2AX (an indicator of DNA damage) [53]. Another study demonstrated that double deletion of P53 and P63 can salvage oocytes that would otherwise be lost to checkpoint-mediated depletion, and the recovered oocytes are functional: after two months, the ovaries contained a significant number of all follicle types as well as recombination-defective oocytes [54]. In this experiment, we observed that FGSCs with EED226 treatment yielded a larger number of follicles during oogenesis in vitro compared with the control, accompanied by low expression levels of P63 in the early stage of IVD culture. These results point to a synergistic role for P63 in controlling germ cell survival. Notably, similar to the in vivo results, concomitant with the reduction in P63 expression, ovarian organoids showed a higher proportion of recombination-defective oocytes, indicating a meiosis checkpoint role for P63 during the reconstitution of mouse oogenesis from FGSCs.
The organoid is a new model with great potential for clinical applications. Hikabe et al. [55] and Yoshino et al. [56] established the in vitro reconstitution of oogenesis from ESCs or iPSCs, and the resultant oocytes produced healthy pups. Meanwhile, Luo et al. [15] and Li et al. [33] developed and described a female germline stem cell-derived ovarian organoid model, and confirmed that the endocrine function of this model remained intact. Furthermore, human primordial germ cell-like cells (hPGCLCs) were created by co-culturing human pluripotent stem cells (hPSCs) with mouse ovarian somatic cells, and eventually developed into human oocyte-like cells [57]. Thus far, these three ovarian organoid models have been established. In the present study, the research was based on the second model and performed well: it effectively improved the culture efficiency and yield of FGSCs via inhibition of EED activity. Further, we confirmed that inhibition of EED function did not affect the unipotency of FGSCs. The culture system used here will facilitate the study of the mechanisms underlying mammalian oogenesis and provide clues for reproductive medicine.
Our results show the successful establishment of an effective culture strategy to expand FGSCs and obtain oocytes in vitro, providing a reproducible tool for investigating the underlying mechanisms of oogenesis in vitro. Notably, ovarian tissue also harbours other stem cells, for example VSELs, very small embryonic-like stem cells with nuclear OCT4 expression [34,58-62]. Multiple studies have revealed that VSELs have the potential for oocyte differentiation [60,61,63]. Our study may therefore provide novel insights into VSEL expansion.
Animal breeding
Outbred ICR mice were purchased from SPF Biotechnology (Beijing, China). β-Actin-GFP mice were donated by the Lin Liu Lab (Nankai University, Tianjin, China). Mice were bred in the mouse facility of Inner Mongolia University under standard conditions of constant temperature and humidity.
CCK8 assay
FGSCs were plated in 96-well plates at 5 × 10³ cells per well. After culture for 48 h, CCK8 reagent (C0038, Beyotime, China) was added to the 96-well plate (10 µl per well) and the plates were incubated according to the manufacturer's instructions. Finally, the absorbance was measured with a microplate reader (Bio-Tek Instruments, Thermo, USA) at a wavelength of 450 nm.
EdU staining array
FGSCs were seeded in 48-well plates at 1 × 10⁴ cells per well. After 24 h, the cells were incubated with fresh medium containing 10 µM EdU solution (C0071S, Beyotime, China) for another 2 h. FGSCs were fixed for 30 min in 4% paraformaldehyde and permeabilized with 0.5% Triton X-100 for 20 min. Then, according to the manufacturer's protocol, FGSCs were reacted with Click Additive Solution for 30 min, after which the cells were treated with Hoechst solution for 10 min and visualized under a fluorescence microscope. The percentage of EdU-positive cells was calculated by the following formula: EdU-positive rate = EdU-positive cell count/(EdU-positive cell count + EdU-negative cell count) × 100%.
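As a minimal sketch of this quantification step (the counts below are hypothetical, not data from this study):

```python
def edu_positive_rate(n_positive: int, n_negative: int) -> float:
    """EdU-positive rate (%) = positive / (positive + negative) * 100."""
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no cells counted")
    return 100.0 * n_positive / total

# Hypothetical counts (positive, negative) from three fields of view
fields = [(132, 418), (97, 305), (121, 380)]
print([round(edu_positive_rate(p, n), 1) for p, n in fields])
```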
RNA isolation and RT-PCR
FGSCs were pelleted by centrifugation to remove excess medium and were then resuspended in 100 µl of RNAiso Plus (9109, Takara, Japan) for RNA extraction. A total of 1 µg of total RNA per sample was reverse transcribed into cDNA with a PrimeScript RT reagent kit with gDNA Eraser (RR047A, Takara, Japan). Primer details are shown in electronic supplementary material, table S1.
Immunohistochemical staining
The fixed rOvaries were embedded in paraffin, and 3-5 µm paraffin sections were used for the immunohistochemical assay. After the standard procedures of dewaxing, rehydration and antigen retrieval, the slides were treated with 3% hydrogen peroxide in PBS to inactivate endogenous peroxidase activity and incubated with blocking buffer (10% serum in PBS) for 1 h at 37°C. The slides were then incubated with the primary antibody overnight at 4°C, followed by incubation with the secondary antibody (HRP-conjugated anti-rabbit IgG, A0279, Beyotime, China) for 30 min at room temperature. HRP activity was detected with DAB solution (P0203, Beyotime, China). The slides were examined under a microscope and photographs were taken for analysis with ImageJ. Antibodies and concentrations are listed in electronic supplementary material, table S2.
Three-dimensional culture
The recombinant ovary (rOvary, ovarian organoid) was produced according to a modified dynamic co-culture method [33,55,68]. Briefly, FGSCs (from β-actin-GFP female mouse ovaries) were co-cultured with female gonadal somatic cells (from 1-3 dpp wild-type mouse ovaries, from which germ cells had been removed using Ddx4 antibodies conjugated to magnetic beads) in GK15 + RA medium in a U-bottomed 96-well plate for 2 d. rOvaries were then transferred onto Transwell-COL membranes (3492, Corning, USA) soaked in GK15 + RA medium for 2 d. Afterwards, rOvaries were cultured with IVD (in vitro differentiation) medium for 21 d and formed individual follicles. The individual follicles were then cultured with IVG (in vitro growth) medium for 11-14 d.
Western blot
FGSCs were collected and lysed in RIPA buffer (P0013B, Beyotime, China) supplemented with a protease inhibitor cocktail for protein extraction. Protein concentration was measured using the BCA protein assay (23225, Thermo, USA). Twenty micrograms of protein from each sample were mixed with 2× loading buffer (P0015B, Beyotime, China), and the assay was then performed following standard procedures. Bands were visualized with Clarity Western ECL Substrate (32209, Thermo, USA) and quantified with ImageJ. Antibodies and concentrations are listed in electronic supplementary material, table S3.
ChIP-qPCR assay
Briefly, FGSCs (10⁴ cells) with or without EED226 treatment were cross-linked, lysed, and sheared to obtain 200-800 bp fragments. Approximately 2 µg of either anti-H3K27me3 (ChIP-grade, 9733S, Cell Signaling Technology, USA) or anti-IgG (ChIP-grade, 2729S, Cell Signaling Technology, USA) was used for the immunoprecipitation reaction. Equal volumes of purified immunoprecipitated DNA were used in qPCR reactions (TB Green Premix Ex Taq, RR420B, Takara, Japan) with qPCR primers targeting the promoter and exon regions of candidate genes. Electronic supplementary material, table S1, lists the ChIP-qPCR primers.
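The section above does not state how the qPCR signal was quantified; one common approach is the percent-input method, sketched here under that assumption with hypothetical Ct values.

```python
import math

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent input = 100 * 2^(adjusted input Ct - IP Ct).

    input_fraction is the fraction of chromatin reserved as input (e.g. 1%);
    the input Ct is shifted by log2(1 / input_fraction) so that it represents
    100% of the chromatin.
    """
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical Cts for an H3K27me3 IP versus a 1% input at a candidate region
print(percent_input(ct_ip=26.4, ct_input=24.1, input_fraction=0.01))
```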
Statistical analysis
All experiments were repeated at least three times. Experimental data are expressed as mean ± s.d. or s.e.m. for each experiment and were analysed by two-tailed Student's t test.
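For illustration, a two-tailed Student's t test of this kind can be run with SciPy; the replicate values below are hypothetical.

```python
from scipy import stats

eed226 = [43.1, 40.2, 45.8]   # hypothetical replicate measurements
control = [33.6, 30.1, 37.0]

t, p = stats.ttest_ind(eed226, control)  # two-tailed Student's t test (equal variances)
print(f"t = {t:.3f}, p = {p:.4f}")
```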
Hidden Markov random field models for cell-type assignment of spatially resolved transcriptomics
Abstract Motivation The recent development of spatially resolved transcriptomics (SRT) technologies has facilitated research on gene expression in the spatial context. Annotating cell types is one crucial step for downstream analysis. However, many existing algorithms use an unsupervised strategy to assign cell types for SRT data. They first conduct clustering analysis and then aggregate cluster-level expression based on the clustering results. This workflow fails to leverage marker gene information efficiently. On the other hand, cell annotation methods designed for single-cell RNA-seq data utilize cell-type marker gene information but fail to use the spatial information in SRT data. Results We introduce a statistical spatial transcriptomics cell assignment model, SPAN, to annotate clusters of cells or spots into known types in SRT data with prior knowledge of predefined marker genes and spatial information. The SPAN model annotates cells or spots from SRT data using predefined overexpressed marker genes and combines a mixture model with a hidden Markov random field to model the spatial dependency between neighboring spots. We demonstrate the effectiveness of SPAN against spatial and nonspatial clustering algorithms through extensive simulation and real data experiments. Availability and implementation https://github.com/ChengZ352/SPAN.
Introduction
Spatially resolved transcriptomics (SRT) profiles gene expression at high resolution while tracking the tissue locations of cells (Asp et al. 2020). These technologies have greatly accelerated biomedical studies. Mainstream SRT technology can be grouped into two main categories. One is the in situ hybridization or sequencing-based technologies with single-cell resolution, such as seqFISH (Shah et al. 2016), seqFISH+ (Eng et al. 2019), and MERFISH (Moffitt et al. 2018). These technologies quantify gene expression at single-cell resolution, but for only tens to hundreds of genes. The other category is spatial barcoding-based sequencing technologies, including SLIDE-seq (Rodriques et al. 2019), SLIDE-seq V2 (Stickels et al. 2021), and 10x Visium (10x Genomics). These methods measure the whole transcriptome in spots that contain dozens of cells.
Cell type clustering and identification of SRT data provide the spatial distribution of distinct cell types and are critical analytical steps in many biomedical studies. Like single-cell RNA-seq (scRNA-seq) technology, SRT generates highly sparse and over-dispersed discrete count data, which is statistically and computationally challenging. Many statistical and machine learning models have been proposed for the clustering analysis of scRNA-seq data (Satija et al. 2015, Kiselev et al. 2017, Stuart et al. 2019, Tian et al. 2019, 2021). However, methods designed for scRNA-seq data have a common issue: they ignore spatial information and simply assume cells are independent. The natural dependencies between neighboring cells or spots are very informative if they can be characterized efficiently, and will result in better analysis results. Recently, several clustering methods that capture spatial information have been published. Some deep learning-based models, such as spaGCN (Hu et al. 2021), STAGATE (Dong and Zhang 2022), DSSC (Lin et al. 2022a), and stLearn (Pham et al. 2020), explicitly model the spatial dependency via graph neural networks (Kipf and Welling 2017), deep constrained clustering (Tian et al. 2021), or spatial-aware data normalization. Statistical methods, such as Bayesspace (Zhao et al. 2021), have also been proposed. In these methods, a Gaussian mixture distribution combined with a hidden Markov random field (HMRF) is typically utilized to determine the cell type assignments and smooth the clustering labels in adjacent fields with similar transcriptomics. However, all the aforementioned single-cell and SRT analytical methods are unsupervised algorithms, and users need to assign cell types from aggregated cluster-level expression profiles after clustering analysis. Typical workflows conduct differential expression analysis between clusters to manually label cell types according to overexpressed marker genes. These separate steps can lead to suboptimal results, since they simply ignore marker gene information during clustering analysis.
To utilize the prior information of cell-type-specific marker genes, CellAssign (Zhang et al. 2019a) and SCINA (Zhang et al. 2019b) have been proposed. They assume marker genes are over-expressed in the corresponding cell types, and utilize a mixture model with annotated markers to assign clusters of cells to known cell types. However, these two methods were developed for scRNA-seq data and do not leverage the spatial information in SRT data.
To address these issues, we propose a statistical spatial transcriptomics cell assignment framework (SPAN) that assigns cells or spots to known types in SRT data with prior knowledge of predefined marker genes and spatial information. The SPAN model combines a mixture model with an HMRF to model the spatial dependency between neighboring spots and annotates cells or spots from SRT data using predefined overexpressed marker genes. The discrete counts of SRT data are characterized by the negative binomial (NB) distribution. Other experimental or technical covariates, such as batch or individual information, can also be incorporated into the model. We evaluate SPAN on extensive simulations and real data experiments and show that SPAN outperforms existing SRT and single-cell clustering methods in various analyses.
Materials and methods
The framework of SPAN consists of two modules: a mixture NB distribution module and an HMRF module, as illustrated in Fig. 1. The mixture module takes the gene expression matrix and the marker gene indicator matrix as input to determine region assignments, and the HMRF module uses spatial information to refine the clustering results. The following sections describe the details of these two modules in SPAN.
The NB mixture module
Let $Y$ represent a spot-by-gene count matrix with $N$ spots and $G$ marker genes. We assume that these $N$ spots can be divided into $K$ cluster types. The proposed method models the count matrix $Y$ by the likelihood of the NB distribution. The expression of gene $g$ at spot $i$ for a latent cluster $k$ is modeled as

$y_{ig} \mid z_i = k \sim \mathrm{NB}(\mu_{igk}, \phi_{igk}),$

where the NB distribution is parameterized with mean $\mu_{igk}$ and dispersion $\phi_{igk}$, and $z_i \in \{1, \ldots, K\}$ is the latent variable indicating the cluster that spot $i$ belongs to. We follow CellAssign (Zhang et al. 2019a) to parameterize the log mean value $\mu_{igk}$ as

$\log \mu_{igk} = \log s_i + \beta_{g0} + \sum_{p=1}^{P} \beta_{gp} x_{ip} + \delta_{gk} \rho_{gk},$

with the constraint that $\delta_{gk} > 0$. $s_i$ represents the size factor for spot $i$. $\rho$ is an indicator matrix derived from the prior knowledge: $\rho_{gk} = 1$ if gene $g$ is highly expressed in cluster type $k$, and $\rho_{gk} = 0$ otherwise. The multiplication factor $\delta_{gk} > 0$ models the average log fold change for the marker gene $g$ highly expressed in cluster type $k$; when $\rho_{gk} = 1$, the expression of gene $g$ is multiplied by a factor of $e^{\delta_{gk}}$. $\beta_{g0}$ is the base expression of gene $g$. $X$ represents an optional covariate matrix, including batch and other individual information, and $P$ is the number of covariates. $\delta_{gk}$, $\beta_{g0}$ and $\beta_{gp}$ are the parameters learned by the model. We also place a hierarchical prior on the multiplication factor $\delta$, $\delta_{gk} \sim \text{log-normal}(\bar{\delta}, \sigma^2)$, where the mean $\bar{\delta}$ and variance $\sigma^2$ of the log-normal distribution are set to 0 and 1, respectively. We then add another hierarchical prior on the spot-type assignment $\pi_k = p(z_i = k)$, where $(\pi_1, \ldots, \pi_K) \sim \mathrm{Dirichlet}(\alpha, \ldots, \alpha)$, and $\pi_k$ and $\alpha$ are initialized to $1/K$ and 0.01, respectively.
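As an illustration of this parameterization, the sketch below evaluates $\log \mu_{igk}$ on toy inputs; the dimensions and random values are assumptions for illustration and do not reflect SPAN's actual code.

```python
import numpy as np

# Toy dimensions: spots, marker genes, clusters, covariates (all assumed)
N, G, K, P = 100, 50, 5, 2
rng = np.random.default_rng(0)

s = rng.uniform(0.5, 2.0, size=N)              # size factors s_i
beta0 = rng.normal(size=G)                     # base expression beta_g0
beta = rng.normal(size=(G, P))                 # covariate effects beta_gp
X = rng.normal(size=(N, P))                    # covariate matrix
rho = (rng.random((G, K)) < 0.2)               # marker indicator rho_gk
delta = rng.lognormal(0.0, 1.0, (G, K)) * rho  # positive fold changes delta_gk

# log mu_igk = log s_i + beta_g0 + sum_p beta_gp x_ip + delta_gk rho_gk
log_mu = (np.log(s)[:, None, None]     # (N, 1, 1)
          + beta0[None, :, None]       # (1, G, 1)
          + (X @ beta.T)[:, :, None]   # (N, G, 1)
          + delta[None, :, :])         # (1, G, K)
print(log_mu.shape)                    # (N, G, K)
```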
Moreover, we set $\phi_{igk}$ as a sum of radial basis functions (RBFs) dependent on the mean $\mu_{igk}$ (Eling et al. 2018):

$\phi_{igk} = \sum_{j=1}^{B} a_j \exp\left(-b_j (\mu_{igk} - x_j)^2\right),$

where $a_j$ and $b_j$ are the parameters of the RBF, $B$ is the total number of RBF centers, and $x_j$ is center $j$. The centers are set to be equally spaced from 0 to the maximum count $y_{ig}$. Let $\theta = \{\delta, \beta, a, b, \pi\}$ denote the model parameters to be learned. The joint distribution of $z$ and $y$ parameterized by $\theta$ is defined by $p(z_i = k, y_i \mid \theta) = \pi_k \prod_g \mathrm{NB}(y_{ig} \mid \mu_{igk}, \phi_{igk})$. The parameters can be optimized by the expectation-maximization (EM) algorithm. In the E-step, the posterior probability $\gamma_{ik}$ is calculated by

$\gamma_{ik} = p(z_i = k \mid y_i; \theta) = \frac{\pi_k \prod_g \mathrm{NB}(y_{ig} \mid \mu_{igk}, \phi_{igk})}{\sum_{k'=1}^{K} \pi_{k'} \prod_g \mathrm{NB}(y_{ig} \mid \mu_{igk'}, \phi_{igk'})}.$

In the M-step, $\theta$ is derived by maximizing the Q function

$Q(\theta) = \sum_{i=1}^{N} \sum_{k=1}^{K} \gamma_{ik} \left[\log \pi_k + \sum_{g=1}^{G} \log \mathrm{NB}(y_{ig} \mid \mu_{igk}, \phi_{igk})\right].$

Because there is no closed-form solution, the Q function is optimized via gradient descent.
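A minimal sketch of the E-step under this mixture, computed in log space for numerical stability; the $(n, p)$ parameterization below is SciPy's convention for the NB distribution, and this is a sketch rather than SPAN's implementation.

```python
import numpy as np
from scipy.stats import nbinom
from scipy.special import logsumexp

def responsibilities(y, mu, phi, log_pi):
    """E-step: gamma_ik = p(z_i = k | y_i; theta).

    y: (N, G) counts; mu, phi: (N, G, K) NB means/dispersions; log_pi: (K,).
    SciPy's nbinom uses n = phi and p = phi / (phi + mu), which gives mean mu.
    """
    n, p = phi, phi / (phi + mu)
    log_lik = nbinom.logpmf(y[:, :, None], n, p).sum(axis=1)  # sum over genes -> (N, K)
    log_post = log_pi[None, :] + log_lik
    return np.exp(log_post - logsumexp(log_post, axis=1, keepdims=True))
```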
The HMRF module
In SRT data, expression patterns are often correlated at adjacent positions, and two nearby locations tend to have similar clustering assignments. Thus, we apply an HMRF to integrate the spatial information and smooth the clustering results.
Let $z = \{z_1, z_2, \ldots, z_N\}$ represent the latent spot-type assignment. The dependency of $z$ can be modeled by an HMRF parameterized with $\Phi = \{\eta, \zeta\}$ (Besag 1986). To be more specific, the joint probability of $z$ is assumed to be

$p(z; \Phi) \propto \exp\left(\sum_{k=1}^{K} \eta_k n_k - \sum_{k \ne l} \zeta_{kl} n_{kl}\right),$

with the constraint that $\zeta_{kl} > 0$, where $n_k$ denotes the number of spots belonging to cluster $k$, with $\sum_{k=1}^{K} n_k = N$, and $n_{kl}$ is the number of neighboring spot pairs with different group assignments $(k, l)$. The constraint $\zeta_{kl} > 0$ penalizes two neighboring spots having different cluster types.

For a specific spot $i$, the conditional probability of the spot having type $z_i = k$, given the types of all its neighbors, is

$p(z_i = k \mid z_{\partial i}; \Phi) = \frac{\exp\left(\eta_k - \sum_{l \ne k} \zeta_{kl} u_i(l)\right)}{\sum_{k'=1}^{K} \exp\left(\eta_{k'} - \sum_{l \ne k'} \zeta_{k'l} u_i(l)\right)},$

where $u_i(l)$ represents the number of neighbors of spot $i$ having cluster type $l$, and $\zeta_{kl} = \zeta_{lk}$.

Then, the parameters $\Phi$ can be estimated by maximizing the conditional likelihood

$L_1(z; \Phi) = \sum_{i=1}^{N} \log p(z_i \mid z_{\partial i}; \Phi),$

where $z_{\partial i}$ denotes the neighbors of spot $i$.
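A toy illustration of this conditional distribution; the values of $\eta$ and $\zeta$ below are hypothetical.

```python
import numpy as np

def hmrf_conditional(eta, zeta, u_i):
    """p(z_i = k | neighbors) for a single spot.

    eta: (K,) cluster potentials; zeta: (K, K) symmetric penalty matrix with
    zero diagonal and zeta[k, l] > 0 for k != l; u_i: (K,) neighbor counts u_i(l).
    """
    logits = eta - zeta @ u_i   # eta_k - sum_{l != k} zeta_kl * u_i(l)
    logits -= logits.max()      # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

K = 4
eta = np.zeros(K)
zeta = 0.5 * (np.ones((K, K)) - np.eye(K))  # equal penalty for differing labels
print(hmrf_conditional(eta, zeta, u_i=np.array([5, 1, 0, 0])))  # favors type 0
```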
SPAN model
The SPAN model is formed by integrating the two modules: we introduce the HMRF as the prior on the mixture NB model. Let $\Gamma = \{\delta, \beta, a, b\}$; the conditional probability given the clustering assignment $z_i$ is

$p(y_i \mid z_i; \Gamma) = \prod_{k=1}^{K} \left[\prod_{g=1}^{G} \mathrm{NB}(y_{ig} \mid \mu_{igk}, \phi_{igk})\right]^{I(z_i = k)},$

where $I$ is the indicator function.

The log-likelihood of the parameter $\Gamma$ can be written as

$L_2(y \mid z; \Gamma) = \sum_{i=1}^{N} \log p(y_i \mid z_i; \Gamma).$

We estimate the model parameters and infer the clustering assignment $z^*$ simultaneously. We apply an iterative training process based on iterated conditional modes (ICM) (Besag 1986) to estimate $\Gamma$ and $\Phi$. The model training process is illustrated in Algorithm 1. We first pretrain the mixture NB model to initialize the clustering assignment $z^{(0)}$. Then, we iteratively compute the parameters $\Phi$ and $\Gamma$ and update the clustering assignment $z$.
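A sketch of one possible realization of the ICM label updates, assuming the per-spot log NB likelihoods have been precomputed by the mixture module; the re-estimation of $\Gamma$ and $\Phi$ between sweeps, which SPAN performs, is omitted here.

```python
import numpy as np

def icm_assign(log_lik, eta, zeta, neighbors, z, n_iter=10):
    """Iterated conditional modes: greedily update each z_i given its neighbors.

    log_lik: (N, K) log-likelihoods of each spot under each cluster;
    neighbors: list of neighbor index arrays; z: (N,) initial integer labels
    from the pretrained mixture model.
    """
    N, K = log_lik.shape
    for _ in range(n_iter):
        for i in range(N):
            u_i = np.bincount(z[neighbors[i]], minlength=K)  # u_i(l)
            z[i] = np.argmax(log_lik[i] + eta - zeta @ u_i)  # data + HMRF prior
    return z
```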
Model implementation
The SPAN model is implemented in Python 3 using PyTorch (Adam et al. 2017). The Adam optimizer (Kingma and Ba 2015) is used to optimize $L_1(z; \Phi)$ and $L_2(y \mid z; \Gamma)$, and the learning rate is set to 0.01. The hyperparameter $B$ is set to 10 by default, and our model has stable performance under different values of $B$ (Supplementary Note S3 and Supplementary Fig. S8). The model is first pretrained for one epoch without considering the spatial information and then trained to optimize the entire SPAN model. The experiments are conducted on an NVIDIA Tesla P100 GPU.
Simulation study
Simulation setting
To illustrate the effectiveness of our model, which integrates marker gene and spatial information for cell type annotation, we compare its performance with two different benchmarks. The first benchmark applies the standard workflows that use unsupervised clustering methods followed by annotation. These approaches are used for scRNA-seq [Seurat (Stuart et al. 2019), SC3 (Kiselev et al. 2017) and PCA+Kmeans] and SRT data [Bayesspace (Zhao et al. 2021) and stLearn (Pham et al. 2020) for the ST/Visium platform and Giotto (Dries et al. 2021) for others]. The second benchmark is a marker gene-based cell annotation approach [CellAssign (Zhang et al. 2019a)] designed for scRNA-seq data.
Since SPAN and CellAssign only use the raw counts of marker genes as input, we illustrate the performance of the other competing methods based on two inputs: (i) marker genes and (ii) selected high-variance genes (HVGs). We first selected the top 2000 genes (1000 genes for the simulated datasets) using the mean-variance relationship (Kobak and Berens 2019), and performed principal component analysis (PCA) on the marker genes or selected HVGs. The top 50 PCs were used as input (15 PCs for Bayesspace). For datasets with batch effects, PCs were first corrected by Harmony (Korsunsky et al. 2019), respecting the batch IDs, for Bayesspace.
Following CellAssign, we mapped the unsupervised clustering results to the ground-truth groups. First, we applied analytic Pearson residual normalization (Lause et al. 2021) to correct for sequencing depth and stabilize the variance across marker genes in the count data. Next, we calculated the top 50 PCs for the Pearson residual-normalized counts. We computed the average PCs for each ground-truth group and each inferred cluster. Then, we assigned each predicted cluster to the group with the highest Spearman correlation coefficient between the mean PCs of the predicted cluster and those of all ground-truth groups.
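A sketch of this mapping step, assuming PCs and labels are held in NumPy arrays; this is an illustration, not the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr

def map_clusters_to_groups(pcs, clusters, groups):
    """Assign each predicted cluster to the ground-truth group whose mean PCs
    have the highest Spearman correlation with the cluster's mean PCs.

    pcs: (N, D) PC matrix; clusters, groups: (N,) integer labels.
    """
    group_means = {g: pcs[groups == g].mean(axis=0) for g in np.unique(groups)}
    mapping = {}
    for c in np.unique(clusters):
        c_mean = pcs[clusters == c].mean(axis=0)
        rhos = {g: spearmanr(c_mean, m).correlation for g, m in group_means.items()}
        mapping[c] = max(rhos, key=rhos.get)  # best-correlated group
    return mapping
```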
We used accuracy, macro F1 score and Matthews correlation coefficient (MCC) to evaluate the performance of the different methods. To generate the macro F1 score, we first calculated the F1 score for each cluster with the one-vs-rest strategy and then computed the mean of all F1 scores.
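For reference, all three metrics are available in scikit-learn; the labels below are hypothetical.

```python
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

y_true = [0, 0, 1, 1, 2, 2, 2]   # hypothetical ground-truth groups
y_pred = [0, 1, 1, 1, 2, 2, 0]   # hypothetical mapped cluster labels

print(accuracy_score(y_true, y_pred))
print(f1_score(y_true, y_pred, average="macro"))   # one-vs-rest F1, averaged
print(matthews_corrcoef(y_true, y_pred))
```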
SPAN outperforms competing methods in various settings
To evaluate model performance, we designed several simulations for different biological scenarios. We extracted the spatial information and ground-truth spot type assignments from sample 151 673 in the dorsolateral prefrontal cortex (DLPFC) dataset (Maynard et al. 2021). We then generated the raw count matrix of 3611 spots and 2500 genes from seven groups via the R package Splatter (Zappia et al. 2017). Each simulated cluster has the same number of spots as the corresponding ground-truth cluster type in sample 151 673. We determined the marker genes for each group in the simulated dataset by selecting genes with large differential expression (the DEFacGroup value generated by Splatter). Gene $g$ was selected as a marker for group $k$ if $\mathrm{DEFacGroup}_{gk} > 1.5$. We repeated all experiments ten times under the same setting with different random seeds. The detailed simulation settings are summarized in Supplementary Note S1.
Model performance under different signal strengths
We first evaluated model performance under different signal strengths. We generated several datasets with different log fold change levels of gene expression between groups by modifying the variance parameter sigma in the log-normal distribution used by Splatter. A larger variance leads to stronger signal strength and larger distances between different clusters. We varied sigma from 0.3 to 0.225 and fixed the other parameters. The accuracy, F1 score and MCC are illustrated in Fig. 2, and one clustering assignment example is shown in Supplementary Fig. S1. Points in Fig. 2 represent the results on simulated datasets with different random seeds. We show the performance of SPAN and CellAssign using marker genes as input, and of the other competing methods using HVGs and marker genes (labeled with the letter M in parentheses) as input. We note that, for all methods, performance decreases as the signal strength decreases. Except for the two marker gene-based approaches, SPAN and CellAssign, methods leveraging marker genes as input often achieve better performance than those utilizing HVGs. This shows that marker genes provide useful information for determining cluster types, while other nonmarker genes may introduce additional noise. Moreover, by comparing the marker gene-based approaches to the unsupervised methods, we find that SPAN yields better performance than Bayesspace and stLearn, and CellAssign also achieves higher performance than Seurat, SC3 and Kmeans. This indicates that introducing prior knowledge of marker genes can help cluster annotation. Furthermore, the higher accuracy and MCC achieved by the three spatial methods, SPAN, Bayesspace and stLearn, demonstrate the benefit of considering spatial information when determining spot types in SRT. Finally, the cluster assignment examples show that SPAN, Bayesspace and stLearn can generate smooth clusters by considering spatial information.
Model performance given imperfect marker gene information
We then investigated the performance of our model under different levels of inaccurate marker genes in two scenarios. The first case assumes that the simulated data may contain some cluster-irrelevant genes (nonmarker genes), while the other marker genes are assigned to the correct groups. The second case assumes that the simulated data contains no nonmarker genes, but some markers are assigned to incorrect clusters.

In the first case, we randomly replaced some marker genes with other nonmarker genes and randomly assigned a group type to each fake marker gene. We fixed sigma and varied the ratio of nonmarker genes from 0% to 20%. A ratio equal to 0 means that we do not introduce any fake marker genes; the larger the ratio, the more fake markers. We compared the performance of the marker gene-based approaches, SPAN and CellAssign, and two spatial clustering methods, Bayesspace and stLearn, using only marker genes as input. As shown in Fig. 3, when the ratio increases, the performance of all methods decreases, which indicates that incorrect marker genes can deteriorate performance. Moreover, compared to SPAN, the performance of the other benchmarks drops quickly as the ratio increases. In the second case, we varied the ratio of incorrectly assigned markers from 0% to 20% and compared the performance of SPAN and CellAssign. Similarly, SPAN achieves better performance, as illustrated in Supplementary Fig. S2.
SPAN is robust under different spatial dependencies
In SRT, expression patterns are often correlated, and adjacent locations are more likely to belong to the same group. While we expect this cell type smoothness assumption to hold well in most SRT data, there are also cases where this is decidedly not the case and more diverse spatial dependency patterns are observed. Therefore, we conducted the following experiments to see how the performance of SPAN persists under different spatial dependencies.

First, we introduced spatial noise by randomly switching the gene expression and associated spot type assignments of spots between different groups. We fixed sigma and varied the switch ratio from 0% to 20%. A ratio equal to 0 means that we do not switch any spot positions; the larger the ratio, the greater the signal noise, and the less smooth the group assignment. We tested the performance of three spatial clustering methods, SPAN, Bayesspace and stLearn, under different levels of spatial noise. Figure 4 illustrates the performance in terms of accuracy, F1 score and MCC, and Supplementary Fig. S3 shows a cluster assignment example. We can see that as the ratio increases, the performance of all algorithms decreases. Since spatial clustering methods assume that closer spots should have similar assignments, the higher the spatial noise, the worse the performance. However, we expect that the model can still distinguish regions based on the marker gene information, even in the presence of some spatial noise. SPAN yields the best performance under all these settings, which demonstrates the contribution of introducing prior information on cell-type-specific marker genes. Second, to further demonstrate the performance of the proposed method when the smoothness is not strong, we directly simulated a scenario where different cell types are mixed. Specifically, spatial information was extracted from a Slide-seq cerebellum dataset from RCTD (Cable et al. 2022) and used for data simulation (Supplementary Fig. S4d). These simulated datasets have 4122 cells from seven distinct cell types (Supplementary Note S1). Only cell type 3 clearly exhibits strong spatial correlation, while other clusters, such as 1 and 7, are intermixed. We find that when the signal strength between cell types is strong enough (sigma = 0.3), the gain from modeling spatial dependency remains positive, despite the imperfect smoothness. As a result, our method can outperform the other benchmarks (Supplementary Fig. S4). However, when the signal strength between cell types is weak, our model may not benefit significantly from modeling the weak type dependency between neighboring cells, leading to only comparable performance.

Third, we also evaluated model performance using spatial information extracted from another ST platform. This melanoma dataset (Thrane et al. 2018) contains 293 spots from 4 groups. Notably, some cell types contain only a limited number of spots, which implies a small sample size and presents a challenging case when marker gene information cannot be used. Similarly, we observed that our proposed method yielded better performance than the competing methods under different signal strengths (Supplementary Fig. S5), thanks to its integration of marker gene information.
SPAN still prevails when imperfect prior information is provided
The proposed method relies on prior knowledge of the number of cell types, $K$, and the marker-cell type indicator matrix, $\rho$. It is common for our prior knowledge about cell types to be imperfect, with one cell type not provided (corresponding to $K - 1$), or additional cell types provided that do not exist in the given sample. It is interesting to test whether SPAN can assign the cells with missing cell type information to NA ("to be determined"), and whether SPAN can decline to assign any cells to additional cell types that are provided but not actually present in the given sample. Thus, we next evaluated the model's robustness with imperfect prior cell type information that reflects real-world scenarios (Supplementary Note S1). Specifically, for the given seven cell types (Layers 1-7), we removed the marker gene information of one layer (Layer 1, marked as NA) while adding marker gene information of four nonexistent cell types (Layers 8-11). Not surprisingly, SPAN gave an almost perfect assignment for the cells with correct marker gene information (Layers 2-7; Supplementary Fig. S6d). Interestingly, SPAN could successfully assign some cells from Layer 1 with missing marker gene information to NA. However, it might also misassign some cells in Layer 1 to other layers, including the nonexistent cell types. Overall, our model still outperformed CellAssign (Supplementary Fig. S6a-c), despite the imperfect prior information.
SPAN effectively accounts for batch information
We demonstrated the performance of SPAN and Bayesspace for batch correction. Similar to the previous simulation, we extracted the spatial information and spot clusters from samples 151 673 and 151 674 in the DLPFC dataset, and generated 3611 and 3635 spots for the two batches via Splatter, respectively. For batch correction, SPAN takes batch IDs as input, while Bayesspace cannot handle batch effects. Hence, we applied Harmony (Korsunsky et al. 2019) to correct batch effects, respecting the batch IDs, and the Harmony-corrected PCs were used for the Bayesspace analysis. As illustrated in Supplementary Fig. S7, SPAN also outperforms Bayesspace.
Application to real data
We applied SPAN to three real datasets (Supplementary Note S2) to evaluate model performance. The DLPFC dataset (Maynard et al. 2021) has 12 samples, each of which has 3000-4000 spots from five to seven groups. We used a list of 91 genes annotated and compiled in previous studies (Molyneaux et al. 2007, Zeng et al. 2012, Maynard et al. 2021, Lin et al. 2022a) as the input for SPAN (Supplementary Table S1). Figure 5 illustrates the performance on the 12 samples in terms of accuracy, F1 score and MCC. SPAN achieves the best performance compared with the other marker gene-based methods and the unsupervised clustering methods. Similar to the clustering results observed on the simulated dataset, we note that introducing prior knowledge of marker genes and spatial information improves model performance. Supplementary Figure S9 shows the ground truth and the predicted cluster assignments generated by different methods for sample 151 569. The assignments generated by our model are more similar to the ground truth. We also evaluated the performance of SPAN for batch correction on the eight samples, with a total of 32 397 spots from seven groups, in the DLPFC dataset (Supplementary Fig. S10). We assumed that the eight samples were from different batches. SPAN takes the batch IDs as input, while PCs corrected by Harmony are used for Bayesspace clustering. For the other two real datasets, we generated the marker genes via differential expression analysis using DESeq from the R package DESeq2 (Love et al. 2014). A detailed dataset description and the selected marker genes are provided in Supplementary Note S2 and Supplementary Table S1, respectively.
The Adult Mouse Brain (FFPE) dataset, which is provided by the 10x scRNA-seq platform (Zheng et al. 2017), contains 2264 spots from nine groups. 259 genes were used as input for SPAN. Supplementary Fig. S12 illustrates the model performance, and SPAN again achieves the best performance.
We then applied SPAN to the osmFISH (mouse cortex) dataset (Codeluppi et al. 2018). This dataset contains 4839 cells from 11 groups. Due to the low dimensionality of the features, we used all 33 genes as input for all methods. Unlike the 10x platform, which has an apparent neighbor relationship between spots, the osmFISH dataset only provides cell coordinates. To generate the cell neighbor relationship, we constructed the neighbor structure by finding the k-nearest neighbors of each cell. In the experiments, we set k = 15. Figure 6 illustrates the model performance, and SPAN outperforms the other competing methods.
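A sketch of this neighbor construction with scikit-learn; the coordinates below are randomly generated stand-ins for the osmFISH cell positions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

coords = np.random.default_rng(0).uniform(size=(4839, 2))  # hypothetical x/y positions

k = 15
nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)  # +1: each cell is its own nearest neighbor
_, idx = nn.kneighbors(coords)
neighbors = [row[1:] for row in idx]                  # drop self, keep the k = 15 neighbors
```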
Finally, we measured the running time of SPAN by comparing it with the other competing methods on the 12 DLPFC samples. As shown in Supplementary Fig. S13, SPAN requires less running time than the other two spatial clustering algorithms.
Discussion
Existing clustering algorithms for SRT data assign cell types by leveraging spatial information in an unsupervised way, while other cell annotation methods fail to use spatial information efficiently. In this article, we propose SPAN to annotate cells or spots in SRT data by integrating prior knowledge of cell/spot-type marker genes and spatial information. SPAN leverages cluster-corresponding marker genes to determine the assignments and applies the HMRF to smooth the results. SPAN can also jointly handle spots from multiple batches (samples) by taking the batch ID as input. We have demonstrated its performance over other unsupervised clustering methods and cell annotation algorithms on different simulation scenarios and real datasets.
This study finds that prior information on canonical marker genes described in the literature or drawn from curated datasets can improve the accuracy of cell type assignments. Other studies have also demonstrated the utility of marker gene information in clustering cells (Tian et al. 2021, Lin et al. 2022a,b). Together, these promising results suggest the potential of leveraging marker gene information and emphasize the need for integrating it into the development of computational methods for various analytic tasks in single-cell studies, such as cell-cell communication, trajectory inference, and multiomics data analysis. Moreover, given incomplete and imperfect cell type information, handling unknown and/or nonexistent cell types remains a challenging problem. Type I error control is not yet considered in cell type assignments. Having type I error control may lead to annotating unknown cell types as NA, as desired, or to assigning multiple cell types to a cell when uncertainty is high. Type I error control for multiple assignments is relevant to conformal prediction, a field that has received much attention in recent years (Xu et al. 2023). All of these aspects could be intriguing topics for future research.
Although SPAN achieves good performance on SRT data, there are still some limitations that can be improved. First, we used the k-nearest neighbors approach to determine neighbors based on the Euclidean distance between cells; alternative methods for generating neighbors can be further investigated. Second, in SRT, a spot may contain different types of cells, such that the cell-type markers cannot reflect the entire gene expression of the spot. When two or more types of cells are evenly distributed in a spot, this may greatly impact model performance, and it is likely to occur at the border of two groups. Implementing additional downstream analysis may provide more accurate predictions. Third, our model only applies a first-order Markov model to capture the correlated expression patterns of adjacent spots in SRT data. It is possible to explore more complex spatial patterns by incorporating second- or higher-order neighbors; thus, a second- or higher-order Markov model may enhance the performance of SPAN. Fourth, since we apply ICM to estimate the model parameters during training, we need to assign each cell/spot to a known cluster type at each iteration. In other words, our model cannot handle unknown cluster types. We will address this in future studies.
Figure 1. The framework of SPAN. SPAN consists of two modules, a mixture NB distribution module (right) and an HMRF module (left). SPAN leverages the mixture module to determine region assignments and uses the HMRF module to smooth the clustering results.
Figure 2. Performance on simulated data with various signal strengths. SPAN and CellAssign use marker genes as input, while other competing methods use HVGs or marker genes (labeled with the letter M in parentheses) as input. (a) Accuracy. (b) F1 score. (c) MCC.
Figure 3. Performance on simulated data with various levels of nonmarker genes. All methods use marker genes as input. (a) Accuracy. (b) F1 score. (c) MCC.
Figure 4. Performance on simulated data with various spatial noise. SPAN uses marker genes as input, while other competing methods use HVGs or marker genes (labeled with the letter M in parentheses) as input. (a) Accuracy. (b) F1 score. (c) MCC.
Algorithm 1. SPAN model training process.
Input: gene expression matrix Y; covariate matrix X (optional); marker gene-cluster type indicator matrix ρ; neighbor relation D.
Output: a clustering assignment vector z.
1: Initialize Γ and pretrain the mixture NB model;
2: Initialize Φ and set z
Utilizing a mixed-methods approach to assess implementation fidelity of a group antenatal care trial in Rwanda
Background The Preterm Birth Initiative (PTBi)–Rwanda conducted a cluster randomized controlled trial to assess the impact of group antenatal care (group ANC) on preterm birth, using a group ANC approach adapted for the Rwanda setting, and implemented in 18 health centers. Previous research showed high overall fidelity of implementation, but lacked correlation with provider self-assessment and left unanswered questions. This study utilizes a mixed-methods approach to study the fidelity with which the health centers’ implementation followed the model specified for group ANC. Methods Implementation fidelity was measured using two tools, repeated Model Fidelity Assessments (MFAs) and Activity Reports (ARs) completed by Master Trainers, who visited each health center between 7 and 13 times (9 on average) to provide monitoring and training over 18 months between 2017 and 2019. Each center’s MFA item and overall scores were regressed (linear regression) on the time elapsed since the center’s start of implementation. The Activity Report (AR) is an open-ended template to record comments on implementation. For the qualitative analysis, the ARs from the times of each center’s highest and lowest MFA score were analyzed using thematic analysis. Coding was conducted via Dedoose, with two coders independently reviewing and coding transcripts, followed by joint consensus coding. Results A total of 160 MFA reports were included in the analysis. There was a significant positive association between elapsed time since a health center started implementation and greater implementation fidelity (as measured by MFA scores). In the qualitative AR analysis, Master Trainers identified key areas to improve fidelity of implementation, including: group ANC scheduling, preparing the room for group ANC sessions, provider capacity to co-facilitate group ANC, and facilitator knowledge and skills regarding group ANC content and process. These results reveal that monitoring visits are an important part of acquisition and fidelity of the “soft skills” required to effectively implement group ANC and provide an understanding of the elements that may have impacted fidelity as described by Master Trainers. Conclusions For interventions like Group ANC, where “soft-skills” like group facilitation are important, we recommend continuous monitoring and mentoring throughout program implementation to strengthen these new skills, provide corrective feedback and guard against skills decay. We suggest the use of quantitative tools to provide direct measures of implementation fidelity over time and qualitative tools to gain a more complete understanding of what factors influence implementation fidelity. Identifying areas of implementation requiring additional support and mentoring may ensure effective translation of evidence-based interventions into real-world settings.
Introduction
Successful introduction of any new outpatient care strategy that disrupts the status quo of clinic flow and systems poses numerous challenges, particularly in low-resource settings. One challenge is ensuring "implementation fidelity," defined as the degree to which an intervention is implemented as intended [1]. Implementation fidelity is affected by several factors, including the level of complexity of the intervention. Intervention components affecting complexity include the number of sessions within an intervention, the number of participants and the incorporation of group-level interventions [2-5]. A further threat to implementation fidelity is the decay of skills commonly seen after trainings [6,7]. Considering the many components involved in effectively carrying out an intervention, maintaining implementation fidelity is critical to successfully translating evidence-based interventions into practice [6-8].
The Preterm Birth Initiative (PTBi)-Rwanda conducted a cluster randomized controlled trial in 36 health centers, using a standardized tool to assess the number of providers, ANC volume, suitable space for group care, services, and equipment [10]. Eighteen health centers were randomized to receive group ANC, while 18 health centers continued to provide individual ANC. The trial assessed the impact of group antenatal care (group ANC) on gestational age at birth, finding no impact [9]. The program previously conducted a process analysis of implementation fidelity [10]. That study analyzed and compared quantitative data from observer-completed fidelity monitoring tools and provider self-assessments and found that, while there was overall high model fidelity as assessed by observers, there was poor correlation between observer and provider self-assessment tools. The study recommended that future implementation of group ANC/PNC in Rwanda continue the collection of self-assessment data, assessment by expert observers, and expert coaching and mentoring.
This study analyzes quantitative data from Master Trainer assessments and explores additional information and insights from qualitative reports completed by Master Trainers. The study's aims are two-fold. First, we measure the association between implementation time (experience) and implementation fidelity. Second, we assess whether observer assessment and support contribute to facilitator skill improvement over time. We use our results to inform recommendations for the monitoring of group ANC in similar contexts.
The Rwanda model for group care was aligned with the WHO four-visit focused ANC model [11]. Because the model was developed before WHO recommended switching to eight contacts during pregnancy, that recommendation was not applied to this program. Visits were spaced eight weeks apart, allowing women to have a predictable group visit schedule. Taking into account health care providers' other responsibilities, the discussion and activity portion of group sessions was limited to 60 minutes [12].
The Rwanda group ANC model was adapted and developed by a local technical working group and global group care experts. The technical working group comprised 10 Rwandan maternal-child health stakeholders who met three times over three months, for 4 to 8 hours each time. The group considered existing evidence on group ANC as well as the constraints of the Rwandan ANC delivery system. Using these data, the group agreed upon the priorities, content and structure of the adapted group ANC model [12]. The Rwandan group ANC model was designed to include all the essential elements of the original CenteringPregnancy model [12]. The CenteringPregnancy program, widely considered the seminal group care model, includes essential elements that ensure facilitative leadership and group processes that encourage participation [12]. One unique element of the Rwanda model is that nurses and CHWs serve as co-facilitators; nurses and CHWs were trained together in order to reinforce the egalitarian nature of the model. Table 1 below outlines key components of the Rwanda group ANC model.
Ethical statement
Ethical approval for all study activities, including the administration of the Model Fidelity Assessment, was granted by the Rwanda National Ethics Committee (0034/RNEC/2017) and the University of California, San Francisco Institutional Review Board. Written informed consent to be observed by Master Trainers while facilitating a group ANC or PNC visit was obtained from each provider and CHW prior to the first group ANC or PNC visit in which she or he participated as a facilitator. No personal identifiers of providers or CHWs were recorded. Study staff protected all data as confidential. This study analyzed secondary data collected for the purposes of program monitoring.

Table 1. Model fidelity items.
- Demonstrated mastery (accurate knowledge) of the curriculum, including discussion topics and key messages (MFA tool: curriculum knowledge)
- Followed the lead of the women and could flexibly adjust the visit agenda to better meet women's needs and interests (MFA tool: responsive)
Inclusivity in global research
Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the S1 Checklist.
Data collection process
Implementation of the Rwanda group ANC model took place from July 2017 to May 2019. A total of 7 Master Trainers, comprising one nurse, five midwives, and one physician, were trained in group care facilitation and collaborated in the development of monitoring tools and processes. Master Trainers trained 72 nurses and midwives and 216 CHWs in group care facilitation, over three days per training cohort [13]. Master Trainers were scheduled to visit each of the 18 health centers 1, 2, 3, 5, 7, 9, 12, 15, and 18 months after group ANC implementation began.
A concurrent mixed-methods design was used. At each visit, the Master Trainer completed two monitoring tools: a quantitative assessment tool called the Model Fidelity Assessment, and a more qualitative report called the Activity Report. Findings from both tools were triangulated with the objective of achieving a more in-depth understanding of the relationships among time, observer assessment and support, and implementation fidelity. This methodology is aligned with Creswell's approach to concurrent mixed methods, in which both qualitative and quantitative data are collected during the same stage and are triangulated to more accurately define relationships among the variables being studied [14].
The Model Fidelity Assessment was developed prior to the initiation of monitoring visits, but the Activity Report was a tool developed by the Master Trainers as monitoring began, because they felt the need to provide a more holistic assessment. Occasionally, separate MFAs were completed on the same day, evaluating different group ANC meetings.
The assessment tool (the MFA) was developed by the Technical Working Group, UCSF's group ANC technical advisor and the Rwandan Master Trainers. Items were based on elements from the CenteringPregnancy model found to be essential in providing a structure for effective group ANC. The MFA included 15 items; items one through three recorded the date, observation site and provider code and were not included in the statistical analysis (Table 1). One item (MFA 9), "Husbands and next-of-kin were engaged and participated in activities (if they were present)", was removed, as it was blank in 93% of MFAs [10]. We analyze the remaining 11 items measuring model fidelity.
Each item was rated on a 5-point Likert scale from 0 to 4. An overall score, ranging from 0 to 4, was calculated by averaging the scores of all MFA items; higher scores indicated that the group session was implemented with greater fidelity. The score anchors were: not able to perform this skill even though the opportunity was present (0); made attempts but needed significant help and retraining (1); beginning skills but requiring more modelling, role-playing, and instruction (2); requiring only a few minor suggestions from the Master Trainer (3); fully competent (4).
Qualitative data from the Activity Report included responses to open-ended questions on the group care process, lessons learned, best practices, challenges and recommendations.
Upon completion of each session, Master Trainers provided group ANC facilitators with mentoring and support based on the monitoring results. Monitoring consisted of observing the preparation and execution of the session, after which Master Trainers answered questions and provided feedback focused on bridging gaps. When gaps were observed, Master Trainers conducted the next visit earlier to observe whether the gap had been corrected.
Statistical analysis of model fidelity assessments
Descriptive statistics describe the characteristics of the group ANC providers, health centers, MFAs and MFA items. The intra-cluster correlation coefficient (ICC) was calculated, based on the overall MFA score, to assess clustering at the health center level. We calculated an ICC of 0.34 based on the overall MFA score and thus considered it unnecessary to account for clustering via a mixed-effects model in this analysis [15]. Bivariate linear and ordinal regressions were performed to measure the association between MFA item score and time since implementation. Time was measured in days since the first monitoring and mentoring visit at each health center. Multivariate regression was performed to measure the association between MFA item score and time since implementation, controlling for individual- and health center-level predictors. Individual-level predictors included provider age, education level and years of experience working in ANC/PNC. Health center-level predictors included location (urban versus rural, with rural as the reference group) and patient-to-staff ratio. Predictors were included based on their potential association with implementing the group ANC model with fidelity. The p value for statistical significance was set at 0.05. Regression results are presented in years for ease of interpretation. Results were comparable between both models. While the outcome variable of MFA score is ordinal, as the assumptions for linear regression were met, we present those results for ease of interpretability.
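As a hedged sketch of the multivariate model described above, using statsmodels; the data frame below contains hypothetical values, not study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monitoring data: one row per MFA visit
df = pd.DataFrame({
    "mfa_score":    [2.8, 3.1, 3.4, 3.6, 3.0, 3.5],
    "years":        [0.0, 0.25, 0.5, 1.0, 0.1, 0.9],   # time since implementation
    "provider_age": [34, 34, 41, 41, 29, 29],
    "urban":        [0, 0, 1, 1, 0, 0],                # rural (0) as reference
    "pt_staff":     [12.0, 12.0, 8.5, 8.5, 20.0, 20.0],
})

model = smf.ols("mfa_score ~ years + provider_age + urban + pt_staff", data=df).fit()
print(model.params["years"])  # estimated change in MFA score per year of implementation
```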
Qualitative analysis of activity reports
The objective of the qualitative analysis was to explore what factors Master Trainers perceived as influencing implementation fidelity. For each facility, the highest and lowest scoring MFA and corresponding Activity Report was selected for a total of two Activity Reports per health center. This method of sampling was chosen to compare qualitative findings between each health center's lowest and highest levels of fidelity.
Author KS coded all Activity Reports, as did the PTBi Data Manager in Rwanda. As part of the data reduction process, only sections pertaining to implementation fidelity were coded [16,17]. The coders discussed and reached consensus on the reduced Activity Reports to be coded. The lead coder created a codebook based on the reduced Activity Reports, which was discussed with the second coder until consensus was reached on codes and definitions [18]. Structural coding was used to map categories to relevant items from the MFA. Magnitude coding was used to note whether Master Trainers made positive or negative comments, allowing for the identification of which categories acted as facilitators (positive comments) or barriers (negative comments) [19]. Axial coding was used to draw connections between initial categories and identify themes [20,21]. Both coders compared and discussed codes until reaching consensus [22]. Dedoose software was used to organize and code the data.
Quantitative results
A total of 160 MFAs were completed for 18 health centers over the span of approximately 22 months. The number of MFAs completed per health center ranged from 7 to 13, with an average of 9 (±1.68).
Between one and three nurses/midwives and/or CHWs were involved in facilitating each group ANC session. When more than one provider was involved in a group session, the variables for provider age, education and years of experience were recorded only for the most senior provider. A total of 59 providers were included in this dataset. Providers ranged from 24 to 51 years old. The majority of providers, 77.1%, were nurses. Seventy-five percent of providers attended university. Previous experience working in ANC/PNC ranged from 0 to 29 years, with an average of 5.69 (± 4.75) years. There were five urban and 13 rural health centers. Patient to staff ratio, on the days health centers offered ANC, ranged from 4.82 to 31.58, with a mean of 11.75 (± 7.18). Table 2 displays the mean and standard deviation for all MFA items. The averages of all but one of the MFA items were between 3.0 and 4.0 (the maximum possible score). MFA 14 (Kept Time) had the lowest average score of 2.76 and displayed the most variance, with a standard deviation of 1.11. Three health centers in particular appeared to pull down the score for timekeeping, with average MFA 14 scores of 1.9, 2.14 and 2.17. Another relatively low-scoring item was MFA 10 (Praised Group), at 3.06, while high-scoring items were MFA 5 (Facilitator Communication), 6 (Correct Assessments), 13 (Responsive), and 15 (Proper Screening). Indeed, MFA 15 (Proper Screening) had the highest average score of 3.49.
Bivariate regression results displayed an inverse relationship between average MFA score at initial observation and improvements in MFA scores. MFAs 4 (Room Setup), 14 (Kept Time), and 12 (Curriculum Knowledge) had some of the lowest average scores at initial observation and highest increases in scores. MFAs 6 (Correct Assessments), 7 (Encouraged Participation), and 11 (Group Participation) had some of the highest average scores at initial observation and lowest increases in scores ( Table 2).
Bivariate regression results showed that a twelve month increase in duration of implementation was associated with an average increase of .37 points in overall MFA score. The highest increases in scores were seen in MFA items 4, 5, 12 and 13 with increases of .49, .47, .55 and .47 respectively. The lowest increases in scores were seen in MFA items 7, 8 and 11, with increases of .23, .21 and .18 respectively (Table 2).
When adding in provider characteristics of age, education level and years of experience working in ANC/PNC, time remained significant for all items (S1 Table). Provider level predictors were not found to be significant for any of the other MFA items.
When adding in health center level predictors of location (urban versus rural) and patient to staff ratio, time remained significant for all items. Health center level predictors were only found to be significant for MFA 8. Scores over time for each MFA item are plotted in S1 Fig.
Qualitative results
Among 160 MFA Assessments, 134 had corresponding Activity Reports. Activity Report findings shed additional light on key factors affecting implementation fidelity. Six themes emerged from the analysis, including group ANC scheduling, logistics of preparing the room for group ANC, provider capacity to co-facilitate group ANC, knowledge regarding specific content areas, facilitation skills and perceptions of women's experiences with the group ANC process. These themes and key recommendations are summarized in Table 3.
Provider availability
Eligible health centers were required to have more than one ANC provider available on days when ANC is provided [23]. This requirement aimed to ensure the availability of a designated provider to conduct group ANC. However, in several instances on days when the Master Trainer was visiting, only a single provider was available to deliver both group and individual ANC. As a result, women visiting the health center for individual ANC (who were not enrolled in the trial), participated in group ANC.
The provider was the same one to attend to ordinary ANC women, do ultrasound and group care that day. Ordinary ANC women were among group care women. (Health Center 2 - Lowest MFA Score)

Table 3. Themes, details and recommendations for group ANC.

Theme: Provider Availability
Details: Limited availability of providers resulted in mixed groups of women from group and individual ANC. Group ANC was delayed to first accommodate women receiving individual ANC. Women arrived late or at incorrect appointment times, resulting in large groups of women of different gestational ages. Providers worked collaboratively to provide coverage for both individual and group ANC. Nurses were in charge of several services, making it challenging to adequately deliver group care. CHWs were sometimes unavailable to co-facilitate, making it difficult to implement group ANC. Untrained staff occasionally facilitated group ANC when trained staff were unavailable or designated to provide other services.
Recommendations: Cultivate relationships conducive to effective staff collaboration. Effective management strategies are required to balance provision of group ANC with other services. Health center management must develop solutions to balance service needs with staff capacity.

Theme: Room Preparation
Details: Adequate room preparation among some health centers. Staff successfully adapted to prepare rooms in the absence of adequate resources. Provision of multiple services, high patient volume and unprepared staff hindered room preparation.
Recommendations: Health center management and staff must review dates for group care and other services in advance, in order to adequately prepare for group ANC.

Theme: Facilitation Skills and Process
Details: Several facilitators excelled in leading group ANC. Some facilitators created an environment of judgement and blame, greatly hindering the group care process.
Recommendations: Further training and accountability measures are required to ensure facilitators are delivering group care as intended.

Theme: Facilitator Content Knowledge
Details: Certain content and skill areas were not well understood and required further training.
Recommendations: Additional training must be provided on specific topics.

Theme: Group ANC Process
Details: Group ANC gave women the opportunity to learn through discussion, shared experiences and relationship building. Negative interactions with facilitators were detrimental to the group experience, resulting in high levels of dissatisfaction among women.

In some facilities, providers had the capacity and were able to work collaboratively with the head of the health center to designate individual nurses for both group and individual ANC.
Even the head of the health center was present helping in the management of the activities related to ANC then avail one nurse to help for day ANC. (Health Center 6 -Highest MFA Score)
Several challenges were present regarding shortages of trained staff. Nurses responsible for co-facilitating group ANC were also in charge of other services, making it extremely challenging to adequately deliver group care.
According to the nurse, they have to start by providing emergency care before they join the place where GANC care is taking place. (Health Center 5 -Lowest MFA Score) In some instances, not only were nurses occupied with providing other services, CHWs were also unavailable to co-facilitate. Such staff shortages made it difficult to implement group ANC and also disrupted continuity of care, as the same provider was not always available to co-facilitate at subsequent sessions.
She was also assigned to work in maternity and other trained providers were not available to facilitate the group care. The Provider pointed out that it was not possible that the same group be followed by same facilitator from GANC 2 to GPNC due to the problem of providers' availability. (Health Center 4 -Second Highest MFA Score) Untrained staff occasionally facilitated group ANC when trained staff were unavailable or designated to provide other services.
The nurse who was not trained in group care facilitation was the one in charge of group care that day and says she had been conducting group care facilitation in the past with other facilitators. (Health Center 7 -Lowest MFA Score) While several nurses, midwives and CHWs were trained to facilitate group ANC, severe staff shortages resulted in a variety of challenges. On many occasions, group care either began late, was not properly implemented or was facilitated by untrained staff. In addition, other services as well as patient experiences were negatively affected.
Room preparation
The Technical Working Group recommended group visits be conducted in the interior of the health center, where sessions could be conducted privately. Preparation of the group ANC room included providing weight and blood pressure equipment, learning materials, clean drinking water and indicated medications (such as iron tablets, deworming medication or antimalarial medication). Some health centers were well equipped, with rooms adequately prepared with water, learning materials and a semi-private area for individual assessment.
Didactic materials were prepared: cards about labor signs, danger signs on a newborn and mother as well as care of them. Materials illustrating birth preparation were ready too. (Health Center 1 - Highest MFA Score) On some occasions, when group and individual ANC were provided on the same day, there was confusion around which room to use. High patient volume, in addition to staff being unaware that group ANC was being conducted, made it difficult to adequately prepare the room and resulted in individual assessments being conducted in a separate room.
Had not yet prepared room nor had they decided on which room to use for group care as it was an ordinary day for subsequent antenatal visits. I assisted in arrangement of the room but because it was a walk in for many people coming in and going out, the abdominal assessments were done in close by side room. (Health Center 4 -Third Lowest MFA Score) Nurses and CHWs were resourceful in adapting to the challenges of missing materials or broken equipment. However, on multiple occasions, in both low and high performing clinics, staff were unaware that group ANC was being conducted or had challenges effectively coordinating group care preparation with other services being offered.
Facilitation skills and process
Group care facilitators were trained on leading women through semi-structured activities, with the purpose of creating cohesiveness and trust while generating productive discussion [20]. Facilitators were taught to keep the final objectives of the session in mind, while maintaining awareness of their own biases and opinions. Many facilitators excelled in leading all aspects of group care, from teaching women to measure their blood pressure and weight, to promoting discussion.
The nurse facilitated the introduction of participants and facilitators as well as the MT (Master Trainer) and initiated the measurement of women's blood pressure and weight. While the health assessment was being performed, the CHW reminded women their group rules before women started discussing among themselves about different experiences on pregnancy.
(Health Center 1 -Highest MFA Score) Group sessions were difficult to continue when providers were unable to create a safe space for women. Some facilitators spoke more than women and did not allow them time to share their thoughts, approached the group with an attitude of judgement and blame and were disorganized and controlling in discussing content. As a result, women felt intimidated and refrained from participating in discussion.
He went in wrong direction blaming and requesting them to talk about what is wrong with their pregnancy. Women were intimidated and there was a total silence in the room. The nurse was intervening talking more than women and kept being judgmental. (Health Center 8-Second Lowest MFA Score)
Facilitators displayed competence in leading group sessions, effectively presenting content while creating an environment conducive to discussion, sharing experiences and establishing relationships among women. Challenges arose when facilitator behavior created a negative environment for women, preventing them from learning in a participatory manner.
Facilitator content knowledge
While nurses and CHWs underwent training on clinical skills, certain topics, such as preeclampsia, were not well understood and required clarification by the Master Trainer.
During recap of danger signs that were covered previously, I realized that they did not understand what pre-eclampsia or eclampsia (kugagara was). Neither the CHW nor the nurse could explain it. They both said it is when the mother is stiff and all the blood has stopped flowing in the body and the mother is almost dead. I had to intervene with the early and later signs and what could be the complications, in order to make them understand. (Health Center 2 -Lowest MFA Score)
Nurses and CHWs appeared to be well versed on group ANC content and skills. Selected clinical components require additional guidance, ensuring accurate assessments and clear explanations of topics.
Group ANC process
A key tenet of group ANC is the group process, through which women develop trust, cohesion and mutual support. Group ANC gave women the opportunity to learn through discussion and sharing personal experiences. Women were also able to gain a clearer understanding of family planning and what to expect during pregnancy.
Every mother has given a chance to share with others what have learned from Ibaruke neza mubyeyi including friendship, take care of themselves which enable them to deliver term and healthy babies. (Health Center 5 -Highest MFA Score) Negative interactions with facilitators were detrimental to the group experience, as women expressed their dissatisfaction and hesitation to return for subsequent sessions.
After he left women expressed also their worries as they told me that they can't really talk when he is the one around because he is always nervous and shout at them. When asked if they will come back for GPNC, they were clear and told me if he is the one in there they won't bother coming back. (Health Center 12 -Second Lowest Score) Master Trainers' perceptions suggested women had largely positive experiences, using group ANC as a forum to learn and clarify misconceptions regarding pregnancy, while creating supportive relationships through activities and sharing personal stories.
Discussion
Understanding fidelity of implementation throughout program introduction is critical in assessing which components of group care may require additional training and support, allowing implementors direction for course correction. Utilizing monitoring results to provide focused mentorship can potentially improve fidelity and strengthen the internal validity of the study. In this case, we see that the trial was implemented with sufficient fidelity and study results can be attributed to the intervention itself, as opposed to implementation failure.
Our study findings are aligned with the concepts posited in Carroll's framework for implementation fidelity [24]. The framework suggests interventions where key components are identified in advance, may have higher levels of fidelity compared to less structured interventions. However, intervention complexity must also be considered, with more complex interventions risking variation in fidelity in how different components are implemented. The framework also considers implementation monitoring, followed by feedback and training, as factors potentially improving both quality of delivery and implementation fidelity. The authors argue the suggested strategies are particularly crucial in the case of complex interventions.
Group ANC implementation fidelity in low and upper middle income countries has only been measured in four other studies, in Malawi and Tanzania, Rwanda, Nepal, and Mexico [10,[25][26][27][28]. At the time of this study, there is an active randomized clinical trial in Malawi measuring the degree of implementation success and associated contextual factors [29]. The Nepal trial included a component measuring the effect of time on process fidelity. The study used observations from a pre-study pilot to assess fidelity and re-train facilitators on topics, mainly regarding peer-discussion. During the intervention period, fidelity data was collected after every group visit and analyzed quarterly. The study found significant improvements over time for women supporting each other and facilitators providing dedicated time to group sessions instead of engaging in other clinic activities. No time effect was found for women sharing and actively engaging in group sessions. Study results are similar to our study, which found only small significant effects for encouraged participation, promoted discussion and group participation.
This study found that group ANC was delivered overall with high fidelity (mean overall MFA score of 3.18 (±.52)). Regression results found a significant positive association between elapsed time since implementation and fidelity for all MFA items. These findings suggest that the "soft skills" required for group care facilitation can be learned, retained and even improved over time, at least with monitoring and support visits in place. The Nepal and Mexico studies also found the need for continuous monitoring and feedback to improve facilitation skills. The Nepal study initially observed highly didactic facilitation. Researchers noted many nurse-midwives had decades of experience providing one-on-one ANC and were not accustomed to taking into account the social context or patients' beliefs. In addition to the initial two-day training, routine post-session debriefings including real-time feedback from nurse supervisors were required to improve facilitation and ensure success of the group ANC process.
Similarly, the Mexico studies had to adapt provider training by adding additional one-onone, on-site time to focus on developing facilitative leadership skills. The vertical nature of doctor-patient relationships proved further challenging, with the study finding the item with the lowest level of fidelity to be, "Whether the facilitator introduced her/himself in a friendly non-hierarchical way and guided but did not control the conversation." The study recommended additional training to achieve the participatory approach required for group ANC.
The MFA items with the lowest average scores at initial observation (room setup, curriculum knowledge and kept time) showed the highest improvements over time in the regression analysis. As this trial was a group ANC pilot, we were not surprised by the initial low scores and the subsequent room for improvement on these items. The MFA items for correct assessment, promoted discussion and group participation had the highest average scores at initial observation and the least improvements over time in the regression analysis. High initial scores for clinical assessments were expected, due to providers' prior clinical experience. However, high initial scores for facilitation and participation were unexpected, given the innovative nature of group ANC and its departure from traditional patient-provider interactions. These results seem to indicate that where providers scored lowest initially, they were able to improve their scores and master the necessary skills with practice, while they maintained the skills they performed well on at the onset. We suggest initially high-scoring items may require less frequent mentoring, while initially low-scoring items may require more frequent monitoring and mentoring.
Certain items displayed variations in trends over time. The trend line for the average score of keeping time of group ANC sessions showed a steady increase through the first four monitoring and mentoring visits, followed by a steady decrease. This may be due to the fact that over the course of program implementation, the actual number of group ANC visits surpassed the initially planned number of visits, resulting in several logistical challenges for health centers. These results are in line with PTBi-Rwanda's previous study, which found that at least 25% of group care visits were not implemented with fidelity to the intended two hour session, lasting more than two hours [10]. The tool analyzed in the previous study included two distinct open-ended questions for how much time was spent conducting health assessments and on group discussion. These findings may be more useful than MFA results in informing which component of timekeeping needs to be addressed. Comparing results from both studies displays the potential utility of using monitoring tools with different modes of data collection.
A unique component of the Rwanda model of group ANC was co-facilitation by CHWs and providers. The MFA and fidelity tools analyzed in the previous PTBi-Rwanda study included measures of fidelity scores as well as agreement in scores across the two different tools. This study assessed qualitative measures of the group process as well as the effect of time on model fidelity, highlighting the importance of facilitation support and skills-building throughout the implementation process. We recommend further exploration into the CHW-provider interaction, as well as other factors potentially impacting facilitation, such as provider age and experience, and potential gender norms and interactions between male facilitators and female participants.
Based on large improvements in scores for MFA items with low scores at initial observation, we recommend future group ANC programs include intensive monitoring and mentoring at program onset, followed by periodic monitoring through the remainder of the program. It is important to note, however, that certain components of the group process may require greater support than others, such as, preparing the room for group ANC, completing the group session within one hour, and strengthening facilitator knowledge of group ANC content. We recommend the use of both quantitative and qualitative monitoring tools to provide a complete assessment of implementation fidelity. While the MFA has several strengths, solely focusing on quantitative results can fail to capture critical elements affecting implementation fidelity.
Carroll's framework identifies participant responsiveness as a potential moderator, with a lack of participant acceptance and/or engagement potentially impacting implementation fidelity. We recommend implementation fidelity monitoring take place at every level of the program; in the context of group ANC, this includes the level of the health center, providers and women. This is particularly important in addressing more challenging components of group care, such as the impact of facilitator gender on active participation from women. We suggest program implementers consider contextually appropriate mechanisms for measuring factors influencing fidelity at the participant level, such as an exit interview or a complaints and feedback hotline. By ensuring the involvement of women, these mechanisms serve as a participatory approach to monitoring implementation fidelity while informing recommendations for improving fidelity of the group ANC process.
Limitations
Seven Master Trainers conducted observations and completed MFAs, however, fidelity of Master Trainer support was not documented, and due to financial and logistical constraints, inter-rater reliability for MFAs was not measured [10]. As a result, the degree of agreement between raters is unknown and it is possible the relationship between time and MFA scores presents a limited view of the results. As Master Trainers provided support primarily for identified gaps, this may explain why items with the lowest fidelity scores at initial visit had the largest increase in fidelity scores over time, however, this may pose limitations in interpreting the relationship between time and initially high scoring items.
A single Master Trainer was assigned to complete Activity Reports and MFAs for their designated health centers for the duration of the program. Due to Master Trainers' workloads and scheduling challenges, continuity was not always maintained. Master Trainers occasionally completed Activity Reports and MFAs for health centers other than what they were assigned to, potentially contributing to variance in perceptions of fidelity and MFA scores. In addition, due to a small sample size of 160 MFAs, there is an increased likelihood of Type 2 error.
Conclusion
This study aims to contribute to the limited body of research on monitoring group ANC implementation fidelity, in the context of low and middle income countries. We suggest a greater investment in monitoring and mentoring at program onset, with periodic visits thereafter. Our study's findings also display the importance of using both quantitative and qualitative measures to assess implementation fidelity. This is particularly significant in identifying areas of implementation requiring greater support, such as room preparation, keeping time and curriculum knowledge. Future programs must consider contextual factors as well as innovation and flexibility in program monitoring and implementation when designing group ANC programs.
INTRA- AND INTER-ANNUAL TRENDS OF SUN-INDUCED FLUORESCENCE (SIF) FOR CONTRASTING VEGETATION TYPES OF INDIA
Photosynthesis governs the productivity and health of forests. Traditionally, remote sensing derived reflectance measures have been used to assess forest phenology, productivity and stress. Chlorophyll pigments absorb solar radiation and emit fluorescence in the far-red region of the electromagnetic spectrum. Chlorophyll fluorescence relates directly to the photosynthetic activity of plants, and its measurement from space has recently been achieved in the form of Sun-Induced Fluorescence (SIF). However, the SIF response has been found to vary with vegetation type; hence, there is a need to study the SIF response of the tropical forests of India, considering their wide extent, contribution to the national carbon cycle and climate resilience. In this study, intra- and inter-annual GOME-2 and OCO-2 SIF responses of contrasting Indian tropical forest types, viz., dry deciduous (Betul, Madhya Pradesh), moist deciduous (Kalahandi, Orissa) and wet evergreen (Uttara Kannada, Karnataka), were investigated with respect to rainfall, NDVI and GPP trends. The results show that the dry, moist and wet forests of India differ in photosynthetic activity at the intra- and inter-annual scale. GOME-2 SIF observations were more variable than OCO-2 SIF, particularly during the green-up and senescence phases. SIF explained higher seasonality for dry deciduous, followed by moist deciduous and wet evergreen forests. Annually integrated SIF (a proxy of GPP) was in the order: wet evergreen > moist deciduous > dry deciduous.
INTRODUCTION
Measurements of Sun-Induced Fluorescence (SIF) from space have the potential to improve the accuracy of global photosynthesis maps. Whether a plant is photosynthetically active can be detected directly by capturing chlorophyll fluorescence radiation through remote sensing techniques. Earlier studies revealed that SIF products can be related to GPP (Lee et al., 2013; Frankenberg et al., 2014). SIF can be expressed by a similar equation, under a few assumptions, that relates it to GPP and light use efficiency (LUE) (Damm et al., 2010). The time of data acquisition has a large impact on the relationship between fluorescence and photosynthetic rate (Tol et al., 2009). SIF is primarily retrieved through Fraunhofer Line Depth (FLD) methods from Earth Observation (EO) satellites (Meroni et al., 2009). FLD uses Fraunhofer absorption lines introduced by the oxygen bands O2B (686 nm) and O2A (760 nm) (Plascyk, 1975; Moya et al., 2004). At present, global-scale SIF retrieved through FLD algorithms is provided by a few satellites such as the Global Ozone Monitoring Experiment-2 (GOME-2) and the Orbiting Carbon Observatory-2 (OCO-2), at resolutions of up to a few km (Joiner et al., 2013; Frankenberg et al., 2014). However, these SIF sensors differ slightly in their retrieval channels and sensing times. GOME-2 uses the 734-758 nm spectral window to detect morning SIF (9:30 AM local time), whereas OCO-2 uses the 757 and 771 nm channels to retrieve SIF at 1:30 PM (local time).
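For concreteness, the light-use-efficiency (LUE) logic referenced above is commonly written as the following pair of relations; this is a standard formulation from the SIF literature (including the canopy escape fraction f_esc), not an equation reproduced from this paper:

```latex
\mathrm{GPP} = \mathrm{APAR}\times\mathrm{LUE}_p, \qquad
\mathrm{SIF} = \mathrm{APAR}\times\mathrm{LUE}_f\times f_{esc}
\quad\Longrightarrow\quad
\mathrm{GPP} \approx \mathrm{SIF}\cdot\frac{\mathrm{LUE}_p}{\mathrm{LUE}_f\, f_{esc}}
```

Here APAR is the absorbed photosynthetically active radiation, LUE_p and LUE_f are the efficiencies of photosynthesis and fluorescence emission respectively, and f_esc is the fraction of emitted fluorescence escaping the canopy.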
SIF also varies across forest types, which differ in canopy structure and biochemical variables (Verrelst et al., 2015; Chen et al., 1999; Tol et al., 2014; Walker et al., 2014). Time-series analysis provides descriptive features of seasonality (e.g., the ARIMA (auto-regressive integrated moving average) model). Annual SIF variation is estimated by integrating SIF using the AUC (area under curve) method (Reed et al., 1994). In the present study, we tested the potential of SIF, originating at different spectra and overpass times, to capture the seasonality of different forest types induced by photosynthetic activity.
METHODOLOGY
2.1 Materials

2.1.1 Satellite SIF Products: GOME-2 SIF (V27 Level 3) monthly data from 2014 to 2017 were downloaded from https://acd-ext.gsfc.nasa.gov/. The GOME-2 sensor is a spectrometer on board the European meteorological satellites MetOp-A and MetOp-B; MetOp-A was launched in 2006 into a sun-synchronous polar orbit. Its spatial resolution is 40 km x 40 km and its swath is 1920 km. It senses irradiance in the wavelength range 240-790 nm at 0.5 nm spectral resolution. OCO-2 SIF product data from September 2014 to July 2018 were downloaded from https://co2.jpl.nasa.gov. OCO-2 is a U.S. environmental science satellite launched on 2 July 2014 into a sun-synchronous orbit, with a spatial resolution of 2.25 km x 1.29 km. It measures Earth-reflected radiation in the O2-A band at 0.76 µm and in CO2 bands at 1.61 and 2.06 µm. SIF is retrieved in the O2-A band using the SIF emission spectrum, which ranges between 660 and 850 nm.
Satellite-derived Biophysical Products: MODIS (MYD13C2) NDVI (Normalised Difference Vegetation Index) 8-day composite product at 0.05° spatial resolution was downloaded from the NASA website (https://search.earthdata.nasa.gov). MODIS (MYD17A2H) GPP 8-day composite data at 500 m pixel size for 2015-2017 were obtained. The GPP product is based on the fraction of absorbed photosynthetically active radiation (fAPAR) and the photosynthetically active radiation (PAR) reflectance of vegetation, which indicate the productivity of plants.
Ancillary Data:
District-wise monthly rainfall data (AWS based) for the different forested grids, covering the period 2013 to 2017, were downloaded from the Indian Meteorological Department (IMD) website (http://www.imd.gov.in/). In addition, the mean annual precipitation and temperature spatial data layers from WorldClim (www.worldclim.org/) were used. The vegetation type map of India was obtained from Reddy et al. (2015).
2.2.1 Selection of Contrasting Vegetation Types: According to the Koppen-Geiger scheme of classification and based on the vegetation type map of India (Reddy et al., 2015), three contrasting vegetation types were chosen, i.e., Tropical Dry Deciduous (TDD) from Betul, Madhya Pradesh; Tropical Moist Deciduous (TMD) from Kalahandi, Orissa; and Tropical Wet Evergreen (TWE) from Uttara Kannada, Karnataka (Figure 1).
Figure 1. Location of selected forest type in India
Factors such as data availability, species composition, mean rainfall (MRF), mean temperature (MT, °C), area extent (sufficient to cover the GOME-2 pixel extent) and seasonal variation were taken into account for selection of the site within each chosen vegetation type (Table 1).
2.2.2 Pre-Processing of SIF Data: GOME-2 and OCO-2 SIF data were downloaded in NetCDF (.nc) and NetCDF-4 (.nc4) formats, as 2-D and 1-D data respectively. The NetCDF format can be opened directly in software such as HDF viewers, Panoply, MATLAB and R (CRAN). In ArcGIS, the "Make NetCDF Raster Layer" tool from the ArcToolbox converts a 2-D NetCDF file to other raster formats (.TIF). GOME-2 SIF images of the Indian region were extracted from the global coverage. The OCO-2 SIF data, however, are provided globally as a point layer (1-D); a Python script on Linux was used to extract the OCO-2 point data for the Indian region. Negative and no-data values, marked as errors or flags, were removed from the raster layers.
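A minimal sketch of such a subsetting step in Python, assuming an xarray-readable NetCDF grid; the file name, the variable name ("SIF_740") and the India bounding box are illustrative assumptions, not the authors' actual script.

```python
import xarray as xr

# Hypothetical monthly GOME-2 SIF file; the variable name is an assumption.
ds = xr.open_dataset("gome2_sif_monthly.nc")
sif = ds["SIF_740"]

# Subset to an approximate India bounding box (assumes ascending lat/lon).
india = sif.sel(lat=slice(6, 38), lon=slice(68, 98))

# Drop flagged/negative retrievals before aggregation, as described above.
india = india.where(india >= 0)
india.to_netcdf("gome2_sif_india.nc")
```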
2.2.3 Intra- and Inter-Annual SIF Trend Analysis: Box plots showing the lowest value, highest value, lower quartile, upper quartile, range and median were used to display the SIF trends, using R Studio (CRAN team, 2018). An ARIMA (auto-regressive integrated moving average) model was used to determine the temporal trend of SIF in relation to other variables (e.g., rainfall, NDVI and GPP) through time-series analysis (CRAN team, 2018). In this study, trend and point inflection (TPI) methods were used jointly with the help of the CRAN-R statistical software (CRAN team, 2018). The TPI method permits easy discrimination of growing seasons where multiple growth seasons occur. Pre-defined or comparative reference values were used as thresholds to identify the transition phases (i.e., leaf fall as the end of senescence, leaf flush as the onset of greenness). Phenological transition periods are the time lags between two specific phenological conditions.
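The authors fit their ARIMA models in R; purely as an illustration, an equivalent fit on a synthetic monthly SIF series can be done in Python as follows (the series and the model order are hypothetical).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic 4-year monthly SIF series with an annual cycle.
months = pd.date_range("2014-01", periods=48, freq="MS")
sif = pd.Series(1.0 + 0.8 * np.sin(2 * np.pi * np.arange(48) / 12),
                index=months)

model = ARIMA(sif, order=(1, 0, 1)).fit()   # order chosen arbitrarily here
trend = model.fittedvalues                  # fitted seasonal trend
print(model.aic)
```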
The inflection point method is based on the detection of values and points within a particular range of time (Reed et al., 1994). Trend derivative methods were used to estimate time-integrated SIF (tSIF) for inter-annual variation analysis. tSIF was estimated using the AUC (Area Under Curve) approach, with quantitative accuracy tests, via the DescTools library available in the R core packages. The AUC can be drawn with the "trapezoid", "step" or "spline" method; here the "spline" method, which integrates the 'splinefun' function, was used. Loess regression was applied to smooth the time-series dataset before the time-integrated value was calculated (Figure 2).
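The paper performs this step with R's DescTools::AUC ("spline" method) after loess smoothing; a rough Python analogue of the spline-based integration is sketched below with hypothetical monthly values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

months = np.arange(1, 13)                        # Jan..Dec
sif = np.array([0.4, 0.4, 0.5, 0.6, 0.9, 1.3,    # hypothetical monthly SIF
                1.8, 2.0, 1.7, 1.2, 0.8, 0.5])   # (mW/m^2/nm/sr)

spline = CubicSpline(months, sif)
tsif = spline.integrate(1, 12)                   # area under the annual curve
print(f"tSIF ~ {tsif:.2f}")
```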
Relationship of SIF with Biophysical Parameters:
The impact and pattern of biophysical parameters on SIF were analysed, along with their seasonal and spatial variation. Partial correlations were applied between GOME-2 SIF and these parameters. The climatic variables were aggregated spatially and distributed monthly for each selected forest site, and the relationship of each individual variable was derived and studied. The correlation coefficient between ground-station rainfall point data and GOME-2 SIF was estimated; in this relationship, one variable is ground data (district-level IMD rainfall) and the other is a satellite product. Bias and variance were estimated.
This involved the study of phenological events and relationships during the stress as well as the growing phase. For this, tSIF (time-integrated SIF) and tNDVI (time-integrated NDVI) values were examined for the stress and growing phases, and the generated values were compared with each other. To derive the relationship of GPP with GOME-2 SIF, a trend analysis similar to that of the section above was carried out, and coefficients of regression were generated to compare the relationships among variables.
Intra-Annual SIF Trend Analysis
Seasonal variations affect photosynthetic activity by regulating the phenology of forests. The seasonal (intra-annual) variation is shown by box plots of monthly SIF for the tropical dry deciduous, moist deciduous and wet evergreen forest types (Figure 3).
Figure 3. Intra-annual trend of SIF derived from GOME-2 (left) and OCO-2 (right) for different forest types.
Tropical Dry Deciduous (TDD): GOME-2 SIF values show a large spread between minimum and maximum values, indicating that the dry deciduous forest undergoes periods of both minimum and maximum photosynthetic activity (Figure 3, top). A similar trend is observed in the OCO-2 SIF response, with high variability during the growth phase (i.e., July-August) and negligible variability during senescence. Though both GOME-2 and OCO-2 SIF capture the seasonality of the dry deciduous forest well, the GOME-2 SIF response was found to track seasonal changes more clearly. The variability of SIF in June, reaching a maximum above 1 and a minimum below 0.5, is due to leaf flush and the accelerating metabolic activity that follows leaf emergence (Dadhwal et al., 2012).
In July, August and September, the growth of trees, particularly foliage, is at its maximum under optimum growth conditions. The higher metabolic activity of the growing phase promotes high productivity (Jha et al., 2013) and thus accounts for the high SIF values. The senescence phase starts at the end of November, and the forest remains leafless until March (Shah et al., 2007). To withstand high temperature and low rainfall, TDD forests shed their leaves, which reduces the transpiration rate and aids survival (Singh and Kushwaha, 2005). In February and April, exposed undergrowth (shrubs and herbs such as grasses; Pande et al., 2002) also contributes to SIF, producing more variability than in January and December.
Tropical Moist Deciduous (TMD):
These forests receive rainfall for four to five months. The growth phase is therefore longer, from May to October, and growth is evenly distributed over this prolonged favourable period. This slower, more even growth results in lower SIF values during the growth phase and higher values during the senescence phase compared with the dry deciduous forest (Figure 3, middle). Leaves emerge in this forest type during May-June, when SIF is considerably higher than in the preceding months. SIF variability is high because the selected forest is a mixed forest whose species differ in their leaf-emergence phases (Sinha et al., 2017). The peak growth phase of this forest extends from the end of June to October (Singh et al., 1993), which is reflected in high SIF values throughout these months; variability is greatest in June and July, when growth is maximal. The senescence phase starts at the end of December, and the forest remains leafless until March, when flowering starts (Poorter et al., 2007).
Tropical Wet Evergreen (TWE):
In the tropical wet evergreen forest, seasonal variability does not show a specific trend, unlike the dry deciduous and moist deciduous forests, because the leaf fall, growth and senescence phases are not separable. For evergreen vegetation, SIF values remain fairly uniform throughout the year, as there is no specific leaf-fall season and growth continues year-round (Dash et al., 2010). Still, maximum growth is attained in August, with the highest SIF value of around 2.0, while photosynthesis is at a minimum in January. Because metabolic activity continues throughout the year (Pascal et al., 2004), SIF variability is present in all months, with the greatest variability, and the highest SIF, in June (Figure 3, bottom). The OCO-2 SIF response deviated from GOME-2 SIF in April and September; apart from these two months, the growing and senescence phases of the two satellite SIF trends were similar.
Time Series Analysis (Intra- and Inter-Annual SIF)
The SIF of the three contrasting tropical forest types shows different levels of photosynthetic activity in different months. The overall mean SIF for the wet evergreen forest is higher than for the dry and moist deciduous forests, but the maximum SIF response was observed in August for the dry deciduous forest. This indicates that the dry deciduous forest has a high photosynthetic capacity for a short duration (Figure 4). For predicting the seasonal cycle, SIF derived from GOME-2 served as an effective predictor for the different forest types, and its trend corresponded more closely with rainfall and GPP than with NDVI.
Tropical Dry Deciduous (TDD):
The pattern and seasonality obtained from the SIF data differ from traditional vegetation index records. SIF and GPP show enhanced productivity from July to October, with values peaking in the post-monsoon months of August, September and October, whereas rainfall peaks in June and July. During the dry months (April, May, June) the trees shed their leaves to limit evapotranspiration loss, so SIF, GPP and NDVI reach their minimum in each year (Singh and Kushwaha, 2005). SIF and rainfall show their highest values in 2016, whereas NDVI and GPP show similar trends across all years. Of the three variables, SIF correlates best with rainfall (R² = 0.62) (Figure 5); SIF relates to GPP and NDVI with R² = 0.58 and 0.5, respectively (p < 0.005).
Figure 5. GOME-2 SIF relationship with rainfall (a), GPP (b) and NDVI (c) for TDD.
Tropical Moist Deciduous (TMD):
SIF shows a rapid rise after the first shower of rain, i.e., from May, which differs from the dry sites, where the rise occurs in late June, supporting the findings of Sinha et al. (2017).
In 2016 and 2017, SIF and rainfall follow each other, but not in 2014 and 2015. The trends of GPP and NDVI remain similar for all four years, whereas SIF reaches a peak value of 2.4 in 2016. It was also observed that SIF reacted more sharply than NDVI to increasing rainfall during the growing phase of the forest. On the other hand, a sharp decrease in SIF was observed in the post-monsoon period, indicating an increasing level of water stress in the forest, to which NDVI showed little sensitivity until November. Rainfall shows the highest R² with SIF (R² = 0.55, p < 0.5) for the TMD forest (Figure 6), while SIF is less well correlated with GPP and MODIS NDVI (R² = 0.4, p < 0.5).
Figure 6. GOME-2 SIF relationship with rainfall (a), GPP (b) and NDVI (c) for TMD.
Tropical Wet Evergreen (TWE):
The SIF response of TWE is unique. The NDVI trend of 2015 did not show seasonal variation (Prasad et al., 2007), but SIF showed seasonal variability for the same period, indicating that SIF is more closely related to phenology, as well as to stress, in evergreen species than NDVI. The seasonal correlation between SIF and GPP was also weaker in the wet tropics, mostly because of the minimal GPP seasonality and noise in the data (Giardina et al., 2018) (Figure 7).
Figure 7. GOME-2 SIF relationship with rainfall (a), GPP (b) and NDVI (c) for TWE.
Leaf emergence takes place in February and March (pre-monsoon) and leads to a rise in the SIF value each year, but the same could not be captured using NDVI. Flowering and fruiting take place between December and March. The wet evergreen forest was found to be less sensitive to precipitation seasonality: trees in tropical wet evergreen forests contain more biomass and have deep rooting systems, which enable them to access deeper soil moisture and thus avoid the impacts of drought months on photosynthetic capacity.
Inter-Annual SIF Trend Analysis
Minor differences were observed between the yearly tSIF values estimated from the original curve and from the curve smoothed by the AUC and spline algorithm. The unit of tSIF follows the SIF unit (i.e., mW/m²/nm/sr). Overall tSIF (2014-2017) estimated from GOME-2 was highest for TWE, whereas OCO-2 tSIF for TMD was slightly higher than for TWE (Table 2). Annual tSIF estimated from GOME-2 (tSIF_GOME-2) was higher than tSIF from OCO-2 (tSIF_OCO-2) for almost every year (Table 2). Total tSIF was lowest for the TDD forest type, compared with TMD and TWE, as estimated from both SIF sensors. tSIF_GOME-2 of TDD was almost the same for 2016 and 2017, whereas tSIF_OCO-2 showed its highest value in 2016 rather than 2015, when rainfall was also recorded as higher. tSIF for 2015 shows a small anomaly: OCO-2 estimates were higher for TMD and lower for TDD and TWE, while GOME-2 estimates were highest for TDD rather than TMD and TWE. Overall, TWE shows higher photosynthetic activity than the TMD and TDD forest types.
The dry deciduous and moist deciduous forests revealed similar trends in both SIF and NDVI, but the wet evergreen forest exhibited more prominent differences between SIF and NDVI (Figure 8). This is because SIF is associated with the functioning of chlorophyll molecules and not only with the greenness of the leaf, whereas NDVI relies only on leaf greenness (Anyamba et al., 2001).
CONCLUSION
GOME-2 SIF observations captured the seasonal variability more efficiently than OCO-2 SIF. Estimated monthly SIF and annual tSIF are guided by rainfall for all forest types. The observations show that SIF effectively captures the photosynthetic variability linked with leaf transition periods (i.e., the leaf flushing and senescence phases), particularly in the dry deciduous forest. Monthly SIF also shows the unique characteristic of the dry deciduous forest, with one annual peak and one annual trough. SIF captures seasonality well where NDVI saturates, especially in the evergreen forest. Annual tSIF, as a proxy of photosynthetic activity (i.e., GPP), shows that the wet evergreen forest sequestered more carbon than the moist deciduous and dry deciduous forests. GPP from MODIS could be replaced by flux-tower-derived GPP to obtain better accuracy in the relationships. In future, SIF can be used as an important tool to measure the temporal and spatial variability of photosynthetic activity, stress patterns, and forest health across forest types.
VSL#3 can prevent ulcerative colitis-associated carcinogenesis in mice
AIM To investigate the effects of VSL#3 on tumor formation, and on the fecal and intestinal mucosal microbiota, in the azoxymethane/dextran sulfate sodium (AOM/DSS)-induced mouse model. METHODS C57BL/6 mice were administered AOM/DSS to develop the ulcerative colitis (UC) carcinogenesis model. Mice were treated with 5-ASA (75 mg/kg/d), VSL#3 (1.5 × 10^9 CFU/d), or 5-ASA combined with VSL#3 by gavage from the day of AOM injection for three months (five days/week). The tumor load was compared in each group, and tumor necrosis factor (TNF)-α and interleukin (IL)-6 levels were evaluated in colon tissue. Stool and intestinal mucosa samples were collected to analyze differences in the intestinal microbiota by the 16S rDNA sequencing method. RESULTS VSL#3 significantly reduced the tumor load in the AOM/DSS-induced mouse model and decreased the levels of TNF-α and IL-6 in colon tissue. The model group had a lower level of Lactobacillus and higher levels of Oscillibacter and Lachnoclostridium in the fecal microbiota than the control group. After the intervention with 5-ASA and VSL#3, Bacillus and Lactococcus were increased, while Lachnoclostridium and Oscillibacter were reduced. 5-ASA combined with VSL#3 increased Lactobacillus and decreased Oscillibacter. The intestinal mucosal microbiota analysis showed a lower level of Bifidobacterium and Ruminococcaceae_UCG-014 and a higher level of Alloprevotella in the model group as compared to the control group. After supplementation with VSL#3, Bifidobacterium was increased. 5-ASA combined with VSL#3 increased the levels of both Lachnoclostridium and Bifidobacterium. CONCLUSION VSL#3 can prevent UC-associated carcinogenesis in mice, reduce colonic mucosal inflammation levels, and rebalance the fecal and mucosal intestinal microbiota.
INTRODUCTION
Recently, the incidence of ulcerative colitis (UC) has shown an upward trend, leading to increased clinical attention on UC-associated carcinogenesis. A recent meta-analysis encompassing eight population-based cohort studies reported a 1.6% prevalence of colorectal cancer (CRC) in patients with UC, and the rate of CRC was 2.4-fold higher than that in the general population [1]. Moreover, the existing treatment for UC is not satisfactory for the prevention of carcinogenesis, and long-term usage involves several risks and side effects. Thus, finding new treatment regimens is essential.
Although the etiology of UC is yet to be elucidated, several studies have indicated that the host intestinal microbiota triggers an immune response that is requisite for the onset of the disease [2] . Microbiota also plays a major role in promoting UC-associated carcinogenesis. It downregulates the host immune response, improves the epithelial barrier function, and increases the mucus production [3] . Previous studies demonstrated that in the sterile intestinal environment, i.e., the lack of intestinal microbiota, a significant reduction in carcinogenic mutations and intestinal tumor formation was observed [4] . Chronic inflammation plays a crucial role in UC-associated tumorigenesis via cellular DNA damage, telomere shortening, and senescence [5] . Previous studies demonstrated that probiotics exert a superior therapeutic effect on inflammation and UC [6] . VSL#3 is a mixture of Lactobacillus casei, Lactobacillus plantarum, Lactobacillus acidophilus, Lactobacillus delbrueckii subsp. bulgaricus, Bifidobacterium longum, Bifidobacterium breve, Bifidobacterium infantis, and Streptococcus salivarius [7] . It proved to be beneficial in the treatment of UC, including remission and relief of the relapse in mild to moderate disease [8][9][10] . Thus, we speculated that probiotic treatment or adjuvant treatment of UC could prevent carcinogenesis. One study demonstrated that VSL#3 can inhibit UC-associated carcinogenesis in a mouse model [11] . However, the mechanism underlying the VSL#3 treatment of UC carcinogenesis is yet to be elucidated.
Therefore, in the present study, VSL#3 was selected to investigate the effect of prevention on UC-associated carcinogenesis and the differences between fecal and mucosal microbiota were analyzed to gain a theoretical insight for the prevention of UC-associated carcinogenesis.
Specimen collection
The mice were sacrificed in the 12th week via transcardiac perfusion, and colon tissues were removed. The colons were slit longitudinally along the main axis and washed with 0.9% saline. The long and short diameters of each tumor were measured using sliding calipers, and the total tumor load of each colon was calculated as the sum of the products of the long and short diameters of each tumor. Subsequently, the whole colon was divided into four sections. The section near the anus was washed with 0.9% saline to remove non-adherent bacteria, flash-frozen in liquid nitrogen, and stored at -80 °C for subsequent microbiota analysis. The remaining sections were used for enzyme-linked immunosorbent assays (ELISA) and histopathological examinations. Stool samples were collected just before AOM injection and before sacrifice. A total of six mice were randomly selected from each group, and their stool and intestinal mucosa samples were sent to Allwegene (Beijing, China) for analysis of differences in the intestinal microbiota by the 16S rDNA sequencing method.
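As a trivial illustration of the tumor-load calculation described above (a sketch, not the authors' code; the measurements are made up):

```python
def tumor_load(tumors_cm):
    """Sum of long diameter x short diameter over all tumors (diameters in cm)."""
    return sum(long * short for long, short in tumors_cm)

# Three hypothetical tumors measured with sliding calipers.
print(tumor_load([(0.5, 0.4), (0.7, 0.5), (0.3, 0.3)]))  # 0.64
```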
Fecal DNA extraction and pyrosequencing
Microbial genomic DNA was isolated using a QIAamp DNA Micro Kit according to the manufacturer's instructions. The final quantity and quality of the DNA were assessed at 260 nm and 280 nm using an ultraviolet spectrophotometer and stored at -20 ℃ before further analysis. The V3-V4 hypervariable regions of the 16S rDNA gene were subjected to highthroughput sequencing by Allwegene using the Illumina Miseq PE300 sequencing platform (Illumina Inc., CA, United States).
ELISA for tumor necrosis factor-α and interleukin-6 in colon mucosa
The levels of tumor necrosis factor (TNF)-α and interleukin (IL)-6 in the colon mucosa were measured using commercial mouse TNF-α and IL-6 ELISA kits (eBioscience, United States), according to the manufacturer's protocols. The absorbance was measured at 450 nm, and the results were expressed as pg/mg tissue. A total of eight mice were selected randomly from each group for ELISA.
Effects of VSL#3 on UC-associated carcinogenesis
Treatment with AOM and DSS led to 100% (19/19, one mouse died during the experiment due to fighting) incidence of colonic neoplasms in the model group with the mean tumor load of 0.97 ± 0.19 cm. 5-ASA and VSL#3 administration significantly reduced both the tumor formation rate and the tumor load (Table 1 and Figure 5). Furthermore, no colonic tumor was detected in the control group.
Colonic TNF-α and IL-6 level comparison
As illustrated in Figure 6 and Tables 2 and 3, the levels of colonic tissue TNF-α and IL-6 in the model group were significantly higher than that in the control group.
The increased levels of these inflammatory factors induced by AOM/DSS were attenuated by 5-ASA and VSL#3 treatment.
VSL#3 treatment alters the composition of fecal microbiota in AOM/DSS treated mice
In order to characterize the diversity of the fecal-associated community in UC-associated carcinogenesis, we used the Chao 1 and observed-species indexes, as well as the Shannon and Simpson indexes. No significant difference was detected in the diversity and composition of fecal microbiota among the groups at the beginning of the experiment. After the 12-wk experiment, although no statistically significant difference was detected in diversity among the groups, the microbiota composition was altered considerably. The change in the composition of fecal microbiota induced by AOM/DSS administration was characterized by a decrease in Lactobacillus coupled with an increase in Oscillibacter and Lachnoclostridium, as indicated by metastats analysis (P < 0.05). Both 5-ASA and VSL#3 supplementation were associated with a significant increase in Bacillus and Lactococcus and a decrease in Oscillibacter and Lachnoclostridium as compared to the model group (P < 0.05). 5-ASA combined with VSL#3 increased the level of Lactobacillus and decreased that of Oscillibacter (P < 0.05) (Table 4).
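For reference, the alpha-diversity indexes named above can be computed from a vector of taxon counts as sketched below; the counts are hypothetical and this is standard index arithmetic, not the study's actual pipeline.

```python
import numpy as np

counts = np.array([120, 80, 40, 10, 5, 1, 1])   # hypothetical reads per taxon
p = counts / counts.sum()

shannon = -np.sum(p * np.log(p))                # H' = -sum(p_i * ln p_i)
simpson = 1 - np.sum(p ** 2)                    # 1 - sum(p_i^2)
f1, f2 = (counts == 1).sum(), (counts == 2).sum()
chao1 = len(counts) + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected Chao 1
print(round(shannon, 3), round(simpson, 3), chao1)
```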
Statistical analysis
Data are presented as mean ± SE. All statistical analyses were performed using GraphPad Prism version 6.0 (GraphPad Software Inc., La Jolla, CA, United States). Statistical differences between two experimental groups were assessed by two-tailed independent t-tests, and data from more than two groups were analyzed by one-way ANOVA. Anosim and metastats analyses were used for the microbiota data. P < 0.05 was considered statistically significant.
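A minimal sketch of the statistical comparisons described above, using SciPy rather than GraphPad Prism; the group labels and measurements are placeholders, not study data.

```python
from scipy import stats

# Hypothetical colonic TNF-alpha levels (pg/mg tissue) per group.
control = [12.1, 10.8, 11.5, 13.0, 12.4]
model = [25.3, 27.9, 24.1, 26.6, 28.2]
vsl3 = [18.0, 16.5, 19.2, 17.4, 18.8]

# Two-tailed independent t-test between two groups.
t, p_t = stats.ttest_ind(model, control)

# One-way ANOVA across more than two groups.
f, p_f = stats.f_oneway(control, model, vsl3)

print(f"t-test: t = {t:.2f}, p = {p_t:.4f}; ANOVA: F = {f:.2f}, p = {p_f:.4f}")
```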
General health of mice in each group
As shown in Figure 2, compared to the control mice, body weight loss was significantly greater in mice treated with azoxymethane/dextran sulfate sodium (AOM/DSS) after day 10 of DSS administration, accompanied by colitis symptoms such as loose and bloody stool, dull body hair, fatigue, and reduced movement. These symptoms were alleviated when the mice received ordinary drinking water. In week 9, some mice treated with AOM/DSS presented bloody stool again, as well as anal prolapse in week 10. However, no apparent weight loss was observed in the control mice, and no significant differences were detected among the five groups at the end of week 12.
Establishment of UC-associated carcinogenesis mice model
The mice were sacrificed by week 12, and colorectal tumors were observed in the model and treatment groups (5-ASA, VSL#3, and 5-ASA + VSL#3). Strikingly, the tumors were primarily localized in the distal two-thirds of the colon. Anal tumor fusion and ring growth at the end of the rectum were observed in mice with anal prolapse (Figure 3). The pathological analysis showed mucosal carcinoma or high-grade intraepithelial neoplasia in mice treated with AOM/DSS, manifested as colonic gland structure disorder, large nuclei, deep staining, and an imbalanced nucleoplasmic ratio (Figure 4).
VSL#3 treatment alters the composition of mucosal microbiota in AOM/DSS treated mice
For the mucosal microbiota, no difference was observed in community diversity among the groups after the 12-wk experiment. However, a distinct shift in microbiota composition was observed by PCA and Anosim analysis (R > 0, P < 0.05). Further investigation into the discrete bacterial taxa revealed that Ruminococcaceae UCG-014 and Bifidobacterium decreased, while Alloprevotella increased, in the model group compared to the control group. After supplementation with VSL#3, Bifidobacterium was increased. Although 5-ASA alone did not alter the mucosal microbiota, its combination with VSL#3 increased Lachnoclostridium and Bifidobacterium in the mucosa (Table 5).
DISCUSSION
The current study found that the rate of tumor formation and the tumor load decreased after VSL#3 treatment compared to the model group, while the levels of TNF-α and IL-6 in colon tissue in the model group were significantly higher than those in the control group. After the 12-wk treatment with VSL#3, the increase in TNF-α and IL-6 caused by AOM/DSS declined significantly. These findings are consistent with those of previous studies [11,13,14]. The major risk of long-term chronic inflammation is tumor occurrence [2]. Thus, we speculated that VSL#3 could prevent UC carcinogenesis by inhibiting the inflammatory response.
Herein, we found differences between the fecal and mucosal microbiota. In the case of the fecal microbiota, the model group mice possessed less Lactobacillus and more Oscillibacter and Lachnoclostridium than the control group. Previous studies have shown that Lactobacillus bulgaricus can reduce colitis [15] and that Lactobacillus rhamnosus can effectively maintain UC remission [16]. Oscillibacter and Lachnoclostridium are newly discovered genera with respect to digestive diseases. In the case of the mucosal microbiota, the levels of the Ruminococcaceae genus UCG-014 and of Bifidobacterium decreased, while that of Alloprevotella increased, in the model group compared to the control group. Some genera of Ruminococcaceae can consume hydrogen to produce acetate, which is subsequently used by Roseburia to produce butyrate, which is not only the main source of energy for intestinal epithelial cells but can also inhibit the signaling pathways of proinflammatory cytokines [17]. Bifidobacterium can produce bacteriocins and organic acids that act against pathogens invading the intestinal mucosa [18]. It regulates intestinal mucosal immunity and prevents the colonization of pathogens. The role of Alloprevotella is not yet clarified, as it is rarely reported in digestive diseases. Therefore, we hypothesize that dysbiosis occurs during UC-associated carcinogenesis, reducing the beneficial types and increasing the detrimental types.
Previous studies have shown that supplementation with probiotics can balance the intestinal microbiota of UC patients [6], which led us to speculate that probiotic supplementation can also balance the intestinal microbiota in UC-associated carcinogenesis. The current study demonstrated that Bacillus and Lactococcus were increased, while Oscillibacter and Lachnoclostridium were decreased, in the feces following VSL#3 treatment as compared to the model group. Some species of Bacillus and Lactococcus are widely used as probiotics. For example, Bacillus subtilis can significantly reduce DSS-induced colonic mucosal injury and inflammatory factors in mice and improve the levels of short-chain fatty acids [19]. Lactococcus lactis exerts a protective effect in DSS-induced colitis model mice [20]. Furthermore, Bifidobacterium increased in the mucosa after VSL#3 supplementation, suggesting that VSL#3 supplementation following the onset of AOM/DSS-induced colitis promotes a healthy gastrointestinal bacterial community. Interestingly, VSL#3 is composed of eight strains, including one Streptococcus, three Bifidobacterium, and four Lactobacillus. However, none of the above strains increased significantly in the fecal microbiota after three months of gavage, suggesting that the positive effect of probiotics on the intestinal microbiota of the host works by regulating the proportion of beneficial and harmful bacteria.
1P < 0.05 between the model and control groups; 2P < 0.05 between the model and 5-ASA groups; 3P < 0.05 between the model and VSL#3 groups; 4P < 0.05 between the model and 5-ASA + VSL#3 groups.
Figure 4 Representative images of hematoxylin-eosin staining of colon tissue examined under a microscope (40 × and 100 ×). A: Control group, the colonic mucosa glands were normal, the structure was regular, and the openings were clear; B: Model group, the colonic glands presented structural disorder, large nuclei, deep staining, and an imbalanced nucleoplasmic ratio.
For the differences between the fecal and mucosal microbiota, we propose the following explanation. VSL#3 contains three kinds of Bifidobacterium, and Bifidobacterium increased in the mucosal microbiota but not in the fecal microbiota, indicating that Bifidobacterium colonizes the mucosa easily. Conversely, Bacillus and Lactococcus increased in the fecal microbiota after the VSL#3 intervention but not in the mucosa, indicating that Bacillus and Lactococcus colonize the luminal contents more easily. Strikingly, the four types of Lactobacillus in VSL#3 increased in neither the fecal nor the mucosal microbiota, suggesting that the intestinal environment of UC-associated carcinogenesis is not optimal for the growth of Lactobacillus. Only in the 5-ASA + VSL#3 group was an increase in Lactobacillus observed in the feces, which might be attributed to a lowered luminal pH.
However, these hypotheses necessitate further studies for substantiation.
5-ASA is the first-line treatment for mild-to-moderate UC, and studies have found that 5-ASA at ≥ 1.2 g/d can reduce the risk of carcinogenesis in patients with mild-to-moderate UC [21]. Thus, considering the clinical significance, we designed the 5-ASA monotherapy group and the 5-ASA + VSL#3 group. Interestingly, the change in the fecal microbiota in the 5-ASA group was similar to that in the VSL#3 monotherapy group. The potential mechanisms by which 5-ASA regulates the microbiota are as follows: (1) Change in the colonic luminal pH: 5-ASA is released in the colon and acetylated to N-acetyl-5-ASA, which in turn can decrease the luminal pH [22], and a low luminal pH is optimal for the growth of Bifidobacteria and Lactobacilli [23]; (2) Improvement of the anoxic environment: 5-ASA can inhibit the production of chemotactic eicosanoids and cyclooxygenase 2 (COX2), which induce anoxia, and can inactivate oxygen-derived free radicals, improving the anoxic situation, which might affect the composition of the intestinal microbiota [22]; and (3) Downregulation of bacterial genes: 5-ASA can downregulate the expression of genes involved in bacterial metabolism, invasiveness, and antibiotic/stress resistance [24]. Nevertheless, the present study has some limitations. Herein, we only observed changes in the gut microbiota, while the specific roles of the altered taxa remain to be explored. Our future in vitro studies will focus on the underlying mechanisms.
In conclusion, the current study demonstrated that VSL#3 prevented UC-associated carcinogenesis in the AOM/DSS-induced mouse model and decreased the levels of TNF-α and IL-6 in colon tissue. Intestinal microbiota dysbiosis was exhibited in UC-associated carcinogenesis mice, and supplementary VSL#3 was beneficial for a balanced fecal and mucosal microbiota in these mice. Taken together, VSL#3 may serve as a potential therapeutic agent for the prevention of UC-associated carcinogenesis. Ongoing studies in our group are focused on the underlying mechanisms.
Figure 6 Colonic tumor necrosis factor-α and interleukin-6 levels in different groups. aP < 0.05, bP < 0.01, cP < 0.001. TNF-α: Tumor necrosis factor-α; IL-6: Interleukin-6.
Research background
Recently, an upward trend has been observed in the incidence of ulcerative colitis (UC) leading to increased clinical attention on UC-associated carcinogenesis.
Research motivation
Existing treatments for UC aimed at preventing carcinogenesis carry several risks and side effects with long-term usage. Finding new treatment regimens is essential.
Research objectives
To investigate the effects of VSL#3 on tumor formation and on the fecal and intestinal mucosal microbiota in the azoxymethane/dextran sulfate sodium (AOM/DSS)-induced mouse model.
Research methods
C57BL/6 mice were administered AOM/DSS to develop the UC-associated carcinogenesis model. The treatment groups were gavaged with 5-ASA (75 mg/kg/d), VSL#3 (1.5 × 10⁹ CFU/d), or 5-ASA + VSL#3 from the day of AOM injection for three months (five days/week). The tumor load was compared among the groups, and tumor necrosis factor (TNF)-α and interleukin (IL)-6 levels were evaluated in colon tissue. Stool and intestinal mucosa samples were collected to analyze differences in the intestinal microbiota by 16S rDNA sequencing.
Research results
VSL#3 significantly reduced the tumor load in the AOM/DSS-induced mouse model and decreased the levels of TNF-α and IL-6 in colon tissue. The model group had a lower level of Lactobacillus and higher levels of Oscillibacter and Lachnoclostridium in the fecal microbiota than the control group (in which UC-associated carcinogenesis was not induced). Bacillus and Lactococcus were increased after intervention with 5-ASA and VSL#3, while Lachnoclostridium and Oscillibacter were reduced. 5-ASA + VSL#3 increased Lactobacillus and decreased Oscillibacter. The intestinal mucosal microbiota analysis showed lower levels of Bifidobacterium and Ruminococcaceae_UCG-014 and a higher level of Alloprevotella in the model group compared to the control group. Bifidobacterium was increased after supplementation with VSL#3. 5-ASA + VSL#3 increased the levels of both Lachnoclostridium and Bifidobacterium.
Research conclusions
In mice, VSL#3 can prevent UC-associated carcinogenesis, reduce colonic mucosal inflammation, and help rebalance the fecal and mucosal intestinal microbiota.
Research perspectives
VSL#3 may be a potential therapeutic agent for UC-associated carcinogenesis prevention based on the data presented here.
Climate Change Impacts Can Be Differentially Perceived Across Time Scales: A Study Among the Tuareg of the Algerian Sahara
Abstract As an Indigenous community of Algeria and the broader Sahel, the Tuareg hold unique ecological knowledge, which might contribute to broader models of place‐based climate change impacts. Between January and April 2019, we carried out semi‐structured interviews (N = 23) and focus group discussions (N = 3) in five villages of the province of Illizi, Algeria, to document the local Tuareg community's timeline and ecological calendar, both of which are instruments used to understand place‐based reports of climate change impacts. The livelihoods of the Tuareg of Illizi are finely tuned to climate variability as reflected in changes reported in the cadence of events in their ecological calendar (marked by cyclical climatic and religious events). Participants reported rain and temperature irregularities and severe drought events, which have impacted their pastoral and semi‐pastoral livelihoods. These reports are aligned with scientifically measured climate observations and predictions. Paradoxically, although participants recall with detail the climatic disasters that happened in the region over the last century, the Tuareg do not explicitly report decadal trends in the frequency of extreme events. The differential perception of climate change impacts across scales can have important implications for undertaking climate change adaptation measures.
Among other factors, place-based knowledge and skills can largely affect societies' capacity to react and adapt to climate change impacts (Schlingmann et al., 2021). Indigenous and local knowledge and practices are important components of climate-related local planning and response to cyclic events and natural disasters (Charan et al., 2017;Fletcher et al., 2013;Plotz et al., 2017). Through generations, Indigenous peoples and local communities living in close relation with nature have accumulated very precise knowledge on celestial, meteorological and ecological phenomena (e.g., Garteizgogeascoa et al., 2020;Orlove et al., 2000). This knowledge has allowed them to anticipate weather conditions and seasonal events and to accordingly adapt their livelihood activities (Acharya, 2011;Reyes-García et al., 2018;Turner & Singh, 2011). Combining Indigenous and local knowledge with climate science has been beneficial to many communities, as the combination of both types of knowledge provide a better understanding of climate change drivers and potential impacts (Alexander et al., 2011;Boillat & Berkes, 2013;Jolly et al., 2002;Kassam, 2009a;Nickels et al., 2005;Nyong et al., 2007;Rapinski et al., 2017). However, to effectively incorporate Indigenous and local knowledge to climate and disaster planning, this knowledge must be recognized as valuable, identified, and documented, and included through all stages of the climate change and disaster risk management planning processes (Straza et al., 2018) and knowledge holders should have their rights recognized (Reyes-García et al., 2022).
Some of the ways in which Indigenous and local knowledge systems give meaning to time and report changes are through community historical timelines and ecological calendars. Community historical timelines are analytical tools for reporting the main events affecting a community by placing them in chronological order (McNaught et al., 2011). Ecological calendars, also known as seasonal, natural, or phenological calendars, are based on ecological, phenological, or climatic events observed locally in the physical environment inhabited by the community (Kassam et al., 2018). Ecological calendars are frameworks that link temporal and spatial scales, contributing to landscape management and stewardship (Akulki, 2004; Franco, 2015; Kassam, 2009a, 2009b; Krupnik & Jolly, 2002; Orlove et al., 2008). While the well-known celestial calendars (e.g., Gregorian) are based on the movements of the sun and the moon, ecological calendars emphasize the relative timing of environmental processes. Communities use both ecological and celestial calendars to track events that happen with different periodicity (from daily to inter-annual). When adequately used, community timelines and ecological calendars can provide a baseline for understanding local perceptions of climate change impacts and support local planning to adapt to environmental changes (Chambers et al., 2021; Yang et al., 2019).
Dryland ecosystems, which occupy 40% of the terrestrial surface, are particularly affected by climate change. The area occupied by dryland ecosystems is expected to expand by 10% by the end of the 21st century (IUCN, 2019). The people inhabiting dryland ecosystems have unique strategies to cope with the climatic variability of their environment, but climate change reduces their capacity to cope with environmental conditions (IUCN, 2019). Yet, how their coping and adaptive capacities are reduced is poorly understood. As state-level climate change mitigation and adaptation planning is often implemented in participation with local communities, understanding communities' climate change perceptions can contribute to the plans' success. This research is the first to describe the community timelines and ecological calendars of the Tuareg peoples of Algeria. The Tuareg are an Indigenous pastoral community adapted to the hyper-arid conditions of the Sahara Desert. We enquire whether Tuareg people have observed climate-related changes across two different time dimensions: a longitudinal dimension captured by the community timeline and a cyclical dimension captured by changes in the ecological calendar.
Algeria is particularly vulnerable to climate change, with slow-onset impacts such as increased desertification and erosion and fast-onset impacts such as water scarcity and flash floods (Sahoune et al., 2013). Algeria's National Climate Plan sets out targets for climate mitigation and adaptation in participation with local communities, including adapting local agricultural calendars. In our discussion, we elaborate on how Tuareg reports of climate-related changes are reflected in the academic climate change literature and in what ways reports on climate change impacts can contribute to climate change adaptation planning in dryland ecosystems.
The People and the Study Area
The Tuareg are a pastoral community indigenous to the Sahelo-Saharan region spanning from the Maghreb to sub-Saharan Africa. Their territories are found across Libya, Algeria, Mali, Niger, Burkina Faso, with some small communities in Chad and Nigeria (Bernus, 2016). The Tuareg are traditionally pastoralists and raise herds of camels, cattle, sheep, and goats (Miara et al., 2019). Despite contradicting views around whether pastoral livelihoods have contributed to environmental degradation in the Sahel (Mortimore & Turner, 2005;Warren, 1995), the fact remains that pastoral societies have persisted in extreme dry conditions since the end of the African Humid Period (6,000-5,000 years ago) and that Tuareg pastoralism seems well adapted to the Saharan dryland (Brierley et al., 2018).
Like many Indigenous peoples around the world, Tuareg efforts to preserve their cultural identity and territories for future generations represent a commitment to what is termed as "indigeneity" (Steeves, 2018). Tuareg culture is rooted in an environmental ethic represented in their lifestyle and traditions and demonstrates an intimate understanding of their environment honed over generations (Bernus, 2016). Tuareg pastoral lifestyle demands vast knowledge of how to steward the land in a way that perpetuates fodder sources (native trees and grasses) to enable them to maintain herds of grazing and browsing livestock in resource-scarce environments (Brierley et al., 2018). While many Tuareg have been forced to transit into sedentary or semi-sedentary pastoral lifestyles, partly due to the greater frequency of drought events (Snorek, 2016), nomadic pastoralism remains an important part of Tuareg cultural identity and lifestyle (Snorek et al., 2014).
Algeria hosts the fourth largest Tuareg population after Niger, Mali, and Burkina Faso. This study focuses on the Tuareg living in the wilaya (province) of Illizi in the South-East of Algeria. Illizi is about 1,800 km by road from the capital, Algiers (Figure 1). To the east, the province borders Tunisia, Libya, and Niger. To the west, it borders the province of Tamanrasset, and to the north the province of Ouargla (Figure 1). The region occupies an area of 284,618 km² (∼1/9 of the total surface of Algeria). Most of this area is rangeland (28,450,102 ha). Cropland agriculture covers only 11,698 ha (OTNP, 2009). The total population of Illizi is estimated at 57,100 inhabitants, of whom 43% are under 15 years of age. Most of the population in the province are Tuareg, who speak "Tamasheq," an Amazigh or Berber language (Bernus, 2016). The region is composed of three landform types: dunes, plateaus, and lowlands (NAID, 2015). Soils are diverse, with various types of edaphic accumulation: ablation, saline, sandy (dunes and nebkhas), and alluvial soils (OTNP, 2009). Vegetation grows mainly along the wadis or watercourses, which are the only environments allowing the presence of perennial plants.
The climate of Illizi is typical of the Saharan desert, characterized by relatively high air temperature, low humidity, and very little precipitation (OTNP, 2009). The daily average temperature in summer ranges between a maximum of 42.4°C and a minimum of 25.6°C. In winter, the daily average temperature varies between a maximum of 22°C and a minimum of 7°C. As for most of the Sahara, the distribution of rainfall in Illizi is irregular (Yan et al., 2016). The winds are generally light to moderate, and the most frequent blow from the southeast and east. The strongest winds often blow during the months of March, April, May, and September. Their speed can reach 120 km/hr, and they can lead to the formation of sandstorms that force the local population to take refuge indoors for hours.
Data Collection and Analysis
Between January and April 2019, we documented the Tuareg community timeline, ecological calendar, and reports of climate change impacts following a standardized protocol developed to document and compare local indicators of climate change impacts (LICCIs) across Indigenous peoples and local communities around the world (Reyes-García et al., 2020). The protocol was developed by the LICCI project, which aims to show the potential of Indigenous and local knowledge systems to improve scientific understanding of physical, biological, and socioeconomic climate change impacts as locally perceived (www.licci.eu).
We conducted fieldwork in the center and south of the Illizi province, in the villages of In Tourha and Belbachir (near the town of Illizi), and Bordj El Hawes, In Abarbar, and Ifri (near Djanet; Figure 1). The research team has long-term trust relationships with the Tuareg of Illizi, which facilitated the implementation of the pre-designed protocol. After explaining the project's scope and objectives and answering all participants' questions, we requested participants' Free, Prior and Informed Consent to participate. Literate participants signed a written consent form, and we used an oral script for illiterate people, which was signed by a witness. The research protocol was approved by the Ethics Committee of the Universitat Autònoma de Barcelona (CEEAH 4781).
To collect qualitative data about the community's timeline (last 120 years), the ecological calendar, and observed climate change impacts, we used semi-structured interviews and discussed interview responses in focus groups with elder community members.
Semi-Structured Interviews
Two types of semi-structured interviews were conducted with different samples. First, to get a deep understanding of local livelihoods, important historical events, and the local ecological calendar, we targeted people who had knowledge about the locality (i.e., local experts). Specifically, we conducted interviews with local elders and people holding a local authority role, including four tribal chiefs over 67 years of age. One of the elders had tribal and spiritual authority over two villages (In Tourha and Belbachir). All four interviewees were men. In these interviews, we asked about local livelihoods, including the activities people do for a living, the timing (i.e., yearly, seasonally) and location of the activities, and the household members or community groups in charge of or participating in those activities. We also asked about the local timeline, including the history of the study site, important events in the community that everyone remembers, and when these events happened (e.g., in relation to national events).
Second, to document climate-related changes in ecological calendars, we selected informants using "quota sampling" (Sudman, 1966), aiming to capture gender, age, and livelihood diversity. In total, 19 people were interviewed in the five study villages: Ifri (4), Belbachir (6), Bordj El Hawess (3), In Tourha (2), and In Abarbar (4). Sample sizes differ across villages depending on the number of people available and willing to participate. Our sampling is biased toward men (12 out of 19) because, in the study site, strong reservations exist preventing women from speaking to foreigners. Participants included breeders, tourist guides, farmers, and craftsmen (Table 1: age, gender, and profession; n = 19). Both young and old people were interviewed, with the distinction between the two groups being locally defined. In general, the Tuareg consider people under 50 to be young.
The purpose of the semi-structured interviews was to investigate perceived changes in elements of the atmospheric (i.e., temperature, precipitation, seasons, air masses), biophysical (i.e., freshwater physical systems, soil, wild fauna and flora, land cover change and degradation), and socioeconomic systems (i.e., livelihoods, species cultivated, livestock, human health, infrastructure). We inquired about what changes the interviewees had noticed in the environment and since when they had noticed these changes. We asked informants to describe the changes observed and to report whether they perceived each change to be directly related to climate change. The protocol's full details are available online (Reyes-García et al., 2020).
Focus Group Discussions
We organized Focus Group Discussions (FGDs) to validate, through the group's collective memory, observations collected from individuals. The FGD meetings were organized with the help of village chiefs, who invited groups of mostly elder community members with long experience in the community (Figure 2). Three FGDs were organized: Ifri and In Abarbar (seven participants), Bordj El Hawess (four participants), and Belbachir and In Tourha (five participants). In the FGDs, we discussed observations reported in semi-structured interviews that were either contradictory or unclear.
Ambiguous observations were presented to the groups to assess whether there was a consensual perspective, and we noted the result as "Agreed" or "Disagreed," with or without debate as needed (Reyes-García et al., 2020).
Temporal information on the activities mentioned in participants' responses during the first set of semi-structured interviews was organized and synthesized chronologically to produce the community timeline and calendar. The LICCI classification system (Reyes-García et al., 2020) was used to classify responses from the second set of semi-structured interviews and the FGDs.
Data
Data for this article were uploaded to the LICCI database and to Zenodo (Miara, 2022).
Community Timeline
The oldest remembered events are two political events: a war between the Tuareg and the Chaanba (Arab-speaking tribes from the north of the Sahara) in 1900 and the start of the French colonization in 1911. Nevertheless, the local Tuareg chronology is mainly marked by natural disasters (N = 10) that have generally afflicted the region, or that have impacted specific localities of regional importance (e.g., the capital, the biggest oases). The most common natural disasters mentioned are floods or drought causing significant material and human losses (Figure 3).
According to our participants, there was a great drought lasting several years that resulted in a large famine across the region in 1940. The Tuareg have a clear memory of this period, when many people died of starvation. They also remember a strong drought that occurred in 1980, when many Tuareg from Algeria fled to neighboring countries (i.e., Libya, Niger, Chad, and Mali).
The promotion of Illizi as an administrative province (wilaya) and the town of Illizi as its provincial seat in 1984 also mark the local timeline. Informants recall that after these events, the local population of the region benefited from a large state budget which allowed the construction of roads, houses, and infrastructure.
In 1988, after years of drought, fires destroyed many palm trees in the oasis of Aharhar (near the Tassili Mountains), having an important impact on local date production. A period of repeated droughts lasted until 2001, when a very severe drought was experienced. In 2006, the region suffered from very heavy floods which caused the death of many people including the legendary singer of the Tuareg, Othman Badi. The Tuareg say that the height of these flood waters reached 3 m in the city of Illizi. Finally, floods in 2019 also caused significant material damage.
Ecological Calendar
The Tuareg structure their ecological calendar in two seasons divided by temperature and precipitation: a cold and a hot season (Figure 4). Traditionally, the cold season is also rainy and lasts for about 2 months. The longer hot season also includes a rainy period.
For the Tuareg, the beginning of the ecological calendar is the onset of the cold and rainy season, when pastoralists and other livestock owners take their herds of cows and camels into wild pastures in the Tassili Mountains. This pastoral movement relates to the growth of grass in the mountains. The animals, which all bear tribal ownership marks, are left unaccompanied in these grazing lands for the length of the rainy season (Figure 4, "Settled grazing"). At the end of the rainy season, owners return to the area to pick them up. The animals do not risk being lost or stolen, as this practice is carefully framed and severely enforced by tribal laws.
During the cold and rainy period, while cows and camels are in the mountains, Tuareg practicing nomadic pastoralism settle in the lowlands, near waterflows (wadis) where they benefit from the availability of water and grass to feed their sheep. At this time, the Tuareg plant temporary gardens with vegetables such as tomatoes (Solanum lycopersicum L.), salad (Lactuca sativa L.), and zucchini (Cucurbita pepo L.) for self-consumption. They also plant bechna (Panicum miliaceum L., millet), a cereal crop widely used as fodder and food. The Tuareg mostly appreciate millet's nutritional value as fodder for sheep and goats. Millet is also very important to the Tuareg as food, and it is used in several local food recipes, including local bread (mella). Millet is considered the ally of Tuareg women who practice force-feeding (a beauty ritual to fatten young women), mixing it with dates and camel milk. The Tuareg also use millet as medicine, particularly against constipation (Miara et al., 2019).
The period after the cold and rainy season and before the rainy period that occurs during the hot season is considered the most difficult period for the Tuareg. This is the season when sandstorms occur, sometimes with an intensity that can kill livestock and dry out grazing plants. During this period, the Tuareg harvest their millet but stop all movements. Animals are placed in shelters to protect them from the very frequent and sometimes devastating sandstorms. Despite these potential impacts, sandstorms are also considered an important part of the seasonal cycle as, according to some participants, they clean the air and the soil and prepare them for the following cycle. During this time, community members have to hand-dig shallow wells to obtain the water infiltrated into the ground from waterflows (wadis) that have dried up.
Just before the start of the rainy period that occurs during the hot season, the Tuareg settle near or in the oases and villages with their animals and begin the harvest of dates from cultivated date palms in the oases, a practice called amaris. This harvest is deeply associated with praying rituals for the rain that should occur during the hot season. Indeed, for the Tuareg, the date harvest is accompanied by various religious rituals, including prayers and charity actions (sadaka), during which part of the date harvest is given to the poorest community members. These practices are believed to help ensure good rains. The rains of the hot season are irregular, sudden, strong, and sometimes devastating, but they also make it possible to fill the wells and wadis and ensure drinking water until the onset of the cold and rainy season. These rains are of great importance to Tuareg pastoral practice. The Tuareg think that the best years are those when rain falls in abundance during the hot season. After the rains, the Tuareg move around the desert, allowing the livestock to graze on the abundant grass. However, if the hot season rains fail, the Tuareg migrate away from their tribal lands to areas near permanent waterflows where the livestock can graze.
In addition to millet, the Tuareg also grow wheat. Traditionally, the Tuareg sow wheat towards the end of the hot season and harvest it at the start of the rains that occur during the hot season. Like millet, wheat holds a special importance among the Tuareg for being the base of bread making and other special dishes including couscous, which is eaten every Friday after visiting the mosque. In the past, when traditional wheat varieties were sown, the wheat cycle lasted 7 months. Currently non-local, short-cycle varieties are being used for which the wheat cycle is considerably shorter (Figure 4).
Tuareg celebrations are linked to the ecological calendar, as well as to daily cycles and the Muslim lunar calendar. The Tuareg celebrate Muslim religious events, including Aid El Kebir, the feast of sacrifice during which Muslims offer sacrifices (sheep, cattle, camels) to God, and Aid el Seghir, which celebrates the end of the fasting month of Ramadhan. The Tuareg also celebrate El Mawlid Nabawi, the day of Prophet Mohamed's birth. During these celebrations, local Tuareg tribes hold song and dance competitions. The Sebiba is another festival, held on the day of Achoura, which commemorates Allah saving the prophet Moses from the Pharaoh, who was drowned in the sea. On this day, the local Tuareg tribes also hold dance and song competitions. Each night, the ritual of tindi is practiced by Tuareg women and men. This ritual consists of singing songs that speak of the courage and strength of men, as well as historical accounts. Women sing and play the drums, and men dance to the rhythm of these songs.
Perceived Climate-Related Changes
Our semi-structured interviews resulted in 20 LICCIs. There was initially no consensus on 15 of these indicators of climate change impacts, but all the indicators were validated during the FGDs (Figure 5).
Informants mentioned changes in elements of the atmospheric system, including higher cloud cover, colder temperatures, a delay in the start of the cold and hot seasons, less wind, and changes in sandstorm intensity and rain. In particular, respondents mentioned that the fight against drought is increasingly difficult, as digging deeper or new wells and rationing the use of water become less effective in provisioning it. Some local community members, but not all, also mentioned changes in the physical system around them that were directly linked to climate change, especially a decrease in river volume. The Tuareg also reported changes in the abundance of local fauna and flora, including an increased presence of invasive species.
Interestingly, although the visual analysis of Figure 3 suggests that floods (a sign of water abundance among the Tuareg) were more frequent before the 1970s and have only happened twice since then, a decrease in the number of flood events was not mentioned by the Tuareg as a local indicator of climate change impact. Similarly, informants did not mention that drought and drought-related events are more common in recent times, although this trend is evident from the community timeline.
In contrast, two of the indicators of climate change impacts mentioned relate directly to temporal shifts of the hot and cold seasons and, in particular, to the seasonality of the rainy periods (Figures 4 and 5). The Tuareg have noticed a shift in the cold season and its associated rainy period, which has moved from mid-December to mid-February in the past to early February to late March in the present. Informants also reported that the rainy period of the hot season has shifted from May-August to August-October (Figure 4). This change has a direct impact on agricultural activities, specifically shifting the moment when wheat is sown and thus shortening the wheat growing season (from October-May to February-May; Figure 4). Despite the reported changes in climate and environment, the Tuareg mentioned that other cultural and livelihood activities are carried out at the same times as in the past. Some informants believe that the present is a temporary challenging period due to "God's wrath" and pray that climatic and ecological conditions will soon revert to their former state. However, informants told us that the number of pastoralists and herds has decreased, as many people have decided to settle down, transitioning into sedentary or semi-sedentary pastoral or non-pastoral lifestyles. Sedentary or semi-sedentary Tuareg rely on agricultural activities to a larger extent than nomadic pastoralists.
Discussion
The main result of this work is that, while both longitudinal (across years) and cyclical (yearly) temporal changes are perceived by the Tuareg of Illizi, only cyclical changes are consciously identified and related to climate change.
As for other Indigenous peoples and local communities (e.g., Leclerc et al., 2013; Ruggieri et al., 2021), the Tuareg local timeline is clearly dominated by ecological events (Figure 3). Both extreme events happening decades ago and community responses to those events remain anchored in Tuareg collective memory, potentially informing present and future reactions to similar events. For example, the Tuareg have a clear memory of the extreme droughts occurring in the 1940s, which were also recorded by the philosopher Albert Camus (Kassoul & Maougal, 2006), when many people died of starvation after several drought years.
The temporal analysis of the Tuareg community timeline seems to indicate a shift in the periodic occurrence of extreme climatic events, and specifically a reduction of the welcome floods and an increase in drought events over the last 50 years. This pattern corresponds to observations made with instrumental data. For example, tree-ring record analysis also suggests that the drought events occurring in the 1980s and then in the 2000s were the most severe droughts experienced in the region since the Middle Ages (Touchan et al., 2008), in consonance with the Tuareg's perception of the gravity of these droughts. Moreover, according to weather predictions under climate change models for the region (Barkhordarian et al., 2013; Niang et al., 2014), these patterns will likely be aggravated in the future and affect larger areas and neighboring communities as hyper-arid dryland areas expand (IUCN, 2019).
However, it should be noted that the interdecadal trends signaling an increase in extreme drought events are implicitly inferred from the timeline, and not explicitly identified by community members when directly asked about climate-driven changes. Informants relate current climate hardship to divine origins, and they consider that this temporary situation will revert to past conditions with appropriate moral and spiritual behavior. Attributing climate change impacts to God's will and resorting to prayer when crises, including weather-driven crises, are encountered has also been observed among other African communities (Cuní-Sánchez et al., 2012; Haron, 2017; Mubaya et al., 2012), in Europe (Gómez-Baggethun et al., 2012), and in Asia (Byg & Salick, 2009).
Climate change impacts the intensity, duration, timing, frequency, or quantity of various elements of the atmospheric system (e.g., sunshine, precipitation, temperature, wind), leading to temporal shifts in the beginning and end of locally defined seasons (IPCC, 2022). As ecological calendars are used to keep track of time-based seasonal changes in the habitat, it is not surprising that changes in the succession of cyclical events are quickly identified, particularly by people who depend on these calendars for their livelihood activities (Ahmed & Atiqul Haq, 2019; Chambers et al., 2021; Keystone Foundation, 2020; Savo et al., 2016). For the Tuareg, changes in the ecological calendar seem to be already impacting agricultural activities, specifically by shortening the wheat growing season. In turn, this shift impacts the yields of most local wheat varieties, which are not adapted to current conditions and are being abandoned. Far from being an isolated case, this result dovetails with previous research showing how changes in climatic conditions can lead to agrobiodiversity loss (e.g., Ruggieri et al., 2021). Local reports of climate change impacts on the agricultural calendar may facilitate cooperation between state authorities and local dryland populations around the implementation of the aspects of National Climate Change plans relating to agricultural planning, by establishing a common ground for decision-making and action.
Perceived cyclical changes also impact pastoralism (e.g., pastoral transhumance itineraries), even though pastoralist activities are carried out at the same times as in the past. Pastoralism is mostly impacted by a decrease in the number of herders and herds, with former nomadic herders becoming sedentary or semi-sedentary. This shift is often fueled by an extreme event (Snorek, 2016). Semi-sedentary pastoralists complement their animals' diets with fodder bought from governmental agencies, which offer reduced prices to support pastoralist activities. The Tuareg also benefit from subsidies to practice agriculture (e.g., sustained prices for seeds) and from aid for well excavation (Snorek et al., 2017), which enable new sedentary lifestyles. Sedentary life brings new comforts, but informants also mentioned that sedentarism is accompanied by a weakening of old traditions such as tindi. While the increasing frequency of extreme events may lead to increased sedentarism in the future, the conversion of rangelands to cultivated lands intensifies the degradation of dryland ecosystems (IUCN, 2019). National climate change adaptation plans should envision alternatives to this potentially reinforcing feedback loop.
In contrast, for the Tuareg who continue to lead a nomadic life, cultural traditions and religion continue to be central to their daily activity. The non-urbanized landscape around these mobile Tuareg is considered holy and pure, especially in contrast to urban areas. The Tuareg believe life in the desert keeps them away from sins and allows them to better consecrate time to prayer and to God. As in other parts of the world (e.g., Castagnetti et al., 2021), their ecological calendar interweaves ecological and spiritual cycles. For example, prayers offered during the date harvest aim both to thank God for the harvest and to ask for rain. Spiritual cycles shape the relation people have with the land, contribute to respectful and sustainable landscape management practices, and strengthen local identity (Castagnetti et al., 2021).
Previous research shows that religious and spiritual practices are important for coping with recurrent disturbance and have contributed to developing institutional devices that are used in environmental extremes, such as sharing resources with the most needy, a collective response to crises that contributes to the maintenance of the long-term resilience of social-ecological systems (Gómez-Baggethun et al., 2012). Here, we observe that spiritual values offer some cultural resilience to climate change impacts, as they affect some people's choices to continue pastoralist lifestyles. At the same time, in Tuareg cultural practice there is little to explain climate change beyond it being "God's wrath" or "God's will." Combined with the lack of perceived decadal trends, a sense of hopelessness can hamper adaptation and give rise to inaction or feelings of inevitability. As new case studies emerge reporting the influence of cultural preferences, access to information, and wealth as determinants of the adaptation strategies taken by Indigenous peoples and local communities (e.g., Amani et al., 2022; Cuní-Sánchez et al., 2012; Hayati et al., 2010; Kaganzi et al., 2021), further research is needed to understand whether being able to perceive interdecadal trends also determines adaptation at local scales.
Conclusions
Indigenous peoples' and local communities' climatological and ecological knowledge has allowed them to adapt their livelihoods to local, sometimes harsh, conditions. Our study documents LICCIs. This information has the potential to contribute to the Algerian National Climate Change plan, as one of its goals is to identify climate change impacts on society. In that sense, our study shows that the Tuareg of the Algerian desert observe changes in the local weather and ecological systems, although only changes in seasonal cycles are consciously identified and related to climatic changes. We observe various adaptation strategies to seasonal changes impacting agricultural practices, whereas the increased inter-decadal frequency of extreme events seems to lead to a gradual abandonment of nomadic pastoralism. These results can inform climate change adaptation planning across the expanding hyper-arid areas of dryland ecosystems.
Data Availability Statement
Data used in this article has been uploaded to Zenodo (Miara, 2022) [Dataset]. Original audio recordings and notes are not published to ensure anonymity.
Establishment and Preliminary Application of the Forward Modeling Method for Doppler Spectral Density of Ice Particles
Owing to the various shapes of ice particles, the relationships between fall velocity, backscattering cross-section, mass, and particle size are complicated. This affects the application of cloud radar Doppler spectral density data in the retrieval of the microphysical properties of ice crystals. In this study, under the assumption of six particle shape types, the relationships between particle mass, fall velocity, backscattering cross-section, and particle size were established based on existing research. Variations of Doppler spectral density with the same particle size distribution (PSD) of different ice particle types are discussed. The radar-retrieved liquid and ice PSDs, water content, and mean volume-weighted particle diameter were compared with airborne in situ observations in the Xingtai, Hebei Province, China, in 2018. The results showed the following. (1) For the particles with the same equivalent diameter (De), the fall velocity of the aggregates was the largest, followed by hexagonal columns, hexagonal plates, sector plates, and stellar crystals, with the ice spheres falling two to three times faster than ice crystals with the same De. Hexagonal columns had the largest backscattering cross-section, followed by stellar crystals and sector plates, and the backscattering cross-sections of hexagonal plates and the two types of aggregates were very close to those of ice spheres. (2) The width of the simulated radar Doppler spectral density generated by various ice crystal types with the same PSD was mainly affected by the particle’s falling velocity, which increased with the particle size. Turbulence had different degrees of influence on the Doppler spectrum of different ice crystals, and it also brought large errors to the PSD retrieval. (3) PSD comparisons showed that each ice crystal type retrieved from the cloud radar corresponded well to aircraft observations within a certain scale range, when assuming that only a certain type of ice crystals existed in the cloud, which could fully prove the feasibility of retrieving ice PSDs from the reflectivity spectral density.
Introduction
The importance of clouds in the Earth-atmosphere system is self-evident. Cloud microphysical processes affect cloud distribution and lifetime. Ice clouds affect the Earth-atmosphere radiation balance by modulating long-wave and short-wave radiation transmission and changing the atmosphere's thermodynamic structure [1][2][3]. A deeper understanding of ice cloud microphysical properties is essential to improving weather and climate prediction, as well as to understanding aerosol-cloud microphysical processes and the nonlinear relationships between them. The ice-phase process is crucial to cloud and precipitation formation and development, and most surface precipitation begins as ice particles [4]. Accurate particle size distribution (PSD) information is vital to predictions of precipitation and cloud radiative effects in large-scale models. Compared to other ice cloud retrievals, radar parameters can be more representative when studying clouds [3]. One of the most powerful tools for detecting non-precipitating and weakly precipitating clouds, the vertically pointing millimeter-wavelength cloud radar (MMCR), has a short wavelength and a high gain, allowing it to effectively penetrate cloud layers and continuously observe horizontal and vertical cloud structure changes under different dynamic conditions. This enables the collection of more precise information on clouds.
The MMCR detects a Doppler spectrum that is a function of the backscattering cross-sections and number of all particles in the radar detection volume with respect to their fall velocities. If the relationships of fall velocity and backscattering cross-section to particle size are known, Doppler spectral density data can be used to obtain the PSD characteristics of the cloud. However, the radar-detected radial velocity includes the particle fall velocity as well as vertical air motion, which is well known as one of the most difficult physical quantities to determine in meteorology. Identifying and eliminating air motion presents difficulties for PSD retrieval using Doppler spectral density. If a particle in a cloud (such as a liquid water droplet) is small enough, its fall speed relative to the vertical air motion can be neglected, which means that it can be used as a tracer of clear-air motion [5,6]. On this basis, it is possible to use Doppler spectral density data to retrieve cloud PSDs.
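As a hedged illustration of the small-particle tracing idea, the sketch below scans a Doppler spectrum for the slowest bin that rises above an assumed noise floor and treats its velocity as an estimate of the vertical air motion; the velocity bins, spectrum shape, and noise threshold are all invented for illustration.

```python
import numpy as np

def estimate_air_motion(velocities, spectrum, noise_floor):
    """Small-particle tracing: the slow edge of the Doppler spectrum
    above the noise floor is taken as the air motion, because the
    smallest droplets fall negligibly relative to the air.
    velocities: radial-velocity bins in m/s (positive downward)."""
    significant = np.nonzero(spectrum > noise_floor)[0]
    if significant.size == 0:
        return None                       # no signal above the noise floor
    return velocities[significant[0]]     # slowest significant bin

v_bins = np.linspace(-3.0, 5.0, 81)               # hypothetical velocity bins
spec = np.exp(-0.5 * ((v_bins - 1.2) / 0.6) ** 2) # hypothetical spectrum
w = estimate_air_motion(v_bins, spec, noise_floor=0.05)
print(f"Spectrum edge = {w:.2f} m/s (taken as the air-motion estimate)")
```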
The relationships between fall velocity, backscattering cross-section, and particle size are easier to calculate for liquid particles than for ice particles because of their uniform shapes. Many studies have focused on raindrop size distribution retrieval using Doppler spectral density data [7][8][9]. Liu et al. [7] analyzed the accuracy of raindrop size distributions retrieved from Doppler spectral density data observed by cloud radar. However, calculating the fall velocity and backscattering cross-section of ice particles is complicated because of the complex shapes of solid particles and the sensitivity of fall velocity to changes in ambient temperature and humidity. This leads to many difficulties in interpreting and applying cloud radar data above the 0 °C level. Yang et al. [10] reviewed several classical computational approaches to light scattering simulations of non-spherical ice crystals and discussed the strengths and weaknesses associated with each approach. Liu [11] built a database of microwave single-scattering properties for several non-spherical particle shapes, at frequencies of 15 to 340 GHz, calculated using the discrete dipole approximation (DDA) method. However, he only calculated particles of certain sizes rather than continuous relationships. In applied remote sensing, most previous studies focused on PSD bulk characteristics and the relationships between PSD and microphysical parameters, owing to the inability to obtain complete and specific PSD information; examples include the relationships between radar reflectivity Ze, ice water content (IWC), and ice effective radius re obtained through remote sensing [12][13][14][15]. Zhong et al. [16] indicated that IWC retrieved using radar Doppler spectra is more reliable than IWC obtained using classic Ze-IWC relationships. In order to obtain more detailed information, numerous studies used dual-frequency and triple-frequency radar because of the richness of these radar data. However, only the retrieval of the raindrop size distribution (DSD) or part of the ice PSD, and the identification of some microphysical processes in the cloud, could be achieved [8,[17][18][19]. So far, research on ice particle retrieval using MMCR Doppler spectral density in China has not been reported. Therefore, we establish the relationship between ice particle microphysical parameters and Doppler properties to verify its feasibility. At the same time, the results were compared with aircraft data in order to evaluate the performance of China's first cloud radar with a solid-state transmitter.
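To make the Ze-IWC idea concrete, the following sketch applies a generic power law, IWC = a·Ze^b, with Ze converted from dBZ to linear units; the coefficients are placeholders chosen only for illustration, since published fits differ by cloud regime and radar wavelength.

```python
def iwc_from_ze(ze_dbz, a=0.097, b=0.59):
    """Generic power-law retrieval IWC = a * Ze**b with Ze in linear
    units (mm^6 m^-3); a and b are placeholder coefficients."""
    ze_linear = 10.0 ** (ze_dbz / 10.0)
    return a * ze_linear ** b             # g m^-3

for dbz in (-20.0, -10.0, 0.0):
    print(f"Ze = {dbz:6.1f} dBZ -> IWC ~ {iwc_from_ze(dbz):.4f} g m^-3")
```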
In this study, we established the relationships between fall velocity, backscattering cross-section, and particle size for six typical ice crystal types, based on a review of the existing literature. Additionally, the relationship between ice particle microphysical properties and radar-observed Doppler spectral density was preliminarily explored. Data and methods are presented in the second section of the paper. Section 3 mainly analyzes the relationships between particle microphysical parameters and particle size, based on the calculation results, and Doppler spectra were simulated with a given PSD. Section 3 also presents a qualitative analysis of MMCR spectral density data and a brief comparison with aircraft observations. Section 4 provides discussion and conclusions.

The MMCR operated in a vertically pointing mode and had a vertical resolution of 30 m. A solid-state transmitter enabled the radar to make continuous observations. Four operational modes were applied to improve cloud and precipitation radar detection capabilities: precipitation mode (M1), boundary layer mode (M2), middle-level mode (M3), and cirrus mode (M4). Varying radar pulse widths and coherent and incoherent integration techniques were used to enable the detection of both low-level clouds and high-level weak clouds. During detection, the radar cycled through the four observation modes and converted the reflected signals, processed using the fast Fourier transform, into Doppler spectral data for storage. Table 1 shows the MMCR's major operational parameters. Considering that particles above the zero-degree-Celsius level have relatively small fall velocities and weak reflectivity, we used the M3 mode data, which have high sensitivity and high radial velocity resolution.

The aircraft descended in spirals, with an approximately 18 km circling diameter, to 700 m around the observation site, to observe cloud and precipitation vertical structures. The primary instruments on the aircraft included a modified cloud combination probe (CCP), a two-dimensional stereo probe (2D-S), a high-volume precipitation spectrometer (HVPS), and an Aircraft Integrated Meteorological Measurement System (AIMMS-20), which provided meteorological data such as three-dimensional wind vectors, three-dimensional aircraft position (i.e., latitude, longitude, and altitude), ambient temperature, and ambient relative humidity. The CCP consists of a cloud droplet probe (CDP), a grayscale optical array imaging probe (CIP-gs), and a hot-wire liquid water content sensor. Using a two-dimensional shadow-cast technique, the CIP-gs detects cloud particles with diameters of 15-2000 μm. The 2D-S is an optical array imaging probe that records the projected areas of three-dimensional ice particles and PSDs from 10 to 1280 μm, with resolutions of 10, 20, 50, 100, and 200 μm. To mitigate the ice crystal shattering problem, we modified the probe tips and applied an arrival-time algorithm to the collected data to remove artifacts. HVPS data cover 150-47,075 μm, with resolutions of 150 and 300 μm. Below this range, 2D-S data are used. The data set detected by the aircraft, including liquid water content and ice particle size distributions, was used for comparison with the radar retrievals.
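Since the operational modes trade pulse width and integration against Doppler ambiguity, a quick sanity check of the unambiguous velocity and range is sometimes useful. The sketch below uses the standard pulsed-Doppler relations, v_max = λ·PRF/4 and r_max = c/(2·PRF); the Ka-band wavelength and the PRF values are assumptions, not parameters taken from Table 1.

```python
C_LIGHT = 3.0e8                           # speed of light (m/s)

def nyquist_velocity(wavelength_m, prf_hz):
    """Maximum unambiguous Doppler velocity, v_max = lambda * PRF / 4."""
    return wavelength_m * prf_hz / 4.0

def max_unambiguous_range(prf_hz):
    """Maximum unambiguous range, r_max = c / (2 * PRF)."""
    return C_LIGHT / (2.0 * prf_hz)

wavelength = 8.6e-3                       # m; assumed Ka-band wavelength
for prf in (5000.0, 8000.0):              # hypothetical PRFs
    v_nyq = nyquist_velocity(wavelength, prf)
    r_max = max_unambiguous_range(prf) / 1000.0
    print(f"PRF = {prf:6.0f} Hz: v_Nyq = {v_nyq:5.2f} m/s, r_max = {r_max:5.1f} km")
```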
Methods
When the effects of vertical air motion and turbulence are ignored, the relationship between the radar-detected Doppler spectral density S_Z(v) (mm^6·s·m^-4) and the PSD N(D) (m^-3·mm^-1) can be expressed as

S_Z(v) = C σ(D) N(D) dD/dv,  (1)

where C = 10^18 λ^4 |ε + 2|^2 / (π^5 |ε − 1|^2) is a constant related to the wavelength λ and the complex permittivity ε of the precipitation particles and has units of cm^4, σ(D) (cm^2) is the particle's backscattering cross-section, v (m·s^-1) is the radar-detected radial velocity, and v_t represents the particle fall velocity (both v and v_t are positive downward). The radar-observed radial velocity can be regarded as the particle fall velocity only in static air, after the air speed w_a (positive upward) has been determined and eliminated using the small-particle tracing method (i.e., v_t = v + w_a). As long as the shape of the particle can be determined, the particle size can then be calculated from the relationship between particle fall velocity and size. Thus, the relationship between Doppler spectral density and N(D) can be established from the v_t-D and σ-D relationships. IWC (g·m^-3) and the mean volume-weighted diameter D_m (μm) can then be calculated after N(D) is retrieved:

IWC = ∫ m(D) N(D) dD,  (2)

D_m = ∫ D m(D) N(D) dD / ∫ m(D) N(D) dD,  (3)

where m(D) is the particle mass. However, as mentioned in the introduction, ice PSD retrieval is a very complicated task. There are three prerequisites for PSD retrieval using the radar Doppler spectrum: determination of the particle shape, and calculation of the v_t-D and σ-D relationships. In this study, six different ice crystal shapes were selected according to their growth habits, and their fall speeds and backscattering cross-sections at different sizes were calculated. We also performed simulations to evaluate factors that might affect the accuracy of PSD retrieval (including radar sensitivity and air turbulence). We used a given power spectrum to make a preliminary attempt at PSD retrieval for the different ice particle shapes and analyzed the differences between them. To apply these attempts to actual cloud observations and further verify the feasibility of the method, data obtained during a period of stable stratiform precipitation were selected for PSD retrieval, and aircraft data collected during the same period were compared with the radar retrieval results. Since the ice layer sampled at the flight altitude was very thin, the true shape and size distribution of the ice particles could not be determined; the retrieval could therefore only be performed under the assumption that a single particle shape exists in the cloud. In addition, the retrieved PSDs were used to calculate the ice water content, which was compared with the results of other studies.
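As a concrete illustration of this retrieval chain, the following Python sketch inverts Equation (1) bin by bin and then evaluates Equations (2) and (3). It assumes a monotonic v_t(D) relation supplied as a callable, leaves the unit bookkeeping to the caller, and uses illustrative names throughout; it is not the paper's code.

```python
import numpy as np

def retrieve_psd(v_bins, s_z, vt_of_d, sigma_of_d, d_grid, C):
    """Retrieve N(D) from a Doppler spectrum via Equation (1).

    v_bins: still-air fall-velocity bins (m/s, positive downward)
    s_z: spectral density in each bin
    vt_of_d, sigma_of_d: assumed callables for the v_t-D and sigma-D
        relations of the chosen crystal shape
    d_grid: candidate sizes used to invert the monotonic v_t(D)
    C: the radar constant of Equation (1)."""
    d_of_v = np.interp(v_bins, vt_of_d(d_grid), d_grid)
    dd_dv = np.gradient(d_of_v, v_bins)   # dD/dv by finite differences
    # Equation (1) rearranged: N(D) = S_Z(v) / (C * sigma(D) * dD/dv)
    n_d = s_z / (C * sigma_of_d(d_of_v) * dd_dv)
    return d_of_v, n_d

def bulk_quantities(d, n_d, mass_of_d):
    """IWC and volume-weighted mean diameter, Equations (2) and (3)."""
    m = mass_of_d(d)                       # particle mass at each size
    iwc = np.trapz(m * n_d, d)
    d_m = np.trapz(d * m * n_d, d) / np.trapz(m * n_d, d)
    return iwc, d_m
```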
Determination of the Shape of Ice Particles
Ice crystal fall velocity and scattering characteristics are known to be closely related to crystal shape, which in turn depends on ambient temperature and supersaturation with respect to ice. Magono and Lee [20] classified and named ice crystals found in nature as early as 1966. Over the past 70 years, researchers have developed many charts of ice crystal growth habits through laboratory research, in situ observation, or a combination of both. Most researchers agree that, at temperatures greater than −18 °C, the basic habit of ice crystals goes from plates (0 °C to −4 °C), to columns (−4 °C to −8 °C), and back to plates (−8 °C to −22 °C). Bailey and Hallett [21] provided a detailed diagram of ice crystal growth habits by combining laboratory and field observations. They pointed out that ice crystal growth habits are dominated by various forms of polycrystals, with two distinct habit regimes below −20 °C: plate-like from −20 °C to −40 °C and columnar from −40 °C to −70 °C. On the basis of these results, this study mainly discusses six typical ice particle types (four single-crystal types and two aggregate types): hexagonal plates (−8 °C to −25 °C, low ice supersaturation), hexagonal columns (below −40 °C, ice supersaturation of 10-25%), sector-like plates (−10 °C to −25 °C, high ice supersaturation), stellar crystals with broad arms (about −20 °C, near water supersaturation), and aggregates of plates and aggregates of columns. These six shapes were chosen to represent the complicated range of natural ice types (Figure 1). When calculating the backscattering cross-section of individual ice crystals, their shape and fall attitude must be determined first. On the basis of the particle shape parameters given by Auer Jr and Veal [22], the thickness of hexagonal and sector plates was L = 2.02 D^0.449, that of stellar crystals was L = 2.028 D^0.431, and the relationship between hexagonal column length L and half-width a was a = 3.48 L^0.5 [23]; a, L, and D are all in micrometres. The aspect ratio of the two kinds of aggregates was 0.6 [24]. Additionally, all ice crystals were assumed to fall with their long axes oriented horizontally and with random orientation in the horizontal plane.
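The dimensional power laws quoted above translate directly into code; a minimal sketch with all lengths in micrometres:

```python
def plate_thickness(d_um):
    # Hexagonal and sector plates: L = 2.02 * D^0.449 (Auer Jr and Veal)
    return 2.02 * d_um ** 0.449

def stellar_thickness(d_um):
    # Stellar crystals: L = 2.028 * D^0.431
    return 2.028 * d_um ** 0.431

def column_half_width(l_um):
    # Hexagonal columns: a = 3.48 * L^0.5
    return 3.48 * l_um ** 0.5
```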
Fall Velocities
There are usually two ways to obtain the fall velocity of a single ice crystal: direct laboratory measurement or field observation (mainly for large ice particles or snowflakes), and calculation of the drag force from aerodynamic principles. In the latter case, fall velocity can be calculated by combining empirical relationships (such as mass- and projected-area-dimension power laws), and the measured or calculated results can be expressed in the form of an empirical power law [25]. Results obtained through direct measurement are obviously more accurate; however, they are not applicable to particles of all scales, i.e., their application range is limited. Therefore, many researchers have worked to determine a fall velocity equation based on particle size, mass, and projected area [26][27][28][29]. Heymsfield and Westbrook evaluated the four most recent calculation methods at that time and pointed out their shortcomings [30]. They made a simple correction to the method proposed by Mitchell to reduce its sensitivity to area ratio and thus obtained a more accurate and simpler scheme [29]. We used their method to calculate particle fall velocity. When the force of gravity on the particle balances the drag force, v_t can be calculated as follows:

1. The modified Best number is X* = (ρ_a/η^2) · 8 m g / (π A_r^{1/2}), given values of η, ρ_a, m, g, A_r, and D. Here, η is the dynamic viscosity of air, ρ_a is the density of air, m is the particle mass, and g is the gravitational acceleration. A_r is the particle's area ratio, i.e., the ratio of the projected area A to the area of the particle's circumscribed circle.
2. The Reynolds number follows as Re = (δ_0^2/4) [ (1 + 4 X*^{1/2} / (δ_0^2 C_0^{1/2}))^{1/2} − 1 ]^2, with δ_0 = 8.0 and C_0 = 0.35.
3. The fall velocity is then v_t = η Re / (ρ_a D).

The m-D and A-D relations are obtained using the formulas compiled by Mitchell (1996), with the coefficients given in Table 2. Here, D is the particle's maximum dimension (the diameter of the particle's circumscribed circle).
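A sketch of this three-step scheme, using the standard constants δ_0 = 8.0 and C_0 = 0.35 from Heymsfield and Westbrook (2010); the mass and area ratio passed in the example call are placeholders, not the Table 2 power-law values:

```python
import numpy as np

DELTA0, C0 = 8.0, 0.35  # constants from Heymsfield and Westbrook (2010)

def fall_velocity(D, mass, area_ratio, rho_air, eta, g=9.81):
    """Terminal fall velocity (m/s) for maximum dimension D (m), mass
    (kg), area ratio A_r, air density (kg/m^3), dynamic viscosity (Pa s)."""
    # Step 1: modified Best number X*
    x_star = (rho_air / eta ** 2) * 8.0 * mass * g / (np.pi * np.sqrt(area_ratio))
    # Step 2: Reynolds number from X*
    re = (DELTA0 ** 2 / 4.0) * (np.sqrt(1.0 + 4.0 * np.sqrt(x_star)
                                        / (DELTA0 ** 2 * np.sqrt(C0))) - 1.0) ** 2
    # Step 3: v_t from the Reynolds number
    return eta * re / (rho_air * D)

# Illustrative call at ~4.5 km in a standard atmosphere; mass and area
# ratio here are placeholders rather than values computed from Table 2.
v = fall_velocity(D=1e-3, mass=2e-8, area_ratio=0.7,
                  rho_air=0.77, eta=1.66e-5)
```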
Backscattering Cross-Section
After establishing the relationship between particle fall velocity and particle size, the relationship between backscattering cross-section and size is needed to determine the PSD from Doppler spectral density. As ice crystal shapes are complex, it is essential to establish a scattering model with a simple theory, a simple calculation, and reliable results to obtain the backscattering characteristics of non-spherical ice crystals. One approach is to simplify particle shapes by treating them as spheres or ellipsoids and then applying a suitable scattering theory: spheres can be handled with Mie theory, and ellipsoids with the T-matrix method [31]. Although the T-matrix method can effectively calculate the scattering characteristics of spheroidal particles, it can only be applied over a limited particle size range, beyond which numerical stability problems are likely to occur [32]. This makes the processing complex when calculating particles of different shapes, so it is difficult to apply to the scattering calculation of particles of complex shapes. Another approach is to use a simplified scattering theory, such as the Rayleigh-Gans approximation (RGA), rather than simplified particle shapes [33]. RGA ignores higher-order internal interactions of the electromagnetic wave, greatly simplifying the mathematical problem; it is the method adopted in this study. In RGA theory, the backscattering cross-section of a particle of arbitrary shape illuminated by a plane wave propagating in the s direction can be written as

σ = (9 k^4 |K|^2 / 4π) |∫ A(s) e^{2iks} ds|^2,  (4)

where k is the wave number, K = (ε − 1)/(ε + 2), ε is the complex permittivity of ice, and A(s) is the cross-sectional area of the ice crystal intersected by the plane at position s. Hogan and Westbrook [24] improved the RGA method in 2014, creating the self-similar Rayleigh-Gans approximation (SSRGA), which can be used to calculate the backscattering cross-section of aggregates. Although aggregate structures are complex and difficult to predict, Hogan and Westbrook [24] found that the aggregation of crystals is self-similar and can be described by a power law. On this basis they proposed an equation (their SSRGA formula, Equation (5) here) for the backscattering cross-section of aggregates in the centimetre and millimetre bands, in which x = kD, κ is the kurtosis parameter, and β is the prefactor of the power law. Hogan and Westbrook [24] also pointed out that, when ice crystals collide into aggregates, it is the overall shape of the aggregate, rather than the shape and size of its individual constituent crystals, that determines the scattering properties. Of the six ice crystal types calculated using the parameters given in Table 2, the backscattering cross-sections of the four single-crystal types were calculated using Equation (4), and those of the two aggregate types were calculated using Equation (5). The kurtosis and power-law prefactor parameters can be found in Table 1 of Hogan and Westbrook [24].
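Equation (4) lends itself to direct numerical evaluation once the area profile A(s) is known. The sketch below integrates it for a crude slab-shaped stand-in for a plate; the wavelength, permittivity, and geometry are assumed for illustration only:

```python
import numpy as np

def rga_backscatter(s, A, k, eps):
    """Backscattering cross-section from Equation (4):
    sigma = (9 k^4 |K|^2 / 4 pi) |integral A(s) exp(2iks) ds|^2."""
    K = (eps - 1.0) / (eps + 2.0)
    integral = np.trapz(A * np.exp(2j * k * s), s)
    return 9.0 * k ** 4 * abs(K) ** 2 / (4.0 * np.pi) * abs(integral) ** 2

# Illustrative geometry: a 1-mm plate face treated as a 100-um-thick slab.
k = 2.0 * np.pi / 8.5e-3                      # assumed Ka-band wavelength (m)
s = np.linspace(-50e-6, 50e-6, 201)           # positions along propagation (m)
A = np.full_like(s, np.pi * (1e-3) ** 2 / 4)  # constant face area (m^2)
sigma = rga_backscatter(s, A, k, eps=3.15 + 0.01j)
```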
Fall Velocities and Backscattering Cross-Section of Different Types of Ice Particles
To establish the relationship between Doppler spectral density data and PSD, we calculated the masses, fall velocities, and backscattering cross-sections of ice crystals of different sizes, based on the methods described in Section 2.2. To compare and verify the calculation results, we equated ice crystal particles of different shapes to solid ice spheres of the same mass, with the equivalent diameter denoted D_e. Figure 2a shows the relationship between mass and maximum dimension for the six ice particle types within their physical size limits, and Figure 2b shows the corresponding relationship between maximum diameter and equivalent ice-sphere diameter. These figures show that, at the same maximum diameter, hexagonal plates had the largest mass, followed by the two aggregate types and sector plates, with stellar crystals having the smallest mass. The masses of the two aggregate types were almost equal when D was less than 2500 μm, and the masses of the sector plates were similar to those of hexagonal columns when D was less than 2000 μm. Because the air viscosity coefficient depends mainly on air density, changes in air density affect particle fall velocity: even for particles of the same size, the fall velocity changes with height. We take the fall velocity of particles at a height of 4.5 km in a standard atmosphere as an example. Figure 3a shows the v_t-D_e relationships of the ice crystals and of an ice sphere. An ice sphere falls approximately 2-3 times faster than ice particles with the same D_e. Among the six ice crystal types, the two aggregates have the greatest fall velocity, followed by hexagonal columns, hexagonal plates, sector plates, and stellar crystals, which fall slowest. Because of their size limits, sector-like plates and stellar crystals have relatively small maximum fall velocities, which increase only slowly with particle size; by contrast, the fall velocity of hexagonal columns increases fastest with size. If all ice particles in a cloud were assumed to be spherical, the ice particle sizes corresponding to a given fall velocity would be much smaller than those of the actual, non-spherical particles; this would introduce serious errors into the subsequent PSD retrieval from radar data. The backscattering cross-sections of single ice particles and aggregates were calculated using the methods described above, and the results are shown in Figure 3b. The backscattering cross-sections of the different types were closer to each other when the ice particles were small. For the same D_e, the hexagonal columns had the largest backscattering cross-section, followed by stellar crystals and sector plates, while the backscattering cross-sections of the hexagonal plates, the two aggregate types, and ice spheres were relatively small and almost equal to each other. Additionally, we found that the backscattering cross-section had little correlation with the projected area of particles of the same volume; it depends only on the integral of the cross-sectional area along the electromagnetic wave propagation direction. For ice crystals of the same volume and density, whether their projected area was large or small (i.e., whether they were thin or thick), the differences in their backscattering cross-sections were small enough to neglect.
The choice of mass parameters for each ice particle type is therefore crucial, as it significantly affects the calculation results. However, ice crystal shapes are very complex, and we obviously could not cover them all in the calculations; we could only select typical scale parameters to obtain representative relationships between backscattering cross-section and particle size.
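For reference, the equal-mass solid-ice-sphere bookkeeping behind D_e is a one-line calculation; the sketch below assumes a solid ice density of 917 kg·m^-3:

```python
import numpy as np

RHO_ICE = 917.0  # assumed solid ice density, kg/m^3

def equivalent_diameter(mass_kg):
    """Diameter (m) of a solid ice sphere with the same mass,
    i.e. the D_e used to compare shapes in Figures 2 and 3."""
    return (6.0 * mass_kg / (np.pi * RHO_ICE)) ** (1.0 / 3.0)
```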
Doppler Spectral Density and PSD Retrieval Simulations
From the above results, the Doppler spectrum can be calculated from Equation (1) once the PSD is given. Many studies have proposed functions to describe the PSD of ice crystals, such as the negative exponential function, the power function, and the Gamma function [34,35]. Gunn and Marshall used the exponential function to describe ice crystal PSDs, and it has been widely used since [36]:

N(D) = N_0 exp(−Λ D),  (6)

where N(D) (m^-3·mm^-1) is the number of particles per unit volume per unit size interval, N_0 (m^-3·mm^-1) is the intercept parameter, and Λ (mm^-1) is the shape parameter. We used the Marshall-Palmer constants given by Platt for −10 °C to −5 °C, with N_0 and Λ values of 9560 m^-3·mm^-1 and 1.32 mm^-1, respectively [37]. For the single crystals, we assumed that D ranges from 100 to 5000 μm, and the average PSD was used to calculate the Doppler spectral density produced by the four single-crystal types (based on Equation (1)). The PSD used for the Doppler spectrum simulation is shown in Figure 4a. Using Equation (7), the equivalent reflectivity values generated by the four crystal types were 25.8, 24, 24.7, and 12.9 dBZ (hexagonal plates, hexagonal columns, sector-like plates, and stellar crystals, respectively). Additionally, the Doppler spectrum affected by air turbulence of different intensities was calculated. Following Gossard et al. [9], the spectrum affected by turbulence is the convolution of the clear-air Doppler spectrum with a Gaussian kernel:

S_a(v) = ∫ S(v′) (1 / (σ_t √(2π))) exp(−(v − v′)^2 / (2 σ_t^2)) dv′,  (8)

where S_a and S represent the Doppler spectral density affected by turbulence and in clear air, respectively.
Here σ_t is the intensity of the turbulence. As can be seen in Figure 4b, air turbulence broadens the Doppler spectrum while weakening its peak: the stronger the turbulence, the more severe the spectral distortion, and for turbulence of the same intensity, a narrower Doppler spectrum is distorted more severely. Because of the high sensitivity of mode M3 of our radar, sensitivity has only a limited effect on the detected Doppler spectrum and can be ignored in the retrieval of PSDs. Comparing the Doppler spectra generated by the different ice crystal types with the same PSD in Figure 4b, the spectral width was mainly proportional to the rate at which fall velocity increases with particle scale: the faster the velocity increases with size, the wider the generated Doppler spectrum, and vice versa. The value of S_Z was jointly determined by the particle backscattering cross-section and ∂D/∂v_t (as shown in Equation (1)); it is proportional to σ and inversely proportional to the rate of velocity change with particle size.
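A short sketch of this forward simulation, combining Equation (1) with the exponential PSD of Equation (6) and the Gaussian broadening of Equation (8); the v_t-D and σ-D relations are again assumed to be supplied as callables, and all names are illustrative:

```python
import numpy as np

def simulate_spectrum(v_bins, vt_of_d, sigma_of_d, d_grid, C,
                      n0=9560.0, lam=1.32):
    """Forward-model S_Z(v) from the exponential PSD of Equation (6),
    N(D) = N0 exp(-lam * D) with D in mm (N0, lam from Platt).

    v_bins: still-air velocity bins (m/s); d_grid: sizes (mm) spanning
    the physical limits of the crystal type."""
    d_of_v = np.interp(v_bins, vt_of_d(d_grid), d_grid)  # invert v_t(D)
    dd_dv = np.gradient(d_of_v, v_bins)
    n_d = n0 * np.exp(-lam * d_of_v)
    return C * sigma_of_d(d_of_v) * n_d * dd_dv          # Equation (1)

def add_turbulence(v_bins, s_z, sigma_t):
    """Broaden a clear-air spectrum by convolving it with a Gaussian of
    standard deviation sigma_t (m/s), as in Equation (8)."""
    dv = v_bins[1] - v_bins[0]
    half = np.arange(0.0, 4.0 * sigma_t + dv, dv)
    kv = np.concatenate([-half[:0:-1], half])            # symmetric grid
    kernel = np.exp(-0.5 * (kv / sigma_t) ** 2)
    kernel /= kernel.sum()                               # conserve power
    return np.convolve(s_z, kernel, mode="same")
```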
To further study the effect of turbulence on PSD retrieval, the turbulence-affected Doppler spectra were used to retrieve new PSDs, which were compared with the originally prescribed PSD. According to Figure 5, the retrieved PSDs (dashed lines) were significantly wider than the original one (black solid line), and the retrieved N(D) values also deviated: the concentration was overestimated at small particle sizes and underestimated at large particle sizes. Stronger turbulence clearly has a greater impact on the retrieved PSD, causing the particle numbers to deviate seriously from the true values. Moreover, different ice crystal types show different sensitivities to turbulence: compared with sector crystals and stellar crystals, hexagonal plates and hexagonal columns are less affected. Figure 5. Particle size distributions (PSDs) retrieved from the Doppler spectra affected by turbulence, shown in Figure 4b. The solid black line is the PSD given by Platt (1997) at −10 °C to −5 °C (same as Figure 4a).
Retrieved Results from MMCR
Reflectivity and Doppler spectral density data from the M3 mode, which has higher velocity resolution and sensitivity, were used to retrieve the PSD and IWC; the high velocity resolution means that a fine PSD can be produced. On the basis of the relationships we established, more finely resolved Doppler spectra yield more accurate PSDs. Figure 6a shows the Doppler spectral density data of a beam obtained at 5:38 UTC by the MMCR; the side-lobe echo is highlighted by a red circle. The raw Doppler spectral density data from the four operating modes were post-processed and used to recalculate reflectivity and to retrieve the vertical air motion. The post-processing included quality control (QC) and recalculation of the Doppler moments [17]. QC of the Doppler spectra included dealiasing singly wrapped aliased spectral density data and detecting and removing artifacts produced by pulse compression. After QC, we estimated the vertical air velocity directly using the velocity bin of small particles, such as liquid droplets and small ice crystals, assuming that these particles act as tracers of clear-air motion in the measured spectra, since the fall velocity of sufficiently small particles is negligible relative to the air speed [5,6]. We calculated the attenuation coefficient according to Wang's algorithm and corrected the Doppler spectral density for attenuation, bin by bin, from the first range gate to the last [38]. In addition, since stable stratiform precipitation occurred during the observation, the effects of turbulence were neglected in the PSD retrieval.
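One simple way to implement the small-particle tracing idea is to take the slow edge of the spectrum, the first bin above the noise floor, as a near-zero-fall-speed tracer. The sketch below is only one plausible form of that heuristic, not necessarily the exact estimator used here:

```python
import numpy as np

def air_velocity_from_tracers(v_bins, s_z, noise_level, margin=1.0):
    """Estimate vertical air motion from the slow (small-particle) edge
    of a Doppler spectrum. With v positive downward and w_a positive
    upward, a near-zero-fall-speed tracer at the edge gives w_a = -v."""
    above = s_z > margin * noise_level
    if not above.any():
        return np.nan
    edge = np.argmax(above)    # first bin exceeding the noise floor
    return -v_bins[edge]
```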
Reflectivity and retrieved air speed profiles based on small-particle tracers are shown in Figure 6b. On the basis of the profile and the Doppler spectrum, we inferred that the melting layer lay at approximately 4.15-4.5 km. The Doppler spectrum width narrowed suddenly at 4.5 km, indicating that echoes above this height were generated mainly by ice particles. Below the melting layer (4.15 km), we inverted the Doppler spectrum using the v_t-D relation of raindrops given by Gossard [6] and the σ-D relation calculated using the extended boundary condition method [39]. Above the melting layer (4.5 km), we assumed that only one ice crystal type was present in the cloud and used the microphysical relations established in Section 2 to derive the PSD; the sizes of all ice particle types were constrained to lie within the physical scale limits found in nature. Under the assumption that all ice crystals in the cloud have the same shape, the retrieved concentrations for the different shapes are shown in Figure 7. In general, the variation of PSD width with height is similar for the six particle types and follows the Doppler spectral width; however, the concentrations retrieved at the same height differ between particle types. For Doppler spectral density data, the velocity axis corresponds to particle size, so the retrieved spectrum width is mainly determined by particle fall velocity: the smaller the particle size corresponding to a given velocity, the further the derived PSD extends toward small sizes, and the more slowly the fall velocity increases with particle size, the larger the particle size corresponding to a large fall velocity and the further the derived PSD extends toward large sizes. Comparing the PSD results for the six particle types shows that the retrieved spectra were incomplete owing to the physical scale limits (only part of each PSD exists). The hexagonal plates and the two aggregate types were mainly affected by the lower limit of the physical scale, with minimum discernible values of 70, 240, and 300 μm, respectively, while the hexagonal columns, sector plates, and stellar crystals were mainly affected by the upper limit, with maximum discernible values of 800, 500, and 300 μm, respectively. There can be no aggregates of columns in the retrieval when D is less than 300 μm, and aggregates must be present when D is greater than 930 μm. It is important to note that the physical scale constraint did not allow all Doppler spectral density data to take part in the PSD retrieval when only one ice particle type was assumed to be present in the cloud; this is also why the Z_e values calculated from the PSDs retrieved for the different ice crystals were slightly smaller than the reflectivity values shown in Figure 6b. Moreover, Figure 7 shows that the concentration of small particles was largest at every height and that particle concentration decreased with increasing particle size. The simulations demonstrated that particle number is inversely proportional to the particle's σ and directly proportional to ∂D/∂v_t. The sector plates had the highest concentration when D_e was less than 100 μm and had a smaller σ than the stellar crystals at the same fall speed; the smaller backscattering cross-section, coupled with a larger ∂D/∂v_t, resulted in a high retrieved concentration.
Although the two aggregate types only had PSDs for large particles, the aggregates of plates had a slightly higher concentration than the aggregates of columns; as the two have almost identical backscattering cross-sections, the difference was mainly caused by fall velocity. Given the presence of echoes generated by slowly falling particles, single crystals must also have been present within the radar detection volume. Figure 8 shows the retrieved IWC and D_m profiles for the six ice particle types. IWC values derived from the hexagonal plates and hexagonal columns showed a consistent trend with height, with relatively uniform changes below 6 km and a gradual decrease with height above 6 km. Owing to their physical size limits, hexagonal plates had a narrower PSD than hexagonal columns; however, their small-particle concentration was lower and their large-particle concentration higher, so hexagonal plates had an IWC of about 0-0.12 g·m^-3, whereas the IWC of hexagonal columns was approximately 0-0.08 g·m^-3. Sector plates and stellar crystals had relatively consistent IWC trends, with fairly uniform changes below 7 km and a sharp increase above 7 km. Figure 7 shows that the sector plates had wider PSDs and larger concentrations; thus, their IWC was larger, ranging over 0-0.1 g·m^-3 below 7 km. Stellar crystals had an IWC of approximately 0-0.04 g·m^-3, roughly half that of the sector plates. The IWC variation trends of the two aggregate types were also very similar; although they only had PSDs for large particles, their large-particle concentrations were slightly larger than those of the other ice crystals, especially for the aggregates of plates. The aggregates of columns and of plates had IWC values that were not particularly small, at 0-0.08 and 0-0.06 g·m^-3, respectively. The D_m profiles of the different ice crystals were consistent with the PSDs shown in Figure 7: the further a PSD extends toward large sizes, the larger D_m would be, and the narrower the PSD, the smaller D_m would be.
Comparison with Aircraft Detection
Because the aircraft's maximum flight altitude was about 4900 m, we compared the MMCR retrievals with the aircraft-observed PSD as follows: the particle concentrations above 4.7 km (above the melting layer) observed by the 2D-S and CIP probes were first averaged and then compared with the PSD retrieved from MMCR data at 4.5 km. The results are shown in Figure 9 (note that both the aircraft and radar results are expressed in terms of maximum particle size because of the way the probes take measurements). The 2D-S and CIP probes give consistent measurements: particle concentration decreased with increasing particle size, and the concentrations of 400-1200 μm particles detected by the two probes were almost equal. The trend of concentration with particle size in the radar retrievals was roughly the same; however, the retrieved concentration decreased more quickly with increasing particle size and, for small particles, was much larger than that observed by the aircraft. Focusing on particles smaller than 2000 μm, the size ranges over which the retrieved concentrations of the different ice crystal types almost matched the aircraft observations were as follows: hexagonal plates (800-1200 μm), hexagonal columns (600-1000 μm), sector plates (200-600 μm), stellar crystals (400-600 μm), aggregates of columns (1000-1600 μm), and aggregates of plates (1000-1600 μm). In fact, given the aircraft probes' small sampling volumes and the large range bin observed by the radar, some inconsistency in particle concentration was inevitable, and the time (altitude) averaging of the aircraft data introduced further differences between the observations and reality. Using the same retrieval method, the liquid hydrometeor PSD was retrieved from Doppler spectral density data below 4.2 km, based on the microphysical relationships for liquid droplets, and the liquid water content (LWC) and D_m were then calculated using Equations (2) and (3). Figure 10a shows the LWC detected by the aircraft's HVPS probe, and Figure 10b shows the LWC and D_m values retrieved from the MMCR. The trends of LWC with height were basically the same, with aircraft-observed LWC within 0-0.02 g·m^-3 and radar-retrieved values within 0.02-0.05 g·m^-3 below 3 km. Above 3 km, both the aircraft-observed and radar-retrieved LWC increased with height, although the radar-based values remained larger overall. It is worth noting that the HVPS detects particles larger than 75 μm at a 150 μm resolution, so some small particles were inevitably missed, leading to an underestimation of LWC. The aircraft temperature measurements showed that the 0 °C level was at about 4.5-4.7 km, and Figure 10a shows that the water content increased significantly near this altitude. Above this altitude, the liquid water content observed by the aircraft was about 0.35 g·m^-3, very close to the retrieved ice water contents of the hexagonal plates, hexagonal columns, and sector plates. Additionally, D_m increased with height below 2 km and decreased with height above 2 km, with the peak value at 1.7 km; Figure 6a also shows that the widest Doppler spectrum was at 1.5 km.
Conclusions and Discussion
To use Ka-band cloud radar Doppler spectral density data to quantitatively analyze the microphysical and dynamical structure of cloud ice processes, this paper established the relationships between fall velocity, backscattering cross-section, and particle size for six typical ice crystal types, analyzed the microphysical properties of the various particles and their influence on cloud radar reflectivity and Doppler spectral density, and applied the established relationships to the retrieval of ice microphysical properties from Doppler spectral density data collected on the east side of the Taihang Mountains in Hebei Province, China. We then obtained PSDs at different heights and compared them with aircraft observations. The main conclusions are as follows. For particles with the same equivalent diameter D_e, the fall velocity of aggregates was the largest, followed by hexagonal columns, hexagonal plates, sector plates, and stellar crystals. Hexagonal columns had the largest backscattering cross-section, followed by stellar crystals and sector plates, while the backscattering cross-sections of hexagonal plates and the two aggregate types were very close to those of ice spheres. However, ice spheres fall two to three times faster than ice crystals with the same D_e, which means that treating the different ice crystal types as ice spheres when retrieving the PSD from Doppler spectral data would lead to varying degrees of particle size underestimation. The width of the radar Doppler spectrum generated by a given PSD was mainly controlled by the rate at which particle fall velocity increases with particle size: the faster the fall velocity increased with particle scale, the wider the Doppler spectrum. The value of S_Z was determined by the particle backscattering cross-section and the rate of velocity change with scale, being directly proportional to the backscattering cross-section and inversely proportional to the rate of fall velocity increase. Additionally, turbulence had a strong influence on the PSD retrieval. Assuming that only a single ice crystal type existed in the cloud, the PSD comparison showed that the radar retrievals for each ice crystal type corresponded well to the aircraft observations within a certain size range, indicating a strong possibility that that type of particle exists within that range and verifying the feasibility and reliability of ice PSD retrieval from Doppler spectra. The LWC variation trend with height was basically the same for the radar retrieval and aircraft observation, although the aircraft-measured values were slightly smaller. The height averaging carried out to facilitate the comparison with aircraft observations also made the compared quantities differ from reality, which complicated the comparison with the radar results.
Additionally, many studies have given coefficients for the statistical relation between Z_e and IWC [12,13,15,[40][41][42]. We chose coefficients from several studies to compare against our MMCR-derived IWC results; the Z_e (dBZ)-IWC (g·m^-3) relationships for ice clouds used here are listed in Table 3. Reflectivity above 4.5 km was used to calculate IWC from the five Z_e-IWC relationships. As shown in Figure 11, hexagonal plates and hexagonal columns had IWC values very close to the results of Atlas et al. [41] and slightly less than those of Protat et al. [13], Mace et al. [15], and Liu and Illingworth [42]. Lower Z_e values led to an underestimation of IWC for the two aggregate types, with the degree of underestimation decreasing as Z_e increased. The IWC retrieved for sector plates and stellar crystals was larger than that calculated from the other studies when Z_e was less than −5 dBZ and smaller when Z_e was greater than −5 dBZ. Almost all ice crystal types had IWC values lower than those from the other studies when the reflectivity exceeded 0 dBZ, and this underestimation became more serious when the reflectivity exceeded 10 dBZ.

The above work is a preliminary attempt to establish a forward-modelling method for the Doppler spectral density of solid precipitation particles. In the future, more microphysical parameters of precipitation particles can be used to establish a more complete set of relationships and to aid the interpretation and analysis of radar Doppler spectral density data. This study used one set of shape parameters for each ice crystal type, whereas many other ice crystal proportions occur in nature. Moreover, turbulence was shown to have a strong effect on the PSD retrieval; although we neglected its influence in the retrieval because of the relatively stable stratiform precipitation during the observation, the effects of turbulence deserve further attention and discussion. In addition, because our field campaign obtained only one dataset of joint aircraft and radar observations for the retrieval of PSD from Doppler spectral density data, our work assumed that only one particle type was present in the cloud; further verification will require long-term observations and statistical analysis. Subsequent simulation and retrieval could be carried out with a mixture of ice crystal types in proportions based on observations. It is also possible to convert the concentration ratios of different crystal types into ratios of the Doppler spectra generated by the different particles (based on the v_t-D and σ-D relationships we established) and so to calculate the PSDs of the various particle types. In short, with such a set of microphysical parameters and size relations for the various ice crystal types, more Doppler spectral density data can be analyzed to study ice cloud properties and in-cloud microphysical processes statistically.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
a: half-width of columnar crystals
A: projected area
A_r: area ratio (the ratio of projected area to the area of the particle's circumscribed circle)
D: maximum diameter
D_e: equivalent ice-sphere diameter
D_m: volume-weighted diameter
g: gravitational acceleration
w_a: air speed
v_t: fall velocity
v: radial velocity detected by radar
Z_e: equivalent reflectivity
S_Z(v): Doppler spectral density
N(D): number of particles per unit volume per unit size interval
σ: backscattering cross-section
ε: complex permittivity
λ: wavelength
ρ_i: ice density
L: length of ice crystals
η: dynamic viscosity of air
ρ_a: air density
m: mass
Re: Reynolds number
X*: modified Best number
A(s): area of ice crystals intersected by the plane
k: wave number
κ: kurtosis parameter
β: prefactor of the power law
N_0: intercept parameter of the exponential function
Λ: shape parameter of the exponential function
S_a(v): Doppler spectral density affected by turbulence
σ_t: intensity of turbulence
Strategic entrepreneurial choice between competing crowdfunding platforms
This paper investigates strategic entrepreneurial choice between the UK Big 3 platforms (Crowdcube, Seedrs and SyndicateRoom), which exemplify the three main equity crowdfunding (ECF) shareholder structures identified in the literature. ECF has become a strategic choice for both entrepreneurs and angel and venture capital funds, as it offers mutually beneficial advantages to both, especially under the co-investment ECF model in which these funds co-invest alongside the crowd. The multinomial probit results show that large founder teams are more likely to choose the co-investment model (SyndicateRoom) but are less likely to opt for the nominee ownership structure (Seedrs). Although less heterogeneous teams are more likely to choose the Seedrs and Crowdcube ownership structures, our results suggest that the probability of choosing the co-investment model (SyndicateRoom) monotonically increases as teams become more heterogeneous. The conclusion is that larger and more heterogeneous teams are more likely to raise ECF funds from campaigns explicitly involving professional investors.
Introduction
Young startups and ventures can raise outside equity from a variety of sources in the entrepreneurial finance market (see Cassar, 2004; Cooper et al., 1994; Schwienbacher, 2013). The traditional sources include business angel (BA), venture capital (VC), and private equity (PE) investors. Over the past decade, equity crowdfunding (ECF) has emerged as a novel source of outside equity in an effort to democratize entrepreneurial finance (Kleinert & Mochkabadi, 2021). It may have the advantage of being stable and resilient in periods of crisis (Cumming et al., 2021), and was initially provided by a geographically dispersed crowd of mainly non-accredited investors who exhibit heterogeneity in their investments (Hornuf et al., 2021). In this context, ECF has been viewed by some as an equity funding mode of last resort for discouraged entrepreneurs, based on a pecking order view of outside finance (Walthoff-Borm et al., 2018a). The implication of this view is that the platform's main role is to act as a gatekeeper, mitigating adverse selection problems and thus protecting the crowd. The above study analyses early (2012-15) campaigns in the UK, when the pure ECF model (with absent or minimal institutional investment) still prevailed. For example, Zhang et al. (2018) find that the share of accredited institutional investors such as BA and VC in UK ECF was a mere 8% in 2015. However, with the rise of co-investment in ECF campaigns, their share rose to 25% in 2016 and 49% in 2017, and has levelled off at around 50% since (Zhang et al., 2018). This level of institutional involvement in ECF casts doubt on the notion of ECF as equity funding of last resort in the post-2016 period.
In this context, other researchers argue that ECF may be a strategic or first choice rather than a last resort for startups (Cummings et al., 2020; Junge et al., 2021; Stevenson et al., 2021). Stevenson et al. (2021) focus on the concept of entrepreneurs as strategic fund seekers rather than as startups striving to satisfy the criteria of traditional funders like BA or VC. They argue that strategic entrepreneurs seek "funding fit" by choosing ECF for reasons that highlight new forms of non-financial value. We complement that view by pointing to the attractions of ECF campaigns for entrepreneurs. ECF enables them to raise BA/VC funding more cheaply, as the latter co-invest at the share price agreed with the platform and platform fees are lower than BA or VC syndicate fees. Traditional BA or VC funders, as sole funders, use their positions as monopoly providers to lowball the purchase price for their stakes and also enjoy power vis-à-vis the entrepreneur after acquiring them. This is a very important value consideration that is not discussed in the Stevenson et al. (2021) study.
This paper adopts the view of entrepreneurs as strategic fund seekers, but in the context of the highly developed ECF market in the UK. This market has two distinctive features. The first is that it has been a predominantly co-investment ECF market since 2016 (Zhang et al., 2018). BA, VC, PE and other early stage investors such as family offices widely invest in ECF campaigns and often pre-commit funds before campaigns go public. Co-investment has attractions for both traditional investors and startups. On the one hand, BA and VC funds can diversify their investments across a wider number of startups by making smaller but still significant contributions to a larger number of ECF campaigns. On the other, these smaller equity stakes imply that entrepreneurs are less subject to direct control by such investors than under traditional BA or VC stakes (except for the SyndicateRoom co-investment model; see below). Moreover, BA or VC stakes provide certification effects that attract more crowd investors (Ralcheva & Roosenboom, 2016). This paper complements and extends the Stevenson et al. (2021) study by highlighting value and control considerations in funding-fit choice in the context of the UK ECF market.
The second feature of the UK market is the very distinctive set of ownership structures embedded within its ECF platforms, which provide a wide choice to strategic entrepreneurs seeking outside equity. Cumming, Vanacker and Zahra (2019c) highlight that ECF platforms embrace the three shareholder structures adopted by the Big 3 UK ECF platforms. The first is the direct ownership scheme pioneered in 2011 by Crowdcube, the UK's largest ECF platform, where the investors are the legal and beneficial owners. The second or nominee account model, introduced by Seedrs in 2012, is one where the platform as legal owner acts on behalf of all the investors, who are the beneficial owners. The nominee model involves an active post-campaign corporate governance role for the platform (Coakley et al., 2021a). The third ECF model is the co-investment or lead investor model pioneered by SyndicateRoom in 2014. Here the lead investor conducts due diligence, commits 25-40% of the target capital before the campaign goes public, and monitors the ECF firm in the wake of a successful campaign. Over time, the distinctions between the platforms have lessened, with each adopting some features of its rivals. One notable feature formally adopted by all Big 3 platforms is the co-investment model, called a private launch by Seedrs and Crowdcube. This is most developed at SyndicateRoom, where it is known as the lead investor model, but elements of it were later adopted by the other two platforms; the significant difference is that due diligence is led by the professional investor. Thus, one can argue that the UK market has become a predominantly co-investment rather than a pure ECF market, in that BA, VC, PE and other early stage investors such as family offices widely invest in ECF campaigns and often pre-commit funds before campaigns go public. Apart from the Zhang et al. (2018) data referred to above, the British Business Angel Association estimates that approximately one third of all UK business angels, and 43% of London-based ones, have co-invested on ECF platforms. These levels of co-investment based on thorough due diligence are clearly at odds with the view of ECF as equity of last resort. Rather, they suggest that both ECF platforms and traditional equity providers are developing synergistic relationships in the seed and growth stage financing ecosystem.
The paper's major contribution is that it analyses the details of strategic entrepreneurial choice among distinctive ECF platforms as their preferred outside equity option. Co-investment is key here, as this step often precedes the public launch of UK initial ECF campaigns in recent years. On the one hand, it enables entrepreneurs to focus on the post-campaign shareholder structure they want for their startups as they transition to what Cumming et al. (2021) call the ECF firm; on the other, they may strategically choose a platform to signal quality and increase the likelihood of campaign success. Thus, our paper complements the Stevenson et al. (2021) study by analysing founder team choice among three competing platform structures using a large quantitative dataset. The study employs data from 1291 (successful and unsuccessful) initial campaigns conducted on Crowdcube, Seedrs and SyndicateRoom over the 2013-2018 period. It broadly follows Chemmanur and Paeglis (2005) in quantifying founder (management) team characteristics and in highlighting team heterogeneity. They proxy management team resources by team size and qualifications (e.g. holding an MBA) and team heterogeneity by average tenure and tenure heterogeneity. In addition, we employ proxies capturing age and nationality heterogeneity.
The results suggest that founder team size is negatively associated with the probability of choosing the Seedrs nominee model and is associated with a significantly higher probability of choosing the SyndicateRoom co-investment model. One obvious attraction of Seedrs for solo founders and small teams is that the platform, as nominee, assumes responsibility for all corporate governance and related administrative tasks. Our results complement and lend support to the findings of Cumming et al. (2019b), in which ownership structures are an important determinant of success.
The average marginal effects for team heterogeneity (tenure, nationality and age) are all positive and significant at the 1% level. They imply a higher probability of choosing the SyndicateRoom co-investment shareholder structure across all models, highlighting the role of a business angel as lead investor on this platform. By contrast, heterogeneous teams are less likely to pick the Seedrs nominee platform for their initial campaign. The clear implication is that larger and heterogeneous teams are more likely to raise ECF funds from campaigns explicitly involving professional investors.
The rest of the paper is organized as follows. Section 2 discusses the relevant literature and outlines the hypotheses to be tested. Section 3 gives details of our data and empirical methodology. Section 4 discusses the results of multinomial probit analysis while the final section concludes.
Literature and hypotheses
This section first provides a brief summary of the UK ECF platform structures, then discusses existing literature findings, and finally formulates the hypotheses to be tested in the empirical section. It argues that founder teams with specific characteristics may strategically choose a specific platform to signal startup quality.
UK ECF platform structures
There is a growing interest in the equity crowdfunding (ECF) literature in both corporate governance and competing platform shareholder structures, and studying them has become fundamental (Buttice & Vismara, 2021). Cumming, Vanacker and Zahra (2019c) highlight that ECF platforms embrace the three shareholder structures adopted by the Big 3 UK ECF platforms. The first is the direct ownership scheme pioneered by Crowdcube, the UK's largest ECF platform, since 2011. Here the platform's post-campaign role as an intermediary is minimal, so the startup communicates directly with its investors, who are the legal owners. The big attraction of this model is that the ECF shareholders enjoy direct (legal) ownership of the shares and their names appear on the share register. However, it also has downsides. One is that post-campaign corporate governance (e.g. decision making on a host of issues such as calling extraordinary meetings or decisions about follow-on funding) can impose a heavy administrative burden, especially when shareholder numbers are large. Another is the challenge of attracting large investments to give ECF campaigns traction in their early stages, which may affect the type of investor the firm attracts in the future.
The second or nominee account model, introduced by Seedrs in 2012, is one where the platform as legal owner acts on behalf of all the investors, who are the beneficial owners. The nominee model involves active post-campaign corporate governance. Coakley et al. (2021a) discuss the role of the nominee model and conceptualise it as having similarities with venture capital (VC) or business angel (BA) syndicates. However, both VC and BA syndicates are limited to qualified (professional and high net worth) investors, who have to pay fees of between 5 and 20% to the syndicate lead investor on AngelList and an additional 5% to the platform (Agrawal et al., 2016). By contrast, Seedrs nominee campaigns encourage the involvement of ordinary investors (the crowd) by granting them full voting and ownership rights, and also involve low campaign fees.
The third ECF model is the co-investment or lead investor model pioneered by SyndicateRoom in 2014. This model resembles the BA syndicate in being open to qualified investors only, but its campaign fees are similar to those of Seedrs and Crowdcube. Here the lead investor conducts due diligence as well as committing 25-40% of the target capital before the campaign goes public. The SyndicateRoom model is closer to the VC/BA syndicate in that its campaigns are limited to qualified investors only, but it charges only a nominal fee for participating in a campaign.
Related literature
This paper links to a number of distinct literatures. The first and most central of these concerns the role of the founder team in young startups and ventures. While the central role of the founder team has not yet been widely investigated in the ECF context, it has received considerable attention in later stage entrepreneurial financing. When a new firm is founded, one of the most important factors providing the basis for its success is its founder team. The existing literature establishes that founder team composition is possibly the most important factor for the long-term success of a firm (Eisenhardt & Schoonhoven, 1990; Sørensen and Stuart, 2000; Agarwal et al., 2020).
One interesting question in this literature is whether solo ventures outperform founder teams. In general, the results suggest that teams perform better than solo founders because of the wider set of skills they possess (Lazear, 2005; Levine et al., 2017). One notable exception is the paper by Greenberg and Mollick (2018), who find, using a sample of reward-based crowdfunding campaigns on Kickstarter, that solo founders outperform founder teams in terms of survival and do no worse in terms of revenue generation. However, since reward-based crowdfunding campaigns are generally less risky than ECF campaigns (Coakley and Lazos, 2021), it may not be possible to generalise these findings.
In this vein, Coakley et al. (2021c) examine whether solo founders are more likely than founder teams to succeed in an initial ECF campaign and subsequently less likely to fail. The results for a large sample of initial ECF campaigns on the Crowdcube, Seedrs and SyndicateRoom platforms show that solo founders have a lower probability of conducting successful initial ECF offerings than founder teams and a higher probability of failing in the long run. They conjecture that founder teams enjoy more success because their human capital quality is likely to attract professional investors, which can act as a certification effect. Moreover, the monitoring role of professional investors helps to minimise moral hazard concerns and lowers the probability of failure for founder teams. In contrast with their study, this one is based on a pre-campaign decision-making setting in which teams choose shareholder structures (platforms), and it extends the range of team characteristics. It also includes a continuous team size variable rather than a binary variable capturing founder team versus solo founder firms, which allows us to identify how small changes in the number of team members could affect the strategic entrepreneurial choice between competing crowdfunding platforms. This paper is also linked to a long-lasting debate (and literature) on whether management team heterogeneity, sometimes also called diversity, positively or negatively affects the performance of publicly traded firms. The empirical results are mixed: one strand of the literature documents a negative effect of team heterogeneity on performance (Chrobot-Mason et al., 2009; Li & Hambrick, 2005), while another finds a positive effect (Bantel & Jackson, 1989; Murray, 1989). Jin et al. (2017) acknowledge these mixed results and argue that entrepreneurial teams are closely related to project teams, in which there is a need for diversity. They conduct a meta-analysis in an entrepreneurial setting and find that diversity positively affects venture performance.
The other literature to which our paper connects is that on ECF shareholder structures. A few studies have examined shareholder structures both within and across platforms. Cumming et al. (2019b) employ data from the Crowdcube platform, and their two-stage Heckman results suggest that the separation of ownership and control arising from its dual-class share structure lowers the likelihood of both short- and long-term success. Coakley et al. (2021a) investigate both the inter-platform and intra-platform impact of nominee versus direct ownership initial campaigns. They establish that nominee initial campaigns are more likely to succeed, raise more funds, and attract overfunding relative to direct ownership campaigns; they also find that nominee campaigns enjoy greater long-run success in terms of successful seasoned equity crowdfunded offerings. Our study both complements this work and extends it by analysing the full range of platform shareholder structures available in the UK in a pre-campaign context. Rossi et al. (2019) employ data from a sample of 185 platforms in Australia, Austria, Canada, France, Germany, Italy, New Zealand, the UK and the US. Their empirical findings establish that the direct ownership approach has a negative effect on campaign success, while the nominee account approach has no significant effect in this regard. Walthoff-Borm et al. (2018b) find that ECF firms which employ the nominee account realise lower losses. There has been less research on the co-investment shareholder structure, possibly because, as Rossi et al. (2019) point out, fewer campaigns have been conducted on co-investment or syndicate-like platforms; for this reason, it is important to include such a platform in our study. The importance of co-investment is highlighted in Wang et al. (2019), in which professional investors exchange information with the crowd, which in turn may improve the efficiency of the ECF market.
Team size
Co-investment. Signalling theory posits a direct association between signaller quality and signal effectiveness (Spence, 1973). The existing ECF literature suggests that founder teams are high-quality signallers who are able to send effective signals that reduce information asymmetry; as a result, they are more likely to conduct successful offerings (Ahlers et al., 2015; Vismara, 2016). Thus, large teams may strategically choose to raise capital via the co-investment model to further strengthen their quality signal through the presence of a professional investor (Ralcheva and Roosenboom, 2016). Moreover, it is easier for them to comply with and satisfy professional investor due diligence; solo founders, for example, are less attractive to angels and venture capitalists (Graham, 2006). This leads to the following hypothesis:

H1A Founder team size is positively associated with the probability of choosing the (SyndicateRoom) co-investment shareholder structure.
Nominee. Solo founders and small founder teams are the least well equipped to cope with the post-campaign administrative burden associated with good corporate governance, especially when they have raised outside equity through an ECF direct ownership campaign. For this reason, there are good grounds to presuppose that they will choose a nominee platform structure (e.g. the Seedrs platform) that will assume this administrative burden on their behalf if the campaign is successful. This is further supported by the findings of Rossi and Vismara (2018), in which post-campaign services matter for campaign outcomes. Conversely, founder team size is expected to be inversely associated with the nominee shareholder structure, as larger teams can more readily share the administrative tasks.
H1B Founder team size is inversely associated with the probability of choosing the (Seedrs) nominee shareholder structure.
Team heterogeneity
A strand in the literature focuses on whether the founder (management) team matters most in professional investor investment criteria. A recent study by Gompers et al. (2020) argues that the management team is the most important criterion in selecting investments, based on responses to a survey of 885 institutional venture capitalists (VCs) across 681 firms. Even though results are mixed on the association between heterogeneity (sometimes called diversity) and team performance, a meta-analysis by Jin et al. (2017) concludes that diversity positively affects venture performance. This suggests a positive relation between heterogeneity and team quality.
There is evidence that ECF firms backed by a professional investor are less likely to fail (Signori & Vismara, 2018). Most investments in startups take place via a professional investor network (Gompers et al., 2020). Heterogeneous teams consist of members from different cohorts and so they are more likely to have large networks that can be multi-beneficial (Wood et al., 2019). Networks matter for ECF success as well (Vismara, 2016). On the one hand, heterogeneity reflects quality in the startup ecosystem while, on the other, a heterogeneous team is more likely to be part of a professional investor network. This makes it more likely for heterogeneous teams to choose strategically (and pass the professional investor due diligence step) to raise capital via the co-investment model in order to signal quality. This suggests the following hypothesis:

H2 Founder team heterogeneity is associated with a higher probability of choosing the (SyndicateRoom) co-investment shareholder structure.
Team advanced education and experience
The management literature suggests that advanced education and experience are key factors in founder team human capital. The general consensus is that highly educated and experienced teams perform well compared to their less skilled and experienced counterparts. In other words, they represent high quality signallers who are able to send effective signals to investors and raise capital successfully. Existing evidence suggests that experience is one of the most important selection criteria for VCs (Zacharakis and Meyer, 2000). Education matters for the involvement of professional investors, especially in the technology sector (Levie and Gimmon, 2008), which dominates ECF (Coakley et al., 2021b). In other words, founders with these characteristics are more likely to meet professional investor investment criteria.
Experience and education play a very important role in entrepreneurial finance. The empirical studies of Barbi and Mattioli (2019) and Piva and Rossi-Lamastra (2018) show that this holds in ECF campaigns also. They argue that past experience and educational level are the most salient team human capital elements in this respect. This is particularly likely to be the case for startups engaged in complex (e.g. bio-sciences or technology) projects that may benefit from professional investor advice, as on the SyndicateRoom platform. As a result, experienced and highly educated teams may strategically choose the co-investment platform to strengthen their quality signal. This leads to the following hypothesis.
H3 Founder team experience and advanced education are associated with a higher probability of choosing the (SyndicateRoom) co-investment shareholder structure.
Data
Our empirical results are based on a sample of successful and unsuccessful ECF campaigns launched on the three major platforms of the UK (Crowdcube, Seedrs and SyndicateRoom) covering the period 2013-2018. The data end in 2018 as SyndicateRoom changed from being a crowdfunding platform to become a fund management firm specialising in startups in 2019. This study obtains ECF data from TAB, which has been used in previous ECF studies (Ralcheva & Roosenboom, 2019). The unique registration number for UK firms is used to match TAB data with founder data from the UK Companies House. We follow a similar approach as in Coakley et al. (2021b) and identify a founding team member as one listed as Director on the UK Companies House website. We remove seasoned equity crowdfunded offerings. This results in a dataset that consists of 1291 initial ECF campaigns.
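To make the matching step concrete, the following is a minimal sketch of the TAB/Companies House merge on the company registration number; all column names and records are hypothetical, as the paper does not describe the exact schema:

```python
import pandas as pd

# Hypothetical TAB campaign records and Companies House officer records,
# keyed by the unique UK company registration number.
tab = pd.DataFrame({
    "company_number": ["01234567", "07654321"],
    "platform": ["Seedrs", "Crowdcube"],
    "target_gbp": [150000, 400000],
})
officers = pd.DataFrame({
    "company_number": ["01234567", "01234567", "07654321"],
    "name": ["A. Founder", "B. Founder", "C. Founder"],
    "role": ["Director", "Director", "Secretary"],
})

# Founding team members are identified as those listed as Director.
founders = officers[officers["role"] == "Director"]
sample = tab.merge(founders, on="company_number", how="inner")
print(sample)
```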
Empirical specifications
In investigating the effect of founder team characteristics such as team size and heterogeneity on ECF platform choice, we conjecture that strategic founder teams may opt for one of the three main shareholder structures that platforms offer in the UK ECF market: Crowdcube, Seedrs or SyndicateRoom. This type of decision-making process leads us to model platform choice by employing a multinomial probit (MNP) regression. The MNP model is used with discrete dependent variables that take on more than two outcomes that do not have a natural ordering (Cameron & Trivedi, 2005).
Based on our analytical framework, we can assume that firm i's utility for choosing platform j, U_{ij} (i = 1, …, n; j = 1, 2, 3), is a function of team, firm-level and campaign characteristics and a stochastic error. The utility of choosing ECF platform j is

$$U_{ij} = \mathbf{x}_{ij}'\boldsymbol{\beta}_j + \varepsilon_{ij}, \qquad i = 1, \ldots, n; \; j = 1, 2, 3, \tag{1}$$

where $\mathbf{x}_{ij}$ is a vector of covariates and the errors are assumed to be normally distributed. The firm chooses the platform that yields the highest utility, $y_i = \arg\max_j U_{ij}$, where $y_i$ is a random variable that indicates the choice made. The MNP model is an extension of the binary probit model that allows the coefficients of the explanatory variables to vary across the choices and allows us to assess whether team size and founder team heterogeneity characteristics are associated with higher probabilities of firms choosing a specific platform. Since we are not interested in the coefficients of the multinomial model per se but rather in the change in the probability associated with changes in team characteristics, the results are presented in terms of average marginal effects (AME).
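To illustrate the estimation and AME steps, here is a minimal sketch on simulated data. statsmodels does not ship a multinomial probit estimator, so the closely related multinomial logit (MNLogit) is used as a stand-in; all variable names and the data-generating choices are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1291  # sample size from the paper

# Hypothetical covariates standing in for the team/firm/campaign vector x_ij
df = pd.DataFrame({
    "team_size": rng.integers(1, 7, size=n).astype(float),
    "log_valuation": rng.normal(15.0, 1.0, size=n),
    "london": rng.integers(0, 2, size=n).astype(float),
})
# 0 = Crowdcube, 1 = Seedrs, 2 = SyndicateRoom
df["platform"] = rng.choice([0, 1, 2], size=n, p=[0.60, 0.28, 0.12])

X = sm.add_constant(df[["team_size", "log_valuation", "london"]])
res = sm.MNLogit(df["platform"], X).fit(disp=False)

# Average marginal effects: the change in each platform's choice probability
# per unit change in a covariate, averaged over the observed sample.
print(res.get_margeff(at="overall").summary())
```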
Variables
In the multinomial framework, the dependent variable has three outcomes associated with each particular platform shareholder structure: Crowdcube (direct ownership model), Seedrs (nominee model) and SyndicateRoom (co-investment model). In the direct ownership model on Crowdcube, the investors become the direct owners of the shares, although only a small proportion of the owners (typically those with large investments) enjoy voting rights, leading to a wedge between ownership and control rights. In our sample, startup firms can choose between any of these platforms to launch their initial ECF campaigns. We conceptualize their choice as a categorical variable ranging from 1 to 3: (1) Crowdcube, (2) Seedrs and (3) SyndicateRoom. The key independent variables used in the empirical model relate to team size, team heterogeneity and firm and campaign level characteristics.
A set of control variables is used to account for unobserved heterogeneity, relying on the findings of existing studies. They are firm (pre-money) valuation, start-up status, headquarters location based on a London dummy, diversification across sectors, a technology dummy capturing whether the firm operates in the Technology Hardware & Equipment sector, target capital, equity offered and year dummies. Errors are clustered at the industry level as in Hornuf et al. (2018). This also accounts for investor preferences in specific industry groups, as evidence in Johan and Zhang (2021) suggests.
Empirical results
This section first reports basic descriptive statistics for our sample of 1291 startups over the 2013-2018 period. It then proceeds to present and discuss the key results of our multivariate empirical analysis on the effect of founder team characteristics on the probability of startups choosing one of the Big 3 UK platforms to launch their initial campaigns. Table 1 reports the definitions of the variables employed in the empirical analysis.
Descriptive statistics
The key variables of interest include team size and three measures of team heterogeneity (tenure, nationality and age heterogeneity), following Chemmanur and Paeglis (2005). Note that team size is a continuous variable, which encompasses the Greenberg and Mollick (2018) distinction between solo ventures (team size = 1) and founder teams (team size > 1). Other variables of interest include experienced team (a dummy equal to 1 for above-median founder team age) and advanced degree. Table 2 presents the basic descriptive statistics for all variables.
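The paper does not reproduce the formulas behind these measures, but a common construction in this literature, following Chemmanur and Paeglis (2005), takes the within-team dispersion of a member-level attribute. A minimal sketch under that assumption, with hypothetical member-level records:

```python
import pandas as pd

# Hypothetical member-level records: one row per founder, keyed by firm
members = pd.DataFrame({
    "firm_id": [1, 1, 1, 2, 2],
    "age":     [52, 35, 41, 29, 30],
    "tenure":  [6.0, 2.0, 4.0, 1.0, 1.5],
    "uk":      [1, 1, 0, 1, 1],
})

teams = members.groupby("firm_id").agg(
    team_size=("age", "size"),
    age_heterogeneity=("age", "std"),        # within-team dispersion of ages
    tenure_heterogeneity=("tenure", "std"),  # within-team dispersion of tenure
    non_uk_member=("uk", lambda s: int((s == 0).any())),  # nationality dummy
)
print(teams)
```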
The average founder team in our sample has 2.34 members. Crowdcube campaigns account for 60% of our sample, Seedrs for 29%, and SyndicateRoom for 12%. The average team tenure and age heterogeneity are 1.2 and 9.2 years, respectively. Around 28% of campaigns are conducted by firms that include at least one non-UK founding team member. By construction, half our sample is formed by an experienced team (i.e. average team age exceeds the sample median team age of 43). Only 7% of firms have at least one founder team member who holds a Doctor or Professor title. The average pre-money valuation of startups in our sample is £3.17 m and 79% of them are less than 5 years old (startups). The sample firms are geographically concentrated, with 46% located in London. Some 48% of firms operate in the Technology sector and are mainly undiversified (i.e. with a strong focus on a single sector). The average target capital is £0.32 m and the average equity offered is around 14%.
Before conducting a multivariate analysis, we test for the presence of multicollinearity among the variables by reporting the values of their correlation coefficients in Table 3.
The table shows that there are no high pairwise correlations between the variables, except for the correlations among team characteristics, which is to be expected. For this reason, and to avoid concerns about high correlations, we analyse team characteristics separately in the following sections.
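As a sketch of this kind of multicollinearity screen, the snippet below computes pairwise correlations and, as a common complementary check, variance inflation factors (VIFs); the data are simulated and the variable names hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 3)),
                  columns=["team_size", "log_valuation", "london"])

print(df.corr().round(2))  # pairwise correlations, as in Table 3

X = sm.add_constant(df)
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vifs.round(2))  # rule of thumb: VIF above ~10 flags multicollinearity
```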
Given the central importance of platforms within our analysis, we focus on the differences across the 3 platforms by performing multiple-sample multivariate tests on means under the null of equal means for all platforms. This yields a Wald chi-squared statistic. The results are reported in Table 4.
The test statistic overwhelmingly suggests statistically significant differences across platforms both in terms of team characteristics and firm-level variables. These results justify our focus on platforms and are consistent with existing studies that document platform effects in ECF (Rossi et al., 2019). There are two exceptions. The results indicate no significant differences across platforms in terms of diversification and a focus on the Technology sector.
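The paper reports a Wald chi-squared statistic from multiple-sample multivariate tests on means. As a rough analog, the sketch below runs a MANOVA-style joint test of equal mean vectors across the three platforms on simulated data (all names and values hypothetical):

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "platform": rng.choice(["Crowdcube", "Seedrs", "SyndicateRoom"], size=n),
    "team_size": rng.poisson(2.3, size=n) + 1.0,
    "log_valuation": rng.normal(15.0, 1.0, size=n),
})

# Joint test of equal mean vectors across the three platforms; Wilks' lambda
# and related statistics play the role of the Wald test in the paper.
mv = MANOVA.from_formula("team_size + log_valuation ~ C(platform)", data=df)
print(mv.mv_test())
```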
Multivariate analysis
Tables 5 to 10 show the average marginal effects (AME) from multinomial probit regressions predicting the choice of a platform for launching an ECF campaign by founder teams in terms of size and heterogeneity. In each table, Model 1 excludes the control variables, Model 2 includes the control variables, and Model 3 includes the control variables plus year fixed effects. Looking at the model fit statistics, it is clear from Tables 5 to 10 that both the log likelihood and R2 increase as we move from Models 1 to 3, indicating a better fitting model. In other words, Model 3 demonstrates increased explanatory power over Model 1 in all cases. Both Models 2 and 3 have a lower Akaike information criterion (AIC) and Bayesian information criterion (BIC) compared to Model 1, suggesting that fit is improved in both models after including control variables, while the statistical significance of our key independent variables remains largely unchanged. Table 5 presents the results using a continuous variable to measure team size. The results reveal that the average marginal effect (AME) on founder team size is significantly positive for SyndicateRoom but significantly negative for Seedrs, both at the 1% level. The positive value implies that larger teams are more likely to choose SyndicateRoom, and this supports H1A. By contrast, team size is inversely associated with the probability of choosing the (Seedrs) nominee structure, in line with H1B. The implication is that smaller teams are more likely to launch their campaigns on Seedrs. Finally, the Crowdcube AME for team size is statistically insignificant, indicating that team size does not matter for choosing Crowdcube (see Fig. 1 also). Instead, Table 5 indicates that firm valuation, equity offered and being a startup all have statistically significant positive impacts on the choice of Crowdcube.
An alternative way of exploring the relationship between team size and platform choice is by computing and plotting predicted probabilities as team size increases. These relationships are depicted in Fig. 1.

[Table 5 note: Model (3) includes all control variables plus year fixed effects. Z-statistics adjusted for clustering at the industry level are reported in parentheses. Significance at the 10%, 5%, and 1% level is indicated by *, ** and ***.]

Figure 1 shows that, for the mean team size, the probability of choosing Crowdcube is just under 60% but its slope is close to zero, consistent with an insignificant AME in Table 5. By contrast, the probability of choosing Seedrs for the mean team size is around 30%, and that for SyndicateRoom is only 10%. The probability of choosing Seedrs drops substantially as team size increases, portraying graphically the negative relationship presented in Table 5. Conversely, the probability of choosing SyndicateRoom increases monotonically, suggesting a clear and strong positive relationship. These two results lend further support to Hypotheses 1A and 1B.
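A minimal sketch of how such predicted-probability curves can be generated: a toy single-covariate multinomial logit on simulated data (a multinomial probit estimator is not available in statsmodels, and all names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1291
df = pd.DataFrame({
    "team_size": rng.integers(1, 8, size=n).astype(float),
    "platform": rng.choice([0, 1, 2], size=n, p=[0.60, 0.28, 0.12]),
})

X = sm.add_constant(df[["team_size"]])
res = sm.MNLogit(df["platform"], X).fit(disp=False)

# Predicted choice probabilities along a grid of team sizes (this toy model
# has a single covariate, so nothing else needs to be held fixed).
grid = pd.DataFrame({"const": 1.0, "team_size": np.arange(1.0, 8.0)})
probs = pd.DataFrame(np.asarray(res.predict(grid)),
                     columns=["Crowdcube", "Seedrs", "SyndicateRoom"],
                     index=grid["team_size"].values)
print(probs.round(3))
```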
The separate results for tenure and nationality heterogeneity are summarized in Tables 6 and 7, respectively.
The AME coefficients capturing team tenure and nationality heterogeneity are all positive and significant at the 1% level for the SyndicateRoom co-investment shareholder structure across all models in both Tables 6 and 7. This supports H2. The implication is that heterogeneous teams are more likely to conduct their campaigns on a platform that has a business angel as lead investor. This is consistent with Jin et al.'s (2017) meta-study in which diversity is positively associated with venture performance. By contrast, the team tenure and nationality heterogeneity coefficients are statistically insignificant for both Crowdcube and Seedrs, but those for firm valuation and equity offered are once again (as in Table 5) significantly positive for Crowdcube only.
We also examine the relation between the predicted probability of choosing a platform and specific values of founder team heterogeneity. This is illustrated in Fig. 2, where the vertical (horizontal) axis reports predicted platform probabilities (heterogeneity values). Figure 2 shows an upward slope for SyndicateRoom (starting at around 0.15), reflecting a positive relation between tenure heterogeneity and the probability of choosing the co-investment model. By contrast, the relationship for Seedrs exhibits a downward slope, while the Crowdcube slope is roughly flat, in line with the insignificant coefficient in Table 6. Seedrs exhibits lower probabilities than Crowdcube for the same values of tenure heterogeneity. It is worth noting that teams with tenure heterogeneity of around 5.5 years have the same predicted probability of choosing either SyndicateRoom or Seedrs.

[Fig. 1 Analysis of the predicted probabilities of ECF platform choice by team size. The figure shows predicted probabilities of ECF platform choice across team sizes (mean + 3 standard deviations) with 95% confidence intervals, based on the estimated full Model (3) in Table 5.]

[Table 6 Marginal effects of tenure heterogeneity on ECF platform choice. The table shows average marginal effects from multinomial probit regressions predicting the choice of an ECF platform to launch a campaign; Model (1) excludes control variables, Model (2) includes them.]
The results for age heterogeneity are summarized in Table 8.
The results show that age heterogeneity exhibits a significantly positive AME for the SyndicateRoom platform while yielding a corresponding significantly negative AME for the Seedrs platform, both at the 1% level. One can think of age heterogeneity as reflecting the general experience of the founder team. Figure 3 depicts the relation between the predicted probability of choosing a platform and specific values of founder team age heterogeneity.
The patterns are quite similar to those in Fig. 1. In line with previous results, Fig. 3 shows an upward slope for SyndicateRoom reflecting a positive relation between team age heterogeneity and the probability of choosing the co-investment model.
Next, we focus on the relation between highly educated and experienced teams and the probability of choosing a specific platform structure to conduct their campaigns. The results are summarized in Tables 9 and 10 for experienced teams and advanced degrees, respectively.
The results suggest that experienced and highly educated teams are more likely to conduct their campaigns on the SyndicateRoom platform as their AMEs are positive and significant at the 1% level in both Tables 9 and 10. This supports our hypothesis H3. The Table 9 results also show that experienced teams are less likely to choose a nominee platform to raise capital but the advanced degree coefficient is insignificant in Model 3 in Table 10.
Our results so far lend support to and complement the findings of existing studies which document platform effects in the ECF market. While their focus is on what happens during the campaign, our focus is on the pre-campaign stage decision. Our results may shed more light on digital corporate governance studies as well (Cumming et al.). It is worth noting that the Crowdcube coefficient is insignificant in most cases. Crowdcube offers a dual shareholder structure: it has offered the direct scheme since its inception, but it gave entrepreneurs another option by introducing the nominee structure in February 2015. Coakley et al. (2021c) focus on the effect of the nominee structure on ECF outcomes in the long and short run at the inter- and intra-platform level. Their findings reveal that nominee offerings are more likely to be successful in the short and long run. Put differently, the nominee structure may be an effective signal that reflects team quality. Given that Crowdcube offers both the nominee and direct structures, this may help explain the insignificant Crowdcube coefficients. A future intra-platform analysis may shed more light on this.

[Fig. 2 Analysis of the predicted probabilities of ECF platform choice by tenure heterogeneity. The figure shows predicted probabilities of ECF platform choice across different levels of tenure heterogeneity (sample mean + 3 standard deviations) with 95% confidence intervals, based on the estimated full Model (3) in Table 6.]

[Table 8 Marginal effects of age heterogeneity on ECF platform choice. The table shows average marginal effects from multinomial probit regressions predicting the choice of an ECF platform to launch a campaign; Model (1) excludes control variables, Model (2) includes them.]
Our findings reveal that heterogeneous, highly educated and experienced teams are more likely to conduct campaigns on a platform that requires the commitment of a professional investor. CCAF reports that the share of ECF investment by institutional investors increased from just 8% in 2015 to 25% in 2016 and to 49% in 2017, and was around 50% in 2018. A possible explanation may lie in the positive association between signaller quality and signal effectiveness (Connelly et al., 2011; Spence, 1973). Chemmanur and Paeglis (2005) argue that later stage teams with these characteristics are high quality teams and have a positive effect on IPO outcomes. The commitment of a professional investor may be an effective signal for reducing information asymmetry in ECF (Ralcheva & Roosenboom, 2016). As a result, heterogeneous teams may be high quality signallers who may choose the co-investment platform to enhance signal quality for their startups.

[Table 9 Marginal effects of experienced team on ECF platform choice. The table shows average marginal effects from multinomial probit regressions predicting the choice of an ECF platform to launch a campaign; Model (1) excludes control variables, Model (2) includes them.]
Robustness tests
To address the increase in co-investment in the final years of our sample, we restrict our sample to the 2016-2018 period, and the summary results are presented in Table 12, while the corresponding summary results for the full sample period are given in Table 11. They provide further support for our main previous findings: larger and more heterogeneous founder teams are more likely to choose the co-investment ECF model to raise outside equity on ECF platforms. This is corroborated by the fact that all of the SyndicateRoom marginal effects in Table 12 are significantly positive at the 1% level, with the exception of nationality heterogeneity. However, there are now marked differences between the Crowdcube and Seedrs findings for the two samples. Table 11 for the full sample confirms that the marginal effects for Seedrs were largely significantly negative whilst those for Crowdcube were mostly insignificant. Now, by contrast, Table 12 reveals that the Crowdcube and Seedrs marginal effects are reversed. They are mostly significantly negative for the Crowdcube platform (excluding nationality heterogeneity) whilst those for Seedrs are now virtually all (excluding team size) insignificant. The implication is that, as co-investment has increased, larger and more heterogeneous teams have become less likely to choose the Crowdcube platform.
As a final robustness check, Table 13 in the Appendix includes results from models that include all team characteristics simultaneously. The main results remain largely similar.
Conclusions and discussion
This study extends the existing ECF literature by examining strategic entrepreneurial choice among competing platform or shareholder structures. It employs firm and campaign data from Crowdcube, Seedrs and SyndicateRoom for the 2013-2018 period. The multinomial probit results suggest that larger and more heterogeneous founder teams are more likely to conduct campaigns on a platform that employs the co-investment model. They lend support to the findings of Cumming et al. (2019b) and Rossi et al. (2019) in which platform shareholder structure matters in ECF. They are also in line with entrepreneurial studies in which the founder team is possibly the most important factor when business angel (BA) or VC funds choose to invest in a firm (Van Osnabrugge, 2000; Sudek, 2006; Gompers et al., 2020).
The theoretical implication of our study is that founder team characteristics matter for platform selection. The main equity crowdfunding platforms in the UK are Crowdcube, Seedrs and SyndicateRoom. Each has its individual characteristics, including governance structures and campaign support. One of the main aspects to consider from a founder team perspective is whether it will want to use a platform that allows crowdfunding investors to hold shares directly or through a nominee structure. We try to shed light on this issue for the first time through a detailed empirical analysis of the UK ECF market, as the decision to launch a campaign on a specific platform will have potential short- and long-term implications for the startup. As for practical implications, our study may help platforms improve their due diligence process. Due diligence plays a very important role in a sustainable crowdfunding market (Cumming et al., 2019a). In other words, our findings may help platforms filter out startups which are less likely to receive investments from professional investors. The latter account for half the investment in the UK ECF market (Zhang et al., 2018) and they exchange information with inexperienced investors. This improves the overall efficiency of the ECF market (Wang et al., 2019) and may help create a sustainable and flourishing ECF market.
Although this paper presents the first attempt to empirically study the implications of human capital for choosing a crowdfunding platform in the UK context, some of its limitations offer the opportunity for further research. In the current paper, we only observe firms matched with ECF platforms, but we have no information about startups which are rejected by the platforms. Analysis of whether and how platforms and human capital interact during the pre- and post-campaign process will be useful for a more in-depth analysis in the context of a diverse ecosystem of ECF platforms and the interplay among them. In addition, another direction for future study could be to link market timing and founder characteristics in ECF. Cerpentier et al. (2021) find that ECF firms set higher targets and as a result raise more capital in hot markets compared to their counterparts in cold markets. Founders choose a specific shareholder structure to signal quality; thus, their decisions may differ between hot and cold markets. Future research may shed more light on this.

[Table 11 Summary of key marginal effects for regression models for the full sample (2013-2018). Average marginal effects (AME) are computed by averaging over the sample (i.e. changes are averaged across observed values). Marginal effects at the mean (MEM) are computed based on sample means of independent variables. Significance at the 10%, 5%, and 1% level is indicated by *, ** and ***.]
Pharmacological Management of Neurogenic Bowel Dysfunction after Spinal Cord Injury and Multiple Sclerosis: A Systematic Review and Clinical Implications
Neurogenic bowel dysfunction (NBD) is a common problem for people with spinal cord injury (SCI) and multiple sclerosis (MS), which seriously impacts quality of life. Pharmacological management is an important component of conservative bowel management. The objective of this study was first to assemble a list of pharmacological agents (medications and medicated suppositories) used in current practice. Second, we systematically examined the current literature on pharmacological agents used to manage neurogenic bowel dysfunction specifically in individuals with SCI or MS. We searched the Medline, EMBASE and CINAHL databases up to June 2020. We used the GRADE system to provide a systematic approach for evaluating the evidence. Twenty-eight studies were included in the review. We found a stark discrepancy between the large number of agents currently prescribed and a very limited amount of literature. While there was a small amount of literature in SCI, there was little to no literature available for MS. There was low-quality evidence supporting rectal medications, which are a key component of conservative bowel care in SCI. Based on the findings of the literature and the clinical experience of the authors, we have provided clinical insights on proposed treatments and medications in the form of three case study examples on patients with SCI or MS.
Introduction
Neurogenic bowel dysfunction (NBD) is a prevalent issue for people with neurological disorders; changes in bowel motility and sphincter control can present a major problem for people with spinal cord injury (SCI) and multiple sclerosis (MS). The reported prevalence of NBD varies, with most reports of constipation occurring in the range of 30-40% of people with chronic SCI. However, some studies have found the prevalence of constipation to be closer to 80%, and upwards of 75% of individuals with SCI experience fecal incontinence [1,2]. NBD is also prevalent in people with MS. A systematic review found the prevalence of constipation to range from 18-43%, and fecal incontinence occurs in 3-51% of people with MS, based on studies with over 100 patients [3]. In the general population, constipation and fecal incontinence have been reported at 19.7% and 4.3%, respectively, in a 70,000-plus population-based sample, with increasing prevalence in older patients [4]. Thus, it is clear that bowel dysfunction is far more prevalent in people with SCI and MS and requires special attention.
Bowel dysfunction due to SCI or MS has a substantial negative impact on quality of life [5]. Even when a bowel program is in place to effectively manage NBD, it can be onerous and time-consuming and may take up to 1-2 h per session, repeated every day or on alternate days. It can interfere significantly with a person's education, work, and social life and presents a major challenge to quality of life, independence, and community reintegration after SCI. Loss of bowel control is a source of anxiety and distress [6,7]. Treatment of bowel dysfunction ranks highly among patient priorities in both the clinical and research domains of SCI and MS [8,9]. Regaining bowel function has been ranked similarly in priority to regaining walking after SCI [10].
The major symptoms of NBD are fecal incontinence and constipation. Fecal incontinence is the accidental passing of bowel movements, including solid stools, liquid stools, or mucus. This often occurs if muscles in the rectum and anus are not functioning to store and hold back a bowel movement due to muscle injury or nervous system damage, as well as a loss of rectal sensation [11]. Constipation is defined as a reduction in the frequency of stools, but a lack of a daily bowel movement is not necessarily equivalent to constipation as some people have as few as three bowel movements per week. Symptoms of constipation could include difficulty with stool passage, infrequent bowel movements or passage of hard stools [12].
Generally, people with higher and more severe injuries tend to have more significant bowel dysfunction, particularly constipation [13]; the studies by Liu [14,15] found that severity of NBD was significantly higher for people with higher American Spinal Cord Injury Association Impairment Scale (AIS) score classification and that people with AIS A SCI were at 12.8 times greater risk of severe NBD than those with AIS D.
There are two distinct patterns in the clinical presentation of bowel dysfunction in SCI: injury above the conus medullaris results in upper motor neuron (UMN) bowel syndrome, while injury at the conus medullaris and cauda equina results in lower motor neuron (LMN) bowel syndrome [2,16]. The upper motor neuron bowel, or hyperreflexic bowel, usually occurs with injuries above the sacral spinal cord and is characterized by loss of voluntary (cortical) control of the external anal sphincter, which remains involuntarily overactive, thereby promoting retention of stool. Transit time is prolonged throughout the colon. Fecal incontinence occurs concomitantly in many cases due to reduced or absent anorectal sensation and lack of voluntary control of the external anal sphincter muscle. Although there is the loss of supraspinal control, the nerve connections between the spinal cord and the colon remain intact; therefore, there is preserved reflex coordination and stool propulsion. Stool evacuation in these individuals occurs in response to stimulation of reflex activity, such as the presence of feces in the rectum, a suppository, enema, or digital rectal stimulation causing rectal distension.
The lower motor neuron bowel, or areflexic bowel, usually occurs with injuries at the sacral spinal cord or below and is characterized by the loss of centrally mediated (spinal cord) peristalsis and loss of reflex activity, resulting in slow stool propulsion and impaired reflex stool evacuation. Segmental colonic peristalsis occurs only due to the activity of the enteric nervous system, which is slower and less efficient without the centrally mediated peristalsis. The result is increased transit time through the distal colon and rectum with the production of drier and round-shaped stool. Lower motor neuron bowel syndrome is commonly associated with constipation. There is also a substantial risk for fecal incontinence due to the atonic external anal sphincter and lack of sensation and voluntary control over the external anal sphincter muscle.
In MS, the pattern of bowel dysfunction is similar to the pattern described for SCI. The neurological lesion is, however, less well defined in MS. The presence of bowel symptoms in MS is correlated to the expanded disability status scale [17], to the degree of spinal atrophy [18], and to disease duration, but not particularly with the type of MS [19]. The precise neuropathological mechanism in NBD and MS is not completely defined, but one study theorizes that at the cortical level, demyelination within the frontal lobe may affect a person's voluntary control over bowel movements [20]. Regardless, it has been noted that severe constipation is often one of the first presenting symptoms of MS [21].
A regular bowel program helps to ensure that evacuation occurs regularly, facilitating continence and reducing constipation. Prevention of constipation will reduce symptoms such as abdominal pain and bloating, and minimize the development of anorectal morbidities associated with NBD, including hemorrhoids, anal fissure, rectal abscess, and rectal prolapse.
A comprehensive bowel program will combine a number of interventions in an individualized routine and may include a specific diet to ensure adequate fiber and fluid, digital rectal stimulation, digital removal of stool, stimulation of the gastrocolic reflex, and use of oral or rectal (suppositories, enemas) medication. The different components of a bowel program are illustrated in Figure 1. Such a program will usually be performed on a daily or alternate day basis, depending on the needs of the individual. Undertaking physical activity, including standing and passive movements, may also help to reduce constipation. Some medications that are being used for other medical conditions or symptoms may also contribute to constipation. If these additional medications cannot be eliminated, stool softeners or oral laxatives may be used to modulate stool consistency and promote stool transit. Neurogenic bowel guidelines [22,23] recommend that a conservative bowel program should be developed initially in the rehabilitation phase following injury and that a comprehensive evaluation of bowel function and management is undertaken at least annually. The evaluation may include a patient history (including a detailed history of current bowel routine management, stool form, continence and time spent on evacuation, diet and fluid intake, relevant medical conditions and medications, the extent of care provision and home adaptations) and a detailed physical examination (including neurological examination to determine level and completeness of SCI as well as an abdominal and rectal examination). In some centers, comprehensive assessment tools, such as the International Spinal Cord Society (ISCoS) Bowel Data Set, are used to collect this information in a standardized manner.
A recent systematic review by Musco et al. [24] assessed the literature on all NBD treatments for adults, including both pharmacological and non-pharmacological approaches. From the results of the six studies included in the section on pharmacological treatments, there were statistically significant increases in weekly bowel movements and a decrease in colonic transit time with the use of 2 mg of prucalopride among individuals with SCI. However, there were no significant improvements in the duration of bowel care or the reduction of fecal incontinence and the need for digital evacuation of stool. In addition, the review found that mechanical evacuation (tap water enema) without oral stimulant laxatives was superior in bowel control (time required for evacuation) compared to irritant and stimulant-medication groups. Furthermore, from the six studies, only three included populations of individuals with SCI and none with MS, presenting a need for further investigation and clinical insights on the effectiveness of pharmacological management in NBD among both populations.
Hence, the objective of this investigation was to first assemble a list of current pharmacological agents (medications and medicated suppositories) used in current practice through the clinical expertise of our team, which included members from the United States, Europe, and Canada. Second, we systematically examined the current literature to determine the potential in managing NBD of individuals specifically with SCI or MS. We also reviewed literature outside of our designated populations of interest and with regards to other methods of bowel management to inform our approach and help us provide guidance for healthcare professionals as to when it is appropriate and timely to prescribe medication for NBD. Based on the findings of the literature and the clinical experience of the authors, we have provided clinical insights on proposed treatments and medications in the form of three case study examples on patients with SCI or MS.
List of Current Pharmacological Agents
We generated a list of current pharmacological agents (medications, medicated suppositories) prescribed for adults with NBD through a combination of clinical expertise from the United States, Canada and Europe and web-based searches on the drug monographs to define generic and trade names and common side effects.
Literature Search and Study Selection
We searched the electronic databases Ovid MEDLINE ® , EMBASE, and CINAHL for relevant literature dated from 1980 through June 2020, using search terms related to adult bowel dysfunction (e.g., constipation, bowel/fecal incontinence), spinal cord injury (e.g., paraplegia, tetraplegia, spinal cord injury/dysfunction), Multiple Sclerosis (or MS), and the brand names/generic names of all medications used for bowel dysfunction suggested by the author team and the university health librarian. We also identified additional studies through hand-searching the reference lists of included studies and reviews. Studies on medications for colonoscopy preparation were excluded as they do not reflect treatments for daily bowel management.
Two reviewers independently assessed titles and abstracts of citations for inclusion and the quality of the studies, with disagreements resolved by a third person. Review articles were only included if it was a systematic review. All articles were limited to English only. Animal studies and articles describing the neurophysiology of bowel were excluded. Duplicate studies were identified and removed using RefWorks management software (Ex Libris, Ann Arbor, MI, USA).
Inclusion Criteria
Three principles guided study inclusion: (1) studies were included if the population of interest was people with SCI or MS, (2) if they measured any outcomes related to bowel or bowel-related dysfunction (e.g., using the NBD or Wexner scores, or reporting the number of occurrences of fecal incontinence or constipation, colonic transit time, or duration/frequency of bowel movements), and (3) if the independent variable or inquiry of interest was some form of medication (e.g., prucalopride) and/or medicated suppository (e.g., bisacodyl). We endeavored to include all research designs, but qualitative studies and case reports were excluded. Results published only in abstract form or in conference proceedings could be included if adequate details were available for quality assessment (e.g., risk of bias) and if the area of inquiry had relatively little published information. Mixed populations were acceptable if the sample consisted of at least 20% people with SCI or MS.
Data Extraction and Synthesis
We extracted information from included studies and constructed evidence tables showing the study characteristics, outcomes, adverse effects, and quality ratings/risk of bias for all included studies. We presented the studies using a hierarchy of evidence approach, where the best evidence is presented first in tables and is the focus of any results, point estimates, or conclusions. If no literature was found for a commonly used medication (e.g., oral laxative), then practice guidelines or meta-analyses were sought in non-NBD populations (e.g., individuals with idiopathic chronic constipation).
Validity Assessment (Risk of Bias)
We used the grading of recommendations, assessment, development, and evaluations (GRADE) system to provide a systematic approach for evaluating the evidence [25]. We assessed the internal validity (risk of bias) of trials, observational studies, and systematic reviews, which include an evaluation of randomization, allocation concealment, blinding, the similarity of compared groups at baseline, loss to follow-up, and the accounting for any statistical confounds.
A study with a high attrition rate (e.g., 15% or greater) or a low response rate (lower than 50%) was automatically rated as a high risk of bias. Systematic reviews were rated on the clarity of the review question, specification of inclusion and exclusion criteria, use of multiple databases for searching, sufficient detail of included studies, adequate assessment of the risk of bias of included studies, and providing an adequate summary of primary studies. Observational studies were rated on non-biased selection, loss to follow-up, pre-specification of outcomes, well-described and adequate ascertainment techniques, statistical analysis of potential confounders, and adequate duration of follow-up.

Table 1 provides an overview of current medications identified by our expert clinicians. A number of oral medications were identified. Docusate sodium is a commonly used stool softener that draws water into the stool, making it easier to pass. Osmotic softeners, such as polyethylene glycol (PEG), are laxatives that increase the moisture in the stool to make it easier to pass and are usually taken once or twice per day or as needed. Stimulant laxatives activate contractions of the intestinal wall, thereby promoting transit. Commonly used oral stimulant laxatives include bisacodyl and sennosides. Prokinetic agents stimulate the contraction of the muscle cells of the gut and promote transit. Like stimulant laxatives, prokinetic agents are medications that increase digestive tract muscle activity to move the stool through digestion. Secretory drugs increase intestinal fluids, which then accelerate intestinal transit. Narcotic antagonists are used to treat opioid-induced constipation without blocking the effect of narcotics on pain. Medicated suppositories and enemas are also commonly prescribed for NBD. Stimulant suppositories contain medications (such as bisacodyl) that stimulate the bowel reflex. Suppositories are usually inserted 15-30 min before planned bowel emptying. The time to bowel movement is influenced by the type and route of administration. For example, oral bisacodyl may produce a bowel movement within 6-12 h, a rectal bisacodyl suppository within an hour, and a rectal bisacodyl enema within 20 min. However, the medication used and even the base that the medication is dissolved in can affect how quickly the medication is absorbed. For example, bisacodyl in a water-soluble polyethylene glycol base (e.g., Magic Bullet) allows shorter times to empty than bisacodyl in a vegetable oil base [26,27]. Lubricating suppositories contain non-medicated substances (such as glycerin), which hold water in the bowel to make the stool softer, so it is easier to expel.
Systematic Review
We initially found 1850 articles, and after duplicates were removed, we reviewed 1576 potentially relevant records through our searches for medications (including medicated suppositories and enemas) and NBD in SCI and MS. We assessed 62 articles for eligibility at the full-text level and ultimately included 28 studies that assessed the effects of medication on NBD in the MS (n = 2) and SCI population (n = 26).
Indication and Efficacy by Medication from the Systematic Review
Detailed abstraction tables are available in the online supplementary material. A summary of the evidence is provided below.
Oral Laxatives
Oral laxatives are the first-line treatment for constipation; however, no studies were found testing them specifically in SCI and MS, so we resorted to previous reviews conducted on the effects of medications on constipation in the general population. Luthra et al. [28] conducted a network meta-analysis to compare the efficacy of different medications in people with chronic idiopathic constipation. They found 33 RCTs conducted with 17,214 patients and found that the stimulant laxatives bisacodyl and sodium picosulfate were ranked first after 4 weeks, and prucalopride was ranked first after 12 weeks of treatment. Similarly, Alsalimy et al. [29] found that senna and lactulose were superior to placebo when studied in long-term care patients. Paré and Fedorak [30] reviewed the literature and found that both nonstimulant and stimulant laxatives provided better relief than a placebo, albeit with minor side effects. In another meta-analysis, Nelson et al. [31] computed the number needed to treat (NNT) for chronic constipation and found that osmotic and stimulant laxatives had an NNT of 3, lubiprostone had an NNT of 4, and prucalopride and linaclotide both had an NNT of 6. Note that none of these studies examined the long-term efficacy of these medications.
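For readers unfamiliar with the metric, the NNT is the reciprocal of the absolute risk reduction (ARR), the difference in response rates between treatment and placebo; the worked numbers below are illustrative only and are not drawn from the cited trials:

$$\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{p_{\text{treatment}} - p_{\text{placebo}}}$$

For example, if 60% of treated patients respond versus 27% on placebo, then ARR = 0.33 and NNT = 1/0.33 ≈ 3, i.e. roughly three patients must be treated for one additional patient to benefit.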
Given the lack of evidence in NBD populations, the prescription of oral laxatives relies on the above evidence from the general population and expert opinion. Oral laxatives are applicable to both areflexic and reflexic bowel management. In an individual with constipation after MS and SCI, we recommend starting with a simple agent, such as magnesium hydroxide (Milk of Magnesia) or PEG, which may have fewer adverse effects. Start the night before the bowel routine (typically every other day, or 3X/week), then reassess this regimen's effectiveness after a few weeks. It should be evaluated whether the oral medications are moving the stools toward their ideal consistency (soft, formed, bulky) and have resulted in improved evacuation. If not effective, a stimulant laxative can be tried. If the patient is in earlier stages of their injury (e.g., undergoing inpatient rehabilitation), more frequent assessments (every few days) and changes may be required.
Oral medications may address constipation but may not necessarily treat fecal incontinence. This may be due to the less predictable timing of results following oral medications. The goal of treating incontinence in NBD is to trigger a bowel evacuation at a patient-preferred time, so the movement does not occur as an unexpected or unplanned event, thus becoming incontinence. While there are no studies specifically on oral medications and fecal incontinence in the MS and SCI populations, a systematic review in adults with symptoms of fecal incontinence [32] found that medications, such as lactulose and loperamide, seemed to perform better than a placebo on measures of bowel function, such as frequency, urgency, and reduction in diarrhea, though more participants experienced adverse effects (e.g., constipation, abdominal pain, diarrhea, headache, and nausea).
Prokinetic Drugs
When oral laxatives are not effective, prokinetic drugs may be an alternative. Evidence for prokinetic drug studies was found for prucalopride, metoclopramide and neostigmine in SCI (1 RCT for prucalopride, 2 RCTs and one observational study for neostigmine, and two observational studies for metoclopramide). Metoclopramide stimulates the muscles of the gastrointestinal tract through dopamine and acetylcholine receptors and is approved for use to treat nausea and vomiting associated with chemotherapy, gastroesophageal reflux disease or diabetic gastroparesis. Though metoclopramide has been shown to be an effective drug to stimulate a one-time increase in gastric emptying in SCI [33], its role in ongoing neurogenic bowel management has not been established. Similarly, intravenous or intramuscular neostigmine has been shown to induce bowel evacuation in SCI but has not been tested in routine bowel management [34,35]. It is possible that metoclopramide or neostigmine may have a potential role in one-time bowel preparation procedures, such as colonoscopy, in SCI.
Given that metoclopramide and neostigmine are not used for current neurogenic bowel management, the rest of this section will focus on prucalopride, a prokinetic agent that acts with high selectivity on serotonin type 4 receptors to initiate peristalsis and colonic mass movements and facilitate defecation [36]. A systematic review of the general population found ten phase III trials that supported the efficacy and safety of prucalopride for the treatment of chronic idiopathic constipation and four phase IV trials, including one which demonstrated efficacy over 24 months [37]. Prucalopride is recommended for idiopathic constipation if patients are not responsive to laxatives, as the drug can have a high cost [37]. Currently, tablet formulations of prucalopride have been approved by regulatory agencies in many countries, including the US Food and Drug Administration, Health Canada, and the European Medicines Agency.
A low level of evidence, comprising one RCT, may support the use of prucalopride to treat NBD after SCI; however, while confidence intervals were presented, no formal statistics were undertaken, which limits the interpretability of this study. Individuals who were treated with prucalopride may have experienced dose-dependent improvements in bowel movement frequency and perception of treatment efficacy. The greatest efficacy was observed at the 2 mg daily dose, where patients reported a 0.6 increase (95% CI 0.2 to 1.2) in weekly bowel frequency, a median effectiveness rating of 73 (0 = ineffective and 100 = extremely effective), and a 38.5 h median decrease in colonic transit time [38]. Although patients receiving prucalopride perceived a higher treatment efficacy than those receiving the placebo, bowel frequency remained unchanged following a 4-week regimen of daily 1 mg prucalopride [38].
These outcomes should also be interpreted with caution as 50% of the 2 mg prucalopride group withdrew from the study, which introduces substantial bias [38]. In Krogh et al.'s study [38], adverse events were reported by 6/7 in the placebo group and by 7/8 and 6/8 in the 1 and 2 mg groups, respectively. Individuals receiving the 1 mg prucalopride treatment experienced the following complications more frequently than the placebo group: flatulence, bradycardia, headache, and diarrhea. Among those receiving the 2 mg prucalopride treatment, the following adverse effects were more common than in the placebo group: bradycardia, headache, abdominal pain, and diarrhea [38]. The primary medication-related reactions cited for withdrawal within the 2 mg group were headaches in combination with either abdominal pain or diarrhea [38]. The brand name Resotran monograph states hypersensitivity to Resotran, renal impairment requiring dialysis, and intestinal perforation or obstruction as contraindications [39]. Krogh et al.'s study [38] recommends starting individuals with SCI on a 1 mg daily dose before transitioning them to a 2 mg daily dose. The authors speculate that this protocol could potentially reduce the dose-dependent increases in adverse events observed in the study [38].
Potassium Channel Blocker
Fampridine is a potassium channel blocker that can enhance synaptic transmission, and it has been approved for use to improve walking for adults with MS, but in a case series, 1 out of 23 MS participants reported improvements in urinary and fecal incontinence after six months of use [40]. Two of the four RCTs in SCI showed improvements in the number of bowel movements [41,42], but this was a secondary outcome of these studies. Currently, the mechanism by which fampridine may facilitate bowel function is unclear. While fampridine is not currently used for bowel management in current practice, the possible improvements in bowel function are intriguing; the mixed results warrant the need to study the effect of fampridine on bowel function in future studies.
Suppositories and Enemas
Rectal medications are typically a key component of bowel care of SCI patients with reflexic bowel or upper motor neuron lesions [23]. Rectal medications (suppositories, enemas) chemically stimulate the anal sphincter reflex to evacuate stool, and thus, the presence of an intact reflex is usually required. Suppositories are solid forms of rectal medication, while enemas are liquid, which are more difficult to insert if a patient has poor dexterity. Thus, the suppository is often first-line, especially for an individual doing their own bowel care. Rectal medications treat the dual problem of constipation and fecal incontinence. As these medications control the timing and predictability of bowel movement, they can have substantial benefits on the management of fecal incontinence. A number of cross-sectional studies demonstrate that rectal medications are used to treat more severe cases of NBD as those using rectal medications were associated with cervical injuries [6], poorer quality of life [43], extended hospitalization [44], longer bowel care [6,45], and presence of fecal incontinence [6].
Despite the common usage of suppositories, there is relatively little research on their effectiveness in SCI or MS. The small number of prospective controlled trials that have been conducted support the usage of suppositories; time to flatus, defecation sessions and total bowel care time all decreased [26,27,46]. We found only one crossover trial comparing different types of suppositories in SCI [47] that showed no significant difference in total colonic transit time between docusate sodium and benzocaine mini-enemas and mineral oil enemas, though both had a significantly shorter colonic transit time than bisacodyl or glycerin suppositories.
Of the two variations of bisacodyl suppositories, polyethylene glycol-based (PGB) bisacodyl outperformed hydrogenated vegetable-oil-based (HVB) bisacodyl across multiple outcomes and studies. Individuals receiving PGB bisacodyl had flatus 12.8-15 min after administration [26,27], defecation sessions lasting 20-32 min [26,27] and total bowel care times of 43-66 min [26,27,46]. These outcomes were 44.8-58.7% faster than when HVB bisacodyl was given to the same individuals to initiate bowel care. Stiens et al. [27] attributed this difference to PGB suppositories' more effective ability to readily dissolve from body heat, distribute bisacodyl on mucus membranes, and sustain reflex propulsion of stool. Despite the documented benefits of the PGB formulation, HVB bisacodyl suppositories are more commonly used, primarily because the HVB version generally costs less and is easier to obtain.
When analyzed against docusate sodium and benzocaine mini-enemas in a repeated measures study with a randomized sequence of the agent, PGB bisacodyl produced comparable results [26]. The authors of this study also stated that a docusate sodium-benzocaine mini-enema was more difficult for those with limited dexterity as the serrated edge of the enema could cause anal mucosal perforation during insertion, and it required squeezing for administration [26]. In contrast, Dunn and Galka [48] demonstrated that individuals with SCI had significantly shorter evacuation times with docusate sodium-benzocaine enema than with bisacodyl. However, the type of base (HVB or PGB) of the bisacodyl suppository was not stated, which could alter these interpretations. This information was once again missing in Amir et al. [47], where bowel evacuation time was longer after bisacodyl than mineral oil enemas, docusate sodium-benzocaine enemas, or glycerin suppositories. Although in the same study, bisacodyl did reduce the difficulties of evacuations better than glycerin suppositories [47].
A bisacodyl suppository is typically used as a first-line rectal medication as it is relatively inexpensive, easier to handle than a full-sized enema, and has some evidence of its effect. The suppository is easy to insert even for individuals with impaired dexterity and does not require voluntary contraction of the external anal sphincter for retention [27]. The suppository acts as a contact irritant to enhance gastric motility, increase the fecal water content, and reduce transit time within the large intestine [49]. The bases act as a vehicle for delivering bisacodyl, the active ingredient. Prior to insertion of a bisacodyl suppository, the rectum should be digitally checked for feces. If present, the feces should be manually evacuated. In addition, the anal canal should be lubricated with a water-based jelly. Within the SCI population, a 10 mg bisacodyl suppository is commonly prescribed as it facilitates independent care [27]. Typically, one bisacodyl suppository is used every 1-2 days for immediate effect, with a bowel movement following 15-60 min after use.
Contraindications for bisacodyl suppository use in the general population are ileus, intestinal obstruction, acute abdominal conditions including appendicitis, acute inflammatory bowel diseases, severe abdominal pain associated with nausea and vomiting, severe dehydration, and anal fissures or ulcerative proctitis with mucosal damage [50]. Two studies in SCI found that the insertion of rectal medications significantly increased systolic blood pressure [51,52]. This agrees with a retrospective chart review indicating that rectal medication users had a four-fold increase in the likelihood of reporting autonomic dysreflexia compared with individuals with SCI who spontaneously defecated [44]. Caution may therefore be warranted when using rectal medications in individuals susceptible to autonomic dysreflexia.
As an alternative to a suppository, a mini-enema may be used as a first-line rectal medication, given that its smaller size and dose may be less irritating and easier to insert. A small tube is inserted, and the liquid contents are squeezed into the rectum. The choice between a suppository and a mini-enema may depend on local medical practices and reimbursement coverage.
If bowel care is taking too long or is ineffective, then the patient may progress to an enema if the patient is able to self-administer or if a caregiver can assist with administration. Alternatively, a suppository in a water-soluble base (polyethylene glycol) could be considered if that were not already being used. Such PGB suppositories (e.g., Magic Bullet) are generally more expensive but can reduce the time to bowel evacuation by allowing the medication to disperse within minutes after insertion. If bowel evacuation is still taking longer than desired, then one may need to adjust other parts of the bowel program (fluids, fiber, positioning, oral laxatives, etc.).
Narcotic Antagonists
More than 50% of individuals with SCI [53] and MS [54] have chronic neuropathic or musculoskeletal pain. Opioids remain a common option for pain management in SCI and MS, especially in refractory cases, although their use is increasingly discouraged for non-malignant pain due to the risk of addiction. Opioids, together with immobility, compound the risk of constipation. No literature was found specific to SCI and opioid-induced constipation or narcotic antagonists. The American Gastroenterological Association (AGA) Guidelines on the Medical Management of Opioid-Induced Constipation [55] recommend laxatives as the first-line agent. In patients with laxative-refractory opioid-induced constipation, the AGA recommends using peripherally acting opioid receptor antagonists, which do not enter the central nervous system but block the opioid receptors in the gut (e.g., naloxegol, methylnaltrexone, naldemedine).
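As an illustration only, the stepwise logic described above can be summarized in code; the function and its inputs are our own simplification of the AGA recommendation, not a clinical decision tool:

```python
# Illustrative only: a minimal encoding of the stepwise approach described
# above (AGA guideline [55]); the function and its inputs are our own
# simplification, not a clinical decision tool.

def next_oic_step(on_opioids: bool, tried_laxatives: bool,
                  laxatives_effective: bool) -> str:
    """Suggest the next step for suspected opioid-induced constipation."""
    if not on_opioids:
        return "reassess etiology; not opioid-induced constipation"
    if not tried_laxatives:
        return "first line: trial of laxatives"
    if laxatives_effective:
        return "continue laxatives"
    # Laxative-refractory cases: peripherally acting opioid receptor
    # antagonists block gut opioid receptors without entering the CNS.
    return "consider naloxegol, methylnaltrexone, or naldemedine"

print(next_oic_step(on_opioids=True, tried_laxatives=True,
                    laxatives_effective=False))
```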
Discussion
The first observation from this study was the stark discrepancy between the large number of agents currently prescribed (Table 1) and the extremely limited amount of supporting literature. Despite the common prescription of oral laxatives and narcotic antagonists, there were no studies in NBD, and the best evidence was drawn from idiopathic constipation guidelines, which have serious limitations. There was low-quality evidence that polyethylene glycol-based bisacodyl suppositories produced faster outcomes than vegetable-oil-based bisacodyl suppositories. While there was a small amount of literature in SCI, there was little to no literature available for MS. There are few randomized controlled trials evaluating medications for NBD in SCI. Many medications commonly used for NBD are generic and are unlikely to receive large funding for adequate research trials. Given that many of these medications are considered "gold standard", placebo-controlled studies are unlikely ever to be conducted, given the ethics of withholding a gold-standard treatment for the sake of research. Only 42% (12/28) of included studies had any control conditions at all (including case-control studies using retrospective chart-review data as controls). Thus, it is difficult to make firm assertions based on the research evidence alone, and any results, positive or negative, should be interpreted with caution, taking into consideration any methodological concerns of the study itself.
There are inconsistencies with how NBD is scored between studies. For example, some studies use validated scales, but many rely on self-report (patient bowel journals) to determine bowel dysfunction. Bowel dysfunction in MS is often scored using the Rome criteria [56], but none of the studies we found testing medications on bowel dysfunction used this scale. Standardized and validated measures, such as the International SCI Bowel Function Basic Data Set or the NBD score, used consistently across researchers and clinicians, would produce more detailed descriptions and objective outcomes for comparison [57]. Variations in measurement approaches may be necessary for dysfunction-specific reasons or to meet experimental standards of any particular study, but a key set of bowel measures with a low data collection burden could be used, thus helping researchers and clinicians to embrace collection and reporting of such outcomes [58].
The time period during which bowel dysfunction is measured also varies greatly. We found studies asking participants about their bowel dysfunction over the last week, the last month, the last three months, the last year, or with no interval at all (i.e., "have you ever had bowel dysfunction?"). Without consensus on an appropriate time period to study, we are left with no standard interval for comparison between studies.
Clinical Insights
Because the literature provides little guidance on how and when to prescribe medication for the management of NBD in MS or SCI, we provide clinical insight in this section based on our clinical experience and understanding of the literature and guidelines. It is important to remember that pharmacologic treatment is only part of a bowel program for NBD in MS or SCI. As noted in the other manuscripts in this special edition and highlighted in recently published clinical practice guidelines [22,23], modifications to optimize bowel regulation should not be focused solely on medication changes.
Case 1
History: A 55-year-old female with MS has a power wheelchair and is dependent for transfers and toileting. She has infrequent defecation, about 3-5 times per week, and abdominal discomfort/bloating. When she has bowel movements, she is able to sense the need to defecate, but she is not able to control the BM (incontinence), and she cannot get to a toilet; thus, the BM occurs in her briefs. She lives with her 65-year-old husband, who is unable to help care for her due to his own health problems. Thus, she has homecare assistance three times per day. When she has a BM into her briefs, she must wait until homecare next arrives to get cleaned up. On examination, she has irritation/erythema of the skin of the buttocks with some breakdown and some soiling with stool in the briefs she is wearing. She requires a mechanical lift for transfers, has weakness of the upper limbs and no functional movement in the lower limbs, and needs partial assistance to turn in bed for the exam. She cannot assist at all in lowering her pants for examination. She has relatively preserved sensation of the perineal area and weak anal contraction. There is hard stool present on the rectal exam. She also has significant spasticity in the lower limbs.
Proposed treatment: The main issue here is lack of mobility and independence, and thus not being able to toilet when a bowel movement is about to occur. Defecation occurs at times when no assistance is available, leading to being left for up to several hours in soiled briefs with resulting skin breakdown. The second issue is that the infrequency of bowel movements is causing hard stools and discomfort, which may be triggering her spasticity. The goals of treatment would be to have regular, predictable bowel movements, either daily or every second day, in a timely fashion, assisted by her home care workers. If starting with an every-other-day routine, give an oral laxative (such as polyethylene glycol 17 g) every second night, then the next morning administer a rectal bisacodyl suppository, with digital stimulation as needed until the bowel routine is finished. This will allow for a regularly scheduled routine so that bowel incontinence does not occur later when no supports are available, and will reduce the discomfort and bloating caused by infrequent bowel movements. If this approach is not successful, then she may switch the laxative to a more stimulating product, such as sennosides, and may switch to a daily schedule if she still has unplanned bowel movements on off days.
Case 2
History: A 35-year-old male who had a traumatic SCI 15 years ago has a C7 AIS A injury. Since the injury, bowel care has consisted of digital anorectal stimulation performed every other day by a caregiver. However, for the last couple of years, the time for bowel care has increased to more than one hour. The patient has episodes of fecal incontinence approximately two times per month. He has vague abdominal discomfort and bloating that make breathing difficult. Stools are usually hard (type 2 on the Bristol stool chart). For the last year, the patient has taken opioid analgesics because of neuropathic pain and abdominal discomfort.
Proposed treatment: In order to target difficult rectal evacuation and frequent fecal incontinence, first-line treatment will be a stimulant rectal laxative, either as a suppository or an enema.
In the present case, oral laxatives will most likely be added to counteract symptoms of prolonged colonic transit. The first choice would be an osmotic laxative. If this failed, we would suggest adding a stimulant laxative and, finally, a prokinetic agent.
If there is insufficient relief of symptoms, an opioid antagonist should be prescribed to treat opioid-induced bowel dysfunction. In the long term, additional focus should be given to optimizing this patient's analgesic regimen using non-opioid options. If pharmacological treatment fails, consider transanal irrigation or a stoma.
Comments: The case illustrates that NBD usually includes symptoms of constipation as well as fecal incontinence. Treatment with rectal laxatives or an enema is the rational choice as it targets both poor evacuation and fecal incontinence. Patients with spinal cord lesions above the sacral spinal cord often have prolonged transit throughout the colon, which makes oral laxatives or prokinetics a necessary supplement to rectal laxatives. The case also illustrates that NBD is not a stable condition as constipation tends to become increasingly severe with time since injury. Prokinetics and opioid antagonists are usually not prescribed until standard osmotic and stimulant laxatives have failed to provide symptom relief.
Case 3
History: A 65-year-old female had a ground-level fall two years ago that resulted in an injury to the cauda equina. She has bowel movements once or twice per day. Defecation is difficult and usually lasts at least 45 min. Afterward, she has a strong feeling that rectal evacuation was incomplete. Stool consistency is normal. She has no bloating or abdominal pain. Her daily activities are restricted by the need to keep near a toilet because she has fecal incontinence several times per week. She has no other significant medical problems. On examination, there is reduced perianal sensation and very weak voluntary contraction of the anal canal.
Proposed treatment: The first choice of treatment would be a stimulant rectal laxative administered daily, preferably in the morning, to keep her continent during the day. If this failed, the patient should be offered transanal irrigation.
Comments: Lesions at the conus medullaris or cauda equina often cause poor evacuation of the rectum as well as fecal incontinence. In most cases, transport through the proximal colon is less severely affected. Rational treatment aims at restoring rectal evacuation by rectal laxatives (suppositories or enema) or by transanal irrigation. Oral laxatives are usually not needed unless stools are hard, and then they would be prescribed.
Recommendations for Future Research
Researchers have suggested that to increase the data quality and effectiveness of clinical research studies, the use of large data sets (like SCI model systems) can facilitate comparisons among treatments, patients, centers, and countries [59]. As SCI and MS are technically "lower frequency" conditions compared to stroke, cancer, or heart disease, it can be difficult to get sample sizes that are large enough to have any statistical power.
The SCI model systems database network has helped contribute to research with greater statistical power, and thus we can have more confidence in results that are generalizable.
Some additional suggestions for areas in which SCI and MS research can improve include:
• Matched control research would increase the number of studies with a control group and would also help to establish sorely needed norms in SCI and MS research. Both neurological diseases affect many body systems, and understanding what the norms are for individuals with NBD for colonic transit time, bowel evacuation time, and frequency after nutritional additions, an exercise intervention, or medication changes would be extremely useful;
• Standardizing a bowel treatment training program and evaluating learning and behavioral changes. Education research is rare, and the components of what constitutes a quality bowel training program have not yet appeared in the published literature;
• Research on the long-term effects of bowel medications, or on medications to reduce side-effects in NBD, is much needed. Individuals with NBD can experience more severe bowel-related symptoms over time, although it is not known whether this is due to aging, medications becoming less effective, or the development of conditions such as megacolon (colonic dilatation) [60];
• Research on biomarkers that precede constipation, incontinence, or more serious bowel problems, such as fecal impaction.
484. Identification of Early Features to Differentiate Hospitalized Children Admitted for Suspected MIS-C from Alternative Diagnoses
Background. Multi-system inflammatory syndrome in children (MIS-C) is a rare consequence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). MIS-C shares features with common infectious and inflammatory syndromes, and differentiation early in the course is difficult. Identification of early features specific to MIS-C may lead to faster diagnosis and treatment. We aimed to determine clinical, laboratory, and cardiac features distinguishing MIS-C patients within the first 24 hours of admission to the hospital from those who present with similar features but are ultimately diagnosed with an alternative etiology.
Methods. We performed retrospective chart reviews of children (0-20 years) who were admitted to Vanderbilt Children's Hospital and evaluated under our institutional MIS-C algorithm between June 10, 2020 and April 8, 2021. Subjects were identified by review of infectious disease (ID) consults during the study period, as all children with possible MIS-C require an ID consult per our institutional algorithm. Clinical, laboratory, and cardiac characteristics were compared between children with and without MIS-C. The diagnosis of MIS-C was determined by the treating team and available consultants.
P-values were calculated using two-sample t-tests allowing unequal variances for continuous variables and Pearson's chi-squared test for categorical variables, with alpha set at < 0.05.
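For readers who want to reproduce this style of analysis, a minimal sketch of the two tests named above is shown below; the numbers are synthetic and purely illustrative, not data from the study:

```python
# Minimal illustration of the tests named above (Welch's two-sample t-test
# for continuous variables, Pearson's chi-squared for categorical ones).
# The numbers are synthetic, not data from the study.
from scipy import stats

# Continuous variable (e.g., platelet counts) in the two groups:
group_misc = [145, 160, 132, 150, 138]
group_other = [210, 198, 225, 240, 205]
t_stat, p_cont = stats.ttest_ind(group_misc, group_other, equal_var=False)

# Categorical variable (e.g., hypotension yes/no by group) as a 2x2 table:
table = [[20, 25],   # MIS-C: with / without the feature
         [10, 73]]   # non-MIS-C: with / without the feature
chi2, p_cat, dof, expected = stats.chi2_contingency(table)

print(f"Welch t-test p = {p_cont:.4f}; chi-squared p = {p_cat:.4f}")
```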
Results. There were 128 children admitted with concern for MIS-C. Of these, 45 (35.2%) were diagnosed with MIS-C and 83 (64.8%) were not. Patients with MIS-C had significantly higher rates of SARS-CoV-2 exposure, hypotension, conjunctival injection, abdominal pain, and abnormal cardiac exam (Table 1). Laboratory evaluation showed that patients with MIS-C had lower platelet count, lymphocyte count and sodium level, with higher c-reactive protein, fibrinogen, B-type natriuretic peptide, and neutrophil percentage (Table 2). Patients with MIS-C also had lower ejection fraction and were more likely to have abnormal electrocardiogram.
Conclusion. We identified early features that differed between patients with MIS-C and those without. Development of a diagnostic prediction model based on these early distinguishing features is currently in progress.
Disclosures. Natasha B. Halasa, MD, MPH, Genentech (Other Financial or Material Support: honorarium for lectures, supported by an education grant from Genentech); Quidel (Grant/Research Support, Other Financial or Material Support: donation of supplies/kits); Sanofi (Grant/Research Support, Other Financial or Material Support: HAI/NAI testing). James A. Connelly, MD, Horizon Therapeutics (Advisor or Review Panel member); X4 Pharmaceuticals (Advisor or Review Panel member).

Pediatrics Institutional COVID-19 Review

Background. Coronavirus disease (COVID-19), caused by SARS-CoV-2, represents a global public health concern, with varied severity of illness across age and racial groups. This study aims to describe clinical presentation and outcomes in children aged 0-21 years in a community hospital setting in New Jersey.
Methods. This is a retrospective medical record review of pediatric patients (0-21 years) admitted to Saint Barnabas Medical Center between March 2020 and December 2020 with a confirmed diagnosis of COVID-19 infection. The diagnosis of COVID-19 infection is based on ICD-10 diagnosis codes. Data were extracted from electronic medical records, including demographics, pre-existing conditions, presenting symptoms, treatments used and outcomes.
Conclusion. This review supports clinical findings from other studies and also suggests certain racial and ethnic groups may be disproportionately impacted, which warrants further exploration to determine the genetic versus environmental factors that lead to increased predisposition to severe illness.

Characteristics Associated with SARS-CoV-2 Infection in Children

Background. We sought to describe the range of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in children.
Methods. Patients < 18 years of age who had a positive nasopharyngeal polymerase chain reaction (PCR) test for SARS-CoV-2 at a single health system in central Pennsylvania from 3/19/2020 to 12/31/2020 were identified. Using a random number generator, 150 additional patients < 18 years of age who had a negative PCR test were also identified. Asymptomatic patients and those without clinical data in the electronic medical record were excluded from analysis. Demographic characteristics, symptoms present at the time of testing, and outcomes were compared between PCR-positive and PCR-negative patients. Odds ratios were calculated using univariable and multivariable logistic regression models comparing patients with positive vs. negative PCR tests.
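A minimal sketch of how such odds ratios can be obtained is shown below; the data are synthetic, the variable names are ours, and the study's actual modeling choices may differ:

```python
# Sketch (synthetic data) of estimating odds ratios with univariable and
# multivariable logistic regression, as described above; variable names
# are ours and the study's actual modeling choices may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "pcr_positive": rng.integers(0, 2, n),
    "age": rng.uniform(0, 18, n),
    "cough": rng.integers(0, 2, n),
    "headache": rng.integers(0, 2, n),
})

# Univariable model for one predictor:
uni = smf.logit("pcr_positive ~ cough", data=df).fit(disp=0)

# Multivariable model adjusting for several predictors at once:
multi = smf.logit("pcr_positive ~ age + cough + headache", data=df).fit(disp=0)

print(np.exp(multi.params))      # coefficients -> odds ratios
print(np.exp(multi.conf_int()))  # 95% confidence intervals on the OR scale
```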
Results. We included 544 patients in the analysis, of whom 412 (76%) had a positive SARS-CoV-2 PCR. PCR-positive patients were statistically more likely to have a known contact, no comorbidities, and to present with cough, cold-like symptoms, headache, or loss of taste and smell. All patients who presented with loss of taste and smell were PCR-positive at the time of presentation. Positive patients were statistically less likely to present with fever or emesis than negative patients. Multivariable regression identified increased age, cough, cold symptoms, headache, and non-white race as predictive of PCR positivity. Patients who tested positive were statistically less likely to be admitted to the hospital and less likely to require respiratory support than negative patients.
Conclusion. Loss of taste and smell is a specific, though uncommon, indicator of SARS-CoV-2 infection in the pediatric population. Headache, cough, and cold-like symptoms are also suggestive of SARS-CoV-2 infection, while fever and gastrointestinal symptoms appear less common. These data suggest that screening questions developed for adults may be less applicable in children. Future research, including more dedicated and prospective studies, is warranted to identify patients in whom a positive SARS-CoV-2 test is sufficiently likely to warrant isolation and testing.
Disclosures. All Authors: No reported disclosures
Experience with Remdesivir for Treatment of SARS-CoV-2 in Patients with Liver Cirrhosis
Patricia Saunders-Hao, PharmD, BCPS AQ-ID 1; Sumeet Jain, PharmD 2; Bruce Hirsch, MD 3; Pranisha Gautam-Goyal

Background. Remdesivir is a nucleotide analogue antiviral that was FDA-approved for the treatment of hospitalized patients with coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Remdesivir has been associated with elevations in serum aminotransferase levels, with most cases being mild to moderate and reversible upon discontinuation. Although national COVID-19 guidelines and the American Association for the Study of Liver Diseases (AASLD) currently recommend remdesivir for use in hospitalized patients requiring supplemental oxygen, data are limited on the use of remdesivir in patients with chronic liver disease. Here, we describe our experience with remdesivir in patients with liver cirrhosis.
Methods. Patients with liver cirrhosis who received remdesivir were identified either prospectively or retrospectively by primary or secondary ICD-10 codes indicating liver disease. Data collected included patient demographics, underlying cause of cirrhosis, co-morbidities, Child-Pugh score, laboratory values (serum aminotransferase levels, serum creatinine) during and following remdesivir, adverse reactions attributed to remdesivir, and mortality (in-hospital, 30-day, and 90-day).
Results. A total of 4 patients with underlying liver cirrhosis completed a 5-day course of remdesivir treatment. On admission, Child-Pugh class was A for 1 patient, B for 2 patients, and C for 1 patient. Causes for cirrhosis were nonalcoholic steatohepatitis (NASH), hepatic amyloidosis, and chronic hepatitis B. There were no acute elevations in aminotransferase levels or adverse events attributed to remdesivir therapy. Mortality was high with 50% in-hospital mortality. Of the 2 other patients who survived to discharge, one was discharged to home hospice and the other was readmitted within 30 days and expired during that admission.
Conclusion. Although data on the use of remdesivir in patients with advanced liver disease are limited, we did not identify any safety concerns related to remdesivir in our cirrhotic patients. Mortality was high, illustrating the poor outcomes of patients with advanced liver disease and COVID-19. Patients with cirrhosis should be offered remdesivir if clinically appropriate.
Disclosures. All Authors: No reported disclosures
An evacuation building project for Cascadia earthquakes and tsunamis
1 Ecola Architects, PC, P.O. Box 1160, 368 Elk Creek Road, Suite 409, Cannon Beach, Oregon 97110, USA, jay@ecolaarchitects.com
2 Oregon Department of Geology and Mineral Industries (DOGAMI), 800 NE Oregon St., #28 Suite 965, Portland, OR 97232, USA, yumei.wang@dogami.state.or.us
3 Chinook GeoServices, Inc., 1508 Broadway Street, Vancouver, Washington 98663, USA, Marcy@Chinookgeoservices.com
4 The Gartrell Group, 107 SE Washington St. Suite 453, Portland, OR 97214, USA, tim@gartrellgroup.com
5 BERGER/ABAM Engineers Inc., 700 NE Multnomah Street, Suite 900, Portland, OR 97232-4189, USA, javier.moncada@abam.com
6 Degenkolb Engineers, 707 SW Washington St., Suite 600, Portland, OR 97205, USA, KYu@degenkolb.com
7 Coastal and Ocean Engineering, School of Civil & Construction Engineering, Oregon State University, Corvallis, OR 97331-3212, USA, harry@engr.orst.edu

Submitted: March 16, 2011; Accepted: May 10, 2011

Keywords: evacuation building, tsunami, Cascadia subduction zone, tsunami-resistant structure
Introduction
Low-lying coastal communities along the Pacific Northwest are at risk of tsunami inundation generated by Cascadia Subduction Zone (CSZ) earthquakes. These communities were developed long before scientists understood the existing tsunami hazards. As such, about 100,000 people are in the tsunami inundation hazard zone each day in Oregon. Some of these 100,000 people are in the high-hazard portion of the inundation zone nearest to the ocean and river channels, with long travel distances to safe, higher-elevation land. In addition, many of these communities attract tourists who come to visit the ocean beaches, which are high-risk areas. Coastal communities have been responding to the tsunami risk by developing emergency operation plans that include establishing evacuation routes and areas, and educational outreach programs.
New tsunami inundation maps are needed for Cannon Beach, Oregon (CB). Using improved scientific data and methods (e.g., Sumatra 2004, Chile 2010, Japan 2011), new tsunami hazard maps can portray the hazard from Cascadia-generated tsunamis differently than previous maps did. The 2008 CB evacuation map shows much of the downtown, the elementary school, fire station, police station and City Hall at risk from distant and local tsunamis (Figures 1a, 1b and 1c). In addition, vulnerability studies have shown that certain populations, such as visitors and the elderly, are also particularly at risk. It should also be noted that people will be disoriented from the earthquake and that evacuation times before the tsunami arrives range from 10 to 30 minutes for Cascadia events. An increased tsunami risk means (DOGAMI 2008) that Cannon Beach and other coastal communities can no longer rely solely on the strategy of evacuation to higher ground, but must look at tsunami evacuation buildings, structures and berms.
Tsunami Evacuation Buildings (TEBs) can be an important element in ensuring that schools, essential facilities, and government buildings are able to meet their everyday purposes and continue to function after the earthquake and tsunami. While this approach has not been taken in the United States, it has been done in Japan (see Figure 2a).

People who cannot safely evacuate the tsunami inundation zone should be able to evacuate to a TEB. An estimated dozen or more TEBs should be available in Oregon alone. TEBs must be able to withstand prolonged strong shaking and should be reinforced concrete structures with deep scour-resistant foundations and a minimum of two stories (Figure 2b). The lowest story should be open space on the ground floor to allow for water and debris passage; alternatively, the lowest floor should be designed to be sacrificial, such as with break-away walls. The elevation of the bottom of the second story should be higher than the anticipated tsunami inundation elevation. The roof may be designed for general purposes, such as parking or recreation space. It may also be designed for emergency purposes, such as for evacuees, a heliport, emergency storage of food or medical supplies, an emergency generator, emergency vehicles and so on. TEBs may be designed with energy dissipation or deflection structures facing the ocean to allow water to flow past the structure. In addition, TEB design should accommodate rapid ingress by foot traffic during tsunamis and be readily identifiable to evacuees. Accommodation must also be made for wheelchair access. TEBs should allow for a minimum of 0.5 m² per evacuee. Because tsunamis are rare, TEBs should serve a daily purpose.

The existing Cannon Beach City Hall is expected to be critically damaged during a local or distant tsunami. Replacing the City Hall with a TEB would allow the community to accommodate evacuation needs and rely on its continued function. A new Cannon Beach City Hall TEB would serve as a demonstration project for other coastal communities with high tsunami risks.

In order to better understand the elements needed for a City Hall/TEB, an ad hoc design committee was formed. The members of the committee include engineers, an architect, and scientists who do dynamic evacuation modeling. This paper presents and describes a TEB conceptual design which identifies issues to be addressed.

Cannon Beach, a tsunami ready community

Cannon Beach is a small community located on the northern Oregon coast, eighty miles west of Portland. The city has a full-time resident population of 1,690 that is augmented by around 3,000 part-time residents. Tourists visiting the city can range from several thousand to tens of thousands on any given day. Economic activity is centered on tourism. The city has high risk factors for tsunamis because a majority of the population and economic activity is located in the tsunami inundation zones, and its many visitors also tend to gather in the tsunami inundation zones. In addition, the population has a fairly high percentage of retirees.

Cannon Beach, a Tsunami Ready community, has been active in preparing for the Cascadia subduction zone earthquake and tsunami as well as distant tsunamis. Starting in the 1990s, the Cannon Beach Rural Fire District installed a series of siren/loudspeakers to warn visitors of approaching distant tsunamis and started tsunami education efforts. The city of Cannon Beach joined in these efforts by establishing the Emergency Preparedness Committee, which developed an Emergency Operations Plan, identified evacuation routes and areas, created on-going community outreach and education programs, established shelter sites (along with seismic evaluations of the shelters), and made other recommendations to the city to strengthen emergency response. The Fire Station was relocated to high ground and contains the Emergency Operations Center (EOC) for the community.

Cannon Beach then turned its attention to relief and long-term disaster recovery. It was aided in this effort by a workshop funded by the Cascadia Earthquake Emergency Workgroup, in which the Oregon Natural Hazards Workgroup, Oregon Emergency Management and the USGS brought community leaders, the school district, utility companies, health care providers, and the business community together. Out of this workshop, the city created the "Prepare for Emergency Recovery Committee" (PERC), a staff committee focused on relief and immediate post-disaster recovery efforts, and the Long Term Disaster Recovery Committee, an advisory committee looking at developing pre-disaster mitigation strategies.

New City Hall/TEB

The decision to look into rebuilding the existing City Hall as a TEB was due to the lack of availability of an alternate site above the inundation zone and to the fact that it was well situated to provide refuge to residents and visitors in the downtown and midtown areas, both highly populated and vulnerable. It is also very visible from a major beach access.

The existing building is 810 m² and, if replaced, is large enough to provide refuge to at least 800-1,000 people on the second level and a possible roof terrace.

Figure 3 shows the developed conceptual design, which incorporates the primary elements of a TEB. The building was raised on columns to allow water to pass beneath the structure. The second floor level was set to be above not only the most likely tsunami event, but most of the rare tsunami events as well. A roof terrace was provided for additional refuge area and for high inundation depths. Exterior stairs were placed as a very visible design feature to make the building readily identifiable as a tsunami refuge. The lower level is open to provide parking. The building was also designed to serve other functions, so that the lower level can shelter the Farmers Market and the roof terrace is a public open space from which Haystack Rock is visible. Accessibility is being planned for with the use of elevators designed to be functional after the earthquake. Emergency power and supplies will also be included. Strategies for wave dissipation can be provided for in the parking lot in front of City Hall.

The conceptual design must meet the city's zoning ordinances. These ordinances include providing off-street parking, setting the building back 6 m from the residential zone to the south, providing landscaping, and a building height limit of 8.5 m. The conceptual design showed that all zoning ordinances can be met except the 8.5 m height limit. This would require a variance from the City, but it is considered an acceptable request given the nature of the project. The city also has a Design Review requirement, so the aesthetics of the building must be acceptable to the community.

Disabled accessibility is a requirement for public buildings. The most straightforward solution is providing at least one elevator, built to a high seismic resistance standard and provided with emergency power to ensure function after the earthquake. The option of an accessible ramp was examined, and it was determined that not enough space was available for the length of ramp required, especially if the ramp had to reach the roof terrace as well. Therefore, large, ample stairs were provided on the exterior of the building.

The building has to be readily identifiable as a refuge for a tsunami. The conceptual design solution was to make the stairs to the upper level and to the roof terrace a very distinct and visible part of the design. The stairs were placed on the south side of the building and made very visible to people evacuating along Hemlock Street, the main street in town. The city of Cannon Beach has design review guidelines for commercial and public buildings. An initial attempt to meet these guidelines included having a gable roof over part of the roof and using cedar shingles, which are a common siding material in town. One design element that needs further study is creating an attractive ground story. This level will need to be used for parking in order to meet the zoning ordinances for off-street parking. However, possible secondary uses of the covered area include a new Cannon Beach Farmers Market, which may provide additional design parameters to create a pleasing space under the building.

Relocating the City Hall had been considered as an alternative when its vulnerability was first realized. This thinking changed when it became evident that there was no suitable available land within the city limits, or in close proximity. The existing location was good for its proximity to the citizens and for providing them services. Relocating the City Hall would have required mitigating effects on the daily lives of the citizens.

Tsunami seawall for Cannon Beach City Hall

Seawalls, bulkheads and revetments are coastal structures that are used to protect shoreline land from erosion due to rising sea levels and waves. These shoreline structures are constructed of structural soil fill, geotextile fabric, large stones, steel, reinforced concrete or some combination of these materials. The optimal structure type is determined by the predicted water levels, wave climate, material availability and soil classifications.

Shoreline-placed seawalls may have significant short- and long-term environmental impacts. Short-term water quality can be decreased by construction activity. Long-term sand circulation and displacement can affect habitat for marine life. Seawalls may limit beach access and may be intrusive on shoreline views. Reinforced concrete seawalls can withstand higher wave forces than soil revetments and stone bulkheads. They are designed to absorb breaking wave energy or reflect waves seaward or upward.

When tsunamis propagate inland, wave fronts can take the shape of either a bore or a surge. These destructive waves can carry debris, such as logs, creating high impact loads, and cause extensive damage to wooden and unreinforced masonry structures. To dissipate some of this tsunami energy, the Cannon Beach City Hall TEB will have two reinforced concrete seawalls along the west and east sides of the building.
The primary objective of the tsunami seawalls is to dissipate some of the tsunami energy and debris forces through upward deflection of the wave front and debris damming. The more wave and debris energy that can be absorbed or dissipated by the wall prior to reaching the building, the less robust the building will need to be. The seawall is not intended to completely prevent tsunami inundation of the City Hall, but merely to dissipate some of the tsunami energy.

More investigation is required to determine the exact location of the tsunami seawall relative to the Cannon Beach City Hall. An estimate places the tsunami seawall near the building and approximately 200 m from shore. No environmental impacts are anticipated at this time, and the wall is not expected to limit beach access.
Tsunami seawall design considerations
The City Hall will have two pile-supported reinforced concrete walls. The ocean-side west wall is rendered in Figure 4 and the landward east wall is rendered in Figure 5. Both of these figures are conceptual and are modified from the Shore Protection Manual (1984). The west wall is a combination of stepped and curved face, and the east wall is curved-faced only. The curved face of the wall reflects the wave energy upward, causing the tsunami to dissipate some of its destructive energy before reaching the building. The wall absorbs impact and damming debris forces. The west wall dissipates wave energy from the run-up and the east wall dissipates wave energy from the drawdown. The rip rap resists scour from the run-up and drawdown; greater scouring has been observed to occur during drawdown. The seawalls are constructed of steel-reinforced concrete. The west wall would be best located in the parking lot, where a natural grade difference already exists. To build and design an effective and efficient tsunami seawall for Cannon Beach, more information is needed about the expected tsunami wave and the soil types below the walls. The design tsunami wave shape, velocity, height and frequency govern the forces needed to design the seawalls. Soil mechanics studies are needed to determine pile depths and the width of the wall.

Cannon Beach City Hall structural design considerations

Cannon Beach is located on the seismically active Oregon coast. It is likely to be affected by tsunamis and strong seismic shaking generated by a Cascadia subduction earthquake. In order to function as a refuge from a tsunami, the proposed Cannon Beach City Hall/TEB must remain usable following a major seismic event. This, in turn, requires the building to retain most of its pre-earthquake lateral-force resistance, experience little nonstructural damage, and be capable of resisting expected tsunami loading effects.

Seismic performance objective

There is limited guidance available to explicitly address the required seismic performance of a TEB structure. FEMA P646 (2008) recommends that such a structure be designed to meet Immediate Occupancy performance (as defined in ASCE/SEI 41-06) for the Design Basis Earthquake (DBE) and Life Safety performance for the Maximum Considered Earthquake (MCE). However, we feel that this recommended performance requirement cannot guarantee usability for a tsunami if the building experiences significant structural damage at the MCE level, leaving inadequate lateral resistance for a tsunami. A building that experiences substantial structural damage or is out of plumb may be considered Life Safe; however, it cannot be re-used without substantial repair.

To make evacuees feel comfortable entering the TEB and remaining in the structure during aftershocks, the structure is expected to remain plumb with limited structural damage (especially near stairs and ingresses) that does not require any repair work prior to being occupied. Since a tsunami evacuation building is expected to remain functional and, perhaps, be used for emergency response and/or medical care for a period of time, it is important to have higher-level performance of nonstructural components, with limited damage. For the TEB, we feel it is more appropriate to design the building to meet the Immediate Occupancy performance level for the MCE event. As part of the design process, it is essential to perform verification analyses to ensure the performance objective is met, using available performance-based earthquake engineering techniques such as ASCE/SEI 41-06.

Structural system and seismic design consideration

There are several structural systems founded on deep piles that can achieve the required seismic performance, allow tsunami debris to flow through at the lower levels, and keep the hydrodynamic loads to a minimum: steel frames with dampers, post-tensioned reinforced concrete frames (Figure 6a), and post-tensioned concrete shear walls (Figure 6b). The post-tensioned reinforced concrete frames and post-tensioned concrete shear walls rely on the post-tensioning tendons to re-center the structure to its pre-earthquake position. When properly designed, a building with these systems tends to have limited residual displacement even for the MCE event. Since steel structures are more prone to corrosion in the coastal environment, post-tensioned concrete frames, or a combination of concrete frames with concrete shear walls parallel to the direction of anticipated tsunami flow, are more suitable for the TEB, and are also compatible with the planned function at the ground level of either parking or a farmers market.

Figure 7 shows a conceptual layout of the lateral force resistance system for the Cannon Beach TEB. It is expected that the site would likely experience liquefaction during the seismic shaking, which could result in differential settlement of the ground soil. Also, significant scouring due to the tsunami is likely to occur at the site. To minimize the undesirable effects of liquefaction-induced differential settlement and scouring on the structural system, a piled foundation is recommended, as shown in Figure 8, to support the columns of both the seismic and gravity frames. Pile caps are interconnected with grade beams and the ground slab to ensure lateral forces can be distributed to all the piles.

Tsunami-resistant design consideration

The concrete columns in the lower level are designed with circular cross-sections to minimize hydrodynamic loads and impact loads associated with waterborne debris. At the City Hall site, the tsunami inundation depth is estimated to range from 1.8 m for the most likely tsunami to 4.5 m to 9 m in rare events. Both the second level and a roof terrace are planned for refuge. Given the uncertainty involved in the tsunami modeling and estimates, if the first story height is set unconservatively low, the run-up water could potentially exceed the first story height, wash away the contents of the second story, and impose buoyancy and hydrodynamic uplift forces on the 2nd floor concrete slab. Thus, the design must carefully consider the height of the first story.

The concrete floor framing and slab at the 2nd and roof levels are designed to accommodate the refuge live load. The slab-to-beam connection at the ground and the 2nd floor … In case wood stud walls are used in the building, detailed connections with a fuse at the top and sides are needed to minimize the hydrodynamic loads on the building structure (Yeh 2007). Due to uncertainties involved in the estimate of impact forces associated with waterborne debris, the TEB design should incorporate the "tie force" strategy and the "miss column" strategy to reduce the potential for progressive collapse if one column is severely damaged (FEMA P646 2008).
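For orientation, the hydrodynamic and buoyancy effects referred to in this section are commonly estimated with expressions of the following general form, in the spirit of FEMA P646 (cited above); the symbols and the simplified form are ours, and the document's actual coefficients and load combinations should be consulted for design:

```latex
F_d = \tfrac{1}{2}\,\rho\,C_d\,B\,(h\,u^2)
  % drag on a projected width B, with inundation depth h,
  % flow speed u, fluid density rho, drag coefficient C_d

F_b = \rho\,g\,V
  % buoyancy (uplift) on a submerged volume V
```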
After a major seismic and tsunami event, the City Hall is expected to function for relief and post-disaster recovery. It is important to ensure that the nonstructural systems, including the ceiling, communication system, fire suppression system, distribution lines and tall furnishings, are properly braced to reduce falling hazards and the potential for loss of function. Seismic design shall follow the recommendations contained in ASCE 41-06.
Geotechnical and scour considerations
Deep concrete foundations should be selected for the TEB instead of shallow foundations, such as mat foundations, because of the scour potential during a tsunami. Concrete structures survived tsunami inundation better, as observed in the 2004 Sumatra tsunami.

A local Cascadia Subduction Zone (CSZ) earthquake (up to Mw = 9) would create high lateral soil forces on the foundations and residual liquefaction of the underlying saturated, loose to medium dense sandy/silty soils. Foundations should be designed to withstand the earthquake forces and the dynamic total and differential settlement.
Residual liquefaction occurs when saturated deposits of loose to medium dense, cohesionless sands and silts are subjected to strong earthquake shaking. If these saturated deposits cannot drain rapidly, there will be an increase in pore water pressure. With increasing oscillation, the pore water pressure can increase to the value of the overburden pressure. The shear strength of a cohesionless soil is directly proportional to the effective stress, which is equal to the difference between the overburden pressure and the pore water pressure. Therefore, when the pore water pressure increases to the value of the overburden pressure, the shear strength of the soil reduces to that of a liquid (zero), and the soil deposits turn to a liquefied state. Deep foundations would need to extend below the depth of the liquefied soil. Soils subject to residual liquefaction should be neglected for their contribution to skin friction and lateral support.

Other earthquake geotechnical hazards include severe ground shaking, lateral spreading and rapid coastal subsidence. Lateral spreading is the downward horizontal movement of soil toward a slope that occurs over or within seismically liquefied soil. Coastal subsidence is defined as a large-scale downward movement of the earth's surface. Coastal uplift can also occur in the form of significant upward movement.

The TEB could be affected by tsunamis from two sources, but the effect on the foundation would be similar. One tsunami source would be a local CSZ earthquake; in this case, the ground shaking and potential tsunami could occur within minutes of each other. The second source would be a distant earthquake that occurs far away from the Oregon Coast, without any local earthquake effects. Scour would occur for both tsunamis. The support for mat foundations could be eroded during either tsunami event. Deep concrete foundations could extend below the anticipated scour depth and would be the most appropriate building support method.

Tsunami scour depth is difficult to predict because of the many variables that govern the scour mechanism. The key governing parameters are flow velocity, the number of piles, the shape, alignment and size of the piles, and the properties of the soil around the pile. Other factors include the depth of the surge, the proximity to the shoreline and the wave breaking height. Current codes (ASCE 7-05) give consideration to scour, but do not provide guidance for calculating the depth of scour.

Localized tsunami scour can be calculated as a percentage of still water depth (wave height) relative to soil type and proximity to the shoreline (FEMA 55, 2000). Floodwater velocity strongly affects scour depths, as summarized in the EERI/FEMA NEHRP (2006) document.

Momentary liquefaction (enhanced scour) occurs at the ground surface because the saturated soil is easily transported as a liquid (Figures 9a and 9b). Enhanced scour can occur at the end of wave drawdown (wave retreat). During and after wave drawdown, the pore water pressures in the near-surface soil increase and momentary liquefaction can occur. Momentary liquefaction occurs with the rapid reduction in total vertical stress (loss of wave height) in a soil saturated by water inundation. The shear strength of the saturated soil reduces to zero and the soil behaves like a liquid. The difference between residual liquefaction and momentary liquefaction is that residual liquefaction occurs below the water table. Scour can be reduced by placing gravel and/or rip rap around piles. This reduction in scour occurs because gravel/rip rap is more permeable than sand, making the change in pore water head less dramatic.
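The residual liquefaction mechanism described above can be written compactly in standard effective-stress notation (the symbols below are ours, not the paper's):

```latex
\sigma' = \sigma_v - u
  % effective stress = overburden pressure - pore water pressure
\qquad \tau_f \propto \sigma'
  % shear strength of a cohesionless soil

u \rightarrow \sigma_v
  \;\Longrightarrow\; \sigma' \rightarrow 0
  \;\Longrightarrow\; \tau_f \rightarrow 0
  \quad \text{(liquefied state)}
```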
Subsurface conditions at the site of the existing Cannon Beach City Hall are not specifically known. However, reports from nearby sites show variable layers of subsurface soils. The subsurface soils generally consist of fill underlain by layers of silt with varying amounts of clay and organics, gravel and sand. The layering is generally not similar between borings. Perched groundwater could be present at depths as shallow as 1.8 m. Static groundwater can be present at depths as shallow as 4 m based on nearby well logs, although it has been encountered at depths as deep as 9 m.

The proposed location of the TEB is likely in or near a preexisting drainage. The highly variable subsurface conditions observed in the vicinity are typical of coastal lagoon, fluvial and shoreline alluvial deposits. Therefore, it is difficult to extrapolate the actual subsurface conditions due to rapid lateral and vertical subsurface changes. Current thought is that the fill under the TEB location may be thicker than at adjacent sites because of the possible presence of a preexisting drainage. The depth of the fill will be unknown until borings can be advanced at the proposed building location. Regardless of the thickness of fill at the site, deep foundations are necessary for the TEB proposed for Cannon Beach because of the potential for local CSZ earthquakes. The foundations would need to extend into firm material below anticipated seismic liquefaction and/or scour depths. The firm material at the site could consist of beach sand, dune sand or quaternary marine terrace deposits that are mapped in the area.
Development of a tsunami simulator for Cannon Beach
It is impractical to give warning and evacuate people from the direct effects of an earthquake, since the fault rupture and ground motion occur almost concurrently. In contrast, the lead time between detection of a seismic signal and the resulting tsunami makes warning and evacuation possible. For a local tsunami triggered by the Cascadia earthquake, the tsunami will arrive at the shore of Cannon Beach within 30 minutes. Note that for a mega event like Mw = 9, ground shaking could last more than 5 minutes. Hence, the effective time available for evacuation would be very limited, and evacuation to natural high ground may not be an option. Therefore, providing a tsunami shelter at a strategic location can be a viable means to save lives.

Proposed herein is the development of an integrated simulator to evaluate the effectiveness of building a tsunami shelter into the Cannon Beach City Hall. The simulator combines three models simulating the hydrodynamics of tsunami propagation and inundation, the dissemination of warning information, and human response to the warnings. The simulations are integrated and presented in a GIS (Geographic Information System) framework using realistic computer graphics. In addition to evaluating the effectiveness of a tsunami shelter, this simulator will be used as a tool for the City of Cannon Beach to improve both warning systems and evacuation tactics. The visual GIS presentation also makes the simulator ideal for educating the general public, including visitors, on the consequences of how they respond to tsunami warnings.
Simulation of tsunami scenarios
A comprehensive scenario simulator has been developed to support rational tsunami hazard and vulnerability analyses. The simulator integrates three modules: 1) hydrodynamic numerical simulation of tsunami propagation and run-up, 2) warning transmission simulation, and 3) evacuation simulation. Although the hydrodynamic simulation is deterministic, the other two components are probabilistic. Hydrodynamic simulation models for tsunami generation, propagation, and run-up have often been used in practice (e.g., Titov and Synolakis, 1998; Lin et al., 1999; Imamura, 1996). While the numerical algorithm itself is considered adequately accurate (e.g., Yeh et al., 1996), it remains difficult to determine practical tsunami-source conditions. Fortunately, Oregon has just completed a thorough investigation to estimate the most credible tsunami source for the Cascadia events; the study is an extension of a previous study (Priest et al., 1997) coupled with geological paleo-tsunami deposit data (Witter et al., 2008). Furthermore, Zhang and Baptista (2008) have conducted detailed numerical simulations specifically for the inundation in Cannon Beach.
The warning transmission module models both official ("broadcast") and informal ("contagion") processes. The informal network (person-to-person oral communication) is the primary method of warning transmission, since official warnings (processed by government authorities and transmitted by loudspeakers, route alert vehicles, radio, and TV) are relatively slow in responding to a locally generated tsunami and might be totally destroyed by the earthquake causing the tsunami. In the model, informal communications are controlled by four parameters: 1) the number of households, 2) the distances among households, 3) the delay in initiating contact, and 4) preference parameters. Our model includes preferential contacts based on a probabilistic biased network model (e.g. Rapoport, 1979; Fararo, 1981; Skvoretz, 1985). In addition, there are control parameters distinguishing "normal" days from those with stressed conditions during disasters. For example, the number of contacts (receivers) is larger during disasters, the communication distances between contacts are shorter, and the preference parameter is weaker. A majority of control parameters must be determined based on demographic data. Fortunately, thorough collections of such data are available for Cannon Beach (Wood, 2007). Additional parameters control the loudspeaker warning system (loudspeaker locations, audible distances, audience share, announcement frequency, and timing), route alert vehicles such as police cars and fire engines (routes and speeds, dispatch timing, audible distance, and audience share), and radio/television (audience share, announcement frequency, and timing).

Evacuation simulation is modeled in two steps: 1) individuals' decision-making and preparation processes for evacuation, and 2) the actual evacuation process. The first step reflects:
- the number of repeated warnings received (and from which channels)
- evacuation actions taken by neighbors and friends
- location of the household
- prior knowledge and/or experience of tsunamis
- time to evacuate after the decision is made

These parameters are assigned based on Wood (2007). The current model only simulates the evacuation of individuals moving on foot toward the closest shelters or high ground, but other evacuation methods (e.g. motor vehicles) and potential setbacks (road blockage, bridge failure, etc.) can be introduced.
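To make the contagion mechanism described above concrete, the following minimal sketch simulates distance-biased person-to-person warning spread. It is a toy illustration, not the authors' model: the household positions, number of contacts per step, delay, and distance scale are all invented parameters standing in for the demographic data of Wood (2007).

```python
import math
import random

random.seed(42)

# Hypothetical toy values: household coordinates (km) and all parameters are
# invented for this sketch, not drawn from the Cannon Beach demographic data.
N_HOUSEHOLDS = 200
households = [(random.uniform(0, 3), random.uniform(0, 3))
              for _ in range(N_HOUSEHOLDS)]
warned_at = {0: 0.0}        # household 0 hears the official broadcast at t = 0
CONTACT_DELAY = 2.0         # minutes before a warned household starts calling
REACH = 1.0                 # distance scale (km) of the biased contact network
CONTACTS_PER_STEP = 3       # receivers per step (larger under disaster stress)
DT, T_MAX = 1.0, 30.0       # 1-minute steps; ~30 minutes to tsunami arrival

def contact_prob(i, j):
    """Probability that household i reaches j, biased toward near neighbors."""
    (xi, yi), (xj, yj) = households[i], households[j]
    return math.exp(-math.hypot(xi - xj, yi - yj) / REACH)

t = 0.0
while t < T_MAX:
    newly_warned = {}
    for i, t_warned in warned_at.items():
        if t - t_warned < CONTACT_DELAY:
            continue  # still deciding / preparing to pass the warning on
        for j in random.sample(range(N_HOUSEHOLDS), CONTACTS_PER_STEP):
            if j not in warned_at and random.random() < contact_prob(i, j):
                newly_warned[j] = t
    warned_at.update(newly_warned)
    t += DT

print(f"Warned within {T_MAX:.0f} min: {len(warned_at)}/{N_HOUSEHOLDS} households")
```

In the full simulator these parameters would be set per household from census data, with the broadcast channels (loudspeakers, route alert vehicles, radio/TV) seeding additional households over time.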
The integrated simulator uses a GIS framework to produce an animation of the tsunami run-up (typically occurring in multiple waves), warning transmission patterns, and individuals' protective responses. Figure 10 shows how the components interact. To evaluate the overall outcome, the program determines 1) the number and spatial distribution of households receiving a warning, 2) the temporal distribution of those warnings, 3) the cumulative effects of informal communication (oral and telephone) patterns, and 4) the number of casualties. Once an area is inundated by a tsunami, a newly developed casualty model is applied to determine fatalities. The casualty model is based on whether a person can remain standing within the tsunami flow and incorporates age and gender differences (Yeh, 2010). Figure 11 shows an example of the animated display for the scenario simulator. The proposed development of the tsunami scenario simulator not only provides quantitative evaluations of the effectiveness of the Cannon Beach tsunami shelter, but is also useful in identifying the effects of hazard mitigation measures (such as seawalls), emergency response resources (e.g. number and capacity of evacuation routes, locations of tsunami shelters), and emergency response procedures (e.g. amount of forewarning and routing of route alert vehicles).
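The standing-stability idea behind the casualty model can be illustrated with a hedged force-balance sketch: a person is counted as a casualty when hydrodynamic drag exceeds the friction that the buoyancy-reduced body weight can mobilize. The drag and friction coefficients, body density, and body dimensions below are assumed illustrative values, not the calibrated parameters of Yeh (2010).

```python
import math

RHO = 1025.0   # seawater density, kg/m^3
G = 9.81
CD = 1.2       # drag coefficient for a standing body (assumed value)
MU = 0.4       # foot-ground friction coefficient (assumed value)

def can_remain_standing(depth_m, speed_ms, height_m, mass_kg, width_m=0.4):
    """A person is assumed swept away when hydrodynamic drag on the
    submerged body exceeds the friction resistance at the feet."""
    wetted = min(depth_m, height_m)
    drag = 0.5 * RHO * CD * (wetted * width_m) * speed_ms ** 2
    # Buoyancy reduces effective weight; the body is treated as a uniform
    # column of density ~1062 kg/m^3 (illustrative).
    buoyancy = RHO * G * (mass_kg / 1062.0) * (wetted / height_m)
    friction = MU * max(mass_kg * G - buoyancy, 0.0)
    return drag <= friction

# Age and gender enter through stature and mass; smaller bodies fail sooner.
print(can_remain_standing(0.6, 1.0, 1.75, 75.0))  # adult: True (stands)
print(can_remain_standing(0.6, 1.0, 1.20, 25.0))  # child: False (swept away)
```

Because stature and mass enter the balance directly, a check of this kind naturally produces the age and gender differences that the casualty model incorporates.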
Summary and next steps
The design concepts discussed in this paper set the stage for future design and construction of the proposed Cannon Beach City Hall TEB. They provide the basis for future action, including subsurface exploration, tsunami evacuation modeling, additional development of the conceptual design to allow for structural design and tsunami wave dissipation design, and construction. The goal is to construct a Cannon Beach City Hall TEB by March 2014, the 50th anniversary of the 1964 Great Alaska earthquake, which triggered a tsunami that damaged portions of Cannon Beach. The preliminary cost estimate to construct a 900 m² two-story building with a roof terrace on deep foundations is on the order of US$4 million. The purpose of the building is to save lives in Cannon Beach, augment local emergency response efforts, and allow for long-term recovery for Cannon Beach and nearby communities. The broader goal is to serve as a demonstration project to help increase tsunami preparedness for all tsunami-prone communities. This demonstration project would provide information for other coastal communities to better understand the many technical, social, design, and cost implications. This, in turn, will allow coastal communities to develop appropriate and comprehensive tsunami evacuation and mitigation strategies.
Figure 1: a) Portion of Cannon Beach tsunami evacuation map, b) index and c) inundation confidence levels from the Cannon Beach Inundation Mapping Study (DOGAMI, 2008)
Figure 2: a) Shirahama Tsunami Evacuation Structure (photo by Professor Nobuo Shuto) and b) schematic design of a Tsunami Evacuation Building (TEB)
Figure 9: Tsunami scour, a) tsunami run-up height of 4.1 m, inundation depth of 0.95 m above the floor and scour depth of 1.2 m (photo provided by Harry Yeh) and b) enhanced scour
Figure 10: Schematic representation of the integrated tsunami scenario simulator
|
2017-07-18T12:21:37.448Z
|
2011-01-01T00:00:00.000
|
{
"year": 2011,
"sha1": "f058d7933956553f2cfc8721fd8ec66f552e1c64",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.cl/pdf/oyp/n9/art02.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f058d7933956553f2cfc8721fd8ec66f552e1c64",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geography"
]
}
|
20212342
|
pes2o/s2orc
|
v3-fos-license
|
Insights into the Brain: Neuroimaging of Brain Development and Maturation
The study of how the human brain develops has always been a challenge and an interest to the scientific community. In recent years, new evidence has suggested that many neuropsychiatric disorders may originate from aberrations early in development. This discovery necessitates the application of methodologies that make it possible to investigate human brain development in vivo and across the lifespan. In this commentary, we present evidence that the advent of structural neuroimaging has specifically and significantly contributed critical information about the developmental trajectories of postnatal human brain development that would otherwise not have been possible. We believe that this is particularly relevant to present-day research, as it has become increasingly clear that growth trajectories within the brain might serve as an endophenotype for a number of factors, ranging from IQ to psychiatric illness. We highlight seminal early works that helped to jumpstart the field of developmental neuroimaging and which inspired incredible new advances in neuroimaging methodologies that are being developed and applied in the field today.
Introduction
helpful in identifying abnormalities that appear in individuals with major psychiatric disorders.
The first MRI studies utilized low-field MRI (e.g., 0.5T–1.5T scanners), which produced low-resolution, noisy brain images compared to the images acquired on present-day scanners [8]. Nonetheless, these initial MRI scans could differentiate between gray and white matter, as well as separate (a process known as parcellation) individual neuroanatomical structures such as the gyri and sulci. These early methods made it possible to make inferences about local brain growth.
Further improvements in MR technology, most notably higher magnetic fields (e.g., modern 3T and 7T scanners), produced images of sub-millimeter resolution with a high signal-to-noise ratio, thereby enabling a more precise characterization of macrostructural features for even the smallest of anatomical regions. The introduction of additional MR methods has further made it possible to study microstructure in both gray and white matter. These modern neuroimaging methodologies, most particularly higher field MRI and Diffusion Tensor Imaging (DTI), are now powerful tools that accurately describe complex developmental trajectories of the many stages of structural brain development. Thus, as is evident from this historical perspective, the application and development of increasingly refined image analysis methods has furthered our overall understanding of how the brain grows.
Another advantage of neuroimaging is its ability to extract and analyze multiple features across the entire brain simultaneously. In addition, because MR is noninvasive, large-population longitudinal studies are possible, which leads to a significant increase in statistical power for neurodevelopmental research. Moreover, the analysis of multiple features, such as cortical volume, cortical thickness, and the microstructure of a given white matter tract, is now possible using advanced image analysis techniques (see below).
While these features can serve as indirect representations of the biological mechanisms and events that guide brain development and maturation, the neurobiological specificity of MRI remains limited. The MR signal is averaged over a few cubic millimeters (the resolution of a single "voxel") and is derived from the differential properties of the imaged tissue. The amount of "free" and bound water, as well as the macromolecules within the voxel, can influence this signal, making it possible for MR to distinguish tissue containing cellular bodies and processes (gray matter) from tissue that contains myelinated axons (white matter). Further biological differentiation, however, is difficult when scanning in vivo subjects. Since brain tissue is composed of cell bodies (neuronal and non-neuronal), axons, dendrites, synapses, myelin, vasculature, and extracellular space, it is apparent that these cellular components are too small to study in vivo even with the highest spatial resolutions available today on human MRI scanners. Ex vivo imaging studies in both human and animal models have demonstrated the possibility of attaining greater image resolutions, which makes possible the investigation of more refined biological elements (e.g., [9]). However, similar to post-mortem studies, due to the cross-sectional nature of the ex vivo image acquisitions, these studies provide limited information relevant to the underlying processes driving longitudinal maturational trajectories. It is nevertheless believed that the major sources of MR signal change during brain development include, on the one hand, axonal and synaptic growth (until puberty), followed by synaptic pruning and cell loss, and, on the other hand, myelination growth (which continues until adulthood), followed by myelin degeneration (with aging). Despite the limitations of in vivo neuroimaging, MRI- and DTI-derived parameters, such as cortical thickness or fractional anisotropy (both of which will be discussed in detail later), can serve as indirect indicators of these underlying biological components as they change over the course of the lifespan.
In the following commentary, our intention is to present evidence supporting the opinion that the evolution of MR structural neuroimaging measures has helped the scientific community to achieve a greater understanding of postnatal anatomical human brain development that would not otherwise have been possible. We decided to take a historical approach to demonstrate the influence that neuroimaging has made on the field by first introducing earlier methods and then proceeding to showcase some of the extraordinary new advances in neuroimaging methodologies that are being developed and applied in the field today. This commentary is separated into two primary sections: gray matter and white matter. As there are a significant number of studies, especially in the postnatal imaging field, which have studied human brain development with MR structural neuroimaging measures, we chose to only highlight select publications that we felt best represented singular and exemplar examples of the type of image analysis method discussed in the context of human brain development.
Total brain volume
The early application of MR methodologies primarily focused on volumetric analyses of whole brain cortical gray matter. Seminal developmental studies employing volumetric whole brain methods showed that overall total brain volume follows a curvilinear, inverted U-shaped pattern of growth from birth to adolescence, whereby early increases are followed by a gradual decrease with increasing age [10,11]. However, many of these early studies did not include scans of children younger than 4 years of age. Recent studies by Knickmeyer et al. [3] show that the initial increase in total brain volume after birth is particularly dramatic [3]. It increases roughly 101% within the first year alone. When compared to the adult brain, a 2-week-old human cortex is 36% of adult volume, 72% of adult volume by 1 year, and 83% of adult volume by two years of age [3]. Of further note, peak total brain volume is reported to occur between 12 and 15 years of age, after which it is reported to gradually decrease [4]. Whole brain volumetric studies also demonstrate clear differences between genders, with male total brain volume being roughly 10% larger than female total brain volume [12]. Taken together, these studies demonstrate, in vivo, the dynamic growth of overall brain volume in healthy subjects across the lifespan, and they confirm the critical importance of early postnatal years for long-term neurodevelopment.
Cortical and subcortical gray matter volume
Following the introduction of higher field MR magnets, which provide better contrast and higher spatial resolution, studies began to subdivide the cortex into lobes: occipital, parietal, temporal, and frontal. The ability to acquire several hundred MRI scans from healthy individuals spanning a wide age range also made it possible to evaluate regional developmental trajectories in brain maturation [10,11]. Findings from these studies report distinct regional differences in gray matter maturational patterns, confirming previous findings from cross-sectional post-mortem analyses by Huttenlocher and others [13][14][15][16][17]. Additionally, seminal imaging studies by Giedd and colleagues demonstrate that the subdivided cortex still exhibits an inverted-U pattern in both cortical and subcortical gray matter, although this pattern exhibits a posterior-to-anterior and a medial-to-lateral pattern of regional growth [10]. This pattern also mirrors the general acquisition of functional processes related to specific cortical regions, such as visual or auditory capabilities [18,19]. Subcortical areas have also been shown to reach peak volume before cortical areas, and cortical sensory regions reach peak volume earlier than cortical association and higher-cognition areas [10,11,20]. Interestingly, in studies focusing on the earliest stages of brain development, Gilmore and colleagues [21] report that primary sensory and motor regions tend to grow less than other regions of the cortex, confirming the notion that these regions actually experience a greater degree of growth prenatally and reach maturity within the first year of life [3,14,21]. Finally, although many of these findings relevant to gray matter maturation are not new, the application of neuroimaging methods to a longitudinal sample of healthy individuals allows us to observe and to quantify developmental trajectories that were heretofore only hypothesized.
Gyrification
While volumes of gray and white matter undergo dynamic change as a function of brain development, maturation, and aging, the cortical folding pattern, or gyrification, is established during the 2nd and 3rd trimesters of prenatal growth [22,23]. This pattern is largely set at birth and does not undergo a large degree of change as people age. Nonetheless, as brain growth progresses, further deepening of the sulci and enlargement of the gyri occur, yet the pattern of cortical folds does not change. This was recognized as a useful developmental feature for neuroimaging studies, even at later periods [24][25][26]. Thus, utilizing this measure, any deviations from a normal gyrification pattern would strongly suggest aberrations in fetal brain development, which could be dated back to a particular period of gestation. In fact, clinical studies of infants born with lissencephaly, a genetic disorder where the cortex does not develop the canonical cortical folds, reveal that gyrification patterns may be a useful measure indicating the presence of aberrant early brain development [27].
This close tie with early development is also why some of the earliest neuroimaging studies chose gyrification patterns as a central focus (e.g., [28][29][30]). Earlier techniques for the quantification and analysis of gyrification used manually traced contours of the cortical surface in histological sections of post-mortem brains. For example, Zilles et al. defined a gyrification index as the ratio of the lengths of the complete and outer (superficially exposed) contours of a histological brain slice [28]. More recently, the introduction of advanced neuroimaging methods has made possible the development of a variety of image-based mathematical tools for cortical surface shape analysis. For example, Yu et al. [31] used spherical wavelets to characterize cortical folding patterns, while Germanaud et al. performed a spectral analysis of the spatial frequencies of folding patterns [31,32]. In another study, Awate et al. [33] utilized descriptors computed from the differential geometry of surface patches to characterize sex differences in gyrification patterns [33]. These measures have been used to test neurodevelopmental hypotheses in neuropsychiatric diseases because it has been suggested that such features might be under tight genetic control and therefore less susceptible to environmental changes. Thus, the use of gyrification measures from neuroimaging methods might help in the quest for structural biomarkers related to genetic risk for major psychiatric disorders. In fact, this method has already been shown to be effective in identifying differences in the gyral patterns of patients with chronic schizophrenia, specifically in the frontal and temporal lobes [29,34].
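As a concrete illustration of the Zilles-style index, the sketch below computes GI as the ratio of two contour lengths for a single slice. The contour vertices are made-up toy data; a real pipeline would extract the pial contour and its outer hull from the segmented MR image.

```python
import math

def contour_length(points):
    """Total length of a closed polyline given as (x, y) vertices."""
    return sum(math.dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

def gyrification_index(complete_contour, outer_contour):
    """Zilles-style GI: ratio of the full pial contour length to the
    superficially exposed (outer hull) contour length for one slice."""
    return contour_length(complete_contour) / contour_length(outer_contour)

# Toy example with invented vertices: a folded contour vs. its smooth hull.
folded = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0), (4, -1), (0, -1)]
hull = [(0, 0), (4, 0), (4, -1), (0, -1)]
print(f"GI = {gyrification_index(folded, hull):.2f}")  # > 1 for a folded cortex
```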
Cortical thickness and surface area
In parallel with mapping and characterizing cortical gyrification patterns, new measures have been developed to further characterize cortical volumes. The utilization of new measures is important because cortical volume itself does not reflect the true complexity of cortical growth patterns. For this reason image analysis tools were developed to measure the two primary features of cortical volume: cortical thickness and surface area. These tools utilize surface-based representations of the cortical mantle and offer a significantly more biological representation of the cortex. The development and application of surface-based methods now make it possible for previously identified maturational trajectories of cortical volume to be deconstructed into these two components, providing greater biological specificity, as described below.
In biological terms, cortical thickness represents the "height" of the cortical column, whereas surface area is the area of the cortical region of interest. Previous histological studies dating back to the early 1900s (e.g., Brodmann, Von Economo) showed that cellular composition within the six neocortical layers varies greatly across the cortex. Mechanical theories of cortical development suggest that radial growth (i.e., cortical thickness) occurs earliest in development [35]. Tangential growth (i.e., surface area), on the other hand, is believed to be the result of a massive proliferation of cortical cellular processes, such as dendrites, axons, and synapses (as well as continued myelination, discussed later), and is believed to take place at later stages of postnatal development [35]. Of further note, even without the resolution to assess microstructural changes in the different neocortical layers, cortical thickness and surface area prove to be extremely valuable in understanding maturational trajectories, and they confirm early theories about the ways in which the cortex grows [13,14,35].
More specifically, the early application of surface-based methods shows that cortical thickness and surface area are not uniform across the cortex, thereby confirming the seminal cytoarchitectural studies of Brodmann and Von Economo [36,37]. These imaging studies demonstrate that rates of growth also vary, with isocortical areas (6-layered, most of the neocortex) showing more cubic patterns of development than allocortical areas (3-layered, e.g., cerebellum), which exhibit more linear patterns of development [20]. Similar to cortical volume, cortical thickness and surface area exhibit inverted-U cubic trajectories [38], with cortical thickness reaching peak values around 8 years of age, with no apparent influence of gender [38], and with surface area peaking later in childhood with sexual dimorphism (8 years in females and 9.3 years in males) [38]. Furthermore, both cortical thickness and surface area exhibit extremely dynamic and regionally heterogeneous patterns of growth within the first two years of life [39]. That is, between birth and 2 years of age, overall cortical thickness increases by an average of 36.1% per region of interest, while surface area increases 114.6% per region of interest. More importantly, by the age of 2, cortical thickness is shown to reach 97% of adult values, while surface area only reaches 67% of adult values [39]. These dynamic growth rates in the first two years are followed by much slower growth in the following years, with surface area, rather than thickness, being the principal driving factor in overall cortical volume growth. In fact, recent studies show surface area to be a pivotal feature in individual variation in brain size, in IQ prediction, and as a key mediator of gray matter deficits in psychiatric disorders, specifically schizophrenia [40][41][42].
The application of surface-based measures also led to the discovery that cortical thickness and surface area are mediated by distinct, largely non-overlapping genetic components [43,44]. The genetic independence of these measures allows for greater biological specificity in the identification of factors mediating individual differences in regional cortical thickness or surface area in healthy populations. These imaging measures can also assist in narrowing the field of investigation to genes that may mediate specific pathologies (e.g., specific reductions in cortical thickness but not surface area) within clinical populations.
Future directions -gray matter
Taken together, it is clear that the application of neuroimaging methods has led to the characterization of in vivo longitudinal gray matter developmental and maturational trajectories that would not otherwise have been possible. The dramatic evolution of imaging techniques developed and applied to the study of gray matter growth has led to an increasingly more refined picture of the structural changes that occur after birth and throughout the lifespan. As is evident from the progress made since the first MRI studies, innovative new tools will ultimately bring us closer to interrogating more neurobiological or cellular features with neuroimaging. One example is new cutting-edge technology that addresses one of the most difficult problems in the neuroimaging field: identifying and differentiating neocortical laminae. More specifically, in a recent publication by Barazny and Assaf [45], in vivo whole-brain visualization of the 6 cortical layers was accomplished utilizing inversion recovery MRI and the T1-properties of the cortical tissue [45]. Impressively, these findings have a high correspondence with histology within both rat and human cortices [45]. This methodology could provide a powerful tool to study not only the macroscopic organization of the cortex, but also the in vivo longitudinal development of the cortical layers in individual subjects. Another example of a novel method introduced to study gray matter development is a diffusion-based measure called Heterogeneity, developed by Rathi et al. [46]. Heterogeneity measures the variability of water diffusion within a specific region of interest, which indirectly reflects the organization of cortical gray matter complexity [46]. This method has already been shown to be useful in identifying retrogenesis of gray matter in a healthy aging population [46]. Although this method has not yet been applied to the study of gray matter development, it offers the ability to study longitudinal in vivo changes in cortical complexity and organization, which undergo microstructural changes from early development periods to later life, as shown by previous post-mortem studies [15][16][17].
White Matter
The study of white matter is a critical component to understanding lifespan neurodevelopment and brain maturation. Unlike gray matter, which experiences a dramatic increase within the first few years of life, white matter growth exhibits a more gradual pattern of development [3,47]. In fact, myelination occurs almost entirely postnatally and continues well into the third decade of life [48].
The MR techniques described in the previous section make the visualization and differentiation of both gray and white matter possible. This is achieved via the manipulation of the physical parameters of MR pulses, making MR sequences differentially sensitive to cellular type, density, tissue structure, lipid and water content. These high spatial resolution MR images, coupled with high contrast between gray and white matter, allow for more precise and automated segmentation of the brain. This, in turn, allows for the investigation of both gray matter changes over time, as described in the previous section, and white matter changes over time.
White matter volume
Early imaging studies showed changes in white matter volume that exhibited different maturational trajectories than those of gray matter. Moreover, these changes in white matter volume reflect specific changes in white matter tissue structure and content. Specifically, white matter contains axons and glial cells, notably the myelin-producing oligodendrocytes. The former interconnect proximal and distal cortical areas, traveling in large fascicles or bundles and providing communication within and between large functional networks, while the latter provide protection, insulation, and support for axons. Of note, although the growth of neurons is largely finished before birth, glial cells actively shape white matter structure and function throughout life: first producing and growing the myelin sheath around the axons, which improves conductivity between cells; then pruning and removing unnecessary cells and processes, which optimizes brain connections; and finally providing protection from various stressors across the lifespan.
Previous post-mortem studies have shown that central nervous system myelination follows predictable topographical and chronological sequences, with myelination occurring in proximal pathways before distal pathways, in sensory pathways before motor pathways, in projection pathways before association pathways, in central sites before poles, and in occipital poles before frontotemporal poles [49][50][51]. The general pattern of adult myelination is present by the end of the second year [52]. MRI studies demonstrate that this pattern is reflected in tissue relaxation times, which change as myelination progresses. This is due to changes in tissue water content and the increased volume of lipid-containing myelin, which leads not only to improved contrast between gray and white matter, but also to an increased volume of white matter itself. The contrast between gray matter and white matter is reversed until 6 months of age, when compared to the adult brain [53]. This is because gray matter is more developed and contains a greater cellular density, which results in a brighter appearance, while white matter, which contains water and non-myelinated axons, appears darker on T1-weighted structural scans. This contrast diminishes in the first year of life and reverses to "adult-like" contrast, with darker gray matter and brighter white matter, by the end of the second year of life, which coincides with the myelination of axonal fibers [53].
While white matter reaches "adult-like" MR signal contrast at the age of one, white matter volume does not stop expanding until adulthood. Volumetric imaging studies have demonstrated that this expansion, especially between the ages of 4 and 22, is relatively linear, increasing about 12% per year [10], with boys exhibiting a steeper increase when compared to girls, which is most likely related to testosterone levels [54][55][56]. Volumetric neuroimaging studies of early postnatal brain development also show that within the first two years of postnatal development, white matter volume increases only 11% in the first year and 19% in the second year, consistent with annual linear increases reported at later ages [3]. However, although early volumetric studies shed light on developmental changes in white matter, it is only following the introduction of Diffusion Tensor Imaging (DTI) that white matter has been placed at the center stage of brain development research.
Microstructure of white matter
Diffusion MRI was first clinically applied in 1986 by Le Bihan and colleagues. It is an advanced imaging method that utilizes the inherent diffusion properties of water molecules in biological tissues to quantify the directionality and amount of diffusion (either restricted or non-restricted) at a given location. Within brain tissue, this is particularly relevant to white matter, where the mobility of water is restricted by both the lipid-rich myelin sheath surrounding the axons and the high density of axons contained within the fiber bundles. In DTI (introduced by [57]), the diffusion information is acquired along several non-collinear directions and modeled using an ellipsoid, also known as a tensor, that quantifies and characterizes both the orientation and the amount of diffusion within each voxel, and from which a number of measurements can be calculated. The common metrics of white matter microstructure include mean diffusivity (MD), which reflects overall water diffusion in the voxel; fractional anisotropy (FA), the most inclusive diffusion measure, reflecting the organization, coherence, and integrity of white matter within the voxel; axial diffusivity (AD), measuring diffusion along the principal direction of the axon and believed to be related to axonal integrity; and radial diffusivity (RD), measuring diffusion perpendicular to the axon, possibly reflecting myelin. While some of these measures have been partially validated using animal models [58][59][60], there is still some controversy regarding their specificity [60][61][62]. Here we focus on reports of FA, as this is the most widely used DTI measure in neurodevelopmental studies.
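These scalar metrics follow directly from the three eigenvalues of the fitted tensor; the short sketch below implements the standard definitions (the eigenvalues in the example are illustrative figures, not data from any cited study).

```python
import numpy as np

def dti_metrics(l1, l2, l3):
    """Standard DTI scalars from the tensor eigenvalues (l1 >= l2 >= l3)."""
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()                                      # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    ad = l1                                              # axial diffusivity
    rd = (l2 + l3) / 2.0                                 # radial diffusivity
    return fa, md, ad, rd

# Illustrative eigenvalues for a coherent white matter voxel
# (units: 10^-3 mm^2/s):
fa, md, ad, rd = dti_metrics(1.7, 0.3, 0.3)
print(f"FA={fa:.2f}, MD={md:.2f}, AD={ad:.2f}, RD={rd:.2f}")  # FA ~ 0.80
```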
There are a number of DTI studies that focus on lifespan white matter development in humans, most of which have utilized one of the most common tools in DTI, fiber tractography. Tractography utilizes directional information from the diffusion tensor, resulting in the delineation of individual anatomical white matter tracts in individual subjects. Lebel and colleagues [48,63,64], for example, have conducted a large number of maturational tractography studies. In these studies, the authors utilize a normative cohort of over 400 individuals, both males and females, with an age range of 5 to 83 years. Using this cohort, they have plotted the maturational trajectories of 12 major white matter tracts. Findings show that the primary diffusion metric, fractional anisotropy (FA), exhibits a tract-specific maturational profile that is in line with previous post-mortem studies [48][49][50][65]. These profiles follow Poisson-shaped curves, which are characterized initially by a period of marked incline towards the peak, followed by a more gradual decline. Different white matter tracts are also shown to reach peak FA at varying ages, which is believed to reflect the age at which each tract matures. The earliest maturing tract is the fornix, which reaches peak FA at or before 20 years of age, while the last tract to mature is the cingulum, which reaches peak FA at or after 40 years of age [48]. These studies also show that there is considerable variation in the peak FA values that are attained, suggesting regional variability in the size, degree of myelination, or packing density of white matter tracts.
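The peak age of such a rise-then-decline profile can be estimated by fitting a Poisson-type curve to FA-versus-age data. The sketch below uses one plausible parameterization whose peak falls exactly at age = tau; both the functional form and the synthetic data are assumptions for illustration, not the exact model fitted by Lebel and colleagues.

```python
import numpy as np
from scipy.optimize import curve_fit

def poisson_curve(age, a, tau, c):
    """Assumed Poisson-type form: rapid rise, peak at age = tau, slow decline."""
    return a * age * np.exp(-age / tau) + c

# Synthetic FA-vs-age data standing in for one tract's maturational profile.
rng = np.random.default_rng(0)
age = np.linspace(5, 83, 60)
fa_obs = poisson_curve(age, 0.02, 30.0, 0.35) + rng.normal(0, 0.01, age.size)

params, _ = curve_fit(poisson_curve, age, fa_obs, p0=(0.01, 25.0, 0.3))
a, tau, c = params
print(f"Estimated peak-FA age: {tau:.1f} years")  # the tract's 'maturity' age
```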
Lebel and colleagues did not, however, show a significant difference in the maturational profiles between males and females, although recent reports show that healthy females tend to have generally lower FA compared to healthy males [66]. Gender differences are also evident in the overall connectivity of the brain, where male brains appear to be more optimized for intra-hemispheric connections, while female brains appear to be optimized for inter-hemispheric connections [67]. As stated previously, the relationship between myelination and testosterone levels are proposed to play a role in this difference, as shown in a study by Herting and colleagues, where testosterone levels predicted higher FA in boys when controlling for age and puberty [68].
Similar to studies of gray matter, many of the larger developmental cohorts did not include the early postnatal years. DTI studies of healthy term neonates report increases in FA values in all regions (except the splenium), with concomitant decreases in mean diffusivity (MD), which are correlated with decreases in the T2 signal but not the T1 signal [69]. In a study by Geng et al. [70], the authors analyzed the developmental trajectories of 10 white matter tracts from birth (2 weeks) to 2 years of age and show that all tracts studied exhibit increases in FA, with greater rates of FA increase in the first postnatal year than the second year [70]. The increased use of diffusion imaging in the early postnatal years also provides important information in the evaluation of premature infants. For example, Hüppi and colleagues show that brain areas with reductions in FA in pre-term infants are associated with perinatal white matter injury, thought to reflect incoherent fiber tract organization within central white matter regions [71].
These studies provide important information regarding the considerable time and differential pattern of white matter maturation throughout the lifespan. Due to the fact that almost all white matter myelination occurs after birth, it has been suggested that white matter may be more susceptible to environmental factors or other insults [72,73]. For example, Kochonov and colleagues suggest that the developmental onset of many psychiatric disorders, but more specifically schizophrenia, coincides more closely with the maturational peak of white matter [74]. In fact, it has been suggested that myelination may be responsible for the closing of sensitive periods within the brain, whereby cortical connections may become more "hard-wired" and less plastic after they are fully-myelinated [75].
The use of DTI to study white matter neurodevelopmental trajectories thus makes it possible to confirm previous postmortem studies and it also provides a powerful approach to interrogate potential differences in lifespan white matter maturation in vivo in both healthy and clinical populations.
Additional methods to investigate white matter maturation
As discussed previously, neurobiological changes associated with white matter maturation are, for the most part, associated with either axons or myelin. Axonal size, thickness within a bundle, and axonal coherence are all reported to change with development, in addition to myelin thickness and composition. Unfortunately, all of these factors can influence diffusion, and thus FA. More specifically, even though FA has been widely associated in the literature with myelin integrity, only a small percentage of FA changes are explained by myelin [76]. Further, animal models of demyelination show that total loss of myelin accounts for only a 16% decrease in FA [76]. There are also reports of FA changes in childhood and adolescence following different trajectories in boys and girls [66], which are not explained solely by myelination. For this reason, while DTI remains the most popular method to investigate white matter microstructure, other methods are used to complement FA. For example, since T2 transverse relaxation times are highly related to tissue composition and, as discussed previously, T2-weighted gray-white matter contrast dramatically changes within the first year of age, the sampling of T2 decay through T2 relaxometry might provide additional information about the time course of brain changes as a function of maturation. The T2 relaxometry-derived measure R(2) has been used previously as a proxy of myelination and appears to be related to motor speed decrease with aging [77]. T2 relaxometry can also be used to distinguish between fast and slow relaxing water pools, with fast relaxing water more strongly associated with the myelin component (more specifically, water trapped between the myelin sheath layers, known as the myelin water fraction, MWF). The latter finding has been used to demonstrate the trajectory of brain myelination in infancy and early adulthood, with myelination beginning in the cerebellum and internal capsule prior to 3 months of age, then proceeding to the splenium, body, and genu of the corpus callosum, the optic radiations, the occipital and parietal lobes, and finally culminating with the frontal and temporal lobes as the last to myelinate [78].
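Separating the fast and slow pools amounts to fitting a multi-exponential decay to a multi-echo T2 acquisition. The sketch below fits a two-pool model to synthetic data; the echo times, T2 values, and noise level are illustrative choices, and clinical MWF mapping typically uses regularized multi-component fits rather than a plain two-pool fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool_decay(te, a_my, t2_my, a_iw, t2_iw):
    """Two-pool T2 decay: a fast 'myelin water' pool plus a slower
    intra/extracellular pool (a third CSF pool is also common)."""
    return a_my * np.exp(-te / t2_my) + a_iw * np.exp(-te / t2_iw)

# Synthetic multi-echo signal; T2 values (ms) are typical literature figures.
te = np.arange(10, 330, 10.0)
signal = two_pool_decay(te, 0.15, 20.0, 0.85, 80.0)
signal += np.random.default_rng(1).normal(0, 0.002, te.size)

p, _ = curve_fit(two_pool_decay, te, signal,
                 p0=(0.2, 15.0, 0.8, 70.0),
                 bounds=([0, 5, 0, 40], [1, 40, 1, 200]))
mwf = p[0] / (p[0] + p[2])    # myelin water fraction
print(f"MWF = {mwf:.2f}")     # ~0.15 in healthy adult white matter
```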
Besides T2 relaxation, magnetization transfer is another source of information that can be used to increase the microstructural specificity of imaging. With the use of an additional inversion recovery (IR) pulse that suppresses the signal of water bound to macromolecules, one can indirectly measure the concentration of these macromolecules in white matter tissue. The Magnetization Transfer Ratio (MTR) is positively correlated with myelin content [66] and decreases with aging [79]. During adolescence, however, MTR decreases despite increases in white matter volume in boys [55], suggesting that its trajectory might be different from that of R(2) or FA.
Another measure related to myelin macromolecules, and recently used in the study of brain maturational trajectories across the lifespan (from 7 to 85 years of age [80]), is a measure of longitudinal relaxation, or quantitative T1 (R(1)). R(1) follows an inverted-U trajectory, with rates of increase during maturation mirroring the rate of decline with aging. R(1) seems to peak later in life than FA (which shows an asymmetric trajectory, with rapid growth and slower decline), suggesting that it might reflect different aspects of white matter maturation. While differences in the biological specificity of R(1), R(2), and MTR have not been explored, future maturational models of white matter will most likely combine multi-modal imaging information.
Future directions -white matter
As the field of imaging progresses, in addition to novel acquisition paradigms, new image processing methodologies are being developed to improve signal modeling. This is especially noticeable in the field of diffusion MRI. New acquisition paradigms, such as Diffusion Spectrum Imaging (DSI) [81] or qBall [82], and new reconstruction methods are all leading to significant improvements in white matter structure delineation, resulting in increased biological specificity. One of the more recent improvements is the creation of multi-tensor tractography [83], which is particularly useful in areas of crossing fibers, allowing for a more precise anatomical representation of cortical structural connectivity. Another way in which diffusion imaging is pushing the envelope is through the use of multi-compartmental models of the diffusion signal. Methods such as Free Water Imaging [84] and the neurite orientation dispersion and density imaging (NODDI) model [85] both treat the diffusion signal as the sum of multiple compartments. For example, in Free Water Imaging, the diffusion signal is modeled as the sum of an isotropic compartment, called free water, which represents the unbound, freely diffusing water in the extracellular space. The residual diffusion signal in this model, called FA-t, is then modeled in a manner similar to a traditional tensor model, but with the improvement that it now draws signal solely from the intracellularly bound water both within and surrounding the white matter fiber bundles [84]. NODDI utilizes a similar paradigm, whereby the "cellular" diffusion signal is modeled as two non-linear components: the neurite orientation dispersion index (ODI) and the neurite density index (NDI). Each of these components is purported to reflect specific microstructural features of the white matter bundles, where ODI reflects axonal organization and NDI reflects cellular integrity [85]. In a recent publication that utilized NODDI in 66 healthy subjects, ages 7 to 63 years, Chang and colleagues [86] report that NDI exhibits striking logarithmic increases with age, whereas ODI increases following an exponential pattern. The authors were also able to show that the use of these advanced analytical methods allows for a more precise prediction of chronological age than previous DTI metrics [86]. Finally, while ODI reflects intravoxel organization (or architecture) of axons, other methods have also been developed to model the more macroscopic behavior, or architecture, of white matter fiber bundles. As one example, the macrostructural white matter geometry measures introduced by [87,88] are designed to track architectonic changes of white matter during brain development and are useful in detecting developmental abnormalities in diseases such as schizophrenia and autism [88,89].
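The two-compartment idea can be written down in a few lines: the measured attenuation is a volume-fraction-weighted mixture of an isotropic free-water signal and an anisotropic tissue-tensor signal. The sketch below illustrates the forward model only; the tensor values and b-value are illustrative, and the actual Free Water method fits the free-water fraction and the tissue tensor per voxel with spatial regularization.

```python
import numpy as np

D_FREE = 3.0e-3  # mm^2/s, diffusivity of free water at body temperature

def free_water_signal(b, g, tensor, f_free):
    """Two-compartment Free Water forward model (sketch).
    b: b-value (s/mm^2); g: unit gradient direction; tensor: 3x3 tissue tensor."""
    s_tissue = np.exp(-b * g @ tensor @ g)   # anisotropic tissue compartment
    s_free = np.exp(-b * D_FREE)             # isotropic free-water compartment
    return f_free * s_free + (1.0 - f_free) * s_tissue

# Illustrative tissue tensor for a coherent fiber along x (units: mm^2/s).
D_tissue = np.diag([1.5e-3, 0.3e-3, 0.3e-3])
g = np.array([1.0, 0.0, 0.0])
for f in (0.0, 0.3):
    print(f"f_free={f:.1f}: S/S0 = {free_water_signal(1000.0, g, D_tissue, f):.3f}")
```

Removing the free-water contribution in this way is what lets the residual tensor (and hence FA-t) reflect the tissue itself rather than extracellular contamination.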
Conclusions
In summary, the evolution and application of neuroimaging methodologies have made significant contributions to our understanding of lifespan trajectories of human brain development. This remarkable increase in knowledge cannot be overstated. As evidenced by the studies highlighted above, these technological advances in imaging have led to discoveries that now allow us to quantify, to characterize, and to better understand the developmental changes in brain structure in both healthy and clinical populations. It is thus clear that the neuroimaging methodologies available today provide the best way to understand in vivo human brain maturation.
Longitudinal studies are also more informative than cross-sectional measurements in showing the trajectories of structural brain development. In fact, many studies suggest that the developmental trajectory of the human brain may be considered an endophenotype [90,91]. This is likely because many neurodevelopmental disorders originate from aberrations in early brain development [5,6]. Thus, through the use of neuroimaging, we can begin to better characterize maturational trajectories that hold promise both for identifying underlying pathophysiological mechanisms related to psychiatric illness and for serving as potential biomarkers of risk that may be used for early identification and intervention.
As the field begins to move forward, technological advances will make possible still more refined interrogations into the underlying biological components reflected more indirectly by imaging measurements. Most importantly, the utilization of multi-modality imaging techniques will provide a more complete picture of both early and later cortical developmental patterns and also answer critical questions about the typical and atypical structural and functional networks that emerge throughout the lifespan.
|
2018-04-03T05:33:25.240Z
|
2016-03-10T00:00:00.000
|
{
"year": 2016,
"sha1": "69297ff84c06886e313a459dd75e78934d855bf8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.17756/jnpn.2016-003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69297ff84c06886e313a459dd75e78934d855bf8",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
}
|
255832233
|
pes2o/s2orc
|
v3-fos-license
|
The use of social media for professional purposes among dentists in Saudi Arabia
To investigate dentists' opinions towards social media (SM) use in daily practice and the expected limitations of its use in Saudi Arabia. An electronic survey was carried out throughout May–June 2020 among a sample of dentists in Saudi Arabia. The survey covered three parts: the first part covered professional and demographic information, the second part covered the use of mobile phones and SM in dental practice, and the third part assessed dentists' opinions on SM use. Descriptive statistics included frequency distributions and percentages, with independent t tests/ANOVA tests for the relationship between the mean of dentists' opinions towards SM and demographic variables. A p value of 0.05 or less was considered statistically significant. The majority of respondents (80%) believe that SM plays an active role in patients' decisions regarding the selection of a healthcare provider. The mean dentists' opinion scores on the use of SM were significantly lower among participants working more than 50 h per week compared with other participants (p = 0.014). Directed campaigns can help dentists optimize the use of SM for both professional and personal purposes.
Introduction
Social media (SM) use has increased alongside technological advancement, changing how individuals communicate and share information. People nowadays are more dependent on SM to explore available services, including dental services, by viewing displayed information, customer feedback, and reviews. Visible communication has thus become an essential part of any dental clinic's activity [1,2]. Dental providers' engagement with SM is growing every day; it has become a tool that helps them connect, learn, engage professionally, and assist in dental care [3,5]. Proper communication with patients is one of the primary factors of success for any healthcare provider [6]. SM platforms have also proven to be multi-faceted, offering a wide variety of tools, such as interactive blogs and audio-visual dissemination arenas catering to a broad audience of potential future patients [3]. The micro-blogging site Twitter has also gained popularity among the medical fraternity as a means to disseminate medical knowledge [5]. Of the 168 Twitter accounts reported by Sugawara et al. [4], 73 were related to dentistry and oral surgery. SM has proven to be an effective and easy method for educating laypeople and the general public [7].
There have been mixed reviews on the benefits of SM use by healthcare providers. The most commonly reported concerns were legal and security issues [8][9][10][11]. Reviews have labeled much of the medical information available online as "low quality"; such information, if taken at face value, could lead to adverse, possibly lethal consequences such as drug overdose or unnecessary cosmetic surgical procedures [5]. In addition, SM tends to spread misinformation much more quickly than reliable and verifiable facts, which might cause confusion online. SM use could also lead to breaches of patient-provider confidentiality, damage to professional image, and licensing issues for healthcare professionals [10]. Furthermore, a piece of information can undergo considerable distortion as it is forwarded from one SM platform to another among laypeople [7].
Nevertheless, SM could improve healthcare provision, especially with respect to marketing, education, communication, and follow-up of patients' conditions [10][11][12]. SM is increasingly being used as a marketing scheme for organizational visibility. This increases the chances of channeling patients towards organizations that post ads on various SM platforms advertising better customer support and efficient service provision [10]. Increased SM use for health communication has been observed in Europe, with around 22% of Norwegian hospitals using the SM platform Facebook for health communication [13]. Better visibility and interactions with potential patients, imprinting a positive image on their minds, lead to better business sustainability at virtually reduced costs [12].
Several studies have explored the perceptions of dental and other healthcare providers towards the use of SM [14]. More than half of the dental practitioners surveyed in one study believed that SM platforms are more effective in marketing than conventional methods [11]. Parmar et al. [6] revealed a positive attitude toward the use of SM to attract new patients. In a Saudi Arabian study, one-third of the participants mentioned that they use SM to communicate with their patients and market their practice [14]. In another study that targeted physicians in Saudi Arabia, most of the participants stated that SM had a good impact on physicians' knowledge and abilities; however, there were ethical concerns regarding its use [3]. Ranschaert et al. [15] highlighted the need to create clear guidelines to improve physicians' skills in using SM safely and professionally. At the same time, another study noted that SM's role in dental-care provision is still a vague area for both patients and dentists; both share concerns about its uses and benefits, while also noting the excellent opportunity for dental practices to utilize and benefit from SM [6]. Given the limited number of studies covering the use of SM by dentists in Saudi Arabia, this study aims to investigate dentists' opinions towards the use of SM in daily practice and the expected limitations of its use.
Materials and methods
The present cross-sectional study was conducted between May and June of 2020 on a sample of dentists in Saudi Arabia. Study subjects were invited to participate in this study voluntarily. A convenience sample was selected from the SaudiDent.com database, which contains approximately 5000 dentists in Saudi Arabia. The sample size was calculated based on a 95% confidence level, a 5% margin of error, and a 50% response distribution. The minimum required sample size was determined to be 357 (http://www.raosoft.com/samplesize.html). To accommodate for non-responders, the sample size was increased by 10% (i.e., n = 393).
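The reported figures can be reproduced with the standard finite-population sample size formula; the sketch below assumes the Raosoft calculator follows this same formula, an assumption supported by the fact that it reproduces the numbers given in the paper (357, and 393 after the 10% buffer).

```python
import math

def sample_size(population, margin=0.05, confidence_z=1.959964, p=0.5):
    """Minimum sample size with finite-population correction, using the same
    inputs the authors report: 95% confidence (z ~ 1.96), 5% margin of error,
    and a 50% response distribution."""
    n0 = confidence_z ** 2 * p * (1 - p) / margin ** 2   # infinite population
    n = n0 / (1 + (n0 - 1) / population)                 # finite-population fix
    return math.ceil(n)

n = sample_size(5000)              # database of ~5000 dentists
print(n)                           # 357
print(math.ceil(n * 1.10))         # 393 after the 10% non-response buffer
```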
The questionnaire was adapted from previously validated questionnaires used in similar studies on the use of SM targeting medical professionals [3,16]. The survey questionnaire contained 16 questions. The first part of the questionnaire included professional and demographic information such as age, gender, qualification, work experience in years, region, work setting, and working hours per week. The second part covered the use of mobile phones and SM in dental practice, including daily general-purpose use of SM (in hours), the preferred communication tool with patients in dental practice, the frequency of using any of the SM platforms, and the type of SM platform provided by employers. The third part explored dentists' opinions on SM use, including: discussing internet or SM usage with their patients (Yes, No, Unsure); the role of SM in improving their professional knowledge and skills (Yes, No, Unsure); the dentist's responsibility to disprove inaccurate health information posted online (Yes, No, Unsure); the appropriateness of searching for patients' personal information on SM as part of regular clinical practice (Disagree, Neutral, Agree); patients' confidence in professional advice obtained by the treating dentist from mobile phone applications or websites (Disagree, Neutral, Agree); preference for conducting a consultation with a patient via Skype or other online telecommunications (Yes, No, Unsure); and beliefs on whether SM affects patients' selection of a healthcare provider (Disagree, Neutral, Agree).
The survey was pretested on a pilot group of 20 general dentists [reliability coefficient (alpha) of 0.75] before distribution to ensure question clarity and the overall acceptability of the survey. Minimal corrections were made based on the feedback obtained from the pilot group. Since this was a questionnaire-based study, an exemption was granted by the Ethical Committee of the College of Dentistry, Imam Abdulrahman Bin Faisal University. Informed consent was obtained from all subjects. In addition, this study was carried out in accordance with relevant guidelines and regulations.
The survey was created using SoGoSurvey® software [17]. It was then distributed to selected subjects via WhatsApp, Facebook, Twitter, and Instagram. A reminder message was sent on a weekly basis as a means of follow-up for non-respondent practitioners.
The data were entered in MS Excel (2010) and transferred to IBM SPSS Statistics for Windows, version 22 (IBM Corp., Armonk, NY, USA) for statistical analysis. Descriptive statistics included frequency distributions and percentages. The mean dentists' opinion score (the main outcome) was calculated and used for bivariate analyses. The significance of the relationship between the mean dentists' opinion score on the use of SM and demographic variables was tested using an independent t test for dichotomized independent variables and an ANOVA test for the other independent variables. A p value of 0.05 or less was considered statistically significant.
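A minimal sketch of this bivariate analysis is shown below with hypothetical scores (the study data are not public); the choice of test follows the paper, with an independent t test for two-level factors and a one-way ANOVA for factors with more levels, and the group sizes are invented to roughly echo the demographics reported in the Results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical opinion scores for the dichotomized working-hours factor.
scores_le_50h = rng.normal(3.8, 0.6, 343)   # <= 50 working hours per week
scores_gt_50h = rng.normal(3.4, 0.6, 21)    # > 50 working hours per week
t, p = stats.ttest_ind(scores_le_50h, scores_gt_50h)
print(f"t test: t={t:.2f}, p={p:.3f}")      # significant if p <= 0.05

# One-way ANOVA across a hypothetical multi-level factor (e.g., age groups).
groups = [rng.normal(3.7, 0.6, n) for n in (120, 109, 80, 55)]
f, p = stats.f_oneway(*groups)
print(f"ANOVA: F={f:.2f}, p={p:.3f}")
```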
Results
Between May and June 2020, 1000 surveys were sent out to dental practitioners, and 392 responses were returned, indicating a response rate of 39.2%. Of the 392 participants, 364 responded with a completed survey (a survey completion rate of 92.8%). The demographic information of the 364 study participants is shown in Table 1. Most of the participants (58.5%) were males, more than half of the participants (62.9%) were less than 35 years old, and about one-third of them (38.2%) belonged to the central region of Saudi Arabia. In addition, 40.7% of the participants were general dental practitioners, and 26.4% were consultants/specialists. Similarly, half of the surveyed dentists had less than five years of experience. Most of the participants work in a governmental job, and a majority of them (94.2%) work less than 50 h per week. About half of the participants (48.4%) spent less than 3 h per day in general-purpose use of SM, while 42.6% of the participants prefer phones as a communication tool with patients in their dental practice.
Dentists' opinion of the use of SM in their practice is presented in Fig. 1. More than half of dentists (54%) encourage their patients to search the internet or SM to access online information about their condition. When asked if SM can help improve dentists' knowledge and skills, 87% of the respondents confirmed it. Regarding inaccurate health information in SM, most sampled dentists (74%) believed that they have professional obligations to correct any incorrect information. While only 41% of the surveyed dentists were willing to conduct consultations online, 36% preferred conventional communication with patients.
Only 10% of sampled dentists considered using SM as a tool to collect personal information about their patients to be appropriate. About 26% of the dentists agreed that their patients would doubt their clinical advice if they used a medically related mobile phone application or website. The majority of sampled dentists (80%) believe that SM plays an active role in patients' decisions to select healthcare providers. Table 2 shows the relationship between the respondents' age and the daily use of SM. Younger participants mostly used the Twitter, WhatsApp, Instagram, YouTube, and Snapchat platforms compared to older ones. A statistically significant difference was observed in the proportions of daily Twitter, Facebook, Instagram, and Snapchat use between younger and older participants (p < 0.05). Table 3 shows a comparison of mean dentists' opinion scores on the use of SM among different demographic factors. The mean dentists' opinion scores on the use of SM were significantly lower among participants working more than 50 h per week compared with other participants (p = 0.014).
Discussion
This study's findings agree with several reports that younger dentists use SM to engage with their patients more than older dentists do [18,19]. When considering a dentist's age as a determinant of SM use, younger dentists (under 35 years old) used Twitter, Instagram, and Snapchat significantly more than older dentists. It is worth mentioning that social media is relatively recent: platforms such as Instagram and Snapchat launched only about 10 years ago, while Twitter launched about 14 years ago. The larger effect of social media on younger generations, who grew up surrounded by it, is therefore understandable.
Dentists must understand that SM's professional use should be dictated by the type of SM frequently used by their patients. Furthermore, the daily use of SM was alarming as most respondents reported using SM for more than 30 min a day, which might introduce signs of SM "over-dependence" [20]. This reported overuse of SM needs to be addressed by dentists and professional organizations in the form of educational programs and counseling services to better guide dental professionals in the proper use of SM.
The effect of SM on dental care delivery is indisputable. Patients use SM to collect information on their health status, health concerns, and health care providers [21][22][23]. Our study's findings confirmed the effect of SM, as the majority of sampled dentists believed that a high proportion of patients use SM to choose their treating dentists. Ajwa et al. reported that 89.4% of dental practitioners believed that SM is the most effective marketing strategy to recruit patients into dental practices in Saudi Arabia. They also reported that 82.3% of their sampled participants mentioned that posting an ad on SM created an increased influx of patients to the dental clinics [14]. This is in line with the mindset of the current generation, who like to explore their options on SM before committing, whether to a doctor's appointment or to travel spending. It gives them a sense of security because their decisions are backed by the information obtained on SM and the internet.
Because of the importance of SM's role in shaping dental practice, it is not surprising that more than half of the sampled dentists in this study reported discussing SM usage with their patients. Concerning the accuracy of health information on SM, Sumayyia et al. noted that, among other issues, addressing information accuracy may reduce the risk of patients being misled [24]. Dentists can contribute by encouraging patients and the general public to access reputable websites with scientific rigor and by showing their patients how to differentiate between websites of good and poor scientific quality.
In our study, most sampled dentists (74%) believed that dentists should take a leading role in rectifying inaccurate online health information. Relatedly, Koumpouros et al. suggested that SM should be useful in marketing, gaining patients' trust, and covering their needs [25]. As also put forth by Mangold et al., the relationship between the originators of a healthcare message and the laypeople who read that message is changing and evolving constantly. Hence, a certain degree of control is required for healthcare professionals using SM platforms to manage the validity and reliability of content reaching laypersons through the internet, as misinformation is rampant and could have fatal consequences [26]. Bahkali et al. reported on the importance of the accuracy of health information available online, which can be leveraged to improve the health care system [27]. In the present study, 74% of participants believed that dentists need to refute and clarify inappropriate or inaccurate online health information, which reflects an understanding of patients' needs and agrees with the published literature.
There is no doubt that SM has made a significant change in the health profession in recent years. Part of this change is related to knowledge gain and improved clinical judgment. In our study, 86% of sampled dentists believed that SM could improve their knowledge and skills and promote their careers. These findings agree with similar literature [28,29]. The ease of contact between healthcare providers and the public is one of SM's strengths. Parmar et al. reported that about 44% of sampled patients liked the idea of being in contact with their dentists via SM [6]. In addition, Henry et al. reported that 52% of dentists contact their patients on Facebook [30]. In our study, less than half of the sample were willing to provide dental consultation through SM. This unwillingness could be related to inadequate information on which to base such consultations or to fear of their legal consequences.
One of SM's critical issues relates to ethics and privacy [30,31]. In this study, less than half of the respondents believed it is inappropriate for dentists to check their patients' SM accounts. Lack of engagement on SM because of privacy issues has been previously reported [32]. Only 41% of the study sample felt comfortable conducting a consultation with patients. Clear-cut boundaries between medical professionalism and SM indiscretion need to be defined beforehand, because new medical and dental students entering their programs already have a sense of technology applications being used to share information, leaving what is now being called a "digital footprint" for others to see [33]. Although SM usage should be encouraged, some boundaries and guidelines are needed: misuse represents a significant risk to the individuals using SM or in charge of monitoring its use in clinical practice. Clear policies, limitations, and rules of service should be available to dental personnel to reduce the risks [34][35][36][37].
Although close to percentages reported in the published literature, this study's results still reflect some conflict: participants are hesitant to engage themselves. Up to 70% of the participants doubted that patients would trust advice or information provided online or by phone. Together with dentists, patients should develop critical appraisal skills to apply to the information posted on SM and be able to judge what is appropriate and trustworthy [38]. Targeted educational programs should be established to help dentists utilize SM and to conduct virtual clinics or learning sessions, designed for practicing dentists or undergraduate students [3].
The use of SM depends on several demographic factors, among which age is an essential one. To our surprise, there was no difference between older and younger participants in favoring SM's use. One study of US dental educators reported that older dental educators favored SM use [32]. On the other hand, two studies reported that younger dentists favored SM more than older dentists [30,39]. A possible explanation for older participants in our study favoring SM's use is that most of them are consultants and specialists who were trained in the US and Europe and have private practices; most of them have SM accounts focused on their clinical practice. In our study, gender and qualifications were not contributing factors in using SM, especially when SM is used for business purposes. Snyman and Visser made a similar observation among their sample of South African dentists. However, when SM is used for personal purposes, female dentists tend to favor it more than male dentists [40].
In this study, working experience had no effect on the use of SM. This observation differs from some other reported studies: for example, one study conducted in Ecuador reported that dentists with more than eight years of experience were less likely to use SM [39]. In contrast, a study in South Africa reported no association between years of experience and SM use among sampled dentists [40], consistent with our finding. The difference in SM use between experienced and less-experienced dentists could be explained in light of differences in behaviors and attitudes toward SM among nations and cultures.
Interestingly, the type of job setting did not affect SM's use for the participants in this study. However, the number of working hours per week showed an association with the dentists' opinion towards the use of SM. Those who worked less than 20 h per week scored higher than those who worked more than 50 h per week. This could be explained by the fact that consultants might be working fewer hours than general dentists, thus coinciding with the scores for both age and work experience in years where older and more experienced dentists scored higher for SM use.
This study sheds light on SM's importance in dental practice as more dentists and patients rely on this form of technology. Dental practice can be enhanced by the use of SM in the provision of dental services, advertising, counseling, and oral health education. SM platforms could also be used for professional development, where dental organizations and dental educators can disseminate information and updates via different SM platforms. Nevertheless, future studies to examine the impact of individual SM platforms on dental practice and dental education are needed.
Limitations
This study has some limitations. One of them relates to the method used to collect relevant information, an electronic survey: a good proportion of the targeted population either does not respond to electronic surveys or does not use such a communication method. A second possible limitation relates to the sample being drawn mainly from the central and eastern regions (72%), whereas the western region is second to the central region in both the number of Saudi residents and the number of practicing dentists. Thirdly, it is quite challenging to discern between active users of SM and those who know of SM only by word of mouth. A fourth possible limitation is the lack of a probability sampling technique, which may affect the generalizability of the results to the whole dentist population in Saudi Arabia. Finally, there was lower survey participation from dentists in the private sector, which could be explained by the fact that Saudi dentists generally favor working in the governmental sector.
Conclusion
The majority of sampled dentists believe that SM plays an active role in patients' decisions regarding the choice of healthcare provider. SM is essential for the success of patient engagement and practice marketing. Taking this belief into consideration, directed campaigns can help dentists optimize the use of SM to their benefit without compromising integrity; such campaigns can help those who are hesitant to engage or who still have concerns. Despite this study's limitations, it can help shed light on areas that require further investigation and exploration, such as limitations of use, over-dependence, and confidentiality. Abbreviation: SM, social media.
Implementation Of Tiny Machine Learning Models On Arduino 33 BLE For Gesture And Speech Recognition
In this article, gesture recognition and speech recognition applications are implemented on embedded systems with Tiny Machine Learning (TinyML). The target board features a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. Gesture recognition provides an innovative approach to nonverbal communication and has wide applications in human-computer interaction and sign language. In the hand gesture recognition implementation, a TinyML model is trained and deployed from the Edge Impulse framework, and based on hand movements, the Arduino Nano 33 BLE device with its on-board IMU can find the direction of movement of the hand. Speech is a mode of communication; speech recognition is a way by which the statements or commands of human speech are understood by a computer, which reacts accordingly, and its main aim is to achieve communication between man and machine. In the speech recognition implementation, a TinyML model is trained and deployed from the Edge Impulse framework, and based on the keywords pronounced by a human, the Arduino Nano 33 BLE device with its built-in microphone can make an RGB LED glow red, green, or blue according to the keyword pronounced. The results of each application are listed and analyzed in the results section.
I. INTRODUCTION
We experience a daily reality such that AI models assume a significant part in our day to day existence. Everyday undertakings like snapping a photo, checking the climate and so forth rely upon AI models yet preparing the model and running connection points are costly. Little ML calculations generally work the same way as a normal AI calculation. The models are prepared on the cloud or a client's PC. Subsequent to preparing, undertakings become possibly the most important factor in cycles of model pressure. It's a field of learning in machine learning systems and embedded programs that explores such models you can use on nearly nothing, less solid contraptions as little controls. Enables low inactivity, low power and low exchange speed model for contraptions. Regularly, Tiny ML grants IOT based embedded edge contraptions to go to cut down power structures with combination of refined power the board modules [1]- [5]. In the gathering region, Tiny ML can stop edge time in view of stuff dissatisfaction by engaging consistent decision. ML running at embedded edge devices as shown in fig.1 results in low processing latency, better privacy and minimal connectivity dependency [6]- [10]. Figure.1 (a) Physical world and digital AI, (b) Tiny ML assisted digital AI Fig.2 represents technology required at cloud ML, edge ML and TinyML .The technology is in terms of algorithms, hardware and so on. TinyML technology takes data from sensors and give it to the micro or nano level convolution neural network where microcontrollers are used to run the neural networks. Such microcontrollers may have hardware accelerators. In case process is so complex then such process can be taken into deep neural network with the help of GPU, multi-core CPUs and TPU. Fig.3 represents layered approach implementing ML [11]- [16]. To support configuration, screen and adjust the small ML application, we should utilize "ML capabilities". ML occupations accomplish basically everything consequently. According to the viewpoint of all stages, they oversee and handle information, train AI models, convert them, test them, look at them, and use them. Countless devices might be essential for the environment. Likewise, minuscule ML as a help that engages creation locales to effectively oversee and coordinate different little ML gadgets. A profoundly scattered environment might develop in light of the fact that little ML implanted gadgets, from ML integrators and ML processing frameworks, are intended to accomplish very low power productivity. At the point when the equipment changes, the split influences the progression of the coordinated ML model [17]- [20]. Tiny ML frameworks can take direct information input from different sensors. It can utilize a miniature and Nano level convolution brain organization. The fig.4 presents key parts of the Tiny ML where the total mix of equipment programming co-plan is the main variable. Such frameworks ought to go past the AI bend gave top notch information and incorporated programming plan. Ordinarily, the Tiny ML framework is enlightened by double documents created from a prepared model on a huge facilitating machine [21]- [24].
There are certain programming tools and software libraries available for TinyML implementation. TensorFlow Lite (TFL): an open-source deep learning framework for on-device inference. This framework enables edge AI on the device while addressing five key constraints (latency, privacy, connectivity, size, and power consumption). It supports Android, iOS, embedded Linux, and a variety of microcontrollers, and it also supports several languages (e.g., C++, Python, Java, Swift, C) to bring machine learning to edge devices.
Tensor: a free embedded machine learning environment that enables model creation and instant deployment on IoT edge devices.
Tensor is a small module that requires just 2 KB of disk space. Edge Impulse: a cloud service for developing ML models for TinyML-enabled edge devices. It supports AutoML processing end to end and supports various boards, as well as smartphones, for building learning models on such devices. Training is done on the cloud platform, and the trained model can be deployed to the edge device according to the supported data-transfer method. Inference can be run on a local machine with the help of the built-in C++, Node.js, Python, and Go SDKs. NanoEdge AI Studio: this software was formerly known as Cartesiam.ai; it now allows choosing the best library and testing library functionality with an emulator before final deployment to the edge. PyTorch Mobile: part of the PyTorch ecosystem, which aims to support all stages from training to the deployment of ML models on smartphones (e.g., Android, iOS). A couple of APIs are available for ML preprocessing in mobile applications. Embedded Learning Library (ELL): Microsoft developed ELL to support the TinyML ecosystem for embedded learning. It offers support for the Raspberry Pi, Arduino, and micro:bit platforms. Models deployed on such devices run without network access, so no cloud connection is required. It currently supports image and audio classification. STM32Cube.AI: a code generator and development tool that allows AI and AI-related tasks to be simplified on STM32 ARM Cortex-M-based boards (STM32Cube.AI, 2021). Neural networks can be deployed on an STM32 board directly by using STM32Cube.AI to convert them into the most suitable MCU code.
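To make the TensorFlow Lite route concrete, the following is a minimal sketch of the canonical TFLite Micro inference pattern. It assumes a model flatbuffer g_model compiled into the firmware; exact header paths and constructor signatures vary between TFLM releases, so treat this as an illustration rather than a drop-in implementation.

```cpp
// Minimal TensorFlow Lite Micro inference skeleton (illustrative).
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];  // model flatbuffer, generated offline

constexpr int kArenaSize = 10 * 1024;  // scratch memory for all tensors
static uint8_t tensor_arena[kArenaSize];

float run_inference(const float* features, int n) {
  const tflite::Model* model = tflite::GetModel(g_model);
  static tflite::AllOpsResolver resolver;  // registers every built-in kernel
  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  static bool allocated = false;
  if (!allocated) {                        // lay out tensors in the arena once
    interpreter.AllocateTensors();
    allocated = true;
  }

  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < n; ++i) input->data.f[i] = features[i];

  interpreter.Invoke();                    // run the network

  return interpreter.output(0)->data.f[0]; // first class score
}
```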
II. DESIGN APPROACH AND METHODOLOGY
The hardware board used in this development is the Arduino Nano 33 BLE, shown in Fig. 5, an advanced computing platform for running edge AI models. It contains a 32-bit ARM Cortex-M4F microcontroller running at 64 MHz with 1 MB of flash memory and 256 KB of RAM. This small controller provides sufficient capability to run TinyML models.
The Arduino Nano 33 BLE Sense contains color, brightness, proximity, gesture, motion, vibration, and other sensors. This sensor suite is more than sufficient for most applications. Figure 5. Arduino Nano 33 BLE. The machine learning framework shown in Fig. 6 represents the Edge Impulse model.
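Before training, it helps to see the raw signal the board produces. Below is a minimal Arduino sketch, assuming the original LSM9DS1-equipped Nano 33 BLE and the official Arduino_LSM9DS1 library, that streams accelerometer readings over the serial port (the baud rate is arbitrary):

```cpp
#include <Arduino_LSM9DS1.h>  // driver for the Nano 33 BLE's on-board IMU

void setup() {
  Serial.begin(115200);
  while (!Serial);            // wait for the serial monitor
  if (!IMU.begin()) {         // initialize the LSM9DS1
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
}

void loop() {
  float ax, ay, az;
  if (IMU.accelerationAvailable()) {
    IMU.readAcceleration(ax, ay, az);  // values in g
    Serial.print(ax); Serial.print('\t');
    Serial.print(ay); Serial.print('\t');
    Serial.println(az);
  }
}
```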
There are a few frameworks that address the needs of TinyML. Among them, Edge Impulse is very popular and has great community support. Using Edge Impulse, models can be deployed on microcontrollers. Steps to work with Edge Impulse: To start, first create an account by going to the Edge Impulse Studio signup page (https://studio.edgeimpulse.com/signup). After entering your information and verifying your email, you will be greeted by a getting-started page. This walks you through the process of connecting a device, collecting data, and finally deploying a model. I named my first project PhoneTest-1, but it can be anything you like. Connecting a mobile phone: Edge Impulse supports many devices, including the ESP32, many ST ARM Cortex-M3 boards, and several Wi-Fi-enabled Arduino units. However, many of the same tasks can be accomplished simply by using a mobile phone via a web browser, since it contains a microphone and an accelerometer.
To connect your phone, simply click the "Use your mobile phone" button, which opens a QR code. After scanning it, you will be taken to the Edge Impulse site and automatically connected to the API through an API key. Make sure to keep your phone on and the browser window open throughout the rest of the guide. Collecting data: Now it is time to dive in and actually create a model. First, there must be data to train it on. Make sure you have your phone handy, because you will use its sensors to capture the data. To begin, go to the data acquisition tab and make sure your phone is selected. Choose the accelerometer sensor and the sampling frequency, then click "Start sampling." After you have finished moving your phone, you can see the collected data in a chart. Training a model: Now that you have some data recorded, it is time to train a model on it. Navigate to the "Create Impulse" page and select the recommended Spectral Analysis processing block and a Keras Neural Network learning block, then save the impulse. Next, set up your data's scaling, filter, and FFT settings; these control how your data is preprocessed before being fed into the NN. After that, view and generate the features. On the NN settings page, I chose to raise the default confidence threshold from 80% to 91%. After training the model, I was able to see a chart of what the model produced. Then I went to the "Classification" page, gathered a bit more data from my phone, and saw what the model was able to recognize. Deployment: To deploy the model, I exported it as a WebAssembly file and then unzipped it. I then created another JS file called run-impulse.js and put it in the same folder as the model (the file is attached to this project page). To run it, I entered the node command into the command prompt followed by run-impulse.js and then pasted the "Raw features" array in quotes as the second argument to the node command.
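When the target is the Arduino board rather than WebAssembly, Edge Impulse instead exports an Arduino library whose bundled SDK exposes run_classifier(). The following is a minimal sketch of that path; the header name depends on the project (gesture_demo here is a hypothetical name), and filling the features buffer from the IMU is omitted:

```cpp
#include <gesture_demo_inferencing.h>  // hypothetical Edge Impulse export

// One full window of raw input, sized by the exported model's constants.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

void classify_window() {
  // Wrap the buffer so the DSP + NN pipeline can stream it.
  signal_t signal;
  numpy::signal_from_buffer(features,
                            EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Print the confidence for every trained label.
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; ++i) {
    ei_printf("%s: %.2f\n", result.classification[i].label,
              result.classification[i].value);
  }
}
```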
Two applications are implemented in this article: (i) gesture recognition to find the movement of a human hand, and (ii) speech recognition to control the on-board RGB LED of the Arduino Nano 33 BLE board. The work uses both the Arduino IDE and the Edge Impulse framework. Edge Impulse has built-in tools, libraries, and NN algorithms to build the model. In the Edge Impulse tool, the process starts with sampling to generate the data, called the training dataset; a test dataset is then generated by a split process, with training and test data in the ratio of 80/20. Pre-processing is carried out on the data using spectral analysis, and feature extraction is performed on the training dataset, followed by classification with an NN classifier using the Keras method. The last step is to test the model (if the results are not satisfactory, the model is retrained) and then build the model for deployment onto the hardware. Fig. 8 shows the flowchart for gesture recognition to find the movement of a human hand, and Fig. 9 shows the flowchart for speech recognition to control the RGB LED. Speech recognition: the same procedure is followed for this application, and it gives the model-testing results shown in Fig. 15.
Figure 15. Model testing
Control LED with voice:
• Add the ZIP file obtained from Edge Impulse to the Arduino libraries.
• Open the microphone continuous example and select the correct board and port.
• Compile and upload the program to the board, then open the Serial Monitor.
• It will show predictions for the colors and the idle state; when a color is spoken, its prediction value will increase.
• We can modify the code to control the built-in RGB LED on the board (a minimal sketch of this modification follows below).
• The built-in LED will glow red when "Red" is spoken, as shown in Fig. 16.
Figure 16. Output
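A minimal sketch of that modification is shown below. On the Nano 33 BLE the built-in RGB LED pins are predefined as LEDR, LEDG, and LEDB and are active-LOW; the label strings ("red", "green", "blue") are assumptions that must match the keywords used during training:

```cpp
// Call pinMode(LEDR, OUTPUT), pinMode(LEDG, OUTPUT), pinMode(LEDB, OUTPUT)
// in setup() before using this. `result` comes from run_classifier().
void show_prediction(const ei_impulse_result_t& result) {
  // Pick the label with the highest confidence.
  size_t best = 0;
  for (size_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; ++i) {
    if (result.classification[i].value > result.classification[best].value)
      best = i;
  }
  const char* label = result.classification[best].label;

  // Active-LOW: LOW turns a channel on, HIGH turns it off.
  digitalWrite(LEDR, strcmp(label, "red")   == 0 ? LOW : HIGH);
  digitalWrite(LEDG, strcmp(label, "green") == 0 ? LOW : HIGH);
  digitalWrite(LEDB, strcmp(label, "blue")  == 0 ? LOW : HIGH);
}
```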
IV. CONCLUSION AND FUTURE SCOPE
The hand gesture recognition system is designed to recognize hand gestures in real time. This points toward an environment in which a designer can sketch by controlling a pointer using a pair of digital gloves and can interact with the designed product in 3D space. Speech recognition systems will be more widely used in the future, and a variety of speech recognition products will appear on the market. It is difficult to make a speech recognition system work exactly like a human; for now, speech recognition technology should be introduced into people's lives to bring more convenience. Experts believe speech recognition is one of the key upcoming advances in the field of information technology.
Soluble Forms of Immune Checkpoints and Their Ligands as Potential Biomarkers in the Diagnosis of Recurrent Pregnancy Loss—A Preliminary Study
Immune checkpoints (ICPs) serve as regulatory switches on immune-competent cells. Soluble ICPs consist of fragments derived from ICP molecules typically located on cell membranes. Research has demonstrated that they perform similar functions to their membrane-bound counterparts but are directly present in the bloodstream. Effective control of the maternal immune system is vital for a successful pregnancy due to genetic differences between the mother and fetus. Abnormalities in the immune response are widely acknowledged as the primary cause of spontaneous abortions. In our research, we introduce a novel approach to understanding the immune-mediated mechanisms underlying recurrent miscarriages and explore new possibilities for diagnosing and preventing pregnancy loss. The female participants in the study were divided into three groups: RSA (recurrent spontaneous abortion), pregnant, and non-pregnant women. The analysis of soluble forms of immune checkpoints and their ligands in the serum of the study groups was conducted using the Luminex method. Statistically significant differences in the concentrations of ICPs were observed between physiological pregnancies and the RSA group. Among patients with RSA, we noted reduced concentrations of sGalectin-9, sTIM-3, and sCD155, along with elevated concentrations of the sLAG-3, sCD80, and sCD86 ICPs, in comparison to physiological pregnancies. Our study indicates that sGalectin-9, sTIM-3, sLAG-3, sCD80, sCD86, sVISTA, sNectin-2, and sCD155 could potentially serve as biological markers of a healthy, physiological pregnancy. These findings suggest that changes in the concentrations of soluble immune checkpoints may have the potential to act as markers for early pregnancy loss.
Introduction
Early pregnancy loss is a significant medical event that inflicts both physical and psychological trauma on young women and their families. While most spontaneous abortions result from genetic malformations of the embryo, a substantial number can be attributed to immunological disturbances at the feto-maternal interface [1]. To achieve a successful pregnancy, it is imperative to maintain immunological homeostasis between the mother and the fetus, who carries paternal antigens, and to facilitate physiological trophoblast invasion [1,2]. Dysfunctional regulation of maternal-fetal immunity has been linked to pregnancy loss [3,4]. Currently, ESHRE defines recurrent spontaneous abortion (RSA) as two or more miscarriages before 20 weeks of pregnancy [4]. Pregnancy loss can occur in the first or subsequent pregnancies. Beyond well-established causes of RSA, such as hormonal dysfunctions, chromosomal abnormalities, thrombophilic factors, and uterine anatomical malformations, approximately half of RSA cases remain of unknown etiology. Recent studies have associated RSA with maternal immunological responses to paternal antigens [5]. In this field, many questions remain unanswered.

Immune checkpoints (ICPs) play a crucial role in maintaining the balance of immunocompetent cell functions. ICPs are molecules responsible for the regulation of the activity of various immune cells, including leukocytes. The molecules that inhibit the immune system include PD-1/PD-L1, CTLA-4, TIM-3, VISTA, TIGIT, and LAG-3. These molecules, upon binding with their ligands, send inhibitory signals to the cells, resulting in reduced activity. Activation of such ICPs can lead to the transition of the cell into an anergic state or trigger the apoptotic pathway. Proper functioning of the inhibitory molecules safeguards the organism against an excessive immune response to pathogens or prevents the development of autoimmunity. Similar regulatory mechanisms are activated during physiological pregnancy [1][2][3][4][5].

In the realm of cancer research, substantial attention has been directed toward immune checkpoint molecules such as PD-1/PD-L1, CTLA-4, TIM-3, VISTA, TIGIT, and LAG-3. The molecules play pivotal roles in the activation of effector T cells, maintaining immune system homeostasis, and minimizing detrimental immune responses. Notably, tumor cells exploit immune checkpoint pathways as a mechanism for immune evasion, allowing them to elude immune surveillance—a phenomenon that appears to mirror fetal behavior [6]. Nevertheless, to date, the precise mechanisms governing immunological tolerance toward semi-allogenic fetuses remain elusive. The immunological interplay between the maternal immune system and fetal cells has yet to be comprehensively investigated [2]. The latest advancements in immunotherapy have demonstrated that manipulating immune checkpoint proteins (ICPs) can modify immune responses, either by reversing immune suppression in cancer or inhibiting cell activation in autoimmune diseases [7]. The interaction of sICPs with their ligands is pictured in Figure 1.
Figure 1. The interaction between antigen-presenting cells (APC), T lymphocytes, and trophoblast cells, highlighting the impact of secreted immune checkpoints. The secretion of soluble immune checkpoints, including sPD-1, sCD80/86, sGal-9, sCD112, sCD155, etc., is depicted. Elevated soluble factors may lead to T cell inactivation and downregulation of trophoblast antigen presentation by APC cells. The interaction of Gal-9 (Galectin-9) and PtdSer (phosphatidylserine) is crucial during the implantation process. The figure is adapted from the work of Zych et al. (2021) [8], exploring differences in immune checkpoint expression (TIM-3 and PD-1) on T cells in women with RSA.
Our study was grounded in the hypothesis that a comparable mechanism is in operation during normal pregnancies, and that disturbances in the regulation of immune checkpoint proteins (ICPs), whether membrane-bound or soluble, could potentially play a role in spontaneous abortions. As a result, it is conceivable that tailored antibodies could be developed to identify differences in the concentrations of soluble ICPs and their ligands, which may provide valuable insights into the significance of ICPs and their regulatory role in pregnancy. This knowledge could potentially contribute to progress in the diagnosis and treatment of pregnancy losses in the future. Furthermore, soluble isoforms of immune checkpoint proteins (ICPs) can be detected in blood samples, rendering them potential candidates as biomarkers of pregnancy loss.
Questionnaire
Data obtained from the analysis of the questionnaire conducted among women classified for the study are presented in Table 1.
Table 1. Data from the questionnaire completed by the participants in the study. Number of participants per group: non-pregnant multiparous women (n = 10), pregnant (n = 20), RSA (n = 20). Statistically significant p-values below 0.05 (p < 0.05) are marked with *, and p < 0.001 with **; N/A: not applicable. 1 RSA vs. pregnant, 2 RSA vs. multiparous, 3 multiparous vs. pregnant. Data are presented as median and 25th (Q1)-75th (Q3) percentile, or as a percentage of the group.

No significant differences in age or body mass index (BMI) were observed between the groups, as shown in Table 1. Patients underwent a thorough assessment, which included the number of miscarriages prior to the study, full-term pregnancies, an internal medicine history covering chronic diseases such as diabetes, endometriosis, insulin resistance, Hashimoto's disease, and polycystic ovary syndrome (Table 1; for detailed information, see Supplementary Data Figures S1 and S2), drug administration before and during pregnancy, and the administration of folic acid before pregnancy (Table 1; for detailed information, see Supplementary Data Figures S4 and S5). Additionally, data regarding prodromal symptoms were collected (for detailed information, see Supplementary Data Figure S3).
Among the studied groups, one non-pregnant multiparous woman and two women in the RSA group received treatment with Euthyrox, while four pregnant women with physiological pregnancies were in a euthyroid state; participants with active autoimmune disease were excluded from the study.
The results shown below concern 9 non-pregnant multiparous women, 16 pregnant women, and 18 RSA patients.
Analysis of Soluble Immune Checkpoints and Ligands
The conducted studies did not reveal differences in the concentrations of the secretory sCTLA-4 (Figure 2A) and sCD28 (Figure 2B) molecules between the studied groups. However, we observed significantly higher concentrations of sCD80 (Figure 2D) in the RSA women compared to pregnant women. No differences were observed between the groups in the concentrations of the sPD-1 molecule (Figure 3A) and its ligands, sPD-L1 (Figure 3B) and sPD-L2 (Figure 3C). The concentration of sVISTA was significantly higher in pregnant women compared to women with RSA (Figure 4A). Additionally, the concentration of sHVEM was higher in the non-pregnant women's group compared to the RSA group (Figure 4B). The concentrations of the soluble ligands of the TIGIT molecule, sNectin-2 (sCD112) and sCD155, are shown in Figure 5. sCD155 was lower in RSA women compared to pregnant women (Figure 5B); pregnant women exhibited the highest concentration of sCD155 in comparison to the other groups (Figure 5B). The concentration of sTIM-3 was significantly higher in the pregnant women group compared to the RSA group (Figure 6A). Furthermore, pregnant women exhibited the lowest concentration of sLAG-3 among the studied groups (Figure 6B).
The concentration of sTIM-3 was significantly higher in the pregnant women group compared to the RSA group (Figure 6A).Furthermore, pregnant women exhibited the lowest concentration of sLAG-3 among the studied groups (Figure 6B).The concentration of the soluble sGal-9 molecule was the highest in pregnant women compared to the other studied groups (Figure 7).The concentration of the soluble sGal-9 molecule was the highest in pregnant women compared to the other studied groups (Figure 7).Our analysis revealed that women with RSA miscarriages had rather similar levels of sICPs and ICP ligands to non-pregnant women, with an accompanying decrease of sHVEM and sGalectin-9.compared to pregnant women, decreased concentrations of sGalectin-9, sTIM-3, sCD155, and sVISTA.and increased concentrations of sLAG 3, and sCD80.Table 2 summarizes our observations.Table 2. Analysis of differences in the concentration of soluble immune checkpoint s and ICP ligands between the RSA vs. non-pregnant, multiparous women, and pregnant women.The arrows show decrease or increase of sICP concentration.
Discussion
The most well-documented forms of immune checkpoint proteins (ICPs) are the membrane-bound variants. Nonetheless, numerous scientific publications have detailed soluble ICPs and their associated ligands. The molecules hold a pivotal role in the regulation of immune responses, contribute significantly to the development and prognosis of the immune response (Figure 1), and serve as potential biomarkers and targets for emerging immunotherapies [9].
Elevated concentrations of soluble CTLA-4 (sCTLA-4) have been reported by Gu et al. in patients with breast cancer [10]. Omura et al. discovered that sCTLA-4 and soluble PD-L1 (sPD-L1) may have prognostic implications for patients with colorectal cancer [11]. The research of Wang et al. revealed elevated levels of soluble CD28 (sCD28) and decreased sCTLA-4 levels in the plasma of patients with neuromyelitis optica and multiple sclerosis [12]. Other studies have shown that soluble LAG-3 (sLAG-3) and sCD28 were negatively correlated with the cytolytic activity of T cells in clear-cell renal cancer [13]. Cao et al. determined that sCD28 and sCTLA-4 were elevated in patients with chronic HBV infection [14]. However, there is a limited body of knowledge regarding the concentrations of sCTLA-4 and sCD28 during pregnancy or in cases of pregnancy loss. Only Misra et al. have established a link between reduced sCTLA-4 secretion and the statistically significantly higher occurrence of the minor allele homozygous rs231775 and rs3087243 tag-SNPs in RSA cases [15]. Our results are contradictory: the levels of sCTLA-4 and sCD28 were comparable among women with recurrent spontaneous abortion (RSA), pregnant women, and non-pregnant women.
Ip et al. concluded that the extent of changes in the concentrations of sCTLA-4, sCD28, sCD86, and sCD80 in plasma may correlate with the severity of acute asthma [16]. We found increased concentrations of sCD80 in the sera of RSA patients compared to pregnant women. CD80 binds as a ligand to the costimulatory molecule CD28 on the surface of naïve T cells and to the inhibitory receptor CTLA-4 expressed on activated T cells [16]. Soluble forms of the mentioned proteins may act similarly to their membrane-bound counterparts, either activating or inhibiting activated T cells. However, it is important to note that CD80 and CD86 have a higher binding affinity for CTLA-4 than for CD28. Consequently, we can speculate that the elevated concentrations of sCD80 in RSA women may lead to an excessive suppression of activated T cells. This, in turn, could result in immunological disruptions at the feto-maternal interface, e.g., during embryo implantation, when inflammation is required [5]. The determination of sCD80 and/or sCD86 has been utilized as a marker of poor prognosis in inflammatory conditions like rheumatoid arthritis and in hematological malignancies [17,18].
The next studied ICP was the soluble T cell immunoglobulin and mucin domain 3 (TIM-3). TIM-3 was initially identified as an inhibitory molecule on IFNγ-producing T cells. Numerous cell types, including regulatory T cells (Treg cells), myeloid cells, natural killer (NK) cells, and mast cells, have been shown to express TIM-3 [19,20]. In studies involving pregnant and preeclamptic women, the pivotal role of sTIM-3 was reaffirmed. Li et al. emphasized the significance of TIM-3-expressing NK cells and showed that the interaction between TIM-3 and Gal-9 led to the activation of IL-10 and TGF-β genes, thus enhancing the generation of Treg cells [21]. Grossman et al. found a positive correlation between sTIM-3 levels and TNF-α, HSP70, and Gal-9 in the serum of pregnant women. Furthermore, the sTIM-3 level was positively correlated with gestational age at delivery [22]. In line with the aforementioned findings, we observed an elevation of the sTIM-3 level in the serum of pregnant women. However, Wu et al. reported increased sTIM-3 and Galectin-9 concentrations in the sera of RSA patients [23]. In our study, healthy pregnant women exhibited the highest sTIM-3 and Gal-9 concentrations in serum. The discordant data may be attributed, in part, to differences in the group sizes of the tested RSA patients (n = 35 vs. n = 18). Nevertheless, as noted by Meggyes et al., the engagement of TIM-3 with its ligand Gal-9 leads to the apoptosis of Th1 and Th17 cells [24]. Consequently, heightened expression of TIM-3 and its shedding may influence positive pregnancy outcomes. Furthermore, Meggyes et al. discovered that during a healthy pregnancy, the soluble Galectin-9 concentration increases progressively with each trimester [24]. In line with the findings of Meggyes et al., our research demonstrated an increase in the concentration of sGalectin-9 in pregnant women's serum compared to non-pregnant or RSA women. Enninga et al. extended their assessment of sGal-9 by including additional time points and showed that maternal blood levels of sGal-9 remained elevated throughout gestation [25]. Both Meggyes' and Enninga's studies showed that the concentrations of both soluble Galectin-9 and sPD-L1 increased during pregnancy [24,25]. However, we did not find an elevation of sPD-L1 concentration in pregnant women's or RSA serum. It is worth noting that the placenta exhibits a tremendous expression of Gal-9 and PD-L1, which might be associated with appropriate placental development throughout pregnancy [26].
Hadley et al. proved that sPD-1 concentration correlates with the active disease state of autoimmune hepatitis and inflammatory bowel disease in pediatric patients [27,28]. Zhou et al. showed that serum sPD-1 levels correlate with numerous clinical parameters, reflecting inflammation and viral replication in patients affected by the chronic hepatitis B virus. The authors suggested that sPD-1 may serve as a new biomarker of liver fibrosis and can further aid in selecting antiviral treatment [27,28]. Similarly, Chang et al. found that sPD-1 and sPD-L1 may serve as prognostic markers in the progression of hepatocellular carcinoma [28,29]. Concerning pregnancy research, Gu et al. showed that maternal sPD-1 levels were significantly higher, and PD-L1 relatively higher, in preeclamptic than in normotensive pregnant women [30]. The authors concluded that aberrant crosstalk between sPD-1 and sPD-L1 signaling is characteristic of preeclampsia. Moreover, elevated maternal sPD-1 and sPD-L1 concentrations were associated with fetal gender differences and immune tolerance distinctions during pregnancy [30]. sPD-L1 has also been shown to be a potential discriminatory marker for endometriosis-related infertility [30]. However, Okuyama et al.'s research indicated that sPD-L1 levels are elevated in the third trimester of pregnancy compared to non-pregnant individuals [31]. In our findings, we did not find differences between groups in terms of sPD-1, sPD-L1, and sPD-L2 concentrations. It is important to clarify that our study did not include preeclampsia or endometriosis patients, cases for which we found an abundance of literature. The timing of the studies, the timing of data collection, and the specific populations studied could play a significant role in the observed discrepancies in the collected data. Further research on a larger scale might help to clarify and reconcile the irregularities and discrepancies [30].
Studies related to the herpes virus entry mediator (HVEM), sHVEM, or mHVEM in pregnant women or pregnancy diseases are limited. HVEM is a receptor for LIGHT, a tumor necrosis factor (TNF) superfamily ligand. LIGHT has emerged as a potent initiator of the T cell costimulation signal effecting CTL-mediated tumor rejection, allograft rejection, and graft-versus-host disease [32]. Gill et al. found that HVEM was present in syncytiotrophoblast and amnion epithelial cells, but it was absent in villous mesenchymal cells and cytotrophoblasts [32]. Wang et al. investigated the role of the LIGHT vs. HVEM relation in pregnancy and pregnancy-related disorders, particularly preeclampsia [33]. The research revealed that elevated LIGHT levels, coupled with heightened HVEM receptor activation, cause placental damage and the release of potent vasoactive factors such as soluble fms-like tyrosine kinase-1 (sFlt-1) and endothelin-1 (ET-1) during pregnancy [33]. The findings strongly imply that LIGHT signaling might be a pivotal factor in the development of preeclampsia [33]. Our results are contradictory: the sHVEM level decreased during pregnancy, with the lowest concentrations noted in patients who experienced miscarriage. Generally, it has been shown that sHVEM levels are upregulated in the serum of patients suffering from allergic asthma, atopic dermatitis, rheumatoid arthritis, and various neoplastic diseases [34,35].
Among others, the next studied sICP was LAG-3, also known as lymphocyte-activation gene 3, a protein encoded by the LAG3 gene in humans. LAG-3 is a type I transmembrane protein with structural similarities to CD4, and it is expressed 3-4 days post-activation on both CD4 and CD8 T cells [36]. In addition, LAG-3 expression was found on activated T cells, NK cells, B cells, and plasmacytoid dendritic cells [37]. The molecule binds a non-polymorphic region of major histocompatibility complex class II (MHC class II) [38]. LAG-3 utilizes an additional 30-amino-acid loop in the D1 region, which binds to MHC class II with greater affinity than CD4 [37,38].
Ching-Tai Huang's research suggests that LAG-3 plays a crucial role in the function of natural and induced regulatory T cells (Tregs). The discovery supports the conclusion that LAG-3 is an essential receptor for Tregs, which play a crucial role in the development of pregnancy tolerance [39]. Recent research conducted by Marozio et al. on endometrial biopsies from RSA women, with women with dysfunctional uterine bleeding and previous uneventful pregnancies as controls, showed intensified expression of the CTLA-4 and LAG-3 genes and proteins in the endometrial tissue of RSA women [40]. The results are in line with ours considering the sLAG-3 concentration in RSA women's serum samples.
An additional immune checkpoint protein (ICP) studied by us was VISTA (B7-H5), a negative checkpoint regulator (NCR). VISTA is primarily expressed on various immune cells, including T cells, myeloid cells, and dendritic cells. The VISTA function is multifarious and evokes inhibitory and stimulatory effects on immune responses, depending on the context [41]. VISTA shares significant homology with PD-L1 and PD-L2 [41]. Wu et al. established that serum VISTA could serve as a potential novel biomarker in pancreatic cancer diagnosis [42]. We noticed that the serum of RSA women exhibits significantly lower concentrations of sVISTA than that of pregnant women.
We also aimed to evaluate the concentration of a ligand for TIGIT, the soluble form of Nectin-2 (CD112). TIGIT can engage two ligands, CD155 (PVR) and CD112 (PVRL2, Nectin-2), expressed by tumor cells and antigen-presenting cells in the tumor microenvironment. There is substantial evidence, demonstrated in vivo and in vitro, that the TIGIT pathway plays a role in T-cell-mediated and natural killer cell-mediated tumor recognition. Dual blockade of PD-1 and TIGIT has been shown to significantly enhance the expansion and function of tumor antigen-specific CD8+ T cells in vitro and promote tumor regression in mouse tumor models [43]. Nevertheless, in the existing literature, we have not encountered examples of utilizing sNectin for diagnostic or therapeutic purposes. Meggyes et al. evaluated the TIGIT-CD226-CD112-CD155 immune checkpoint network during a healthy pregnancy; among all the studied parameters, a difference was found in CD226 expression and concentration [43].
The second ligand for TIGIT is CD155 [44]. Iguchi-Manaka et al. found that sCD155 was increased in patients with various cancer types, including esophageal, colorectal, pancreatic, bile-duct, breast, gastric, ovarian, endometrial, lung, and cervical cancers; thus, the authors concluded that it might be a useful biomarker for cancer development [44]. In a separate study, Iguchi-Manaka et al. established that the sCD155 concentration in the serum of patients with breast cancer was positively correlated with the patient's age, stage of disease, and size of the invasive tumor [45]. Okumura et al. showed that sCD155 derived from tumors hinders the DNAM-1-mediated antitumor activity of NK cells [46]. This phenomenon suggests that an elevated concentration of sCD155 may lead to the downregulation of NK cell activity, e.g., during physiological pregnancy, which supports our observation of the decreased level of sCD155 in RSA women.
Institutional Review Board Statement
The study received ethical approval from the Bioethics Committee of the Medical University of Warsaw (Approval Number: KB/13/2020, issued 13 January 2019). Informed consent was obtained from all participants before conducting measurements, interventions, and blood collections. The procedures adhered to the principles outlined in the Helsinki Declaration of 1975, as revised in 2013.
Study Groups
The study participants were categorized into three groups: RSA (recurrent spontaneous abortion) women, pregnant women, and non-pregnant women. Data were collected from all participants, including information on age, weight, height, history of early and late miscarriages, and prodromal pregnancy symptoms (e.g., vomiting, nausea, and breast pain). Additionally, we recorded details about medical procedures undertaken before and during pregnancy, the use of vitamins or dietary supplements, pre-pregnancy folic acid intake, hormonal contraception history, and previous in vitro treatments. Chronic medical conditions such as diabetes, endometriosis, insulin resistance, Hashimoto's disease, and polycystic ovary syndrome (PCOS) were also documented. The data collection period spanned from 2019 to 2021, and all necessary precautions related to the COVID-19 pandemic were undertaken.
Control Group (a) Non-Pregnant Women
The non-pregnant control group consisted of 10 fertile, non-pregnant, multiparous women with no prior obstetric-gynecological or internal medicine disorders. All the women in this group had previously given birth at least once without any complications, and they reported having experienced healthy pregnancies. None of the participants in the control group had a history of miscarriages. Additionally, apart from one individual, who had Hashimoto's disease but was in a euthyroid state, none of the control subjects had received treatment for chronic illnesses. Blood samples were collected during the follicular phase of the menstrual cycle.
(b) Pregnant Women
Twenty pregnant women between 11 and 13 weeks of pregnancy were classified as the control "pregnancy" group. To confirm the physiological development of pregnancy, patients underwent an ultrasonographic scan following the guidelines of the Fetal Medicine Foundation. Additionally, the physical examinations and blood tests mentioned above were conducted.
Patients with Recurrent Spontaneous Abortion (RSA)
Twenty recurrent spontaneous abortion (RSA) patients, diagnosed according to ESHRE as experiencing two or more consecutive spontaneous miscarriages before the 20th week of gestation [47,48], were recruited as the study group. The samples were collected within 72 h following the miscarriage.
Sample Preparation
The blood samples were collected into 5 mL BD Vacutainer Plus Serum Tubes. Thirty minutes after blood collection, serum was separated from red blood cells (RBC) by centrifugation at 2500 rpm for 20 min. One milliliter of serum was divided among five freezing tubes (MERCK, Darmstadt, Germany) of 200 µL each and frozen at −80 °C.
Luminex Acquisition
The concentrations of 13 ICPs were measured on a MAGPIX instrument (MERCK, Darmstadt, Germany) with Luminex-based bead arrays, the MILLIPLEX® MAP Human Immuno-Oncology Checkpoint Protein Panel 1 and Panel 2 (MERCK, Darmstadt, Germany), on the Luminex xMAP® platform using a magnetic bead format (MILLIPLEX® Analytes, Millipore, MA, USA), for the following biomarkers: sTIM-3, sCTLA-4, sCD80, sCD86, sCD28, sPD-1, sPD-L1, sPD-L2, sHVEM, sCD112, sCD155, sLAG-3, and sVISTA (B7-H5). For each sample, 25 µL of serum was used to assess the concentration of soluble ICPs and their ligands. All procedures were performed according to the manufacturer's recommendations. Quality assurance was maintained through the inclusion of appropriate standards and quality controls provided in the kits. Each run incorporated relevant quality controls, and results were calculated using the xPONENT system (MERCK, Darmstadt, Germany). For specific data regarding the upper limit of quantification (ULOQ) and lower limit of quantification (LLOQ), please refer to Supplementary Materials, Table S1.
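Values falling outside the assay's quantification range cannot be reported as-is. The minimal Python sketch below illustrates one way such flagging could be scripted; the analyte limits shown are hypothetical placeholders, not the actual panel values from Table S1.

```python
# Hypothetical sketch of LLOQ/ULOQ flagging for multiplex readouts.
# The limits below are placeholders, not the actual values from Table S1.
LIMITS_PG_ML = {
    "sTIM-3": (13.7, 10000.0),   # (LLOQ, ULOQ) -- assumed for illustration
    "sPD-L1": (4.9, 20000.0),    # assumed for illustration
}

def qualify(analyte: str, value_pg_ml: float) -> str:
    """Classify a measured concentration against the assay's range."""
    lloq, uloq = LIMITS_PG_ML[analyte]
    if value_pg_ml < lloq:
        return f"{analyte}: <LLOQ ({lloq} pg/mL), not quantifiable"
    if value_pg_ml > uloq:
        return f"{analyte}: >ULOQ ({uloq} pg/mL), re-assay at higher dilution"
    return f"{analyte}: {value_pg_ml:.1f} pg/mL (within range)"

print(qualify("sTIM-3", 8.2))
print(qualify("sPD-L1", 152.4))
```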
ELISA Method
ELISA assays were conducted using commercially available reagent kits following the manufacturer's instructions. Serum samples underwent a two-fold dilution. Galectin-9 levels were analyzed with the Human Galectin-9 ELISA Kit (Thermo Fisher, Waltham, MA, USA), and the concentrations were determined using a four-parameter logistic curve, as per the manufacturer's instructions. The ELISA kit exhibited a sensitivity of 36.86 pg/mL.
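For readers unfamiliar with four-parameter logistic (4PL) interpolation, the short Python sketch below shows how a standard curve can be fitted and inverted to recover sample concentrations. The standard concentrations and optical densities are illustrative values, not data from this study.

```python
# Minimal 4PL standard-curve fit and back-calculation (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: lower asymptote, b: Hill slope, c: inflection point, d: upper asymptote
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([62.5, 125, 250, 500, 1000, 2000, 4000])   # pg/mL (placeholder)
std_od = np.array([0.08, 0.15, 0.29, 0.55, 1.02, 1.71, 2.35])  # OD (placeholder)

(a, b, c, d), _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 1000.0, 2.5])

def od_to_conc(od, dilution=2.0):
    """Invert the fitted 4PL curve and correct for the two-fold dilution."""
    return dilution * c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

print(f"Sample at OD 0.62 -> {od_to_conc(0.62):.1f} pg/mL")
```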
Statistical Analysis
Statistical analyses were performed using GraphPad Prism 8.4.1. The results were presented as the mean ± standard deviation (SD). Gaussian distribution was assessed using the Shapiro-Wilk test. For data sets exhibiting a Gaussian distribution, statistical comparisons were made using the F test to assess equality of variances, followed by unpaired t-tests for data sets with equal SD and unpaired t-tests with Welch's correction for data sets with different SD. In cases where the data did not follow a Gaussian distribution, the Mann-Whitney U test was applied. Statistically significant differences between groups, indicated by p-values below 0.05, were denoted with asterisks.
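The test-selection logic described above can be expressed as a small decision tree. The Python sketch below reproduces it with SciPy; the two input arrays are placeholder values, not study data.

```python
# Sketch of the described test selection: Shapiro-Wilk for normality,
# F test for equal variances, then Student's t / Welch's t / Mann-Whitney U.
import numpy as np
from scipy import stats

def compare_groups(x, y, alpha=0.05):
    normal = stats.shapiro(x).pvalue > alpha and stats.shapiro(y).pvalue > alpha
    if not normal:
        return "Mann-Whitney U", stats.mannwhitneyu(x, y, alternative="two-sided").pvalue
    # Two-sided F test for equality of variances
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    p_f = 2 * min(stats.f.cdf(f, len(x) - 1, len(y) - 1),
                  stats.f.sf(f, len(x) - 1, len(y) - 1))
    if p_f > alpha:
        return "unpaired t-test", stats.ttest_ind(x, y, equal_var=True).pvalue
    return "Welch's t-test", stats.ttest_ind(x, y, equal_var=False).pvalue

rsa = np.array([1.2, 0.9, 1.5, 1.1, 0.8, 1.3])   # placeholder values
preg = np.array([2.1, 1.8, 2.4, 2.0, 1.7, 2.2])  # placeholder values
test, p = compare_groups(rsa, preg)
print(test, p, "*" if p < 0.05 else "ns")
```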
Conclusions
The collected results suggest that alterations in the concentrations of certain immune checkpoint proteins (ICPs) may be associated with pregnancy loss. Specifically, we observed that women experiencing recurrent spontaneous abortion (RSA), when compared to both control groups (pregnant and non-pregnant women), exhibited decreased concentrations of sGalectin-9, sCD155, and sTIM-3. This phenomenon was accompanied by increased secretion of sLAG-3 and sCD80. The pattern of sICPs expressed by pregnant women could be correlated with the function of Galectin-9, CD155, and TIM-3, which downregulate NK and T cell activation. In contrast, overexpression of sLAG-3 may indicate trophoblast HLA antigen recognition, and increased secretion of the costimulatory molecule sCD80 may suggest the triggering of T cell responses in RSA women.
Generally, in our research, RSA women exhibited a pattern of soluble immune checkpoint expression analogous to that of non-pregnant women. Considering all the compiled data, we suggest that changes in the secretion of the following immune checkpoint proteins (ICPs) could be a potential marker for recurrent pregnancy loss: sCD155, sLAG-3, sGalectin-9, and sTIM-3, with an accompanying increase in the T-cell receptor (TCR) costimulatory molecule sCD80.
Determination of these ICPs might help to predict the fate of an early pregnancy and offer the possibility of introducing targeted therapy based on the observed immunological imbalance at the feto-maternal interface.
We acknowledge that the presented results should be validated in larger study populations. Given the intricate nature of the immune system's functioning, the findings from our research catalyze further advancement in scientific research on the subject of recurrent miscarriages.
Figure 1.
Figure 1. Relationship between antigen-presenting cell (APC), lymphocyte T, and trophoblast cells regulated by secreted immune checkpoints. The figure illustrates the intricate interplay among antigen-presenting cells, T lymphocytes, and trophoblast cells.
Figure 2.
Figure 2. Concentrations of secretory molecules controlling the immune system (ICPs) and their ligands, (A) sCTLA-4, (B) sCD28, (C) sCD86, (D) sCD80, in the sera of the studied groups of women: group of RSA women (n = 18), group of pregnant women (n = 16), group of non-pregnant women (n = 9). Results are presented as individual data points, with the mean value indicated as a line. Significance was calculated using Student's t-test or Mann-Whitney U test, * p < 0.05.
Figure 3.
Figure 3. Concentrations of secretory molecules controlling the immune system: (A) sPD-1, (B) sPD-L1, (C) sPD-L2, in the sera of the studied groups: group of women with miscarriages (RSA) (n = 18), group of pregnant women (n = 16), group of non-pregnant women (n = 9). Results are presented as individual data points, with the mean value indicated as a line. Significance was calculated using Student's t-test or Mann-Whitney U test.
Figure 4.
Figure 4. Concentrations of secretory molecules controlling the immune system, (A) sVISTA, (B) sHVEM, in the sera of the studied groups: group of women with miscarriages (RSA) (n = 18), group of pregnant women (n = 16), group of non-pregnant women (n = 9). Results are presented as individual data points, with the mean value indicated as a line. Significance was calculated using Student's t-test or Mann-Whitney U test, * p < 0.05.
Figure 5.
Figure 5. Concentrations of secretory ligands, (A) sNectin2, (B) sCD155, in the sera of the studied groups: group of women with miscarriages (RSA) (n = 18), group of pregnant women (n = 16), group of non-pregnant women (n = 9). Significance was calculated using Student's t-test or Mann-Whitney U test, * p < 0.05.
Figure 6.
Figure 6. Concentrations of secretory molecules controlling the immune system, (A) sTIM-3, (B) sLAG-3, in the sera of the studied groups: group of women with miscarriages (RSA) (n = 18), group of pregnant women (n = 16), group of non-pregnant women (n = 9). Results are presented as individual data points, with the mean value indicated as a line. Significance was calculated using Student's t-test or Mann-Whitney U test, * p < 0.05.
Figure 7.
Figure 7. Soluble Galectin-9 concentration in the studied groups: group of women with miscarriages (RSA) (n = 18), group of pregnant women (n = 16), group of non-pregnant women (n = 9). Results are presented as individual data points, with the mean value indicated as a line. Significance was calculated using Student's t-test or Mann-Whitney U test, * p < 0.05. Statistically significant differences are marked with lines.
Angiopoietin-like protein 3 (ANGPTL3) deficiency and familial combined hypolipidemia
Three members of the angiopoietin-like (ANGPTL) protein family, ANGPTL3, ANGPTL4 and ANGPTL8, are important regulators of plasma lipoproteins. They inhibit the enzyme lipoprotein lipase, which plays a key role in the intravascular lipolysis of triglycerides present in some lipoprotein classes. This review focuses on the role of ANGPTL3 as it has emerged from the study of genetic variants of the Angptl3 gene in mice and humans. Both loss-of-function genetic variants and inactivation of the Angptl3 gene in mice are associated with a marked reduction of plasma levels of triglyceride and cholesterol and an increased activity of lipoprotein lipase and endothelial lipase. In humans with ANGPTL3 deficiency, caused by homozygous loss-of-function (LOF) variants of the Angptl3 gene, the levels of all plasma lipoproteins are greatly reduced. This plasma lipid disorder, referred to as familial combined hypolipidemia (FHBL2), does not appear to be associated with distinct pathological manifestations. Heterozygous carriers of LOF variants have reduced plasma levels of total cholesterol and triglycerides and are at lower risk of developing atherosclerotic cardiovascular disease, as compared to non-carriers. These observations have paved the way for the development of strategies to reduce the plasma level of atherogenic lipoproteins in man by the inactivation of ANGPTL3, using either a specific monoclonal antibody or antisense oligonucleotides.
Introduction
Angiopoietin-like proteins (ANGPTLs) represent a family of eight secreted glycoproteins that show structural homology to angiopoietins and carry distinct physiological functions, including putative roles in lipid metabolism, expansion of stem cells, inflammation, tissue remodeling and angiogenesis [1][2]. In recent years, three ANGPTLs, ANGPTL3, ANGPTL4 and ANGPTL8, have been shown to play a role in lipid metabolism and in the regulation of plasma lipid levels [3][4].
More specifically, ANGPTL3, ANGPTL4 and ANGPTL8 share a common feature, being, to a variable extent, negative regulators of the activity of lipoprotein lipase (LPL), the key enzyme involved in the intravascular lipolysis of triglyceride (TG) present in some lipoprotein classes, such as chylomicrons and very low density lipoproteins (VLDL) [5][6].
This overview focuses on the role of ANGPTL3 in lipoprotein metabolism and the effect of its deficiency/inactivation in humans and animal models.
ANGPTL3 deficiency: discovery of monogenic combined hypolipidemia in mice
The link between ANGPTL3 and plasma lipoprotein metabolism emerged from the identification of the KK/San mouse strain, a mutant strain derived from a colony of KK mice characterized by diabetes, obesity and hypertriglyceridemia. The KK/San mice, despite maintaining the phenotype of obesity and diabetes, had a marked decrease of plasma TG (> 90%), as compared with the colony of KK mice. This severe hypotriglyceridemia was due to a marked reduction of plasma VLDL. These mice were found to be homozygous for a loss-of-function (LOF) variant of the Angptl3 gene, causing the formation of a truncated Angptl3 and leading to complete Angptl3 deficiency [7]. The hypolipidemia observed in this mutant mouse strain was found to be associated with an increased activity of LPL [8]. Since this observation, other studies conducted in Angptl3−/− mice not only confirmed that Angptl3 is an inhibitor of LPL, but also showed that complete Angptl3 deficiency is associated with a reduction of TG-containing lipoproteins (VLDL) and also with a reduction of cholesterol-carrying lipoproteins, such as LDL and HDL [9]. On the other hand, overexpression of Angptl3 in mice increased plasma TG by inhibiting the activity of LPL [10].
Familial combined hypolipidemia in humans
In humans, a lipoprotein phenotype similar to that observed in KK/San mice was first described by Musunuru et al. in a large family originally thought to be affected by familial hypobetalipoproteinemia (FHBL1) (OMIM#615558), in view of the low levels of plasma LDL-C [11]. Two siblings of this family showed extremely low levels of total cholesterol (TC) and LDL-C and low levels of TG and HDL-C. Exome sequencing revealed that these siblings were compound heterozygous for two LOF variants of ANGPTL3 [p.(E129*)/p.(S17*)], expected to cause a complete deficiency of ANGPTL3. This novel lipoprotein phenotype was designated Familial Combined Hypolipidemia (FHBL2, OMIM#605019) [11]. The genetic screening in the family led to the identification of 13 heterozygotes. These individuals showed plasma levels of LDL-C and TG intermediate between those of compound heterozygous and non-carrier family members, with a gene-dosage effect. This was not the case for plasma HDL-C levels, which were similar in heterozygous carriers and in non-carriers. The nonsense variant p.(S17*) of ANGPTL3 was also found in a cohort of hypolipidemic individuals living in a district of Central Italy, where the resequencing of the Angptl3 gene in nine families and in a large cohort of individuals of the local population (352 individuals) led to the identification of 62 carriers of this variant (8 homozygotes and 54 heterozygotes) [12]. In this survey, homozygotes had undetectable plasma levels of ANGPTL3, low TG and cholesterol levels and a striking reduction of all lipoprotein classes (VLDL, LDL and HDL); heterozygotes had a 50% reduction in circulating ANGPTL3 and reduced levels of TC and HDL-C, as compared to non-carriers, and levels of LDL-C and TG similar to controls [12]. Furthermore, p.(S17*) homozygotes had significantly higher LPL activity and mass as compared to controls and a lower plasma level of free fatty acids, which suggested a reduced lipolysis in adipose tissue [13]. Other individuals with familial combined hypolipidemia due to different LOF variants of the Angptl3 gene were identified in Italian and Spanish families, as well as in a cohort of subjects with primary hypocholesterolemia [14][15][16]. A pooled analysis of carriers of LOF variants of the Angptl3 gene was reported by Minicocci et al. [17]. These investigators showed that, as compared to controls, the carriers of two LOF alleles (homozygotes/compound heterozygotes) as well as carriers of a single LOF allele (simple heterozygotes) showed a significant reduction of all plasma lipoproteins.
From a clinical point of view, carriers of two Angptl3 LOF alleles identified so far in family studies did not show a distinct pathological phenotype. More specifically, they did not show clinical manifestations of premature atherosclerosis or increased risk of ischemic heart disease, which might have been expected in view of the lifelong exposure to low levels of HDL-C (a known clinical predictor of risk of atherosclerotic cardiovascular disease) [18]. Minicocci et al. evaluated the vascular status in a group of FHBL2 subjects (7 homozygotes and 59 heterozygotes) carrying the Angptl3 LOF mutation p.(S17*) [19]. They found that FHBL2 individuals did not show significant changes in carotid intima-media thickness (a surrogate marker for atherosclerosis) with respect to controls. These observations suggest that, despite the presence of low HDL-C levels, FHBL2 subjects are protected from developing premature atherosclerosis by the concomitant reduction of the levels of atherogenic lipoproteins such as VLDL and LDL.
Another key issue regarding the clinical phenotype in FHBL2 concerns the presence of fatty liver disease, a condition frequently encountered in individuals with FHBL1. The latter individuals, who have persistently low levels of TG and LDL-C resulting from APOB gene LOF variants which impair the hepatic secretion of VLDL, usually develop fatty liver of variable severity [20]. Carefully conducted clinical studies have shown that in FHBL2 there was no increased prevalence of fatty liver or chronic liver disease with respect to controls [21].
LOF variants of Angptl3 gene identified in population studies
Early genome wide association studies (GWAS) had shown that common and rare variants of the Angptl3 gene were associated with variations in plasma levels of TG, TC, LDL-C and HDL-C [22][23]. In addition, resequencing of Angptl3 in some population studies had shown that some LOF variants were associated with reduced levels of plasma TG [24]. By sequencing the Angptl3 gene in the DiscovEHR study participants and in four other population cohorts, Dewey et al. recently identified 400 subjects heterozygous for 13 different LOF Angptl3 variants, with an estimated allele frequency of 1 in 237 [25]. These heterozygous carriers had a significant reduction of TG, LDL-C and HDL-C levels and a 50% reduction of circulating ANGPTL3 as compared to non-carriers. In addition, the LOF variants of Angptl3 were found to be associated with a 39% reduction of coronary artery disease (CAD) [25].
In another recent study, Angptl3 sequence data from case-control studies and a population-based cohort study led to the identification of 23 LOF variants [26]. These LOF variants were present in 130 of 40,112 participants (1 in 309 individuals). Furthermore, in heterozygous carriers of an Angptl3 LOF variant, selected from a cohort of more than 20,000 individuals of the Myocardial Infarction Genetics Consortium studies, the plasma levels of TC, LDL-C and TG were reduced by 11%, 12% and 17%, respectively, with no significant changes in HDL-C as compared to non-carriers. In addition, the authors determined the relationship between Angptl3 LOF variants and the risk of CAD. They found that heterozygous carriers of LOF variants in Angptl3 had a 34% decreased risk of CAD. This reduced CAD risk was associated with lower levels of circulating ANGPTL3 [26]. Collectively, these two large surveys strongly indicate that partial ANGPTL3 deficiency due to heterozygosity for LOF Angptl3 variants results in a reduced risk of CAD as compared to non-carriers, suggesting that ANGPTL3 may be a novel target to reduce the level of atherogenic lipoproteins [25][26]. An updated list of LOF variants reported so far is shown in Supplementary Table 1 (available online).
ANGPTL3 and lipoprotein metabolism
ANGPTL3 is synthesized in the liver as a precursor protein, which is converted into the mature form by proteolytic cleavage by several hepatic pro-protein convertases (such as Furin, PCSK2, PCSK4, PACE4, PCSK5 and PCSK7). The cleavage process yields the active N-terminal fragment, which has efficient LPL inhibitory activity; this is supported by the observation that the deletion of the ANGPTL3 amino-terminal region causes the total loss of its inhibitory activity [8,27]. The cleavage process of ANGPTL3 appears to be facilitated by ANGPTL8. ANGPTL8 is secreted by the liver into the circulation, where it interacts with ANGPTL3 for cleavage and forms a complex with the N-terminal fragment of ANGPTL3. The complex, as well as the free N-terminal fragment of ANGPTL3, inhibits LPL [28]. There is evidence in mice that ANGPTL8 has an LPL inhibitory motif, which is inactive as it is not accessible to the enzyme. The formation of the ANGPTL8-ANGPTL3 complex induces structural changes in ANGPTL8, which exposes the inhibitory motif for LPL inhibition. Thus, ANGPTL8 is inactive "per se" and requires ANGPTL3 to acquire LPL inhibitory activity. It has been demonstrated that the major ability of the ANGPTL8/ANGPTL3 complex to inhibit LPL depends on the active LPL inhibitory motif of ANGPTL8 [29]. This is supported by the observation that the LPL inhibition by the ANGPTL3/ANGPTL8 complex could not be reversed by an anti-ANGPTL3 blocking antibody [29].
Some genetic variants of ANGPTL8 affecting plasma lipids have been reported. Quagliarini et al. found that a common variant, c.175C > T, p.Arg59Trp (rs2278426), was associated with lower plasma LDL-C and HDL-C levels in Hispanics and African Americans of the Dallas Heart Study [28]. In a genome wide association study (including mostly individuals of European descent), the p.Arg59Trp variant was found to be associated with a reduction of both LDL-C and HDL-C, but not with a reduction of plasma triglycerides [28]. Furthermore, a low frequency variant of ANGPTL8 [c.361C > T, p.Gln121* (rs145464906)] was identified in a survey of a large group of individuals by Peloso et al. [30]. Heterozygous carriers of this variant had higher HDL-C, lower TG and not significantly lower LDL-C levels as compared to non-carriers. It is conceivable to assume that the truncated ANGPTL8 generated by this variant is unable to form the ANGPTL8/ANGPTL3 complex, which is known to exert a strong inhibition on LPL activity [31].
The mechanism of ANGPTL3-mediated inhibition of LPL is still not fully understood. Enzyme kinetic studies in a cell-free system showed that ANGPTL3 binds and reversibly inhibits the catalytic activity of LPL without affecting the LPL self-inactivation rate [32]. In the presence of cells, ANGPTL3 binds LPL attached to the cell surface, promotes the dissociation of full-length LPL from the cells and induces the cleavage of the enzyme by proteases (PACE4 and Furin). This translates into the irreversible inactivation of the enzyme [33].
In addition, ANGPTL3 has an inhibitory effect on the activity of endothelial lipase (EL), an extracellular lipase which carries mainly a phospholipase activity acting predominantly on HDL and increases the catabolism of HDL particles. Angptl3 deficient mice show low plasma levels of HDL-C, accompanied by increased phospholipase activity [34].
Treatment of mice with a monoclonal antibody against ANGPTL3 significantly reduced plasma HDL-C levels in wild-type but not in endothelial lipase deficient mice [35] (Fig. 1).
ANGPTL3 and metabolic conditions associated with hypertriglyceridemia
Angptl3 gene expression and ANGPTL3 plasma levels may show marked changes in some metabolic disorders in humans, or after experimental manipulations in rodents, which are characterized by marked elevation of the level of plasma triglycerides. These disorders include conditions such as diabetes, obesity, hypothyroidism and leptin deficiency, as reported below.
ANGPTL3 and diabetes
In vitro and in vivo studies have indicated that insulin acts as a negative regulator of ANGPTL3 production. In rat and human hepatoma cells, the amount of Angptl3 mRNA and secreted ANGPTL3 protein decreased in a dose-dependent fashion in the presence of insulin. In striking contrast, Angptl3 gene expression and plasma protein level were increased in insulin-deficient streptozotocin-treated mice [36]. Inukai et al. confirmed that the level of Angptl3 mRNA was increased in the liver of streptozotocin diabetic mice and that this effect was reversed by administration of insulin [37]. In addition, the level of Angptl3 mRNA and protein was increased more than 3-fold in type 2 diabetic obese mice (db/db mice) [37].
Haridas et al. investigated the in vivo effect of insulin on circulating ANGPTL3 in humans [38]. They found that 6-hour in vivo euglycemic hyperinsulinemia decreased plasma ANGPTL3 (and ANGPTL8) at 3 and 6 hours. They also found that in immortalized human hepatocytes, insulin decreased Angptl3 gene expression and ANGPTL3 secretion into the medium [38]. Therefore, also in humans, insulin decreases plasma ANGPTL3 by decreasing Angptl3 expression in the liver. These combined results indicate that the expression of ANGPTL3 is increased in both insulin-deficient and insulin-resistant diabetic states, suggesting that increased plasma ANGPTL3 contributes to diabetic hypertriglyceridemia.
ANGPTL3 and leptin deficiency
The role of leptin on the expression and plasma level of Angptl3 has emerged from early studies in mice. Shimamura et al. found that: (1) Angptl3 mRNA expression and plasma Angptl3 levels were increased in both leptin-resistant C57Bl/J6 db/db mice and in leptin-deficient C57Bl/J6 ob/ob mice; (2) elevation of ANGPTL3 in plasma was associated with elevation of plasma TG; (3) leptin administration to leptin-deficient ob/ob mice reversed Angptl3 expression and ANGPTL3 plasma levels and induced a normalization of plasma TG [36].
Fig. 1 Effects of ANGPTL3 deficiency. ANGPTL3 deficiency increases the activity of lipoprotein lipase (LPL) and endothelial lipase (EL). The activation of LPL results in an enhanced catabolism of TG-rich lipoproteins, which translates into a reduction of plasma triglyceride. The activation of EL results in an increased hydrolysis of HDL phospholipids and a decreased plasma level of HDL-cholesterol (HDL-C). In mouse models and in human hepatocytes, ANGPTL3 deficiency reduces ApoB-100 secretion and increases LDL/VLDL uptake by the liver, thus contributing to the low LDL-C levels. ANGPTL3 deficiency in humans reduces the VLDL-apoB production rate and increases the LDL-apoB fractional catabolic rate. In adipose tissue, ANGPTL3 deficiency suppresses lipolysis, resulting in a decreased release of free fatty acids (FFA) into the circulation.
The role of leptin on ANGPTL3 expression and plasma level in humans was recently documented in a study [39] conducted in patients with generalized lipodystrophy, a leptin-deficient condition associated with hypertriglyceridemia [40]. These patients were found to have increased plasma levels of ANGPTL3 (but not of ANGPTL4), suggesting the possibility that hypertriglyceridemia might be the result of ANGPTL3-mediated inhibition of LPL [39]. After metreleptin treatment, the plasma level of leptin increased, whereas the plasma levels of ANGPTL3 and TG showed a significant decrease [39]. Thus, the finding of elevated ANGPTL3 levels in patients with lipodystrophy and their reduction following leptin treatment is consistent with the results obtained in leptin-deficient mice and suggests that ANGPTL3 may contribute to hypertriglyceridemia in leptin-deficient states.
ANGPTL3 and hypothyroidism
Studies in humans with hypothyroidism and elevated plasma TG have shown that the administration of T4 induced a reduction of plasma VLDL cholesterol and triglyceride, associated with an increased LPL activity [41]. Consistently, LPL activity has been found to be reduced in hypothyroidism and increased in the hyperthyroid state [41]. Furthermore, the administration of selective agonists of thyroid hormone receptor beta (TRβ) resulted in a selective decrease of VLDL-TG in rodents [42]. The study by Fugier et al. showed that thyroid hormone down-regulates Angptl3 (but has no effect on Angptl4) in hypothyroid rats [43]. Using thyroid hormone receptor deficient mice, these investigators showed that thyroid hormone down-regulates Angptl3 expression in a TRβ-dependent manner [43]. Therefore, decreased Angptl3 expression and reduced ANGPTL3 secretion would result in increased LPL activity and a more rapid removal of plasma TG in hypothyroid patients after thyroid hormone administration.
ANGPTL3 and LXR activation
LXR (Liver X Receptor) is a nuclear receptor that forms a heterodimer with RXR (retinoid X receptor) and activates the transcription of several genes involved in lipid metabolism. The compound T0901317 is a synthetic LXR ligand broadly used in studies of LXR biology. Treatment of rodents with T0901317 causes an accumulation of TG in the liver and a marked increase of plasma TG [44]. While the accumulation of TG in the liver is explained by the increased expression of hepatic fatty acid synthase mediated by sterol regulatory element binding protein 1 (SREBP-1) [44], the mechanism of hypertriglyceridemia has not been fully clarified. The involvement of Angptl3 in hypertriglyceridemia after LXR activation was first suggested by Inaba et al., who found that in human hepatoma cells LXR ligands induced an increased expression of the Angptl3 gene and an increased secretion of ANGPTL3 protein [45]. In addition, T0901317 administration to C57BL/6J mice increased hepatic mRNA expression and plasma concentration of Angptl3, which were associated with a marked increase of plasma TG. By contrast, T0901317 administration to C57BL/6J Angptl3-deficient mice (C57BL/6J Angptl3 hypl, which do not produce Angptl3 mRNA or Angptl3 protein) increased hepatic triglyceride content but failed to increase plasma TG [45]. These results demonstrate that the hypertriglyceridemia induced by LXR activation in mice was accounted for by LXR-mediated induction of Angptl3 expression. Similar conclusions were reached by Kaplan et al. on the basis of in vivo as well as in vitro studies [46].
ANGPTL3 and acute phase reaction
The acute phase response is characterized by elevation of plasma TG due both to hepatic overproduction of VLDL and to a defect in the clearance of TG-rich lipoproteins, secondary to reduction of LPL activity [47]. Treatment of mice with lipopolysaccharide (LPS) (a model of Gram-negative infection and an inducer of the acute phase response) was found to induce a decreased expression of Angptl3 in the liver and an increased expression of Angptl4 in heart muscle and adipose tissue [48]. These results indicate that ANGPTL3 is not a positive acute phase protein and suggest that ANGPTL3 is not responsible for the reduction of LPL activity observed during the acute phase response [48].
Mechanisms underlying the combined hypolipidemia in FHBL2
The increased LPL-mediated hydrolysis of TG in VLDL and chylomicrons explains the low levels of plasma TG in subjects with FHBL2. In addition to the enhanced clearance of TG-rich lipoproteins, the low level of circulating free fatty acids (FFA) present in mice and humans with Angptl3 deficiency reduces the hepatic availability of FFA for the de novo synthesis of TG to be incorporated into VLDL [13].
The catabolism of chylomicrons has been investigated in 7 homozygotes and 31 heterozygotes for the ANGPTL3 nonsense mutation p.(S17*) [49]. These subjects were investigated while fasting and at 6 hours after a fat-rich meal. In homozygotes, the complete Angptl3 deficiency was associated with a highly reduced postprandial hypertriglyceridemia, probably due to an accelerated catabolism of intestinally derived TG-rich lipoproteins (chylomicrons) secondary to the increased LPL activity. Additionally, heterozygotes with partial Angptl3 deficiency displayed an attenuated postprandial lipemia, as compared to controls [49].
The mechanism for the low LDL-C levels in Angptl3 deficiency is a matter of active investigation. In vivo studies of lipoprotein metabolism in carriers of Angptl3 LOF variants showed a reduced VLDL-apoB (the main protein constituent of VLDL) production rate and an increased LDL-apoB fractional catabolic rate, with a gene-dosage effect, suggesting that ANGPTL3 regulates hepatic lipoprotein secretion and clearance [11].
Wang et al. reported that in mice with genetic deficiencies in key proteins involved in lipoprotein clearance (e.g., apoe−/− or ldlr−/− mice), the inactivation of ANGPTL3 with an ANGPTL3 monoclonal antibody reduced the hepatic production of VLDL-TG but not that of VLDL-apoB [50]. The decrease in hepatic TG secretion in Angptl3-deficient mice is caused by a decreased supply of FFA from the circulation into the liver for hepatic de novo synthesis of TG. A shortage of TG is expected to decrease VLDL lipidation [50]. This finding is in keeping with the observation that carriers of Angptl3 LOF variants and Angptl3-deficient mice have low plasma FFA levels due to the absence of ANGPTL3-stimulated lipolysis in adipose tissue [13].
Alternatively, the low LDL plasma levels might be the result of receptor-mediated catabolism of LDL. However, the observation that Angptl3-deficient mice lacking functional proteins involved in LDL plasma clearance (e.g., ldlr−/− or apoe−/− mice) had a reduction of plasma LDL-C similar to that of wild-type Angptl3-deficient mice indicated that the reduction of LDL-C in ANGPTL3 deficiency is independent of the canonical receptor-mediated clearance pathways [50][51]. It was suggested that Angptl3 inactivation in mice increases the clearance of VLDL remnants (which are largely converted to LDL in the circulation), leading eventually to a reduced plasma level of LDL.
Recently, Xu et al. performed RNAi-mediated Angptl3 gene silencing in five mouse models and in human hepatoma cells and validated the results by deleting the Angptl3 gene in vitro using CRISPR/Cas9 genome editing [52]. They found that hepatic Angptl3 silencing in multiple mouse models is sufficient to reduce plasma LDL-C levels. On the other hand, in human hepatoma cells, Angptl3 silencing and deletion reduced apoB secretion and increased LDL/VLDL uptake [52]. These results are consistent with the in vivo turnover study conducted by Musunuru et al. in subjects with familial combined hypolipidemia, as mentioned above [11].
The low levels of plasma HDL-C present in FHBL2 subjects may be the result of the increased activity of endothelial lipase (EL), as found in mice [34]. The reduction of plasma HDL-C was found to be associated with qualitative changes of HDL, which also have a reduced size [14,19]. In addition, it was reported that the function of HDL in FHBL2 is impaired, as the plasma of subjects with complete deficiency of ANGPTL3 showed a reduced cell cholesterol efflux capacity through various efflux pathways [14].
Plasma levels of ANGPTL3
In vivo, ANGPTL3 is cleaved by proprotein convertases to yield the biologically active N-terminal fragment and an inactive C-terminal fragment. Thus, the forms of ANGPTL3 present in plasma are the full-length protein as well as the N-terminal and C-terminal fragments. To measure the plasma level of ANGPTL3, two quantitative sandwich ELISA assays have been developed. The first method used a commercially available rabbit polyclonal anti-human ANGPTL3 antibody and a biotin-labelled anti-human ANGPTL3 rabbit polyclonal antibody as capture and detection antibodies, respectively (BioVendor assay). This method is likely to detect the full-length ANGPTL3 protein as well as the cleaved forms of the protein, depending on the epitope localization [53]. The second method employed a rabbit polyclonal antibody raised against the N-terminal recombinant human ANGPTL3 as capture antibody and a biotinylated sheep IgG raised against human full-length recombinant ANGPTL3 as detection antibody. This method should detect the full-length protein but does not distinguish the two cleaved forms [54].
The concentration of ANGPTL3 in the plasma of healthy subjects is highly variable, probably depending on the type of antibodies used and the different populations investigated. The mean plasma level of ANGPTL3 was found to be (470 ± 122) ng/mL [35] and (764 ± 291) ng/mL [55] in the Japanese population, while in the Finnish population the mean concentration was found to be (368 ± 168) ng/mL [54]. The plasma level of ANGPTL3 has shown a positive correlation with plasma HDL-C and LDL-C [54][55][56]. The correlation between plasma levels of ANGPTL3 and TG is still controversial; in one study a negative correlation was found [54], whereas in another study such a correlation was not observed [56]. In subjects homozygous or compound heterozygous for LOF mutations in ANGPTL3, plasma ANGPTL3 was undetectable, while in simple heterozygotes its level was reduced to 50%-60% of the values found in the control subjects [13,49,57].
ANGPTL3 as therapeutic target
The marked reduction of the level of atherogenic lipoproteins (VLDL and LDL) observed in FHBL2 subjects suggested that the inactivation of ANGPTL3 may be used as a therapeutic strategy in the management of dyslipidemic conditions. Two different approaches have been used to inactivate ANGPTL3. The first is based on the administration of a human monoclonal antibody against ANGPTL3. When administered to monkeys and dyslipidemic mice, this antibody induced a marked reduction of plasma TG, LDL-C and HDL-C [35]. The treatment of healthy human volunteers with dyslipidemia with the ANGPTL3-blocking antibody (evinacumab) was found to reduce TG and LDL-C by 76% and 23%, respectively. In dyslipidemic mice, evinacumab was found to reduce total cholesterol and TG levels by 50% and 85%, respectively, as well as the size of atherosclerotic plaques and their necrotic content, providing a proof of principle that the combined hypolipidemia associated with therapeutic inhibition of ANGPTL3 is anti-atherogenic [25]. Evinacumab was also administered to nine patients with homozygous familial hypercholesterolemia due to complete deficiency of the LDLR. Four-week treatment reduced LDL-C, TG and HDL-C by approximately 50%, 47% and 36%, respectively, in addition to the reductions from baseline levels already achieved with aggressive lipid-lowering therapy [58].
The second approach is based on the administration of antisense oligonucleotides (ASOs) targeting hepatic Angptl3 mRNA, which is expected to markedly reduce the level of ANGPTL3 protein. ASO treatment in mice reduced the levels of TG and LDL-C as well as the liver TG content and retarded the progression of atherosclerosis. In a Phase 1 trial in humans, treatment for 6 weeks with multiple doses reduced the plasma concentrations of TG by a maximum of 63% and LDL-C by a maximum of 32.9%, with no significant changes in HDL-C and without important side effects [59].
Conclusions
The discovery of ANGPTL3 deficiency in mice and humans has stimulated a series of studies which have clarified the role of ANGPTL3 in lipoprotein metabolism. These investigations have suggested that ANGPTL3 may be a novel therapeutic target in the management of dyslipidemias in humans. The results of recent intervention trials aimed at inhibiting ANGPTL3 appear to support this hypothesis.
Biological and social aspects of Coronavirus Disease 2019 ( COVID-19 ) related to oral health
The expansion of coronavirus disease 2019 (COVID-19) throughout the world has alarmed all health professionals. In dentistry especially, there is growing concern due to the virus's high virulence and its routes of transmission through saliva aerosols. The virus remains viable in the air for at least 3 hours and on plastic and stainless-steel surfaces for up to 72 hours. In this sense, dental offices, both in the public and private sectors, are high-risk settings for cross infection among patients, dentists and health professionals in the clinical environment (including hospitals' intensive dental care facilities). This manuscript aims to compile the current available evidence on prevention strategies for dental professionals. Besides, we briefly describe promising treatment strategies recognized up to this moment. The purpose is to inform dental practitioners about the virus's history and microbiology, and to provide guidance on how to proceed during emergency consultations based on international documents. Dentists should consider that a substantial number of individuals (including children) who do not show any signs and symptoms of COVID-19 may be infected and can disseminate the virus. Currently, there is no effective treatment, and fast diagnosis is still a challenge. All elective dental treatments and non-essential procedures should be postponed, keeping only urgent and emergency visits to the dental office. The use of teledentistry (phone calls, text messages) is a very promising tool to keep contact with the patient without the risk of infection.
Introduction
Coronavirus is a family of viruses that causes respiratory infections, including the new coronavirus (SARS-CoV-2) discovered in December 2019 in China. Coronaviruses are enveloped, positive-stranded RNA viruses comprising four genera: Alpha-, Beta-, Gamma-, and Deltacoronavirus. 1 Six different coronaviruses have been identified in humans: HCoV-OC43, HCoV-229E, HCoV-NL63, HKU1, the Middle East respiratory syndrome (MERS)-CoV and (SARS)-CoV. 2 Although the latter virus became widely discussed recently, the first human coronaviruses were isolated in 1937. 3 The denomination coronavirus was due to its microscopic aspect
resembling crown-like spikes on its surface and the main host receptor for humans seems to be the angiotensin-converting enzyme 2 (ACE2). 4 This recent COVID-19 turned into a global public health outbreak. 5,6 It is transmitted after contact with infected surfaces and with infected patient's fluids, including saliva and aerosol. 6,7 These characteristics place the dental offices as main risk settings of cross infection among patients, dentists and health professionals in the clinical environment, including hospital's dental intensive care facilities. 8 Dental practitioners are exposed to close contact to patients, to saliva aerosol, blood and handle sharp contaminated instruments. 9 After the World Health Organization (WHO) pandemic declaration, institutions like the General Coordination of Oral Health from the Brazilian Health Ministry published a Technical Note with the main clarifications regarding dental practice considering the Coronavirus pandemic. 10 Centers for Disease Control and Prevention (CDC) and American Dental Association (ADA) are recommending dentists to postpone elective procedures and concentrating on emergency or urgent dental care in order to reduce COVID-19 infection, 11,12 similar to what several cities in China have done. 13 As health professionals, it is extremely relevant that dentists be aware of the biological and social characteristics involved in COVID-19 pandemic, contributing to the clarification of the population and adopting finest clinical measures to avoid unnecessary risks to contain the perioperative transmission. 8 Based on the current available evidence related to oral health care, the aim of the present critical appraisal is to compile prevention strategies for dental professionals and clarify dental practitioners about the virus history, pathogenesis, current pharmacological clinical trials, and measures to minimize economic and health consequences to the oral health system.
Microbiological aspects
This new health problem emerged from a public market in Wuhan, China, in which animals are kept and traded alive. It became the focus of global attention after the spread of an epidemic pneumonia of unknown cause. At first, these cases of pneumonia were monitored and tested in the laboratory for coronavirus and possible influenza infections. On January 7, 2020, Chinese authorities announced that a new type of Coronavirus was isolated: the new Coronavirus, nCoV. 14 This new viral agent, which until that moment had not been identified in humans, was called SARS-CoV-2 and causes the respiratory infectious disease that is called COVID-19. Previous occurrences of coronaviruses such as the Severe Acute Respiratory Syndrome (SARS) (SARS-CoV) and Middle East Respiratory Syndrome (MERS) (MERS-CoV) left 774 and 850 dead, respectively, reflecting the severity of the threat and the urgency to control this new outbreak as soon as possible. 15 The genomic sequence of the new viral Coronavirus was immediately defined by public health support and the online community resource "virological.org" on January 10th (Wuhan-Hu-1, GenBank accession number MN908947), 16 followed by four other genomes deposited on January 12th in the database of viral genomic sequences maintained by the Global Initiative on Sharing All Influenza Data (GISAID). 17 The clinical signs and symptoms at the beginning suggested the presence of a virus closely related to the SARS outbreak in 2002/2003. This species also comprises a large number of viruses detected in rhinolophid bats in Asia and Europe. 17 After sequencing, the SARS-CoV-2 genome was found to be 96.2% identical to the bat RaTG13 CoV, while sharing 79.5% identity with SARS-CoV. In this way, the similarity between the genomes of the viruses suggests that the bat is the natural host of the virus and that SARS-CoV-2 may have been transmitted to humans, in an unknown way, through intermediate hosts. Several studies suggest that the bat is the potential reservoir of SARS-CoV-2. However, there is evidence that the origin of SARS-CoV-2 was the seafood market in Wuhan, China. 18 Among coronaviruses (CoV), α- and β-CoV are capable of infecting mammals, while γ- and δ-CoV tend to infect birds. Although most of the six CoVs identified as human-susceptible viruses present low pathogenicity, causing mild respiratory symptoms similar to a common cold, SARS-CoV and MERS-CoV may lead to severe and potentially fatal respiratory tract infections. 18,19 Viruses are complex pathogens with a high capacity to infect multiple host species, causing a variety of diseases with numerous symptoms. CoVs are pleomorphic RNA viruses (subgenus Sarbecovirus, subfamily Orthocoronavirinae) characterized by a high speed of gene recombination due to constant errors in their RNA-polymerase-dependent replication process (RdRP). 18,20 The main steps involved in the replication cycle of SARS-CoV-2 are: recognition of and binding to the host cell via a membrane fusion or endocytosis mechanism; release of the viral genome after invasion; translation of the viral polymerase protein; RNA replication; subgenomic transcription; translation of viral structural proteins; combination of viral structural proteins with the nucleocapsid; formation of mature virions; and finally the release of mature virions by exocytosis. At the end of the cycle, newly matured virions are released and may infect new targets, and the cycle repeats itself continuously. 15
During their replication cycle, two-thirds of the viral RNA encode 16 non-structural proteins (NSPs). The other one-third of the virus genome encodes four essential structural proteins, the spike glycoprotein (S), small envelope protein (E), matrix protein (M) and nucleocapsid protein (N), as well as other accessory proteins. 18,21 Host factors can also influence susceptibility to infection and disease progression. Research shows that SARS-CoV-2 uses angiotensin-converting enzyme 2 (ACE2). The S-glycoprotein located on the surface of the coronavirus can bind to the ACE2 receptor on the surface of human cells. After binding to the host cell membrane, the RNA of the viral genome is released into the cytoplasm and translates two polyproteins, pp1a and pp1ab, that encode non-structural proteins and form the replication and transcription complex (RTC), and the replication cycle continues as stated above. 18 Host antiviral defense plays an important role in the course of SARS-CoV-2 infection. As the first line of defense against viruses, type I interferon (IFN) plays a critical role in initiating host antiviral responses. Following virus infection, the host innate immune system is activated by the recognition of virus-specific components such as ssRNA, dsRNA or glycoproteins. 22 The Toll-like and RIG-I-like receptors are the most common host pattern recognition receptors (PRRs) that respond to RNA viruses. 23 These receptors then initiate an antiviral signaling cascade leading to the phosphorylation and activation of IRF3 and NF-κB and to the production of type I IFN. IFN-β secretion induces IFN-stimulated genes, which induce the expression of host antiviral effector factors. 24 Viruses have developed the capacity to escape host immune detection and to suppress the host IFN system. 25 Viruses encode viral proteins that interfere with PRR signaling pathways to gain an early advantage against host defense. For example, the SARS-CoV N protein inhibits RIG-I ubiquitination and thus suppresses the release of type I IFN; 26 the SARS-CoV M protein prevents the formation of the TRAF3/TBK1 complex and inhibits TBK1/IKKε-dependent activation of the IRF3/IRF7 transcription factors. 27 Lastly, the repressive modifications induced by the nonstructural SARS-CoV nsp1 protein block host mRNA translation 28 and mediate host mRNA degradation. 29 Human-to-human transmission of SARS-CoV-2 occurs primarily between family members, including relatives and friends who have more intimate contact with infected or asymptomatic patients or carriers. As an emerging acute respiratory infectious disease, COVID-19 spreads mainly through the respiratory tract via droplets, respiratory secretions and direct contact, even at a low infectious dose. Likewise, the presence of SARS-CoV-2 in swabs from fecal and blood samples has been identified, indicating the possibility of multiple routes of infection. 18 Based on the current epidemiological investigation, the incubation period is from 1 to 14 days, mostly from 3 to 7 days, and the virus is contagious during its latency period. It is highly transmissible in humans, especially in the elderly and people with underlying diseases. Patients with COVID-19 have symptoms such as fever, malaise and cough. Most adults or children infected with SARS-CoV-2 have mild flu-like symptoms. However, a few patients progress to a critical condition and rapidly develop acute respiratory distress syndrome, respiratory failure, multiple organ failure and even die. 18,30
There are still many gaps in knowledge about the epidemiology and clinical overview of COVID-19, including the exact incubation period, the possibility of transmission from asymptomatic carriers and the rate of transmissibility. However, human-to-human transmission has been rapidly proven and remains responsible for the continued spread of the disease.
Reliable laboratory diagnosis is among the priorities to facilitate public health interventions. In acute respiratory infections, RT-PCR is routinely used to detect causative viruses in respiratory secretions. During international health emergencies, the feasibility of detecting the virus by real-time RT-PCR has been demonstrated through coordination between public laboratories and universities. 17
SARS-CoV-2 Drug Therapy
Drugs tested effective against SARS-CoV and/or MERS have been included in the WHO mega clinical trial SOLIDARITY. 31 For this study, WHO chose the nucleotide analogue remdesivir; the malaria medication chloroquine (and its analog hydroxychloroquine); a combination of the anti-HIV drugs lopinavir and ritonavir; and that combination plus interferon-β.
Remdesivir is an antiviral prodrug of remdesivir triphosphate, with in vitro activity against coronaviruses. 32,33 Remdesivir-TP acts as an inhibitor of RNA-dependent RNA polymerases and competes with adenosine-TP for incorporation into emerging viral RNA chains. 34 Hydroxychloroquine and chloroquine have in vitro activity against SARS-CoV-2 32,35-37 and their mechanism of action includes inhibition of viral enzymes (RNA polymerase), viral protein glycosylation, virus assembly, new virus particle transport, and virus release. Other mechanisms may also involve ACE2 receptor inhibition, decreased acidity in endosomes, and immunomodulation of cytokine release. 5,32,36 The third arm of SOLIDARITY combines two HIV protease inhibitor drugs, lopinavir-ritonavir. The combination has shown potential activity in vitro and in vivo against SARS-CoV and MERS-CoV, 38,39 and its mechanism of action involves the inhibition of Mpro, an essential enzyme for coronavirus replication. 40 A recent report published in The New England Journal of Medicine 41 was not encouraging, and the combination of lopinavir-ritonavir did not differ significantly from the "standard care" group.
The fourth arm of SOLIDARITY combines lopinavir-ritonavir with interferon-β. The activation of the innate antiviral response by interferon should have beneficial effects, at least in the initial stage of infection. However, caution should still be exercised, and the possibility that interferon might exacerbate inflammation during the late phase of SARS-CoV-2 infection cannot be excluded. 42 Lastly, clinical trials are being conducted to evaluate the use of SARS-CoV-2 convalescent plasma from persons who have recovered from COVID-19, which potentially contains antibodies, to treat patients with life-threatening viral infections. 43 A group led by Lei Liu 44 gave convalescent plasma (total dose: 400 mL, with a SARS-CoV-2-specific IgG antibody titer greater than 1:1,000) to five critically ill patients, and the symptoms diminished in all of them within ten days. Even though these cases reported by Shen et al. 44 are compelling, this investigation has some limitations. The intervention was not evaluated in a randomized clinical trial, and the outcomes in the treatment group were not compared with outcomes in a control group of patients who did not receive the intervention. Moreover, patients received numerous other therapies (antivirals and steroids), and the convalescent plasma was administered up to 21 days, so it is not clear whether this timing is optimal or whether earlier administration would have been associated with different outcomes. Despite these limitations, the study does provide important evidence to support the possibility of evaluating this therapy in more rigorous studies.
Dental practice in the Covid-19 scenario
Risk scenario
Dentists are among the professionals with the greatest exposure to COVID-19. The oral cavity and the work environment represent a high-potential source of transmissibility and susceptibility to this and other etiological agents. 7,45,46 The context of undocumented infections is significant, which facilitates the rapid spread of SARS-CoV-2. A substantial number of individuals do not show any signs and symptoms or have only mild symptoms. These individuals serve as the primary source for the majority of reported cases and, therefore, for health teams that can become multipliers. 47,48 The rapid identification of COVID-19 cases is crucial for the containment of the pandemic. However, it is still challenging due to the lack of pathognomonic symptoms, coupled with the limited capacity to perform specialized polymerase chain reaction (PCR) tests, 49 which also have limitations. The development of fast, accurate molecular diagnostics is mandatory to identify the large number of infected patients and asymptomatic carriers, in order to prevent the transmission of the virus and ensure proper conduct. 50,51 Rapid tests could facilitate elective care in the future, since the risk of contamination by SARS-CoV-2 would be ruled out. However, the dentist can never neglect the existence of other diseases transmitted by saliva and aerosols, such as hepatitis B, measles and tuberculosis. 52,53,54 Dentists should make great efforts regarding preventive care and testing, as these can seriously affect the flattening of the epidemic curve, avoiding the collapse of the health system. Several modeling studies and scenario comparisons, both related to the current pandemic situation and to those already experienced, especially in China and Italy, have shown that combined interventions must be implemented, both for the population and for health professionals. General measures for all health professionals, including dentists, comprise daily monitoring of temperature and testing of the health care provider team; use of N95 masks; distance from the workplace (when possible) with the implementation of network communication technologies with patients; social distancing; mobility restriction measures; avoiding crowded places; diagnostic testing; and isolation of infected individuals as well as their families. [55][56][57][58][59][60] Especially for dentists, it is necessary to follow guidance protocols and new tools/technologies for dental practice aimed at safeguarding oral health professionals, as well as the population under their care. 59,61
Dental treatment during the Covid-19 Pandemic
Due to the nature of dental treatment, several procedures, such as the use of a high-speed handpiece or ultrasonic scalers, generate aerosols (very small particles or droplets) that can be inhaled, absorbed by the skin or settle on nearby surfaces. 62 According to the latest Scientific Brief published by the World Health Organization, 63 the transmission of SARS-CoV-2 can occur via respiratory droplets from direct contact with an infected person (distance less than 1 m), via indirect contact with contaminated surfaces or objects, and via aerosols produced during procedures performed on infected patients. Based on that, dental and health organizations have issued recommendations to postpone all elective dental treatments and non-essential procedures and to limit services only to urgent and emergency visits. 10,11,12
Dental health care personnel (DHCP) should be aware of the mechanisms of transmission and the expanded infection control procedures, be able to identify patients with signs/symptoms of COVID-19, and have a clear understanding of what characterizes a dental emergency, urgent dental care and non-emergency dental treatment.
During the COVID-19 pandemic, DHCP should use telecommunication or teledentistry prior to the dental treatment to evaluate the needs of the patient and to minimize the risk of infection, asking whether the patient has fever, cough or shortness of breath (ADA) 64 and whether the patient has traveled nationally or internationally (CDC). 65,66 When possible, the dentist should offer advice, prescribe medication for analgesia and/or antimicrobials (when appropriate) and postpone the patient's visit to the office, while keeping direct contact with the patient by phone or text message. 67 If a patient presents a dental emergency (potentially life-threatening), such as uncontrolled bleeding, or an urgent dental need that requires relief of severe pain and/or carries a risk of infection, 68 and presents signs/symptoms of respiratory infection, this patient should not be seen in a dental office and should be referred to an emergency care facility where Transmission-Based Precautions (for example, N95 masks and an Airborne Infection Isolation Room) are available (ADA). 64 In the United Kingdom, the National Health Service (NHS) is working with dental practices and community dental services to establish a Local Dental Urgent Care System in every region. These dental offices will accommodate visits of all types of patients, including those with suspected or confirmed COVID-19, patients who are shielded or vulnerable, and patients without any of those specific conditions. In those places, dental public health practitioners will be available and will have access to FFP3 respirators to perform the treatment. 67 In most countries, cases of dental emergency or urgent dental care in patients without any signs and symptoms of COVID-19 can be treated at the dental office. However, since there is a large number of asymptomatic cases of COVID-19, 47 the dentist should take extra precautions when seeing the patient and not assume he/she is COVID-19 free. Besides the asymptomatic patients, dental practitioners should be aware that children represent a significant transmission risk for the virus, since they present milder symptoms than adults. 69 It is important to maintain patient isolation (have only one patient in the waiting room) and adhere to the infection control protocol: the standard procedure of putting on and removing all Personal Protective Equipment (PPE), including gown, goggles, N95 mask with full face shield, and gloves. 64 Before every treatment, the patient should use a mouth rinse with 1% or 1.5% hydrogen peroxide or 0.2% povidone-iodine 9,64 and should wear goggles and a bib during the whole procedure. To minimize aerosol production, dentists should use hand instrumentation, a high-volume saliva ejector and a dental dam during the treatment, and refrain from using the 3-in-1 syringe. 61 Intraoral radiographs should be avoided since they can induce coughing; the office space should be limited to the patient, the operator and the dental assistant. After the treatment, the DHCP should wear appropriate PPE to proceed with the cleaning and disinfection of the room and equipment using the recommended disinfecting products. 64 In addition, dentists should reconsider the use of sedation (inhalation and pharmacological) to manage severe anxiety or phobia in the dental setting and focus on non-pharmacological techniques, to minimize the potential risk of needing life support measures that involve the manipulation of airways and aerosolization (inhalation sedation). 8
In the specific situation where the patient has an unavoidable emergency and no signs and symptoms of COVID-19 and the dentist does not have an N95 mask or a higher-level respirator, he/she must wear a single-use surgical mask, goggles and a face shield to treat the patient, but be aware that the risk of contamination will be moderate. 64 There is a limitation in following this procedure, since there is ongoing community spread of COVID-19 with asymptomatic cases in the population. Current research shows that the prognosis of patients with COVID-19 is worse for those older than 60 years of age or presenting underlying diseases (diabetes, hypertension or cardiovascular disease, for example). 70 In this sense, members of the health team must use clinical judgment and take all precautions to prevent transmission.
In this unprecedented situation, it is advisable to seek out and apply the most recent protocols and guidance from the local dental organizations in your country that are based on the current literature, and to be aware that the COVID-19 pandemic brings challenges to dental health care providers not only in their practices but in their financial situation as well. A general flowchart (Figure) was constructed based on the ADA's Interim guidance on minimizing COVID-19 transmission risk when treating dental emergencies. 64 As also stated in the ADA's document, 64 Figure 1 does not constitute legal advice or legal guidance. It is intended only to support clinicians' own judgment about the risks of infection while working in dental offices.
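The branching logic of the flowchart can be summarized programmatically. The short Python sketch below is a hypothetical illustration of the triage steps described above (emergency status, respiratory symptoms, N95 availability); the function name and the message strings are illustrative assumptions, not part of the ADA guidance, and this is not a clinical decision tool.

# Hypothetical sketch of the triage logic described above; not a clinical tool.
def triage(dental_emergency: bool, covid_symptoms: bool, n95_available: bool) -> str:
    if not dental_emergency:
        # Postpone elective care; manage remotely (advice, analgesia, antimicrobials).
        return "postpone visit; follow up by phone or teledentistry"
    if covid_symptoms:
        # Emergencies in symptomatic patients require Transmission-Based Precautions.
        return "refer to an emergency care facility with airborne-precaution capability"
    if n95_available:
        return "treat in office with full PPE (N95, gown, goggles, face shield); low risk"
    # Surgical-mask fallback carries moderate risk per the guidance above.
    return "treat only if unavoidable, with single-use surgical mask plus face shield; moderate risk"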
Perspectives
Health professionals are facing new challenges in providing care to their patients. Remote care via chat, video calls, telemedicine, teledentistry and other technologies has given rise to a new look at the professional-patient relationship, opening doors to an untapped universe, since most dentists do not use these tools as part of their daily work. 71 It is estimated that by 2025 over 60% of the population will be using mobile internet. 72 Therefore, mobile technologies, including phones, are great allies of community health, even in low- and middle-income populations. [73][74][75] Individuals who still do not have access to mobile services would also benefit, owing to shorter waiting lines for local health assistance at the nearest Primary Health Units.
In private offices, the limitation of dental and medical activities to only urgent and emergency procedures has a strong impact on the economy of these sectors. 76 This economic crisis has raised reflections and concerns that go beyond clinical safety and social distancing, and has highlighted the importance of social security and financial education. Such factors must also be taken into account by the entities that guide dental practice, in order to generate discussions that support dentists on those occasions when they have to keep their distance from routine clinical tasks during COVID-19.
The dental profession, which consists in its vast majority of self-employed professionals, should revisit the issues of financial education, frequently so distant from the contents of the academic curriculum. There is an evident scarcity of articles related to financial education for dental offices. An emergency financial reserve, funds in which to deposit this reserve, long-term investments, and public or private pensions should be part of the incisive recommendations to this group. Other professional classes are raising these issues at this urgent moment, to guarantee social security for all and to go beyond the packages proposed by governments. 77 Such strategies must be sustainable and long-term, with a view to protecting the self-employed and avoiding an unprecedented economic crisis.
Conclusions
This recent COVID-19 outbreak turned into a global public health emergency. The virus is transmitted after contact with infected surfaces and with infected patients' fluids, including saliva and aerosol. A substantial number of individuals do not show any signs and symptoms and may disseminate the virus. These characteristics make dental offices major risk settings for cross-infection between patients and dentists. Currently there is no effective treatment, and fast diagnosis is still a challenge. All elective dental treatments and non-essential procedures should be postponed, keeping only urgent and emergency visits to the dental office. Unexpected situations like this pandemic bring financial issues to the dental team; accordingly, financial education becomes a very important subject to be discussed during professional school. The use of telecommunication (phone calls, text messages) and teledentistry are very promising tools to keep contact with patients without putting them at high risk of infection.

Figure legend: *Use of an N95 mask denotes low risk of infection. # If no N95 is available, refer the patient to a facility that has N95 masks. If that is not feasible, use your clinical judgment and infection control precautions: the dentist and staff must wear surgical facemasks, goggles and full-face shields, with the other basic clinical PPE, and follow disinfection procedures immediately after every procedure. Use of surgical facemasks denotes moderate risk of infection. DHCP should quarantine for 14 days and communicate with patients who were seen after that day. Procedures involving aerosol should be scheduled for the last appointment of the day. Dentists should avoid intra-oral radiographs, prefer hand instrumentation, and use a high-volume saliva ejector and dental dam during treatment. This figure was constructed using free images obtained at: https://smart.servier.com/ and https://www.freepik.com/.
Superconducting Symmetry Phases and Dominant Bands in (Ca-)Intercalated AA-Bilayer Graphene
Built on a realistic multiband tight-binding model, mirror symmetry is used to map the calcium-intercalated bilayer graphene Hamiltonian into two independent single layer graphene-like Hamiltonians with renormalized hopping. The quasiparticles exhibit two types of chirality. Here a quasi-particle consists of two electrons from opposing layers, which possess an additional quantum number called the "cone index" that can be regarded as the eigenvalue of the mirror symmetry operation. To obtain the tight-binding parameters, both effective monolayer Schrödinger equations are solved analytically and fitted to first-principles band structure results. Two quasi-particles (four electrons) can team up to build a Cooper pair with even or odd chirality. Treatment of the pairing Hamiltonian leads to two decoupled gap equations; the pairing of quasi-particles with different cone indices is forbidden. The decoupled gap equations are solved analytically to obtain all the possible superconducting phases. Two nearly "flat bands" crossing the Fermi energy, each related to one of the graphene-like structures, are responsible for the two distinct superconductivity gaps that emerge. Depending on how much these bands are affected by the intercalant, and on which one is closer to the Fermi energy, distorted s-wave or d-wave superconductivity may become dominant. Numerical calculations reveal that d-wave superconductivity is dominant in both sectors. For these two dominant phases, within the 0-6 K range in which superconductivity has been observed, the numerical results show that a transition from single-gap to dual-gap superconductivity is possible. Adopting the two-gap viewpoint of superconductivity in C$_6$CaC$_6$, the dominant $d$-wave states should have the same critical temperature; around $T_c = 2$ K the two relations intersect, otherwise superconductivity is realized in only one of the two sectors and disappears in the other.
Advances in fabrication methods have drawn theoretical attention to the electronic physics of these systems. Due to its strong two-dimensionality, bilayer graphene (BlG) has provided an attractive platform for studying 2D electron correlation effects. 14 Because of the weak interlayer van der Waals forces, the layers of the graphene bilayer can rotate relative to each other and form different stacking orders, i.e. AA, AB (Bernal phase), or even the "twisted bilayer" form.
Following the indication of a Mott insulating phase in twisted bilayer graphene 18 and the observation of superconductivity upon doping, 19 these types of systems have revived interest and may realize a new class of superconductor. It seems that bilayer graphene can host superconductivity, magnetism, and other unusual phases. In addition to the relativistic character of quasiparticles in single layer graphene, interlayer coupling causes bilayer graphene to exhibit behaviors that are not observed in the single layer. The most fascinating behaviors of TBlG arise from interlayer hybridization of nearby Dirac K points of opposite layers via interlayer tunneling in a spatially periodic way. 20 Interlayer coupling causes quasiparticles in AB-stacked bilayer graphene to behave as massive chiral quasiparticles with parabolic dispersion near the Dirac Fermi points. In this manuscript we will see that quasi-particles in AA-stacked BlG exhibit further aspects of such behaviors.
Beside TBlG, when electron-electron correlation effects are taken into consideration in AA- and AB-stacking structures, theory suggests a variety of instabilities with potential technological applications, including unusual quantum Hall effects, antiferromagnetic phases, and tunable band gap opening at the charge neutrality points (for a review see [14]), some of which have been confirmed experimentally. 26 Although it is difficult to distinguish AA-stacked BlG from single layer graphene, some authors have claimed to fabricate AA-BlG experimentally. However, it has received much less attention than the more stable AB-stacked phase. Unlike the Fermi points in monolayer graphene and the AB-stacked phase, the well nested Fermi surface of pristine AA-BlG consists of small electron and hole pockets of equal area. This feature has drastic consequences, tending the system toward electronic instabilities such as antiferromagnetism at zero doping and bilayer exciton condensation when doped. 14, [30][31][32][33] Superconducting instabilities in doped or gated AB-stacking phases have been predicted. An effective two-band Hamiltonian with attractive interactions was used in Ref. 27 to investigate the possibility of a time-reversal-breaking d + id phase in moderately doped AB-stacked BlG. Using a weak-coupling renormalization group formalism, the possibility of unconventional superconducting orders arising from repulsive interaction on the doped AB-stacked honeycomb bilayer, in d-wave, f-wave, and pair-density-wave channels, was discussed by James et al. 28 Spin-triplet s-wave pairing could also arise. 29 Superconductivity reported in Ca-intercalated bilayer graphene represents the thinnest limit of graphite intercalation compounds (GICs), 22 at 4 K 24 and around 6.4 K in Ca-doped graphene laminates. 25 Even so, the superconducting phase of AA-graphene has rarely been addressed in the literature.
Based on an effective Hamiltonian with an attractive potential between inter- and intra-layer near-neighbor sublattices, Alidoust et al. 34 studied the phonon-mediated superconducting pairing symmetries that may arise in AA-, AB- (and AC-) stacking bilayer graphene at the charge neutrality point and beyond (by varying the chemical potential). They claimed that at finite doping, AB stacking can develop singlet and triplet d-wave symmetry beside the s-wave, p-wave and f-wave that can be achieved at the charge neutrality point, while the AA-stacked phase, similar to the undoped case, is unable to accept d-wave pairing.
Motivated by the experimental observation of superconductivity in Ca-doped bilayer graphene, a more realistic model is introduced here to obtain analytically all the possible superconducting symmetries which can arise. In this manuscript, we follow the notion, as in ref. 22, that calcium-doped bilayer graphene is the thinnest limit of graphite intercalation compounds, with a structure consisting of Ca intercalated in AA-stacked bilayer graphene (Fig. 1(a)). However, recently, another possibility has been raised experimentally. 37
Based on a mean-field treatment of an extended Hubbard model, a realistic tight-binding model is used whose parameters are determined by a fit to the DFT band structure. By adding effective attractive interactions between interlayer and intralayer electrons, all possible superconducting pairing symmetry characters of C 6 CaC 6 are studied in detail (an approach which can be applied to any related graphene-like structure, such as B 3 N 3 CaB 3 N 3 ). We take advantage of the observation that, mathematically, one can use the mirror symmetry properties of the Bloch coefficients of intercalated AA-stacked bilayer graphene and interpret its Hamiltonian as two independent single layer pseudo-graphene structures (even and odd sectors), where one of them (the even symmetry sector) is decorated with the calcium layer. This notion leads to the emergence of a topological number called the cone index (c = ±1). Conservation of the cone index during Klein tunneling across an n-p junction is one of the interesting unique behaviors of the AA-stacked bilayer, raising the possibility of cone-tronic devices based on AA-BlG. 38 In real space, a quasi-particle consists of two electrons from opposing layers at the same symmetrical position, which possess an additional quantum cone index beside their chiral nature near the Dirac points. The cone index concept is a feature unique to quasi-particle behavior in AA-BlG with respect to single layer graphene. Two quasi-particles (i.e. "four electrons") can team up to build a Cooper pair, as one can see in Fig. 1(b,c). We will see that only quasiparticles with the same cone index can be paired.
It will be shown that the question of superconducting phases in metal-intercalated bilayer graphene such as C 6 CaC 6 can be decoupled into two independent gap equations, corresponding to the even and odd sectors, which can be solved analytically (or nearly so) to obtain all possible pairing symmetry phases, which can be probed experimentally.

FIG. 1: Structure and notation. (a) Sketch (exaggerated) of shrunken bilayer graphene, where numbers indicate the C-C first, second, and further neighbors of a reference carbon atom in each layer. (b) The unit cell of intercalated bilayer graphene. In this Kekulé structure the symmetry of the intralayer hopping energies between first nearest neighbor atoms is broken. Intralayer hopping parameters along the hexagonal bonds are the same and are denoted t11, while the hopping along the long bond is slightly changed and is given by t'11. Similarly, the interlayer hoppings are given by t12 and t'12. The symmetry breaking of the hopping energies opens two unequal gaps at the Dirac points, which are folded back to the Γ point. (c) Intra-plane superconductor pairing amplitudes (Σ1, Σ2, Σ3) are between the 1-4, 3-6 and 2-5 subsites respectively; (∆1, ∆2, ∆3) are between the 3-5, 2-4 and 1-6 subsites respectively; and (Π1, Π2, Π3) are between the 2-6, 1-5 and 3-4 subsites respectively. Also shown are the inter-plane superconductor pairing amplitudes (Σ ...
The two-gap nature of superconductivity, which is one of the unique features of MgB2, 39 can be inspected here in a similar way. We numerically predict that in the temperature range of 0-6 K a phase transition from single-gap d-wave to dual-gap d-wave superconductivity could be observed. Using ab initio anisotropic Migdal-Eliashberg theory including the Coulomb interaction, Margine et al. 40 concluded that C 6 CaC 6 should support phonon-mediated superconductivity with a critical temperature T c = 6-7 K, within the range of observations, and that it exhibits two distinct superconducting gaps on the electron and hole Fermi surface pockets, in agreement with the results obtained in the present manuscript.
The rest of the paper is organized as follows. Section II introduces the model Hamiltonian that we study, with Sec. III setting the stage by obtaining a mostly analytic diagonalization of the non-interacting system. In Sec. IV, the treatment of pairing and the presentation of the superconducting phases are given. In Sec. V, the analytic insights are complemented by numerical solutions, followed by a discussion and summary in Sec. VI. Many of the analytic expressions are delegated to the Appendices.
II. MODEL HAMILTONIAN
The system we consider, illustrated in Fig. 1(a), consists of AA-stacked bilayer graphene intercalated by a Ca metal layer, in which the intercalant atoms are located on the central symmetry plane of the bilayer at the centers of neighboring carbon hexagons. The distance between the graphene layers is calculated to be h = 4.63 Å in the case of Ca intercalation. The nearest in-plane Ca-Ca distance is ξ = 4.26 Å. Charge transfer from Ca to the graphene layers leads to a breaking of the symmetries of the hopping amplitudes and of the C-C bond lengths, similarly to those of Li-decorated monolayer graphene [41]. The attractive interaction between the metal cations and C atoms after charge transfer contracts the Ca-C distance and reduces the C-C bond lengths in the Ca-centered hexagon to a 1 = 1.419 Å. As a result, the bond length between neighboring C atoms in different hexagons is somewhat larger, at a 2 = 1.423 Å. Also in this "shrunken bilayer graphene" 41 the hopping integrals between short-bond intra- and inter-layer carbons are respectively t 11 1 and t 12 1 , while those between stretched carbon sites will be denoted t ′ 11 and t ′ 12 . The lattice then becomes a two-dimensional hexagonal Bravais lattice with thirteen atomic sites. The sites of the i-th cell will be labeled A m i , B m i and Ca, where m = 1, 2 is the layer index. The Hamiltonian of this system is

$$\hat H = \sum_{i\alpha,j\beta,\sigma} t_{i\alpha,j\beta}\, \hat c^{\dagger}_{i\alpha\sigma}\hat c_{j\beta\sigma} \;-\; \mu_0 \sum_{i\alpha,\sigma} \hat n_{i\alpha\sigma} \;+\; U \sum_{\langle i\alpha,j\beta\rangle} \hat n_{i\alpha\uparrow}\,\hat n_{j\beta\downarrow},$$

where α and β run over the sublattice orbitals A m i p z , B m i p z and Ca s. Here ĉ † iασ , ĉ iασ are the creation and annihilation operators of an electron with spin σ on subsite α of the ith lattice site, and n̂ iσ = ĉ † iσ ĉ iσ is the electron number operator. The chemical potential is µ 0 and t iα,jβ is the hopping integral from the α subsite of the ith site to the β subsite of the jth site. Here U is an effective negative interaction between electrons in the extended (negative U) Hubbard model that allows the possibility of superconductivity.
III. THE NON-INTERACTING SYSTEM
In this section a thirteen-band tight binding model for Ca-intercalated bilayer graphene, consisting of the twelve C p z orbitals and the Ca s orbital, is constructed, to be applied to the study of the superconducting states of this system within BdG theory. The non-interacting Hamiltonian is invariant under mirror symmetry, which leads to the division of the intercalated AA-BlG band structure into two sectors characterized by the eigenvalues of the mirror operation. Here we take advantage of this mirror symmetry. We first apply the reduction to the simple case of pristine AA-BlG.
A. Reducible Tight Binding Model for Pristine AA-stacked Bilayer Graphene
The unit cell of AA-BlG, illustrated in Fig. 2(a), consists of four atoms: A 1 , B 1 in the top layer and A 2 , B 2 in the bottom layer. The Schrödinger equation for this system in terms of Bloch coefficients is given by (Eq. 2)

$$\begin{pmatrix} H_{11}(\vec k) & H_{12}(\vec k) \\ H_{12}(\vec k) & H_{11}(\vec k) \end{pmatrix} \Psi_n(\vec k) = E_n(\vec k)\, \Psi_n(\vec k),$$

where H 11 and H 12 are the intralayer and interlayer 2×2 blocks. Writing the four-component eigenvector as symmetric and antisymmetric combinations of the layer amplitudes (Eq. 3), inserting Eq. 3 into Eq. 2, and defining new single layer graphene-like Hamiltonians H ± = H 11 ± H 12 , the four-band Schrödinger equation converts into two decoupled single layer graphene two-band Schrödinger equations of the form

$$H^{\pm}(\vec k)\, \chi^{\pm}_n(\vec k) = E^{\pm}_n(\vec k)\, \chi^{\pm}_n(\vec k),$$

wherein the 2-component iso-spinors (shown in Fig. 2c) are given by

$$\chi^{\pm}_n(\vec k) = \frac{1}{\sqrt 2}\begin{pmatrix} a^{1}_n(\vec k) \pm a^{2}_n(\vec k) \\ b^{1}_n(\vec k) \pm b^{2}_n(\vec k) \end{pmatrix}.$$

The ± sign appearing in the even and odd sector Hamiltonians corresponds to the h = ±1 eigenvalues of the mirror symmetry operator

$$\hat S_h = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix},$$

which is the same as the fifth Dirac gamma matrix γ 5 that reflects the right-hand (χ R ) or left-hand (χ L ) chirality of quasiparticles in relativistic quantum field theory. This additional ± topological index was called the "cone index" by Sanderson and Ang (SA). 38 SA have shown that quasiparticles in AA-BlG are not only chiral but are also characterized by the cone index; so we will refer to the cone index as the h-chirality index, while the usual helicity appears as v-chirality. According to this notion, one can describe the Dirac cones in AA-BlG shown in Fig. 2(b) with two kinds of chirality, asymmetric in such a way that the structure and its vertical ("v-chirality") and horizontal ("h-chirality") mirror images are not superimposable. This chirality is a general aspect of AA-BlG quasi-particles that holds for general hoppings and over the entire Brillouin zone. SA showed that electron transport across a barrier must conserve the cone index, a consequence of the Klein tunneling behavior in AA-stacked BlG. In the following sections we extend the consequences of the cone index notion to superconducting pairing. A quasi-particle in the even sector (odd sector) consists of two electrons (2e charge) from opposing layers with the same spatial symmetry, which possess the h = +1 (−1) cone index and an on-site energy of ε ± = +γ 0 (−γ 0 ), respectively. Hopping of a quasi-particle from an A subsite to a nearest neighbor B subsite in the even sector (odd sector) changes the energy by t + = t 1 + γ 1 (t − = t 1 − γ 1 ). This decoupling is more than just a simple mathematical diagonalization and symmetry characterization: one can interpret the Hamiltonian of AA-BlG as a single layer honeycomb lattice Hamiltonian with two types of charge carriers, where a ± † iσ (b ± † iσ ) is the creation operator of a quasiparticle in the A (B) subsite of the ith site, with σ = ±1 representing the h-chiral pseudo-spin (h-Pspin).
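As a quick numerical sanity check of this block diagonalization (not from the paper; the hopping values are assumed for illustration), the following Python/NumPy sketch builds the 4×4 AA-BlG Bloch Hamiltonian and verifies that the even/odd mirror combination reduces it to H ± = H 11 ± H 12 :

import numpy as np

# Assumed illustrative hoppings (eV): intralayer t1, direct interlayer g0, skew interlayer g1.
t1, g0, g1 = -3.0, 0.2, 0.1
a = 1.42  # C-C distance (angstrom)

def f(kx, ky):
    """Nearest-neighbor structure factor of the honeycomb lattice."""
    d = a * np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
    return sum(np.exp(1j * (kx*dx + ky*dy)) for dx, dy in d)

kx, ky = 0.3, -0.7                                        # arbitrary k-point
fk = f(kx, ky)
H11 = np.array([[0, t1*fk], [t1*np.conj(fk), 0]])         # intralayer block
H12 = np.array([[g0, g1*fk], [g1*np.conj(fk), g0]])       # interlayer block
H = np.block([[H11, H12], [H12, H11]])                    # full AA-BlG Hamiltonian

I2 = np.eye(2)
U = np.block([[I2, I2], [I2, -I2]]) / np.sqrt(2)          # even/odd (mirror) basis
Hd = U @ H @ U.conj().T                                   # block diagonal: diag(H11+H12, H11-H12)
assert np.allclose(Hd[:2, 2:], 0) and np.allclose(Hd[2:, :2], 0)
assert np.allclose(Hd[:2, :2], H11 + H12) and np.allclose(Hd[2:, 2:], H11 - H12)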
In the intralayer and interlayer nearest neighbor hopping approximation, viz. t 1 , γ 0 and γ 1 , the system Hamiltonian takes the single layer Bloch form

$$H^{\pm}(\vec k) = \begin{pmatrix} \pm\gamma_0 - \mu & t^{\pm} f(\vec k) \\ t^{\pm} f^{*}(\vec k) & \pm\gamma_0 - \mu \end{pmatrix}.$$

Here t ± = t 1 ± γ 1 , and t 1 (γ 1 ) is the nearest neighbor intralayer (interlayer) hopping between the A and B sublattices, while the direct interlayer hopping (i.e. A 1 to A 2 or B 1 to B 2 ) is given by γ 0 . Here µ is the chemical potential, and the band dispersions are

$$E^{\pm}_{n}(\vec k) = \pm\gamma_0 - \mu + \eta_{v-c}\, t^{\pm}\, |f(\vec k)|,$$

where η v−c = ±1 labels the conduction and valence bands. The four bands of AA-BlG separate into two independent, up-down shifted single layer graphene bands, referred to as the even and odd sectors, respectively. Near the Dirac points, the dispersion energies $E^{\pm}_n(\vec k) = \hbar v^{\pm}_f |\vec k| \pm \gamma_0$ of the odd and even sectors appear as two up-down shifted Dirac cones in Fig. 2, where v ± f is the Fermi velocity of the h = ±1 quasi-particles. Generalizing the tight binding model to include further neighbor hopping terms can highlight some hidden aspects of the AA-stacked Dirac cone quasi-particles. When the second neighbor (skew) interlayer hopping γ 1 is taken into account, one can distinguish quasi-particles with the same chirality (v-chirality) and different cone index (h-chirality) by their velocities, which could be inspected experimentally: since v ± f ∝ |t 1 ± γ 1 |, the Fermi velocity of the Dirac cone with h = −1 chirality decreases as the interlayer hopping increases, while the velocity of the h = +1 quasi-particle increases. In the strong interlayer coupling limit t 1 → (−)γ 1 , the Fermi velocity v − f (v + f ) → 0 and the odd (even) sector bandwidth tends to zero. As shown in the next subsection, the interlayer coupling γ 1 may be considerable, so that the inequality v + f ≠ v − f could be considerable.
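A minimal numerical illustration of this velocity splitting (assumed hopping values, with an exaggerated γ 1 ; ħ = 1 units; not from the paper):

import numpy as np

t1, g1 = -3.0, 0.6   # assumed hoppings (eV); a large g1 exaggerates the velocity split
a = 1.42             # C-C distance (angstrom)
K = np.array([4*np.pi/(3*np.sqrt(3)*a), 0.0])   # a Dirac point of the honeycomb lattice

def absf(k):
    d = a * np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
    return abs(sum(np.exp(1j*np.dot(k, dj)) for dj in d))

q = 1e-4
slope = absf(K + np.array([q, 0.0])) / q          # |f(K+q)| grows linearly, ~ (3a/2)q
v_plus, v_minus = abs(t1 + g1)*slope, abs(t1 - g1)*slope
print(v_plus/v_minus, abs(t1 + g1)/abs(t1 - g1))  # the two ratios agree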
B. Analytic Tight Binding Model for Intercalated Bilayer Graphene
In this subsection we generalize the previous procedure to include the case of experimentally observed structures such as Li- or Ca-intercalated bilayer graphene. We follow the notion that these structures are intercalated AA-stacked bilayer graphene, as shown in Fig. 1. From the beginning, the Hamiltonian is generalized to incorporate several broken symmetries, including the on-site energies, hopping integrals, and bond lengths (geometry). Due to this generalization, it can be used to obtain analytic dispersion energies not only of C 6 CaC 6 , but also of related graphene-like structures such as B 3 N 3 CaB 3 N 3 . The Hamiltonian of such a non-interacting system is

$$\hat H_0 = \sum_{i\alpha,j\beta,\sigma} t_{i\alpha,j\beta}\, \hat c^{\dagger}_{i\alpha\sigma}\hat c_{j\beta\sigma} \;-\; \mu_0 \sum_{i\alpha,\sigma} \hat n_{i\alpha\sigma},$$

where α and β run over the sublattice orbitals A m i , B m i and the intercalated atom (e.g. Ca) orbital. The Schrödinger equation for this system in terms of Bloch coefficients in k space becomes Eq. 9, where the β = 0, 1, 2, ..., 12 subscripts refer respectively to the intercalant Ca and the twelve carbon subsites, and N is the number of unit cells. Here ǫ Ai = ǫ A and ǫ Bi = ǫ B . The mirror symmetry of this system results in the relations (Eq. 10)

$$C^{(2)}_{n\beta}(\vec k) = h\, C^{(1)}_{n\beta}(\vec k), \qquad h = \pm 1,$$

between the Bloch coefficients of the two layers, which reflect the mirror symmetry through the Ca plane that separates the even and odd states (i.e. h = ±1 h-chirality). By inserting Eq. 10 into Eq. 9, with more detail given in Appendix A, one obtains two independent Schrödinger equations: the odd eigenvectors have a vanishing Ca amplitude, i.e. Ψ − n ( k) T = (0, ...), while the even eigenvectors include the Ca orbital. For the odd eigensystem, the Schrödinger Eq. 35 reduces to the following 6 × 6 matrix eigenvalue problem (Eq. 11):

$$H^{-}(\vec k)\, \Psi^{-}_n(\vec k) = E^{-}_n(\vec k)\, \Psi^{-}_n(\vec k).$$

The Schrödinger Eq. 11 can be solved analytically, with the six eigenvalues presented in Appendix I, Eq. 51. These expressions are unaffected by the intercalant layer, due to the separation of the even and odd mirror symmetries, but the presence of Ca will renormalize the parameters. For the even mirror sector, the Schrödinger equation Eq. 35 reduces to the following 7 × 7 matrix eigenvalue problem (Eq. 12):

$$H^{+}_{c}(\vec k)\, \Psi^{+}_n(\vec k) = E^{+}_n(\vec k)\, \Psi^{+}_n(\vec k), \qquad n = 7, ..., 13.$$

The other seven bands of the Schrödinger equation Eq. 35 are obtained by solving this new Schrödinger equation. The k-dependent parts of the corresponding matrix components of H 11 and H 12 are identical in form, differing only in the hopping parameters; hence H ± ( k) can be considered as a shrunken graphene monolayer Hamiltonian with renormalized hopping parameters. It follows that, similar to pristine bilayer graphene (see 14 ), intercalated bilayer graphene can be interpreted as two independent pseudo-graphene monolayers, one of which is dressed by a Ca layer with modified hoppings. The thirteen bands of intercalated bilayer graphene divide into two groups: a six-band group (odd symmetries) corresponding to the H − Hamiltonian and a seven-band group (even symmetries) given by the eigenvalues of the H + c matrix. Mathematically, many of the results obtained in ref. [41] can be generalized to these graphene-like structures, but with renormalized hopping parameters. For general k, except at Γ, it is challenging to obtain an exact analytical solution of the Schrödinger equation Eq. 12. At the graphene Dirac points, which are folded back to the supercell Γ point, the symmetry breaking of the nearest neighbor intra- and interlayer hopping parameters, i.e. t 11 , t ′ 11 and t 12 , t ′ 12 (Fig. 1b), results in two unequal small gaps with different centers, corresponding to the six-band (odd-sector) and seven-band (even-sector) pseudo-graphene Hamiltonians, given by (Eq. 13)

$$E^{\pm}_{g} = 2\,|t^{\pm}_{1} - t'^{\pm}_{1}|,$$

where t ± 1 = t 11 ± t 12 and t ′± 1 = t ′ 11 ± t ′ 12 . For the use of graphene in field effect transistors, it is necessary to create a tunable gap.
A tunable and sizable band gap can be engineered in single layer graphene by decoration and in bilayer graphene by intercalation, as can be seen from Eq. 13.
The effect of symmetry breaking of the interlayer coupling parameter, i.e. ∆γ 1 = t 12 − t ′ 12 , leads to the inequality of the even and odd sector gaps. Knowing the sizes of these energy gaps, one can find the symmetry-breaking differences in the first nearest neighbor intra- and interlayer hopping parameters (supposing ∆t > 0, ∆γ 1 > 0 and ∆t > ∆γ 1 ), where ∆t = t 11 − t ′ 11 . These two gaps are characteristic of AA-IBlG. In the case of Li-intercalated BlG, experimental ARPES spectra (Fig. 4 of Ref. 23 ) show two distinct gaps of width E − g = 0.20 eV and E + g = 0.46 eV. In that reference the authors equated the ratio of these two gaps to the ratio of the interlayer skew coupling parameters (γ 2 and γ ′ 2 in their notation). Equation 14 slightly corrects the discussion stated in [Sec. III, sub-sec. C of Ref. 23 ] about this ratio.
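As a worked example of this inversion (a sketch using the ARPES gap values quoted above, and assuming the relation E ± g = 2|∆t ± ∆γ 1 | that follows from Eq. 13 together with ∆t > ∆γ 1 > 0):

\begin{align}
E^{\pm}_g &= 2\,|\Delta t \pm \Delta\gamma_1|, \qquad \Delta t = t_{11} - t'_{11}, \quad \Delta\gamma_1 = t_{12} - t'_{12},\\
\Delta t &= \tfrac{1}{4}\,(E^{+}_g + E^{-}_g) = \tfrac{1}{4}\,(0.46 + 0.20)\ \mathrm{eV} = 0.165\ \mathrm{eV},\\
\Delta\gamma_1 &= \tfrac{1}{4}\,(E^{+}_g - E^{-}_g) = \tfrac{1}{4}\,(0.46 - 0.20)\ \mathrm{eV} = 0.065\ \mathrm{eV}.
\end{align}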
Eq. 12 simplifies in the event that the intercalant layer's hopping parameters to the graphene sheets are negligible, as in the case of Li-decorated graphene, where the Li atoms fully ionize and the Li-associated band lies above the Fermi energy, so Li-C hopping effects are negligible. In this particular case the odd (−) and even (+) nontrivial eigenvalues of the H ± matrix are given by the expressions defined in Eqs. 52 and 55, respectively. Details for obtaining these results are presented in Appendix A and ref. [41]. Now further notation is established. Similar to the case of pristine AA-BlG investigated in the previous subsection, this transformation recasts the noninteracting Hamiltonian Eq. 8 as the direct sum of two single layer pseudo-graphene structures with renormalized hopping integrals, in the right- and left-hand chiral representation, wherein, as illustrated in Fig. 2(c), we introduce quasiparticle creation operators ĉ ± iασ and hopping integrals in real space. The creation operator ĉ ± iασ creates a quasi-particle with h = ±1 h-chirality and spin σ at the iα atomic subsite of each of these two graphene-like structures; these quasi-particles participate directly in the formation of the superconducting phases.
The separation of the thirteen bands of intercalated bilayer graphene into a group of six bands with odd symmetry and seven bands with even symmetry has strong advantages. For example, the tight-binding band structures can be fit to DFT results with greater accuracy and simplicity. Especially when the pairing interactions are introduced, this transformation significantly reduces the cost of the numerical calculations and provides additional insight into the physical properties of bilayer graphene.
C. Fit to DFT band structures
This formalism has been applied, dividing the problem into two separate effective single layer (shrunken) pseudo-graphene models. DFT calculations are used to obtain the electronic structure data, shown as thin blue dashed lines in the background of Fig. 3(a),(b). The six odd bands and seven even bands were fit to the DFT bands, with results shown as colored lines in Fig. 3. The main problem that emerged here is that the odd and even sector bands are not separated in the DFT data. But by inspecting the DFT band structure and being careful in the analytical calculations, knowing that the odd sector is not affected by Ca-C coupling, the odd and even sector flat bands can easily be distinguished. The emergence of two distinct gaps at the Dirac point of both sectors is another guide for performing the fit. The reduced fitting parameters are given in Tables I and II. We follow the model presented in ref. [41]. As illustrated in Fig. 1(a), on each of the bilayer graphene sheets the A 1 sublattice site of the central unit cell is chosen as the origin, labeled by 0, and the B 1 site in the adjacent hexagon is considered as the second C atom neighbor. While just slightly farther than the nearest neighbor atoms B 2 and B 3 in the same hexagon, this neighbor is labeled by n = 2, and the further neighbors are labeled in the same way. In Fig. 1(a), the big hexagon includes up to nine intra-plane neighbors, whereas in pristine graphene the reference atom is surrounded by five neighbors. The C-C hopping from the 0-subsite to the intra-plane nth neighbor (t intra i0jn ) plus (minus) the hopping from the 0-subsite to the inter-plane nth neighbor (t inter i0jn ) is denoted by t ±CC 0n , where the index n indicates the n-th neighbor. In-plane Ca-Ca hoppings t Ca−Ca 0m are included up to m = 4 neighbors. The modified Ca-to-C hopping integrals in Eq. 12, which are defined as √2 times the hopping from the central Ca to the mth-neighbor C atoms, are denoted by t CaC 0m and obtained up to m = 5 neighbors. The six odd bands and seven even bands specified by Eqs. 11 and 12, respectively, have reduced hopping integrals given by t ± imσ,jnσ = t inter im,jn ± t intra im,jn . The DFT-calculated bands were fitted to the tight binding odd bands of Eq. 15, with results presented in Fig. 3(a) and Table I. The even bands, which are solutions of Eq. 12, were obtained numerically and fitted to the DFT bands; the results are illustrated in Fig. 3(b) and Table II. There are two flat bands with d-wave Bloch character: one in each of the odd and even sectors. The opposite signs of the nearest neighbor inter- and intra-layer hopping amplitudes, t 11 1 and t 12 1 , lead to reduced bandwidths of the even states (+ sign) in Eq. 12. A larger interlayer hopping t 12 1 leads to a smaller bandwidth (H ± = H 11 ± H 12 ), while for the other six odd-symmetry bands the bandwidth can increase, as can be seen in Fig. 3. The bandwidth of the even sector flat band is further reduced by the calcium-to-carbon hopping, while the odd sector is not affected by Ca-C hoppings. For this reason, the flat band belonging to the even band group plays a major role in superconductivity.
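The fitting strategy can be sketched in a few lines. The snippet below is a minimal, self-contained toy (synthetic "DFT" data, a single nearest-neighbor band, assumed parameter values), whereas the actual fit uses all six odd / seven even analytic bands and up to ninth-neighbor hoppings:

import numpy as np
from scipy.optimize import least_squares

a = 1.42
def absf(kpts):
    """|f(k)| of the honeycomb nearest-neighbor structure factor."""
    d = a * np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
    return np.abs(np.exp(1j * kpts @ d.T).sum(axis=1))

kpts = np.random.default_rng(0).uniform(-1, 1, (200, 2))                   # sample k-points
E_dft = 2.9*absf(kpts) + 0.01*np.random.default_rng(1).normal(size=200)   # synthetic "DFT" band

fit = least_squares(lambda p: p[0]*absf(kpts) - E_dft, x0=[1.0])  # least-squares hopping fit
print(fit.x)  # recovers the hopping, ~2.9 eV (cf. the multi-parameter fits of Tables I, II)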
IV. SUPERCONDUCTING PAIRING AND STATES
A. Bogoliubov-de Gennes Transformation
We treat the thirteen-band Hubbard model in the mean field approximation to investigate superconductivity in intercalated bilayer graphene. Singlet pairing is considered and, as illustrated in Fig. 1, the pairing interactions are pictured in real space as interactions between nearest neighbor inter- and intra-layer carbon atoms. This superconducting Hamiltonian can be transformed, as for the non-interacting case, into the direct sum of two independent superconducting Hamiltonians corresponding to the odd and even symmetry pseudo-graphene structures,

$$\hat H_{su} = \hat H^{+}_{su} \oplus \hat H^{-}_{su},$$

where H + su and H − su are the Hamiltonians of the even and odd symmetry pseudo-graphene structures, respectively; for more information see Appendix C. The decoupling of these Hamiltonians means there is no effective pairing between an electron in the even sector and one in the odd sector. Using the fact that the gap is small on the electronic scale, applying perturbation theory up to second order gives the quasiparticle energies from Eq. 18 (see ref. [41]); in the corresponding expressions the ⟨ij⟩ subscript has been dropped for brevity. The band order parameters ∆ + mn ( k) are defined such that the first electron is in the mth band and the second electron is in the nth band of H + c ( k); likewise, ∆ − mn ( k) is defined such that the first electron is in the mth band and the second electron is in the nth band of H − ( k). Note that an electron in the mth band of H + c ( k) and an electron in the nth band of H − ( k) cannot be paired; i.e. for this case ∆ ± mn ( k) = 0. The Bogoliubov-de Gennes transformation used in Eq. 18 shows that the pairing amplitudes should be ∆ α ± = ⟨ĉ ± α,i ĉ ± α,j ⟩, which implies that all inter- and intra-layer pairing amplitudes in real space are equal, g 0 = g ′ 0 and g 1 = g ′ 1 . This restriction makes the matrix gap equations hermitian and implies that the band order parameters ∆ ± mn ( k) can be interpreted physically as the pairing of electrons in different bands with pairing interaction g ± 0 . In this limit ∆ ± mn ( k) is equal to the product of the band Green function and g 0 ; the corresponding Bogoliubov quasiparticle operator annihilates an electron with spin σ in the ith even or odd band with energy E ± i ( k).
B. Two Gap Superconducting Pairings and States
The linearized gap equation can be decoupled by minimizing the free energy with respect to the nearest neighbor pairing, or equivalently with respect to ∆ α ± ; for more detail see Appendix C. Minimization of the free energy with respect to ∆ α ± yields Eq. 23, in which the A, B, C and D matrices and the Γ matrix elements have been introduced. Equation 23 can be interpreted as two independent gap equations for the odd (minus sign) and even (plus sign) pseudo-graphene systems. The impact is that superconductivity can be established independently in two distinct sectors of this system. In the next section we numerically inspect which of these pseudo-graphene sectors, odd or even, plays the major role in superconductivity. Similar to decorated single layer graphene, 41 for each of the gap equations given by Eq. 70 there are nine independent solutions. The first three superconducting states, with island (localized) character, can be expressed in a compact form in which V sy refers to one of the V s , V dxy or V d x 2 −y 2 -wave symmetries. Pairing in these phases cannot propagate. The other six superconducting states of Eq. 70 have an explicit form in which, for each symmetry, the + superscript refers to the even sector and the − superscript to the odd sector. In each of the above categories, the d x 2 −y 2 and d xy phases are degenerate. Similar to decorated single layer graphene, only the three solutions for which l = 2 are physically reachable in the framework of mean field theory. In the limit of pristine bilayer graphene, these three states convert to the usual s-wave and d-wave symmetries. In Sec. V we illustrate from numerical solutions, for the odd and even sectors, which of these three phases is dominant.
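The generic structure of such a self-consistent gap equation can be illustrated with a minimal single-band BCS-type sketch (toy parameters, assumed for illustration; the paper's Eq. 23 generalizes this to coupled matrix form in each sector):

import numpy as np

# Delta = (g/N) * sum_k Delta * tanh(E_k/2T) / (2 E_k), with E_k = sqrt(xi_k^2 + Delta^2).
xi = np.linspace(-1.0, 1.0, 2001)   # band energies relative to mu (eV)
g, T = 0.5, 0.01                    # assumed pairing strength (eV) and temperature k_B T (eV)

delta = 0.1                         # initial guess for the gap
for _ in range(200):                # fixed-point iteration of the gap equation
    E = np.sqrt(xi**2 + delta**2)
    delta = g * np.mean(delta * np.tanh(E/(2*T)) / (2*E))
print(delta)                        # converged gap amplitude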
C. Flat band(s) Superconductivity: Strong interlayer coupling
To make a rough estimate and provide mathematical insight into the physics, one can diagonalize the normal state Hamiltonian of pristine bilayer graphene in the mini-Brillouin zone of C 6 CaC 6 . As shown in Fig. 3, the two conduction bands corresponding to the odd and even sectors are weakly dispersive near the Fermi energy along Γ → M, and these bands seem to play a major role in the formation of superconducting Cooper pairs.
In the case of pristine bilayer graphene, the odd (−) and even (+) so-called flat bands are the minima of (E ± α,2 ( k), E ± β,2 ( k)) along different high symmetry paths, given by Eqs. 47 and 48; their Bloch wave functions, viz. Eq. 49, are similar to those of ref. 41 . They have a linear combination of d x 2 −y 2 and d xy character and are responsible for the d x 2 −y 2 and d xy superconducting pairings.
One can ask: what is so special about these flat bands? To address this question, we return to the matrix gap equation of Eq. 25. The right hand side contains the product of a form factor, given by Ω α ni ( k)Ω * β ni ( k) + Ω β ni ( k)Ω * α ni ( k), and the thermal occupation factor over the energy denominator, i.e.

$$\frac{\tanh\!\left( \dfrac{E_n(\vec k)}{2 k_B T} \right)}{E_n(\vec k) + E_i(\vec k)}.$$

The form factor is a function of the Bloch wave coefficients of the normal state Hamiltonian. Using Eqs. 49 and 50 one can verify that in the limiting case of pristine bilayer graphene, in the nearest neighbor approximation, these Bloch wave coefficients are the same for both sectors, and nearly so in the next-neighbor approximation. As such, the form factor is independent of the chemical potential µ and is the same for both the odd and even sectors of the band structure. Since tanh(x)/x → 1 as x → 0, when one of the odd or even conduction flat bands, and hence its corresponding valence band, becomes completely flat at the Fermi level, the dominant contribution comes from these mutual conduction and valence flat bands, and one can show that all of the gap equation block matrix elements in Eq. 24 become equal to A ± . In this event, depending on whether the flat bands belong to the odd or the even sector, one can use Eqs. 21, 25, and 49 to show that the Cooper pair interaction potentials g 0 of d-wave symmetry, g d 0 , and of s-wave symmetry, g s 0 , satisfy Γ ± 12 < 0 and g d 0 < g s 0 , so d-wave symmetry is dominant, with an extraordinary decrease in the pairing interaction potential, which is proportional to the critical temperature. This "ultra" decrease of the pairing interaction can explain the importance of the flat bands in the formation of Cooper pairs in twisted bilayer graphene. Another point that can be deduced from the mathematical calculations is that in the limit of strong interlayer hopping, when the interlayer hoppings tend to minus (plus) the intralayer hoppings, all six bands of the even (odd) sector become flat while the other sector's bandwidth increases, as one can see from Appendix D. One can then show that the gap matrix elements become equal, so that all possible superconducting symmetries are degenerate, with pairing potential g 0 = k B T c .
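The effect of the thermal factor can be made concrete with a toy comparison (assumed bandwidths, in a simplified intraband form of the kernel; not from the paper): summed over a flat band pinned at the Fermi level, tanh(E/2k B T)/(2E) grows as 1/(4k B T) as T → 0, while for a dispersive band it only saturates logarithmically.

import numpy as np

def chi(xi, T):
    """Pair susceptibility per state: mean of tanh(|xi|/2T)/(2|xi|)."""
    x = np.abs(xi) + 1e-12          # avoid division by zero at xi = 0
    return np.mean(np.tanh(x/(2*T)) / (2*x))

xi_disp = np.linspace(-1.0, 1.0, 4001)    # dispersive band, 2 eV bandwidth
xi_flat = np.linspace(-0.01, 0.01, 4001)  # nearly flat band at the Fermi level

for T in (0.05, 0.01, 0.002):             # k_B T in eV
    print(T, chi(xi_disp, T), chi(xi_flat, T), 1/(4*T))  # flat band tracks 1/(4T)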
V. NUMERICAL RESULTS

A. General features
To determine which of the pairing symmetries (distorted s-wave or d-wave) is dominant across the various doping regimes, and also to inspect in which sectors of the band structure Cooper pairs with the lowest pairing potential can be constructed, the superconducting gap equations of the odd and even sectors, i.e. Eq. 23, are solved numerically. The result is shown in Fig. 4.
Similar to Li-intercalated single layer graphene (ref. [41]), at moderate doping, d-wave superconductivity always dominates in both sectors of the C 6 CaC 6 band structure. From Fig. 4 it can be seen that when the even flat band is dominant (hole doping), the pairing potential g 0 of the d-wave symmetry emerging from this band is smaller than when the dominant d-wave symmetry occurs in the odd flat band (electron doping); for example, by a factor of one-third at their critical M point, i.e. g + 0 = (1/3) g − 0 . This means that when the even flat band reaches the vicinity of the Fermi level, the reduction of the bandwidth due to both the interlayer C-C interaction (H 12 ) and the C-Ca layer interaction leads to a sharp increase in the density of states. While both the C-C and Ca-C interlayer interactions decrease the pairing potential g 0 , one finds numerically that the bandwidth reduction due to the graphene interlayer interaction affects the pairing energy in the even flat band more than the Ca-C interaction does.
In the case of Ca-intercalated bilayer graphene, for a given T c the pairing interaction potential g 0 (proportional to the superconducting gap energy, |∆| 2 ) for the dominant d-wave phases of the odd and even superconducting gaps is illustrated in Fig. 5. It is evident that for a given critical temperature T c , superconductivity can be single-gap or two-gap, and the dominant superconducting pairing can occur between electrons in either the odd H − or the even H + c sector.
VI. DISCUSSION AND SUMMARY
The discovery of new superconducting phases, often at low temperature, has been one of the active achievements of recent decades. Superconductivity is more favorable in lithium-coated single layer graphene than in calcium-decorated single layer graphene, 10 and has also been reported experimentally, whereas the situation is reversed in bilayer graphene: superconductivity has been reported in Ca-intercalated bilayer graphene around T c = 4 K, while Li-intercalated bilayer graphene is not a superconductor. 24,25 The experimental fabrication of Li- and Ca-intercalated bilayer graphene has been reported in ref. 23 and ref. 22, respectively. Li and Ca atoms are suggested to intercalate between the graphene layers in ordered structures similar to those of bulk GICs like LiC 6 , i.e. with the two graphene layers AA-stacked. This demonstrates the vitally important role of the intercalant band in the formation of superconducting Cooper pairs. Most theoretical microscopic models of pristine honeycomb bilayer superconductivity concentrate on the more stable AB stacking of bilayer graphene. 27,28 To our knowledge, there are few studies that focus on pristine AA-stacked bilayer graphene, 34 and no analogous studies that concentrate on intercalated bilayer graphene. Based on ab initio calculations of the electron-phonon coupling, anisotropic Migdal-Eliashberg theory has been applied by some authors 40,42 to give strong evidence that this system is a phonon-mediated two-gap superconductor with a predicted T c around 7 K. Recently, unconventional superconductivity up to T c = 1.7 K has been reported in gated twisted bilayer graphene, where the layers are rotated relative to each other by a magic angle of 1.1°. Superconductivity in this low-doping regime of band filling cannot be addressed within the framework of conventional electron-phonon coupling based on Migdal's adiabatic approximation. This discovery has opened speculation that this superconducting behavior may shed light on other systems in which superconductivity arises from an insulating phase. 19 This development also highlights studies such as ours, which rely not on the mechanism but on more general pairing concepts and the specific electronic structure.
Many theorists have suggested that the exotic superconducting gaps arising in some materials are related to peculiarities of the normal-state band structure. For this kind of issue, angle-resolved photoemission spectroscopy (ARPES) has been extensively applied to the analysis of the normal state band structure. To determine the structural and electronic properties of a material, tight binding models, in addition to DFT calculations, have been used to interpret the experimental results obtained from ARPES. Following this point of view, an extended Hubbard model has been used here to address the superconductivity of Ca-intercalated bilayer graphene.
The main results are achieved in two steps: first, for the normal state (non-interacting part), a more realistic effective tight-binding model with two decoupled symmetry sectors is derived, with its parameters determined by a fit to the DFT band structure; second, the dominant superconducting pairing channels are discussed based on a mean-field treatment of a Hubbard model obtained by adding (effective) attractive interactions between the electrons. The summary, results and comparisons are presented below.
A. Non-interacting Part: Normal State
In the first part of this manuscript we have taken advantage of the mirror symmetry operation through the central plane of AA-stacked BlG and generalized it to include intercalated bilayer graphene.
AA-stacked pristine bilayer graphene: Two kinds of quasiparticles
The honeycomb lattice structure makes the quasi-particles in single layer graphene behave as massless Dirac particles at low energies, providing a proper platform to examine characteristic effects of QED, such as the Klein paradox and Zitterbewegung, which were never observed in particle physics. In addition to the relativistic nature of quasiparticles in single layer graphene, they exhibit further aspects of such behavior in AA-stacked BlG. Interlayer coupling causes bilayer graphene to exhibit properties that are not observed in the single layer. In QED, the 4-component "Dirac spinor" (Dirac representation) decomposes into two irreducible representations, acting only on two 2-component right- and left-hand "Weyl spinors." There is a pedagogically useful mathematical similarity between the Schrödinger equation of AA-BlG in these two representations and the Dirac equation. The non-interacting AA-stacked Hamiltonian is invariant under mirror symmetry, which leads to the division of the AA-BlG band structure into even and odd sectors characterized by the eigenvalues of the mirror operation, h = ±1 (analogous to the two decoupled Weyl equations for massless relativistic chiral particles). Each of these sectors describes a graphene-like structure, i.e. H + and H − . In this notion, as shown in Fig. 2c, the up and down pseudo-spins (h-Pspin) of the irreducible blocks of the AA-BlG Hamiltonian, viz. H + and H − , consist of two electrons with the same spin, each located at similar sub-sites in the opposing layers. These quasi-particles are described by an additional index that has been called the "cone index;" here we refer to it as the h-chirality index. According to this notion, one can describe the Dirac cones in AA-BlG shown in Fig. 2(b) with two kinds of chirality, asymmetric in such a way that the structure and its vertical ("v-chirality") and horizontal ("h-chirality") mirror images are not superimposable. This chirality (h-chirality) is a general aspect of AA-BlG quasi-particles that holds for general hoppings and over the entire Brillouin zone, and it is unrelated to the helicity operation. This is in contrast to the famous graphene chirality (helicity), which occurs just at low energies near the Dirac cones.
Physically, AA-stacked BlG can be interpreted as a "single layer honeycomb lattice" in which, instead of 1e − charge carriers, there are two types of fermionic quasi-particles with 2e − charge moving through it, differing in a quantum number called the cone index (h-chirality). Quasi-particles with different h-chirality do not interact but move independently. Also, the (±1) h-chiral quasi-particles have ±γ 0 on-site energies (similar to the positive and negative energies of particles and anti-particles in QED). Hopping of quasi-particles with (+1) h-chirality constructs the even sector of the band structure, while the odd sector is made by the (−1) h-chiral quasiparticles. Near the Dirac cone points, quasiparticles with (±1) h-chirality move with Fermi velocities v ± f . One can distinguish quasi-particles with the same chirality (v-chirality) and different cone index (h-chirality) by their velocities, which, in the case of strong interlayer coupling, could be observed experimentally.
Intercalated AA-stacked bilayer graphene: Mirror symmetry operation advantage
Based on a mean-field treatment of an extended Hubbard model, a realistic thirteen-band tight binding model has been constructed to include the case of experimentally observed structures such as Ca-intercalated bilayer graphene, with its parameters determined by a fit to the DFT band structure. We followed the notion that calcium-doped bilayer graphene is the thinnest limit of graphite intercalation compounds (Fig. 1(a)).
In our previous work, the effects of Li decoration on the structure and electronic band structure of single layer graphene were demonstrated in detail, the symmetry character of the band branches was illustrated, and the possible superconducting phases of lithium-decorated single layer graphene, LiC 6 , were obtained analytically and analyzed. 41 The Brillouin zone (BZ) of this structure is one third of that of graphene, with the Dirac points folded back to the Γ point. In this mini-BZ, the two π bands of (pristine) graphene fold into six branches, and their different symmetries (d+id, s, ...) are also separated, as illustrated in Fig. 2 of ref. [41]. Generalization of these results to include intercalated bilayer graphene (IBlG) has strong advantages and provides additional insight into its physical properties. This is possible through the decoupling of the normal and superconducting Hamiltonians of IBlG into two independent corresponding single layer pseudo-graphene Hamiltonians, coupled only by a common chemical potential.
Similar to pristine AA-stacked BlG, accounting for the symmetries of the Bloch wave coefficients, the 13×13 Hamiltonian of IBlG converts, by mirror symmetry, into two decoupled sectors: a 7×7 even symmetry sector H + c involving the intercalant (a coated single layer pseudo-graphene) and the 6×6 odd sector H − , for which the intercalant provides only renormalized hopping amplitudes and breaks the symmetry of the hopping integrals (a six-band shrunken single layer pseudo-graphene). Therefore, all previous discussions about the 2e − charge, h-Pspin and chirality of quasiparticles in pristine AA-stacked BlG extend to IBlG.
The periodic perturbation of the graphene layers' potential due to the ordered intercalant atoms breaks the symmetry of the hopping integrals, and so two distinct gaps of size E ± g = 2|t ± 1 − t ′± 1 | open at the Dirac point (folded to the Γ point) of each of the even and odd sector pseudo-graphene structures. These two gaps are characteristic of AA-IBlG. Knowing the sizes of these energy gaps, one can find the symmetry-breaking differences in the first nearest neighbor intra- and interlayer hopping parameters, i.e. ∆t = t 11 − t ′ 11 and ∆γ = t 12 − t ′ 12 . In the case of Li-intercalated BlG, experimental ARPES spectra (Fig. 4 of Ref. 23 ) show two distinct gaps of width E − g = 0.20 eV and E + g = 0.46 eV. We slightly correct the discussion stated in [Sec. III, sub-sec. C of Ref. 23 ] about the relation between these two gaps and the symmetry-breaking interlayer coupling parameters.
In the case of Li-intercalated BlG, the Li s orbital is fully ionized and Li-C hybridization is negligible, so the odd and even sector band structures are similar to the band structure of pristine shrunken graphene C 6 . The difference, of course, is due to the ± sign that appears between the intra- and interlayer hopping terms in the even and odd sectors, which leads to different bandwidths. The even and odd sector Schrödinger equations were solved analytically (or nearly so). From the beginning, the Hamiltonian was generalized to incorporate several broken symmetries, including the on-site energies, hopping integrals, and bond lengths (geometry). Due to this generalization, it can be used to obtain analytic dispersion energies not only of C 6 CaC 6 , but also of related graphene-like structures such as B 3 N 3 CaB 3 N 3 .
Tight-Binding Parametrization of Ca-Intercalated Bilayer Graphene from DFT
Dividing the thirteen bands into seven even-symmetry bands and six odd-symmetry bands considerably facilitates the tight-binding parametrization from DFT. We used a tight-binding model with up to ninth-neighbor hopping, taking into account the symmetry breaking of the bond lengths and hopping-integral parameters along different directions of the hexagons. The main problem with the DFT data is that the odd- and even-sector bands are not separated. However, by inspecting the DFT band structure and being careful in the analytical calculations (knowing that the odd sector is not affected by Ca-C coupling directly and so remains graphene-like), the odd- and even-sector DFT flat bands can easily be distinguished. The emergence of two distinct gaps at the Dirac point of both sectors is another guide for performing the fitting. The reduced fitting parameters are given in Tables I and II, with the results shown in Fig. 3.
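The fitting step can be illustrated with a minimal, self-contained sketch (not the code used in this work): a toy two-band graphene Hamiltonian whose single hopping parameter is recovered by least squares from synthetic reference bands; the thirteen-band case replaces the builder and the parameter vector with the full on-site energies and up-to-ninth-neighbor hoppings.

```python
# Toy illustration of the band-structure fitting step (not the authors' code).
import numpy as np
from scipy.optimize import least_squares

# Nearest-neighbor bond vectors of a honeycomb lattice (bond length = 1)
delta = [np.array([1.0, 0.0]),
         np.array([-0.5, np.sqrt(3) / 2]),
         np.array([-0.5, -np.sqrt(3) / 2])]

def bands(k, t):
    """Eigenvalues of the two-band Bloch Hamiltonian at wave vector k."""
    f = sum(np.exp(1j * (k @ d)) for d in delta)   # structure factor
    H = np.array([[0.0, t * f], [t * np.conj(f), 0.0]])
    return np.linalg.eigvalsh(H)                   # sorted, real

kpath = [np.array([kx, 0.0]) for kx in np.linspace(0.0, 2 * np.pi / 3, 25)]
E_ref = np.array([bands(k, -2.7) for k in kpath])  # synthetic "DFT" bands

def residuals(p):
    """Stack band-by-band differences between the TB model and the reference."""
    return np.concatenate([bands(k, p[0]) - e for k, e in zip(kpath, E_ref)])

fit = least_squares(residuals, x0=[-1.0])
print(fit.x)  # recovers approximately [-2.7]
```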
B. Interacting Part: Superconductivity
SA show that electron transport across a barrier must conserve the cone index, a consequence of the Klein-tunneling behavior in AA-stacked BlG. Here it is argued that the footprint of the cone index can also be traced in the formation of superconducting Cooper pairs. Owing to this index, salient differences emerge between the Cooper pairs in single-layer graphene and AA-stacked bilayer graphene. Two types of superconductivity, with even and odd symmetry, are predicted in AA-stacked bilayer graphene.
Odd/Even Superconducting Gap Equations and Symmetry Phases
Similar to the normal-state Hamiltonian, the superconducting Hamiltonian is also block-diagonalized into two sectors. Each block represents the superconducting Hamiltonian of the even or odd single-layer pseudo-graphene structure. The consequence is that superconductivity in AA-stacked BlG can be established independently in two distinct band-structure sectors. Two quasiparticles (i.e., "four electrons") with the same h-chirality (i.e., cone index) can team up to build a Cooper pair (Fig. 2c). In other words, pairing in bilayer graphene arises between quasiparticles within the coated single-layer pseudo-graphene band structure $H^+_c$ (even-sector superconductivity) or within the uncoated shrunken single-layer pseudo-graphene $H^-$ (odd-sector superconductivity) separately; even-odd pairing is impossible without further symmetry breaking.
Two distinct superconducting gap equations, corresponding to the $H^+_c$ and $H^-$ single-layer pseudo-graphene structures, emerge from minimization of the free energy. This shows that the general aspects of superconductivity in (Li-)decorated single-layer graphene41 and (Ca-)intercalated bilayer graphene are similar; they differ primarily in the possibility of two-gap superconductivity in the bilayer structures. A difference, of course, is that the interlayer interaction becomes a key factor: instead of a 2e− charge, the Cooper pairs in AA-stacked BlG carry a 4e− charge and, additionally, a right- or left-handed h-chirality index. This theoretical prediction requires experimental verification.
These even and odd superconducting gap equations were solved analytically to obtain the relations between the superconducting pairing potential and the resulting ordered phases. The two sets of gap equations have solutions similar to those obtained in our previous work for decorated single-layer graphene.41 The seven hybridized orbitals in the coated single-layer pseudo-graphene support nine possible bond-pairing amplitudes. There are nine superconducting phases with $p_x$, $p_y$, $f$, $s^\pm$, $d^\pm_{xy}$, and $d^\pm_{x^2-y^2}$ atomic-orbital-like symmetries corresponding to each of the even (+) / odd (−) gap equations. Only three of them are physically reachable, denoted by $\Psi^\pm_{2,s}$, $\Psi^\pm_{2,d_{xy}}$, and $\Psi^\pm_{2,d_{x^2-y^2}}$. These symmetries largely preserve the properties of the two-band model of pristine graphene (Fig. 4 of Ref. 41). The d-wave solutions are degenerate, so each sector can support chiral $d_{x^2-y^2}+id_{xy}$ superconductivity. These three phases are distorted by intercalation. In fact, the significant difference that appears between the pairing symmetries of two-band pristine C2 and shrunken graphene C6 (decorated graphene) is a skewness factor, i.e. $\alpha^{l,\pm}_{sy}\neq 1$, in front of the self-consistent gap solutions. In each of the even and odd sectors, one band is weakly dispersive near the Fermi energy along Γ → M; its Bloch wave function is a linear combination of $d_{x^2-y^2}$ and $d_{xy}$ character and is responsible for $d_{x^2-y^2}$ and $d_{xy}$ pairing with the lowest pairing energy in our model (see Ref. 41). Because of the high density of states of the carriers in this band, d-wave superconductivity is more robust against disorder than s-wave.
Dominant Bands: Possibility of Two-Gap Superconductivity
Superconductivity can be established in the odd or even sector of intercalated AA-stacked BlG, or simultaneously in both sectors. The even and odd sectors are coupled only via the chemical potential. Two nearly flat bands with d-wave-symmetry Bloch character crossing the Fermi energy, each belonging to one of the graphene-like structures, are responsible for the two distinct d-wave superconducting gaps that can emerge. Distorted s-wave superconductivity is formed between quasiparticles in the upper bands of both sectors, which have s-wave symmetry character. At moderate doping, distorted d-wave superconductivity is dominant in both sectors, while at high doping, distorted s-wave becomes preferable. Superconductivity with different phases in each sector, e.g., distorted s-wave in one sector and a d-wave phase in the other, is unlikely.
To determine whether superconductivity in IBlG is single-gap or multi-gap, depending on the type of intercalant used, one can inspect numerically which sector prevails. The hybridization of carbon (C pz) and intercalant (I s) orbitals, the electron or hole doping factor (chemical potential), the nearest-neighbor hopping symmetry breaking (Γ-point gap opening), and the interlayer coupling (H12) are the important factors in determining the superconducting pairing symmetry and the dominant bands. While only the even sector is influenced by C-I orbital hybridization and the odd-sector bands are not affected directly (except for a small gap opening), the interlayer coupling has a dual effect: it reduces the bandwidth of one sector (e.g., the even sector) while simultaneously increasing the bandwidth of the other (e.g., the odd sector). It therefore plays a crucial role in electronic correlation effects.
Mathematical analysis shows that in the limit of strong interlayer hopping, where the inter- and intra-layer hoppings are the same (up to a minus sign), the bands of the odd (even) sector become completely flat while the bandwidths of the other sector are doubled. In this case superconductivity established in the flat-band sector, even with a small Cooper pairing potential g0, leads to a high critical temperature, i.e. $k_BT_c = g_0$. In this limit all of the possible superconducting phases, i.e. d-, p-, and s-wave symmetries, are degenerate. This observation suggests that some aspects of unconventional superconductivity in bilayer AA-graphene, related to inter- versus intra-layer hopping effects, may become accessible under high pressure.
The best conditions for inducing superconductivity in IBlG are those in which the interlayer coupling is strong and the band structure of the even sector, the odd sector, or both deviates only slightly from the pristine graphene-like structures. Under these circumstances, if electron or hole doping brings the nearly flat bands of the odd or even sector to the Fermi surface, then chiral d+id odd- or even-sector superconductivity may be induced. A brief inspection of the band structures of different metal-intercalated C6MC6 (M = Li, Na, K, Rb, Cs, Gr, Be, Mg, Ca, Sr, and Ba), such as those shown in Fig. 3(a) of Ref. 43, shows that the odd-sector flat band (not affected by I-C orbital coupling) always lies on top of the even-sector flat band, which means the interlayer coupling t12 is positive in all of them. The structures in which the interlayer (IL) band is empty have negligible I-C coupling, so both the even and odd sectors have a structure similar to shrunken graphene C6. The even band of Li-intercalated BlG meets the Fermi surface, and its interlayer coupling is stronger than in the others.
This is an interesting structure that could host even-sector superconductivity alongside normal odd-sector Dirac quasiparticles. But since the IL band is empty, symmetry does not allow out-of-plane phonon vibrations to trigger superconductivity. It may nevertheless exhibit richer correlation effects than single-layer graphene under pressure, gating, or proximity effects. K- and Rb-intercalated BlG have the potential to exhibit superconductivity, while in Gr-intercalated BlG it appears that odd and even Dirac cones coexist simultaneously.
Experimental evidence for superconductivity in Ca-intercalated BlG has been reported. Motivated by this observation, we performed numerical calculations for C6CaC6. These show that for both the even- and odd-sector gap equations, the d-wave phases, i.e. $\Psi^\pm_{2,d_{xy}}$ and $\Psi^\pm_{2,d_{x^2-y^2}}$, are dominant (a smaller g0 means less interaction energy is needed for pairing) and only slightly distorted by intercalation, i.e. $\alpha^{2,\pm}_d \approx 1$, while the s-wave phase $\Psi^\pm_{2,s}$ requires greater energy and is significantly distorted, $\alpha^{2,\pm}_s \neq 1$. A phase transition from d-wave single-gap to d-wave dual-gap superconductivity in calcium-intercalated bilayer graphene is possible. A Tc of around 4 K has been reported experimentally and near 6 K theoretically; in addition, the possibility of distinguishing intralayer and interlayer electron-phonon interactions in samples of twisted bilayer graphene using Raman spectroscopy has been reported in Ref. 35. Relying on these results, Fig. 5 shows that both the even and odd d-wave phases are nearly degenerate at 2 K, consistent with this system being a two-gap superconductor around Tc ≈ 2 K. Our results support the two-d-wave-gap superconductivity proposed in Ref. 40 (Fig. 2), although the different sectors were not separated in that study.
Relying on the aforementioned properties, AA-stacked bilayer graphene may exhibit richer electronic properties than single-layer graphene. The study of superconductivity in pristine and intercalated AA-stacked BlG could lead to interesting experimental achievements, such as (even- and odd-sector) chiral superconducting d+id pairing, which was predicted originally in pristine single-layer graphene at the van Hove singularity, as well as the simultaneous coexistence of different phases, e.g., superconductivity and normal Dirac quasiparticles, or two-gap superconductivity (with different chirality) in different branches of the band structure.
VII. ACKNOWLEDGMENTS
R. Gholami acknowledges support that allowed an extended visit to the University of California Davis during part of this work. W.E.P. was supported by NSF grant DMR-1207622. The authors discussed the results and commented on the manuscript. S.M. performed the DFT calculations and designed the figures.
X. ADDITIONAL INFORMATION
Competing interests: The authors declare no competing interests.
XI. APPENDIX A: ACCURATE TIGHT BINDING MODEL
In our previous work we used a realistic multiband tight-binding model for decorated monolayer graphene and obtained its band structure analytically.41 Here we follow and generalize that method and find analytic solutions for the intercalated bilayer graphene spectrum in general form. We consider the Bloch ket state of Eq. 8 as

$$|\psi(\vec k)\rangle=\sum_{n,\alpha}C_\alpha\,e^{i\vec k\cdot\vec r_{n\alpha}}\,|\phi_{n\alpha}\rangle,\qquad(33)$$

in which $\vec r_{n\alpha}=\vec r_n+\vec d_\alpha$, $\vec r_n$ is the position vector of the $n$th Bravais lattice site, and $\vec d_\alpha$ is the position vector of the $\alpha$th subsite with respect to unit cell $n$. The Ca sublattice is labeled by $\alpha=0$, and the carbon subsites are labeled by $\alpha=1,\dots,12$. $|\phi_{n\alpha}\rangle=\phi_{n\alpha}(\vec r-\vec r_n-\vec d_\alpha)$ is the atomic π-electron ket state of subsite $\alpha$ of site $n$. The symmetries of this system imply that $C_\alpha(\vec k)=\pm C_{\alpha+6}(\vec k)$. The Schrödinger equation for this system, Eq. 34, can be written as a 13×13 matrix eigenvalue problem, Eq. 35, where the column matrix is $C(\vec k)=(C_1(\vec k)\;C_2(\vec k)\;\dots)^T$.

Here $t_{\mathrm{CaC}_i}$ is the hopping amplitude from Ca to the $i$th-neighbor C atom; the Ca-Ca dispersion is given in Eq. 36. The intralayer dispersion matrices $H_{11}$, $H_{22}$ and the interlayer dispersion matrices $H_{12}$, $H_{21}$ are built from the off-diagonal carbon-carbon dispersion matrices, in which $m$ and $n$ are layer indices. The shorthand

$$\cos\vec k\cdot\vec\xi_1+\cos\vec k\cdot\vec\xi_2+\cos\vec k\cdot\vec\xi_3\qquad(41)$$

has been introduced, and it has been supposed that $w_t=t'_{mn}$. Using a unitary transformation one can separate the bilayer graphene Hamiltonian, Eq. 35, into two decoupled single-layer pseudo-graphene Hamiltonians, one of which is decorated with the intercalant layer. Here $H^\pm=H_{11}(\vec k)\pm H_{12}$, in which the $\vec k$-dependent on-site energies are defined as $\varepsilon^\pm_1(\vec k)=\epsilon_A-\mu_0+\alpha^\pm(\vec k)$ and $\varepsilon^\pm_2(\vec k)=\epsilon_B-\mu_0+\alpha^\pm(\vec k)$, together with the corresponding shorthand notation. The unitary transformation of Eq. 43 divides the thirteen bands of intercalated bilayer graphene into groups of six and seven bands. Following the approach of Ref. 41 applied to monolayer decorated graphene, an exact analytical solution of the six-band group can be found in the general case. These bands are eigenvalues of the $H^-$ matrix and are not affected directly by the intercalant band.

In the special case of pristine bilayer graphene, in which $\gamma^{\pm*}(\vec k)=\theta^\pm(\vec k)=\beta^\pm(\vec k)$, $\varepsilon^\pm_1(\vec k)=\varepsilon^\pm_2(\vec k)$, and $\tau^\pm_i(\vec k)=d^\pm_i(\vec k)$, Eq. 44 can easily be diagonalized to find the eigenvalues and eigenvectors; the eigenvectors are obtained by setting $m=0,1,2$ in the corresponding expression. However, except at the Γ point it is challenging (and unhelpful) to obtain exact analytic forms for the seven-band group. These bands are eigenvalues of the $H^+_c$ matrix, and analytic expressions for them can be obtained only in the particular case of no hopping between the intercalant layer and the graphene sheets, similar to lithium-intercalated bilayer graphene, where the intercalant band is empty (no Li-C hopping). In these cases the nontrivial solutions are eigenvalues of the $H^+$ matrix. The eigenvalues of the $H^-$ and $H^+$ matrices are given following Ref. 41, in which $\vec k$-dependent chemical potentials are defined, the function $\Pi^\pm_0(t_2,\xi_i,\vec k)$ is introduced, and the $w^\pm_m(t_i,\xi_i,\vec k)$ are eigenvalues of auxiliary matrices. The upper-left portion of Eq. 58 can be obtained sufficiently well by perturbation theory.
XII. APPENDIX B: BOGOLIUBOV-DE GENNES TRANSFORMATION
The interacting Hamiltonian $H_{su}$ in matrix representation is a 14×14 matrix. $H_N$ is the Hamiltonian of the normal state and $H_p$ is the pair-interaction matrix. The full matrix must be diagonalized to obtain the quasiparticle spectrum.
The mean-field superconducting Hamiltonian of Eq. 1 in Nambu space is $\hat H_{su}=\sum_{\vec k}\Psi^\dagger(\vec k)H_{su}(\vec k)\Psi(\vec k)$, where $H_{su}$ is given in matrix representation in Eq. 60. The intralayer pairing matrices satisfy $H^P_{11}(\vec k)=H^P_{22}(\vec k)$ and the interlayer pairing matrices satisfy $H^P_{12}(\vec k)=H^P_{21}(\vec k)$, where $m$ and $n$ are layer indices that can take the values 1 or 2. The order parameters in Fourier space are accordingly $\Sigma^{11}_l(\vec k)=g_1\sum_{\langle ij\rangle}e^{i\vec k\cdot\vec\tau_l}$, with $\Sigma^{12}_l(\vec k)$ defined analogously, where the $\langle ij\rangle$ subscript indicates the nearest-neighbor pairing amplitudes in real space, as illustrated in Fig. 1. Introducing a unitary transformation matrix, Eq. 60 can be transformed to the block-diagonal form of Eq. 64, with the new pairing matrices $H^+_p(\vec k)$ and $H^-_p(\vec k)$ defined accordingly. From Eq. 64 it can be seen that the superconducting Hamiltonian $H_{su}$ decouples into two new superconducting Hamiltonians, $H^+_{su}$ and $H^-_{su}$. Thus electrons can only be paired within the seven-band sector $H^+_c$ or within the six-band sector $H^-$, without coupling between the sectors, and superconductivity in bilayer graphene can be interpreted as that of two decoupled monolayer-graphene-like systems with independent behaviors.
XIII. APPENDIX C: TWO SUPERCONDUCTING GAP EQUATIONS
The linearized superconducting gap equations are obtained by minimizing the quasiparticle free energy with respect to the nearest-neighbor order parameters, or equivalently with respect to $\Delta^\pm_\alpha$. For $F^+$ the summation in the free energy runs over $n=1,\dots,7$, giving $E^Q_n=E^{Q+}_{n,s}$; for $F^-$ it runs over $n=8,\dots,13$, giving $E^Q_n=E^{Q-}_{n,s}$, with $E^{Q\pm}_{n,s}$ introduced in Eqs. 19 and 20. Minimization of the free energy with respect to $\Delta^+_\alpha$ gives Eq. 68, the independent gap equations for the seven-band even-symmetry graphene-like Hamiltonian $H^+_c$. Minimizing the free energy with respect to $\Delta^-_\alpha$ gives Eq. 69, where $\Delta_\alpha$, as illustrated in Fig. 1(b), covers all possible nearest-neighbor inter- and intra-layer C-C pairing amplitudes. Written in matrix form as Eq. 70, Eqs. 68 and 69 can be interpreted physically as the pairing of electrons in different bands with pairing interaction $g^\pm_0$. Only in this limit is $\Delta^\pm_{mn}(\vec k)$ equal to the product of the band Green function and $g_0$. Here $\hat d^{\pm\sigma}_i(\vec k)=\sum_{m=1}^{7}C^{\pm*}_m(E_i(\vec k))\,\hat c^\sigma_m(\vec k)$ annihilates an electron with spin σ in the $i$th band of the six-band (odd) or seven-band (even) sector with energy $E^\pm_i(\vec k)$. In this limit, Eq. 71 has two solutions, $\Psi_\Delta=\Psi'_\Delta$ or $\Psi_\Delta=-\Psi'_\Delta$, and the corresponding gap equations, Eqs. 70 and 71, become decoupled gap equations for the even and odd sectors of the graphene-like systems.
XIV. APPENDIX D: FLAT BAND(S) SUPERCONDUCTIVITY
Mirror-symmetry transformation rearranges the noninteracting Hamiltonian, Eq. 8, as a direct sum of two single-layer pseudo-graphene structures, $\hat H_N=\hat H^+_N\oplus\hat H^-_N$ (even sector, + sign; odd sector, − sign), with renormalized hopping integrals of the form $t^\pm_{i\alpha\sigma,j\beta\sigma}=t^{\mathrm{inter}}_{i\alpha\sigma,j\beta\sigma}\pm t^{\mathrm{intra}}_{i\alpha\sigma,j\beta+6\,\sigma}$. In the limiting case of strong interlayer hopping, wherein $t^{\mathrm{inter}}_{i\alpha\sigma,j\beta\sigma}\to\pm t^{\mathrm{intra}}_{i\alpha\sigma,j\beta+6\,\sigma}$, the bands of the odd (or even) sector become completely flat while the bandwidth of the other sector doubles. In this limit the thermal weight factor of Eq. 25 behaves as

$$\frac{\tanh\!\big(\beta E^\pm_i(\vec k)/2\big)+\tanh\!\big(\beta E^\pm_j(\vec k)/2\big)}{2\big(E^\pm_i(\vec k)+E^\pm_j(\vec k)\big)}\;\longrightarrow\;\frac{\beta}{4},$$

so the Γ-matrix elements are given by Eq. 75. These elements are linked to the normal-state Bloch coefficients via the $\Omega^{\pm\beta}_{ij}(\vec k)$ factors of Eq. 21. For the case of pristine bilayer graphene, Eq. 76 can be evaluated analytically. In this limit the normal-state Bloch coefficients are given by Eq. 77, wherein $e^{i\varphi^\pm_m(\vec k)}=\eta^{\pm*}_m/|\eta^\pm_m|$ and $\eta^\pm_m(\vec k)=d^\pm_2(\vec k)+u_m d^\pm_1(\vec k)+u^*_m d^\pm_3(\vec k)$. The $\Omega^{\pm\beta}_{ij}(\vec k)$ factors can be calculated by substituting the Bloch coefficients of Eq. 77 into Eq. 21; for instance, one can show that $\Omega^{\pm1}_{11}(\vec k)=\Omega^{\pm4}_{11}(\vec k)=\Omega^{\pm7}_{11}(\vec k)=-\tfrac13\cos(\vec k\cdot\vec\delta_1-\varphi^\pm_1(\vec k))$. By calculating a large number of these factors and substituting them into Eq. 76, one obtains the corresponding flat-band gap equation.
Impact of Amylose-Amylopectin Ratio of Starches on the Mechanical Strength and Stability of Acetylsalicylic Acid Tablets
The two main components of starch, amylose and amylopectin, are responsible for its interaction with moisture. This study investigated how the moisture sorption properties of starches with different amylose-amylopectin ratios impacted tablet properties, including drug stability. The starch samples were equilibrated to 33, 53, and 75% relative humidity (RH) and then assessed for tabletability, compactibility, and yield pressure. The effect of humidity on viscoelastic recovery was also evaluated. Tabletability and compactibility of high-amylose starch were better than those of high-amylopectin starch at 33 and 53% RH; at 75% RH, however, the reverse was observed. In terms of yield pressure, high-amylose starch had a lower yield pressure than high-amylopectin starch. High-amylose starch tablets also exhibited a lower extent of viscoelastic recovery than high-amylopectin starch tablets. The variations in the tableting properties were found to be related to the relative locality of the sorbed moisture. Degradation of acetylsalicylic acid in high-amylose starch tablets at 75% RH, 40°C was less than in the tablets with high-amylopectin starch. This observation could be attributed to the greater number of water-molecule binding sites in high-amylose starch. Furthermore, most of the sorbed moisture of high-amylose starch was internally absorbed, thereby limiting the availability of diffusible sorbed moisture for the degradation reaction. Findings from this study provide better insight into the influence of amylose-amylopectin ratio on tableting properties and the stability of moisture-sensitive drugs. This is of particular importance as starch is a common excipient in solid dosage forms.
INTRODUCTION
The interaction between water and solids is of paramount importance in the formulation, processing, and product performance of pharmaceutical solid dosage forms. Exposure of materials to moisture is known to influence flow, tabletability, drug stability, and dissolution (1)(2)(3)(4)(5)(6)(7)(8)(9). Due to the ubiquitous nature of moisture, it is neither possible nor desirable to completely remove it: moisture is necessary for bond formation, and manufacturing processes often involve water. However, moisture exposure can be kept short and residual moisture levels low in the products, while post-manufacture products can be protected by coating or packaging, such that the deleterious effects of moisture are minimized.
Water can associate with solids in two ways: adsorption and absorption (10). Interaction of water at the surface as a monolayer (a primary layer of water molecules tightly bound on the available solid surface) and as multilayer moisture (the layer formed beyond the monolayer as more water molecules adsorb onto it) is termed adsorption. Penetration of water into the bulk solid structure is known as absorption. The term sorption is commonly used to describe the state when both adsorption and absorption occur. The sorbed moisture can be further classified into bound and unbound moisture (11)(12)(13). Additionally, moisture can also be grouped based on its relative distribution in solids as (1) monolayer moisture, (2) externally adsorbed moisture condensed on the monolayer, and (3) bound moisture as internally absorbed moisture (14,15). The extent of multilayer moisture can be approximated by subtracting the monolayer moisture from the externally adsorbed moisture (1). The availability of the sorbed moisture for a reaction, however, may not be similar across different materials. Water activity is a unitless parameter that ranges from 0 to 1 and is derived from the ratio of the water vapor partial pressure to the saturation pressure. Water activity reflects the escaping tendency, or fugacity, of water from a substrate. Accordingly, water activity gives an indication of the degree of freely available moisture for reaction and hence the reactivity (16).
The solid-moisture interactions depend largely on the physicochemical properties of the materials. Nokhodchi et al. (1) reported an increase in moisture uptake with increasing particle size of hydroxypropyl methylcellulose. Similarly, Agrawal et al. (17) observed differences in distribution of sorbed moisture in relation to the particle size of ethylcellulose whereby larger particles had more internally absorbed moisture and less externally adsorbed moisture. However, Saripella et al. (18) reported no significant effect of crospovidone particle size on moisture affinity of the material and that similar moisture distribution patterns were observed. Besides particle size, molecular arrangements in solids (crystalline or amorphous structure) can also determine their interaction with moisture (19,20). Mihranyan et al. (20) suggested that structural properties of the materials including surface area and pore volume should be considered along with crystallinity, when evaluating moisture interactions. Particle porosity was also found to affect the state of moisture found in the material, as bound or unbound moisture (21). Yet, another study showed similar moisture uptake when the particle porosity was varied (22). Therefore, various physicochemical properties of the materials have to be examined holistically to explain moisture-solid interactions.
Starch is a natural carbohydrate polymer comprising two main polysaccharides, amylose and amylopectin, and some minor secondary components (proteins, lipids, and minerals) in relatively small fractions (23). Depending on their botanical origins, starches differ slightly in composition. The amylose-amylopectin ratio has been reported to influence the gelatinization temperature, gelling/pasting behavior, swelling, crystallinity, and moisture interaction of starches (24)(25)(26). Starch can also be modified to confer suitable properties for specific purposes. Modifications of starch, including but not limited to etherification, esterification, crosslinking, oxidation, cationization, and grafting, could potentially alter its properties (27)(28)(29). However, modification may also alter the moisture interactivity of the starch.
In the pharmaceutical industry, starch is commonly used as a multifunctional excipient, functioning as a binder, disintegrant, diluent, or glidant (30). Different starches have been evaluated for their performance as tablet diluents. Differences in the compression behavior of starches were attributed to variations not only in constituents but also in physical attributes such as shape, size, and size distribution. Paronen and Juslin (31) reported that particle rearrangement had minimal effect on the densification of larger starch granules (e.g., potato starch) during compression. On the contrary, the densification of smaller starch granules (e.g., maize starch) was primarily due to particle rearrangement. It has also been reported that the gelling property of amylose contributes to the formation of tablets having higher tensile strength (32). Starches of different botanical origins also exhibited differences in the distribution of sorbed moisture; notably, a tablet formulation containing starch with more of its moisture held as internally absorbed moisture showed lower drug degradation (33).
Despite the massive amount of information on the structure and composition of starch, there is still limited literature on moisture effects relating to the tableting and drug stability of starch-based formulations. Also, variations of starch grains in shape, size, and size distribution could confound findings related to differences in amylose-amylopectin ratio. Accordingly, it is essential to obviate variations due to physical attributes. Therefore, the aim of this investigation was to unravel the moisture properties of starches having different amylose-amylopectin ratios and to relate the findings to tableting properties and drug stability. Interaction of starch with moisture can be of consequential interest in formulations containing moisture-sensitive drugs or components. In this study, moisture interaction of the starches was evaluated by the moisture sorption-desorption isotherm and the distribution pattern of the sorbed moisture. Effects of moisture on tableting properties, including tabletability, compactibility, and yield pressure, were investigated. Viscoelastic recovery of the tablets was also assessed. Lastly, stability of a moisture-sensitive drug in tablets formulated with starch was evaluated at 75% RH, 40°C (36). For evaluation of ASA degradation, the following solvents were used: acetonitrile (HPLC grade, Fisher Chemical, USA), ortho-phosphoric acid (Merck Chemicals, Germany), and purified water.
Conditioning of Starch
Prior to RH conditioning, the starch samples were ovendried at 60°C for at least 12 h. Subsequently, the oven-dried samples were stored at 25°C for at least 2 weeks over a series of saturated inorganic salt slurries of MgCl 2 •6H 2 O, Mg(NO 3 ) 2 •6H 2 O, and NaCl for storage conditions of 33, 53, and 75% RH, respectively.
Assessment of Morphology
Morphology of the starch was examined under a scanning electron microscope (SEM; JSM-6010LV, JEOL, Japan). The starch sample was mounted on a sample holder using a copper tape and subjected to sputter coating with platinum (MSP-2S, IXRF Systems, USA) prior to observation at 500× magnification with 2.5-kV accelerating voltage.
Characterization of Starch Grain Size
Images of at least 600 grains were randomly captured using a SEM (JSM-6010LV, JEOL, Japan). Size of the grains was measured with the in-built software (InTouchScope, JSM-6010, v.1.11). Size characterization was performed in terms of D10, D50, and D90, which correspond to the 10th, 50th, and 90th percentile grain sizes, respectively. The size distribution was calculated as span based on Eq. 1:

Span = (D90 − D10) / D50 (1)
Powder X-ray Diffraction
Structure of the starch was characterized by its X-ray diffraction pattern (XRD; XRD-6000, Shimadzu, Japan). The XRD pattern was evaluated for starch samples that had been conditioned to 33, 53, and 75% RH. The sample was scanned (Cu-Kα radiation source, 40 kV, 30 mA) from 5 to 50° 2θ, in 0.02° steps at 2°/min. Relative crystallinity of the sample was estimated using the software (XRD-6100/7000 v.7.00), based on the ratio between the crystalline peaks and the total intensity. The determination of relative crystallinity was performed in triplicate and averaged results reported.
Moisture Content
Moisture contents of the oven-dried and RH-equilibrated starch samples were measured by drying approximately 500 mg of sample at 105°C (HE73, Mettler Toledo, Switzerland) until a constant sample weight was reached. The moisture content was subsequently calculated based on Eq. 2.
Dynamic Moisture Sorption-Desorption Isotherm
The moisture sorption-desorption isotherm was obtained using a vapor sorption analyzer (Aqualab, Meter Group, USA) at 25°C. Approximately 1000 mg of oven-dried starch sample was placed in the sample chamber and subjected to conditions of increasing RH from 10 to 90% (sorption process) and decreasing RH from 90 to 10% (desorption process). The starch sample was equilibrated at each RH condition before moving to the next. Equilibration was achieved when the change in sample mass with time was less than 0.05%/h. Data from the sorption-desorption isotherm were fitted by non-linear regression to the Guggenheim-Anderson-de Boer (GAB) model (Eq. 3):

M_w = (M_m · C_GAB · K_GAB · a_w) / [(1 − K_GAB·a_w)(1 − K_GAB·a_w + C_GAB·K_GAB·a_w)] (3)

where M_w is the equilibrium moisture content, M_m is the monolayer moisture content, C_GAB is a constant that represents the total heat of the first layer of sorption, and K_GAB is a constant related to multilayer sorption. a_w refers to water activity. Specific surface area (SSA) was derived from M_m using Eq. 4.
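As an illustration of this fitting step (a sketch only; the study used Matlab, and the data arrays below are placeholders, not measured isotherm values), the GAB parameters can be extracted with a standard non-linear least-squares routine:

```python
# Sketch of fitting the GAB model to a sorption isotherm (illustrative data).
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Mm, C, K):
    """GAB equilibrium moisture content (%) as a function of water activity."""
    return Mm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

aw = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
Mw = np.array([3.1, 4.6, 5.8, 7.0, 8.3, 9.9, 12.1, 15.4, 21.0])  # placeholder %

(Mm, C, K), _ = curve_fit(gab, aw, Mw, p0=[5.0, 10.0, 0.7],
                          bounds=([0, 0, 0], [np.inf, np.inf, 1.0]))
print(f"Mm = {Mm:.2f}%, C_GAB = {C:.1f}, K_GAB = {K:.3f}")
```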
The area of the hysteresis loop was obtained by taking the difference between the areas under the curve (AUC) of the sorption and desorption isotherms (Eq. 5).
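A minimal sketch of Eq. 5 using trapezoidal integration follows (placeholder data; the sign convention assumes the desorption branch lies above the sorption branch, as observed for these starches):

```python
# Hysteresis area as the AUC difference between desorption and sorption.
import numpy as np

def auc(y, x):
    """Trapezoidal area under the curve."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

aw = np.linspace(0.1, 0.9, 9)
m_sorp = np.array([3.1, 4.6, 5.8, 7.0, 8.3, 9.9, 12.1, 15.4, 21.0])
m_des = np.array([3.6, 5.2, 6.5, 7.8, 9.2, 10.8, 13.0, 16.1, 21.2])

area = auc(m_des, aw) - auc(m_sorp, aw)  # units: aw · % moisture
print(f"hysteresis area ≈ {area:.2f} aw·%")
```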
In the Young and Nelson model (Eq. 6), θ_Y-N is the fraction of the surface covered by a monolayer of water molecules, RH is the relative humidity, and E is a constant unique to each material, which can be expressed by Eq. 7.
where q_l is the heat of adsorption of water molecules on the surface, q_L is the normal heat of condensation of water molecules (40.8 kJ/mol), k_B is the Boltzmann constant (1.38 × 10⁻²³ J/K), and T is the absolute temperature.
To calculate the fraction of the surface covered by multilayer moisture (Φ), Eq. 8 was used, and Eq. 9 was used to determine the total amount of adsorbed moisture in the multilayer (α). In the subsequent expressions, M_s and M_d refer to the moisture contents of the sample during the sorption and desorption phases, respectively, RH_max is the maximum relative humidity, and A_Y-N and B_Y-N are constants unique to each material, which can be expressed using Eqs. 12 and 13, respectively.
where ρ_w is the water density (1 g/mL), W_m refers to the dry sample weight used (1000 mg), and V_m and V_a are the volumes of the adsorbed and absorbed moisture, respectively. As multilayer moisture is contributed by the monolayer (A_Y-N·θ_Y-N) and externally adsorbed moisture (A_Y-N(θ_Y-N + α)), the multilayer moisture can be estimated from Eq. 14 (1).
Evaluation of Tabletability and Compactibility
Compression was performed on starch samples that had been equilibrated to different RH conditions (33, 53, and 75% RH). Starch, 200 mg per tablet, was compressed into tablets at different pressures (64, 127, 191, 255, and 318 MPa) using a compaction simulator (STYL'One Evolution, MedelPharm, France). Flat-face punches and a die set of 10-mm diameter (Natoli Engineering Company, USA) were used for the compression. The upper and lower punches were set to move at a linear speed of 35 mm/s. Only freshly withdrawn samples of RH-conditioned starch were used for compression into tablets. Immediately after preparation, the tablets were kept in the corresponding RH chamber at 25°C until required for analysis. Tablet evaluation was carried out on tablets that had been kept for at least 24 h post-production.
Tabletability describes tablet tensile strength as a function of applied compression pressure. Tabletability profile was obtained by plotting a graph of tablet tensile strength against the compression pressure. Slope of the linear line of the plot was used as a tabletability index.
Tablet tensile strength (σ) was calculated according to Eq. 15:

σ = 2F / (π · D_24h · H_24h) (15)

The tensile strength was determined using five randomly chosen tablets and the results were averaged.
where F refers to the tablet breaking force, measured using a hardness tester (TBF 1000, Copley Scientific, UK). Tablet thickness was measured 24 h after production (H_24h) using a thickness gauge (Mitutoyo Absolute, 547-300S, Japan). The same thickness gauge was also used to obtain the tablet diameter (D_24h).
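For orientation, a worked example with purely hypothetical values (a 10-mm diameter, 3-mm thick tablet breaking at 100 N): σ = (2 × 100 N) / (π × 10 mm × 3 mm) ≈ 200 N / 94.25 mm² ≈ 2.12 N/mm² = 2.12 MPa.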
Compactibility is the ability of a material to form a compact of sufficient strength. The compactibility of a material can be evaluated from a plot of tablet tensile strength against tablet porosity at different compression pressures. Tablet porosity (ε) was calculated based on Eq. 16:

ε = 1 − ρ_app / ρ_true (16)
where ρ_app and ρ_true refer to the apparent density and true density of the material, respectively. ρ_app was calculated using Eq. 17:

ρ_app = W / (π · r_24h² · H_24h) (17)

ρ_true was measured using a helium pycnometer (Pentapycnometer, Quantachrome Instruments, USA). ρ_true was measured on dried starch to minimize the influence of volatile moisture on the measurement.
where W is the tablet weight, weighed using an analytical balance with a readability of 0.1 mg and reproducibility of ±0.1 mg (QUINTIX224-1CEU, Sartorius Lab Instruments GmbH & Co. KG, Germany), and r_24h is the tablet radius measured 24 h post-production, derived from the tablet diameter measured using a thickness gauge.
The Ryshkewitch-Duckworth equation (Eq. 18), σ = σ0·e^(−b·ε), can be transformed into a linear equation. By plotting ln(tensile strength) against tablet porosity, the tablet tensile strength at zero porosity (σ0) and the constant b, which is related to the pore distribution within a tablet, can be obtained (37,38).
Determination of Yield Pressure
Yield pressure of the starch sample was indirectly determined by assessing the compressibility of the sample. Compressibility refers to the ability of a material to undergo volume reduction. For the assessment of compressibility, 500 mg of starch was compressed using a compaction simulator (STYL'One Evolution, MedelPharm, France) installed with 14-mm flat-face punches and a die set (Natoli Engineering Company, USA). The upper and lower punches were set to move at a 15 mm/min linear compression speed to achieve a maximum compression pressure of 95 MPa. Changes in the tablet thickness with compression pressure were recorded and used in the computation of ρ_app using the Analis software (AnalisMX v2.07.08).
The Heckel equation (Eq. 19) (39) was used to analyze the compressibility, based on the in-die method:

ln[1/(1 − D)] = K_H·P + A_H (19)
where K_H and A_H are obtained from the slope and y-intercept of the linear portion of the Heckel plot, respectively. K_H is a constant related to the ability of a material to undergo plastic deformation; its reciprocal is equivalent to the yield pressure, which indicates the compressibility of the material. A_H is a constant related to the degree of packing achieved by particle rearrangement before considerable interparticle bonding takes place. D is the relative density, obtained from the ratio between ρ_app and ρ_true at compression pressure P. The extent of particle rearrangement in the die was further evaluated in terms of D_a (relative density during the initial densification resulting from die filling, particle slippage, and rearrangement), D_0 (relative density at zero applied pressure), and D_b (relative density due to the change in density resulting from particle rearrangement). From the value of A_H obtained from the y-intercept of the linear portion of the Heckel plot, D_a was calculated using Eqs. 20 and 21:

A_H = ln[1/(1 − D_a)] (20)
D_a = 1 − e^(−A_H) (21)
Similarly, using A_H⁰ obtained from the starting point of the Heckel plot before compression starts, D_0 was calculated according to Eqs. 22 and 23:

A_H⁰ = ln[1/(1 − D_0)] (22)
D_0 = 1 − e^(−A_H⁰) (23)

D_b was obtained by taking the difference between D_a and D_0 (Eq. 24): D_b = D_a − D_0.
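A compact sketch of this analysis follows (synthetic illustrative data, not the Analis workflow used in the study):

```python
# In-die Heckel analysis sketch: fit ln(1/(1-D)) vs pressure over a linear
# window; the reciprocal of the slope is the yield pressure.
import numpy as np

P = np.linspace(20, 90, 15)             # compression pressure (MPa)
D = 0.55 + 0.004 * P - 1e-5 * P**2      # synthetic relative density data

y = np.log(1.0 / (1.0 - D))             # Heckel transform
win = (P >= 40) & (P <= 80)             # assumed linear region
K_H, A_H = np.polyfit(P[win], y[win], 1)

Py = 1.0 / K_H                          # yield pressure (MPa)
D_a = 1.0 - np.exp(-A_H)                # packing from filling + rearrangement
print(f"yield pressure ≈ {Py:.0f} MPa, D_a ≈ {D_a:.3f}")
```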
Evaluation of Viscoelastic Recovery
Tablets for the evaluation of viscoelastic recovery were produced in the same manner as described in the section on evaluation of tabletability and compactibility. To account for changes in the axial and radial directions, the viscoelastic recovery was assessed by measuring tablet dimensions (thickness and diameter) using a thickness gauge. Tablet dimensions were measured immediately and 24 h after production to obtain the tablet volume immediately (V_imm, Eq. 25) and 24 h post-production (V_24h, Eq. 26):

V_imm = π · r_imm² · H_imm (25)
V_24h = π · r_24h² · H_24h (26)
where r_imm and H_imm refer to the tablet radius and thickness measured immediately after production.
The viscoelastic recovery was then expressed as the change in tablet dimensions according to Eq. 27:

Change in tablet dimensions (%) = [(V_24h − V_imm) / V_imm] × 100 (27)
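For orientation, assuming the volume-based form of Eq. 27 above, a worked example with hypothetical dimensions (radius 5 mm throughout, H_imm = 2.80 mm, H_24h = 2.90 mm): V_imm = π × 25 × 2.80 ≈ 219.9 mm³ and V_24h = π × 25 × 2.90 ≈ 227.8 mm³, so the change in tablet dimensions is (227.8 − 219.9)/219.9 × 100 ≈ 3.6%.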
Tablet Preparation
Tablets, each weighing 300 mg, comprising starch and ASA (4:1, w/w), were produced using a compaction simulator (STYL'One Evolution, MedelPharm, France). Prior to tablet production, the powder mixture was prepared by mixing starch and ASA in a glass bottle (diameter: 5.5 cm, height: 5.5 cm) with a figure-of-8 mixing motion for 3 min. An accurately weighed amount of powder mixture was then filled manually into the die. The compression was performed using 10-mm flat-face punches and a die set (Natoli Engineering Company, USA), with both the upper and lower punch linear compression speeds set at 35 mm/s. A compression pressure of 191 MPa was used. Three batches of tablets were produced, with the starch fractions as (A) high-amylose starch, (B) a mixture of high-amylose and high-amylopectin starches in 1:1 ratio, and (C) high-amylopectin starch. The tablets, immediately after fabrication, were stored under 75% RH, 40°C to accelerate ASA degradation. The tablets were also evaluated for viscoelastic recovery. The evaluation was performed on five randomly chosen tablets according to the method described in the previous section.
Evaluation of ASA Degradation
Tablets containing ASA were evaluated for ASA degradation using a HPLC method as previously reported (33). The tablets were evaluated 7, 14, 28, 60, and 90 days after production. At each sampling time point, at least three randomly chosen tablets were evaluated for ASA degradation and the results averaged. Each tablet was pulverized in a mortar using a pestle. Subsequently, about 150 mg of the pulverized powder, accurately weighed, was mixed with a menstruum of acetonitrile:water (35:65, v/v) to 10 mL and ultrasonicated (LC60H, Elma, Germany) for a minute before passing through a 0.45-μm filter (regenerated cellulose; Sartorius, Germany) prior to HPLC analysis. The filtrate (20 μL) was injected into the HPLC column for analysis. A reversed-phase C-18 column (ACE Generix, 5 μm, 4.6 mm × 100 mm, Advanced Chromatography Technologies, UK) was used as the stationary phase and maintained at 40°C. The mobile phase, comprising acetonitrile:water:ortho-phosphoric acid (35:65:0.2, v/v/v), was eluted at 0.8 mL/min using an isocratic elution mode.
ASA Content Uniformity
ASA content in ten randomly chosen tablets was determined using the HPLC method described in the previous section.
Data Analysis
All graphs were plotted using the software (GraphPad Prism, v.6.01, USA). The same software was used to perform linear regression. Data fitting to the GAB model, Young and Nelson model, and Leeson and Mattocks model was carried out using Matlab (R2013b, MathWorks, USA).
Morphology, Size, and Size Distribution
SEM images of the starch grains (Fig. 1) showed that there were no obvious differences in the morphology of high-amylose starch and high-amylopectin starch. The starch grains can be described as having irregular and angular surfaces with polygonal shapes. However, results from the size characterization (Table I) revealed differences between the two starches.
XRD Pattern
XRD patterns of the two starches are shown in Fig. 2, and the diffractograms are generally in agreement with those from other studies (41,42). No obvious changes in the XRD patterns of the starches after storage at different RH conditions were detected. Distinct peaks at 2θ values of about 15° (peak 1), 18° (peak 2), and 24° (peak 3) were observed in the XRD patterns of high-amylopectin starch. In contrast, high-amylose starch showed peaks at 2θ values of around 17° (peak 1) and 20° (peak 2). The high-amylopectin starch generally exhibited higher relative crystallinity than high-amylose starch, except for samples equilibrated at 75% RH, where both starches had almost comparable relative crystallinity (Table I).

Figure 3 shows that after oven drying, both starches had comparable moisture content. Differences in the moisture content manifested after storage at different RH, whereby high-amylose starch had higher moisture content than high-amylopectin starch at all RH conditions. For both starches, the moisture content showed an increasing trend with increasing storage RH.
Moisture Sorption-Desorption Isotherm
Moisture sorption-desorption isotherms of the starches are shown in Fig. 4a. The general shape of the isotherms is similar for both starches. The isotherms showed non-linear increments in moisture content with increasing water activity and had the characteristic sigmoidal shape, similar to the typical moisture sorption-desorption isotherm of starch (26,43). Hysteresis between the sorption and desorption isotherms was also observed over the entire range of water activity studied. The area of hysteresis was larger for high-amylose starch than for high-amylopectin starch: approximately 1.87 and 1.12 a_w·% moisture for high-amylose starch and high-amylopectin starch, respectively.
GAB Modeling
Table II shows the results of fitting the moisture sorption-desorption data to the GAB model. The experimental data were relatively well fitted by the GAB model (R² = 1). While high-amylose starch had an isotherm similar in shape to that of high-amylopectin starch (Fig. 4a), the values of the parameters obtained from the GAB model differed between the starches, suggesting that the starches have their own definitive modes of interaction with moisture. Values of M_m and C_GAB derived from the sorption isotherm were lower than the values obtained from the desorption isotherm. In contrast, K_GAB values from the sorption isotherm were slightly higher than the K_GAB values obtained from the desorption isotherm. C_GAB, a constant related to the total heat of the first layer of sorption, was observed to have a higher magnitude than K_GAB. This confirmed the stronger interaction between the sorption sites on the starch matrix and the water molecules of the monolayer moisture. The lower magnitude of K_GAB, a constant related to the heat of multilayer sorption, implied weaker interaction of water molecules in the bulk or multilayer moisture compared with the monolayer moisture.

In general, high-amylose starch had higher M_m than high-amylopectin starch. Accordingly, the SSA approximated from M_m was also higher for high-amylose starch than for high-amylopectin starch. The SSA values approximated from M_m of the sorption isotherm were 296.87 and 253.26 m²/g for high-amylose and high-amylopectin starches, respectively. Similarly, the SSA values obtained using M_m from the desorption isotherm were about 430.38 and 319.70 m²/g for high-amylose starch and high-amylopectin starch, respectively. In terms of the energy parameters, high-amylose starch was found to have higher C_GAB than high-amylopectin starch for the parameters derived from both the sorption and desorption isotherms. This indicates a higher driving force for moisture sorption in high-amylose starch, specifically for the formation of monolayer moisture. K_GAB values derived from the sorption isotherm were almost comparable for the two starches, whereas the K_GAB derived from the desorption isotherm was marginally higher for high-amylopectin starch than for high-amylose starch.
Locality of Sorbed Moisture
The experimental data were found to fit well to the Young and Nelson model (R² > 0.90) (Table III). The percentage of the total moisture content distributed into monolayer moisture, externally adsorbed moisture, and internally absorbed moisture is shown in Fig. 4b. Initially, most of the sorbed moisture was found as monolayer moisture, and this percentage reduced gradually as water activity increased. With increasing water activity, the fraction of the sorbed moisture distributed as externally adsorbed and internally absorbed moisture increased. For high-amylose starch, at water activity >0.4, a considerable proportion of the sorbed moisture existed as internally absorbed moisture. In contrast, the proportion of the sorbed moisture found as internally absorbed moisture in high-amylopectin starch was low, and at water activity >0.6, the bulk of the moisture taken up by high-amylopectin starch was located externally. From Fig. 4b, it was also observed that at water activity >0.6, the moisture distributed into externally adsorbed moisture was greater than the monolayer moisture for both high-amylose and high-amylopectin starches, thereby allowing estimation of the extent of multilayer moisture, given in Table IV. The multilayer moisture increased as water activity increased, and high-amylose starch had slightly higher multilayer moisture than high-amylopectin starch.
Tabletability, Compactibility, and Compressibility
Tableting properties of the starches are summarized in Table V. Tabletability and compactibility of high-amylose starch were more sensitive to the effect of RH, as seen from the marked reduction in the tabletability index, b value, and σ0 of high-amylose starch equilibrated to 75% RH. At 33 and 53% RH, high-amylose starch had a higher tabletability index than high-amylopectin starch. However, at 75% RH, high-amylopectin starch was found to have a higher tabletability index than high-amylose starch.
A higher tabletability index indicates that a material is more sensitive to changes in the compression pressure, translating into tablets of higher tensile strength at a lower compression pressure. The constant b of the Ryshkewitch-Duckworth equation is related to the pore distribution within a tablet (44); a higher b value indicates a faster increase in tablet tensile strength as porosity is reduced. Overall, high-amylose starch and high-amylopectin starch had comparable b values at 33 and 53% RH. However, at 75% RH, high-amylopectin starch had a considerably higher b value than high-amylose starch. The other parameter of the Ryshkewitch-Duckworth equation, σ0, was found to be greater for high-amylose starch at 33 and 53% RH, whereas high-amylopectin starch equilibrated to 75% RH had a higher σ0 than high-amylose starch equilibrated to the same condition. The 53 and 75% RH equilibrated high-amylopectin starch samples had comparable σ0.
From the Heckel parameters, high-amylose starch showed a lower yield pressure than high-amylopectin starch. For both starches, the yield pressure decreased with increasing RH.
A lower yield pressure means that the material is more deformable under pressure. Despite their differences in yield pressure, both starches showed rather similar die-filling properties, as evidenced by the comparable D_0, D_a, and D_b values. For both starches, D_a was slightly larger than D_0. This could be ascribed to the elastically deforming property of starch: unlike materials that undergo considerable plastic deformation or brittle fragmentation, there is less particle rearrangement when starch is compressed. As RH increased, D_0 and D_a values decreased, indicating reduced packing efficiency of the particles in the die.
Viscoelastic Recovery
The powder expansion behavior during decompression was assessed as viscoelastic recovery. In general, the change in tablet dimensions showed an increasing trend as RH increased (Fig. 5). It was found that overall, tablets produced from high-amylopectin starch had greater change in tablet dimensions than high-amylose starch tablets. ASA tablets formulated with the different types of starch also exhibited similar trend (Fig. 6). The effect of applied compression pressure varied with the types of starch and equilibrium RH. For tablets produced from 33% RH equilibrated starches, the change in the tablet dimensions initially increased, reached the maximum at 127 MPa, and then decreased. Similar trend was observed for tablets produced from 53% RH equilibrated starches. The change in tablet dimensions with compression pressure was less remarkable for 75% RH equilibrated starches.
Properties of ASA Tablets
ASA content uniformity in the tablets was found to be good. The respective ASA contents were 98.50% ± 1.33%, 99.51% ± 0.75%, and 99.50% ± 1.46% for tablets prepared from high-amylose starch, the 1:1 mixture of high-amylose and high-amylopectin starches, and high-amylopectin starch. Therefore, the influence of drug content variation on ASA degradation was negligible. The percentage of ASA degradation in tablets produced from the starches of different amylose-amylopectin ratios is presented in Fig. 7. Differences in the percentage degradation of ASA among the tablets were not large but were significant. Clearly, ASA degradation was impacted by the amylose-amylopectin ratio. The percentage of ASA degradation in tablets formulated with high-amylose starch was consistently lower than in the high-amylopectin starch tablets. The percentage of ASA degradation in tablets containing the 1:1 mixture of high-amylose and high-amylopectin starches was intermediate between the tablets containing only high-amylose starch and those containing only high-amylopectin starch. After 90 days of storage, ASA degradation in tablets produced from high-amylopectin starch was 16.96% and 9.62% greater than the degradation in tablets containing high-amylose starch and in tablets containing the 1:1 mixture, respectively. Fitting of the degradation data to the Leeson and Mattocks model was successful, with R² > 0.90. The rate of ASA degradation was approximated from the slope of the Leeson and Mattocks model and found to be 1.94 × 10⁻³, 1.98 × 10⁻³, and 2.20 × 10⁻³ %·day⁻¹ for tablets containing high-amylose starch, the 1:1 mixture, and high-amylopectin starch, respectively.
Moisture Interaction of Starches With Different Amylose-Amylopectin Ratio
Starch is made up of predominantly two polysaccharides: linear-chain amylose and highly branched amylopectin. The organization of amylose and amylopectin in starch gives it its semi-crystalline structure (23). The amylose-amylopectin ratio of starch has been found to vary between and within botanical species (45), with implications for the molecular packing of the starch grains, whereby starches with a higher amylopectin ratio have been reported to have higher relative crystallinity (25,46,47).
Interaction of moisture with starch is due to hydrogen bonding of water molecules to the hydroxyl groups on the polysaccharides (48). Although hysteresis could be observed in the isotherms of both high-amylose starch and high-amylopectin starch, the areas of the hysteresis loops were not comparable (Fig. 4a). This could be attributed to differences in the polymeric structural organization, which affect the accessibility and holding capacity of the water-molecule binding sites. Molecular packing or arrangement in solids is known to influence the interaction of materials with water molecules (19,20). Crystallinity in the amylopectin regions of starch results from intertwining of the outer chains of amylopectin in the form of double helices, forming ordered regions that appear as "crystalline lamellae" (49). While amylose is also capable of forming double helices, packing of its double helices is less compact than that of amylopectin (49,50). Consequently, the hydroxyl groups in amylopectin are more constrained and less accessible to interact with the ubiquitous water vapor in the environment. Differences in moisture interaction due to the different packing arrangements in amylose and amylopectin could be observed from the results of the GAB as well as the Young and Nelson model fittings.
Compared to high-amylopectin starch, high-amylose starch had higher M_m for both the sorption and desorption isotherms. Correspondingly, high-amylose starch also had higher SSA than high-amylopectin starch. Although the monolayer moisture values obtained from fitting the sorption data to the Young and Nelson model were not directly comparable with the values obtained from the GAB model, the same trend was observed, in which high-amylose starch had higher monolayer moisture content than high-amylopectin starch over the range of water activity evaluated. The monolayer moisture content obtained from the Young and Nelson model ranged from 3.17 to 3.74% and from 2.43 to 2.86% for high-amylose starch and high-amylopectin starch, respectively. The lower relative crystallinity of high-amylose starch could also contribute to the greater proportion of sorbed moisture found as internally absorbed moisture, since sorption of moisture occurs not only on the surfaces but also in the bulk structure of solids with lower relative crystallinity.
Amylose-Amylopectin Ratio of Starch and Tableting Properties
The presence of moisture is known to affect tableting properties, as moisture often plays an important role in the rearrangement, packing, and bond formation of particles during tableting. Tabletability indices of the starches showed optima around the 53% RH equilibrated samples. The initial increase in tabletability index with RH could be attributed to the ability of moisture to act as a lubricant, facilitating volume reduction. This causes a greater degree of powder bed densification and particle consolidation during compression (51). The improved densification in the presence of moisture was also observed as a reduction in the yield pressure, as has been reported in other studies (52,53). With increasing compression pressure, the tablet density approaches the true density of the material and the tablet reaches lower porosity at a faster rate. Tablets of lower porosity generally have larger areas of interparticulate bonding, therefore forming stronger tablets. This was also supported by the highest σ0 observed for tablets produced from the starch samples equilibrated to 53% RH. Another possible explanation for the improvement in tablet tensile strength as RH increased from 33 to 53% could be moisture-facilitated stronger interparticulate bonding through hydrogen bonding. Furthermore, moisture could smoothen micro-irregularities and facilitate proximal contacts between surfaces, thereby reducing the interparticulate distances and increasing the van der Waals forces (54,55).
However, an excessive amount of moisture was detrimental to the tablet tensile strength, as seen from the reduced tabletability index, b value, and σ0 of the starch samples equilibrated to 75% RH despite the lowered yield pressure. A similar reduction in tablet tensile strength at higher RH has also been reported in other studies (56)(57)(58). It has been suggested that the balance between the amounts of monolayer moisture, externally adsorbed moisture, and internally absorbed moisture influences tablet tensile strength (1). From Fig. 4b, at water activity >0.6, the fraction of sorbed moisture distributed as externally adsorbed moisture was greater than the monolayer moisture. As such, for the starch samples stored at 75% RH, it could be implied that the amount of externally adsorbed moisture was higher. Thus, the reduced tablet tensile strength at high RH could be attributed to labile surface moisture reducing or disrupting the interparticulate bonding. This was supported by the b value being lowest for the starches equilibrated to 75% RH (Table V). The lower b value suggests that tablet tensile strength does not increase as much despite the reduction in tablet porosity. This also explains the reduced tabletability index and σ0 observed for the 75% RH equilibrated starch samples, albeit with a more prominent reduction observed in tablets produced from high-amylose starch.
With reference to the amylose-amylopectin ratio and tableting properties, tablets produced from high-amylose starch were more susceptible to reduction in tablet tensile strength when stored under high RH. An analysis of the locality of the sorbed moisture revealed that at high RH, the amount of moisture adsorbed on the surface in excess of the monolayer moisture (the extent of multilayer moisture) was higher in high-amylose starch (Table IV). This moisture in excess of the monolayer could disrupt the interparticulate bonding, consequently lowering tablet tensile strength. Reduced tablet tensile strength when the moisture level exceeded monolayer moisture has also been demonstrated in microcrystalline cellulose tablets (58).
Viscoelastic recovery of the tablets was also investigated as the change in tablet dimensions in relation to the storage RH and the type of starch. Generally, the change in tablet dimensions increased as RH increased. This could indirectly contribute to the reduced tablet tensile strength as the RH increased from 53 to 75%, in addition to the moisture layer that interfered with interparticle bonds. Excessive viscoelastic recovery during decompression could disrupt the interparticle bonds formed during the compression phase, hence compromising tablet tensile strength. It was also observed that the viscoelastic recovery varied with the compression pressure and the type of starch. Several studies have reported that viscoelastic recovery can increase, decrease, or remain unaffected with changes in compression pressure (59,60). For starches equilibrated to 53% RH or lower, in contrast to tablets equilibrated at 75% RH, the viscoelastic recovery decreased with increasing compression pressure, particularly when the compression pressure exceeded 127 MPa (Fig. 5). This could be because the high pressure brought the particles closer together, creating intensive particle-particle forces that hindered expansion during the relaxation phase (59). The moisture layer on starches equilibrated to 75% RH could have weakened this compression pressure-viscoelastic recovery relationship by acting as a cushion against the applied force. Therefore, while viscoelastic recovery was observed for both starches, its extent varied with the type of starch, the compression pressure used, and the environmental RH.
Amylose-Amylopectin Ratio of Starch and ASA Stability
Stability of a moisture-sensitive drug in a formulation is affected not only by the amount of moisture present but also by how the moisture is retained. Differences in the percentage of ASA degradation observed between tablets produced with high-amylose starch and high-amylopectin starch suggested that moisture retention in starch was affected by the ratio of amylose to amylopectin. This was supported by the lower ASA degradation in tablets with high-amylose starch, while ASA degraded faster when high-amylopectin starch was used instead. For ASA tablets containing a 1:1 mixture of high-amylose and high-amylopectin starches, the ASA degradation was intermediate, suggesting an outcome proportional to the blend ratio. Viscoelastic recovery could change tablet porosity, which could affect moisture exposure of ASA within the tablet matrix. Tablet porosity is often cited as the factor governing the ease with which tablet components come into contact with moisture/water, particularly in tablet disintegration and dissolution. However, in tablets formulated with elastically deforming materials, it has been reported that tablet porosity exerts minimal influence on drug degradation (61). Furthermore, accumulation of SA produced from ASA degradation can affect tablet pore volume and size with prolonged storage, consequently confounding the correlation between ASA stability and pore size (62). Therefore, the stability could mainly be related to the moisture interaction of the formulation components.
While high-amylose starch exhibited lower relative crystallinity than high-amylopectin starch, tablets formulated with high-amylose starch showed better ASA stability. A study on the effects of microcrystalline celluloses of different crystallinity on the stability of moisture-sensitive drugs also reported better drug stability in formulations containing microcrystalline cellulose of lower crystallinity (9,63). Clearly, molecular packing, as reflected by crystallinity, influences the interaction of these excipients with moisture.
Hydrolysis of a moisture-sensitive drug in a solid dosage form typically occurs when the drug dissolves in the sorbed moisture layer (40). Exposure of tablets to a fixed RH environment allows the sorbed moisture within the constituents to reach an equilibrium state. This condition is akin to the process for obtaining a moisture sorption-desorption isotherm, in which the sample is equilibrated to a particular RH before moving to the next RH condition. As such, results from fitting the isotherm data to the GAB model as well as the Young and Nelson model could be extended to explain the reactivity of the sorbed moisture in relation to the amylose-amylopectin ratio of the starches.
An inverse relationship between the GAB parameters (M_m and C_GAB) and the percentage of ASA degradation was observed. M_m is related to the number of water molecule binding sites. As high-amylose starch was found to have higher M_m than high-amylopectin starch, the lower percentage of ASA degradation in high-amylose starch tablets could be attributed to the greater number of water molecule binding sites of high-amylose starch. C_GAB is related to the total heat of monolayer moisture; the higher C_GAB value of high-amylose starch implied a greater extent of interaction of moisture with the starch at the monolayer level. Such moisture may not easily detach to become available for ASA degradation. Because the difference in K_GAB between the two starches was marginal, K_GAB may not provide a clear distinction between high-amylose starch and high-amylopectin starch regarding the reactivity of sorbed moisture. Accordingly, K_GAB, a constant related to the binding of multilayer moisture onto the monolayer formed on the surface of the substrate, could be less descriptive of the reactivity of sorbed moisture.
The lower ASA degradation in high-amylose starch tablets could also be attributed to the locality of the sorbed moisture. Indeed, the importance of the locality of sorbed moisture for excipient functionality has been illustrated for crospovidone as a tablet disintegrant (64). In this study, findings on the distribution of sorbed moisture were used to facilitate understanding of the reactivity of sorbed moisture in relation to drug hydrolysis. In high-amylose starch, at water activity >0.4, most of the sorbed moisture was internally absorbed moisture and, as such, less available for hydrolytic reaction. In contrast, the fraction of internally absorbed moisture in high-amylopectin starch was consistently lower than that of high-amylose starch throughout the range of water activity studied, and at water activity >0.6, most of the sorbed moisture was externally adsorbed (Table IV). Therefore, when the tablets were stored at 75% RH, the higher amount of externally located moisture in high-amylopectin starch was likely the causative factor for the greater ASA degradation. Interestingly, while the tableting properties of high-amylose starch were more at risk of reduced tabletability and compactibility at high RH (Table V), tablets with high-amylose starch had a better stability profile than those with high-amylopectin starch. This suggests that the larger M_m and C_GAB values of high-amylose starch, and the distribution of moisture into internally absorbed moisture, are more instrumental in the stability of hydrolysable drugs than in tableting properties.
CONCLUSIONS
This study elucidated differences in tableting properties and moisture interactions related to the amylose-amylopectin ratio of starch. Tableting properties of the starches were affected not only by environmental RH but also by the relative locality of the sorbed moisture. High-amylose starch demonstrated better tabletability than high-amylopectin starch at 33 and 53% RH. However, at 75% RH, high-amylopectin starch exhibited better tabletability than high-amylose starch, but its drug stability was more severely compromised. At high RH, high-amylose starch was observed to have a larger amount of moisture adsorbed on the surface in excess of the monolayer moisture, which disrupted bonding between the particles when compacted. This study also demonstrated that the availability of binding sites for water molecules, the strength of the moisture-starch interaction, and the locality of sorbed moisture were determinants of the susceptibility of moisture-sensitive drugs to degradation.
While this study has shown that variations in the amylose-amylopectin ratio of starch could influence the effects of moisture on tableting properties and the degradation of moisture-sensitive drugs, it should be noted that the starches used were native, unmodified starches. Also, as the stability study was performed under elevated environmental RH and temperature, the results may overestimate degradation under non-stress conditions. Nonetheless, this study highlights the importance of the amylose-amylopectin ratio when changing from one type of starch to another during formulation development or product manufacture, particularly with actives that are moisture sensitive.
|
2022-04-22T06:23:04.176Z
|
2022-04-20T00:00:00.000
|
{
"year": 2022,
"sha1": "0f6367d38d0ee5e8d041e613848526eaa3b9878a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1208/s12249-022-02266-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "59bb9947803bd524060811e990658c1140d71a03",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
268425246
|
pes2o/s2orc
|
v3-fos-license
|
A multi-omics systems vaccinology resource to develop and test computational models of immunity
Summary Systems vaccinology studies have identified factors affecting individual vaccine responses, but comparing these findings is challenging due to varying study designs. To address this lack of reproducibility, we established a community resource for comparing Bordetella pertussis booster responses and for hosting annual contests to predict patients' vaccination outcomes. We report here on our experiences with the "dry-run" prediction contest. We found that, among 20+ models adopted from the literature, the most successful model for predicting vaccination outcome was based on age alone. This confirms our concerns about the reproducibility of conclusions between different vaccinology studies. Further, we found that, for newly trained models, handling of baseline information on the target variables was crucial. Overall, multiple co-inertia analysis gave the best results of the tested modeling approaches. Our goal is to engage the community in these prediction challenges by making data and models available and opening a public contest in August 2024.
In brief
Shinde et al. establish a community resource to compare patients' vaccine responses and to host annual contests to predict vaccination outcomes. They find that specifically trained multi-omics models and simple age-based models outperformed other approaches. They aim to engage the community with a public prediction contest starting August 2024.
INTRODUCTION
The overall goal of our study is to provide a resource to develop and test computational models of vaccine-induced immunity. With the introduction of the whole-cell pertussis (wP) vaccine [3][4], the number of reported whooping cough cases in the United States declined from approximately 200,000 a year in the pre-vaccine era to a low of 1,010 cases in 1976 [5].

MOTIVATION

Several systems vaccinology studies have generated large datasets on the immune states of individuals before and after vaccination and have identified factors that drive differences in individual vaccine responses. However, it has been challenging to test how well conclusions from one study generalize across others, given the differences in design. We aim to address this lack of reproducibility by establishing a community resource and engaging the research community through open prediction challenges that allow development and comparison of models that predict the immune response of human booster vaccinations for Bordetella pertussis.
Due to side effects reported with the use of the wP vaccine, wP compounds in the DTwP vaccine were replaced with acellular pertussis (aP) antigens, leading to the development of a new and less reactogenic vaccine (DTaP) in 1991 [6]. Booster vaccines were similarly updated to include the acellular pertussis antigens (Tdap), which are routinely scheduled to be administered to teens and adults every 10 years [7]. While the aP vaccines provided protection from whooping cough equivalent to that of wP vaccines in clinical trials covering the initial period after vaccination, questions have been raised about their long-term durability [8,9] and protection against transmission [10,11]. Specifically, an increase in pertussis outbreaks has been reported in various countries that have switched from wP to aP vaccines [12,13], including the United States (data available from Pertussis Cases by Year [14], accessed 15 May 2023). Many of these outbreaks occurred among children who only received aP vaccines [21][22][23][24][25][26]. Some studies, including our own [7,23,24], showed that there are long-lasting effects and differences in polarization and proliferation of T cell responses in adults originally vaccinated (primed) with aP vs. wP, despite subsequent Tdap booster vaccination [20,21]. However, it remains unclear how this difference in immune responses is maintained over time between individuals primed with an aP vs. a wP vaccine.
To address these questions, our near-term goal is to determine how an individual responds to pertussis antigen re-encounter by characterizing the resulting cascade of events (i.e., the recall memory response) and relating it to the pre-vaccination immune state. To achieve this, we apply a systems vaccinology approach that integrates different biological readouts, such as transcriptomic, proteomic, and cytometric data, to broadly define the immune state of an individual and to define changes in a pre- and post-vaccine setting. Subsequently, we create computational models connecting the pre-vaccination state of an individual to the final vaccination outcome after pertussis boost [28][29][30][31]. Our long-term goal is to use a predictive understanding of pertussis booster responses to identify what differentiates aP from wP primed individuals and to determine the desirable characteristics of an elicited vaccine response.
It has been challenging to test how well conclusions from one study generalize to others [33][34]. This is especially difficult for systems vaccinology studies, as the design varies among studies. The multidimensional and heterogeneous nature of systems vaccinology data poses significant challenges for model development and validation. The presence of numerous features and a limited sample size further exacerbates the difficulties faced by conventional machine-learning (ML) and deep-learning methods. Overfitting is a crucial issue in such a setting, which is why testing any algorithm generated from the training data on a completely independent dataset (and new cohort) is so important. Integrating diverse data types, accounting for inter-individual variability, and capturing temporal dynamics are crucial aspects that need to be addressed to ensure the robustness and accuracy of computational models in systems vaccinology. To address this, we measure the system-level response to Tdap booster vaccination over 4 years by creating four independent datasets with different cohorts for which computational models are created and tested (Table 1). We established the Computational Models of Immunity - Pertussis Boost (CMI-PB) resource to develop and test computational models that predict the outcome of Tdap booster vaccination, designed to be used by the broader community. Here, we report on the outcome of the first challenge: an "internal dry run" where all teams involved in making predictions were part of the grant. We report on the challenges encountered in data sharing, formulating prediction questions, and interpreting the results from different prediction models, including the determination of which factors contributed to such predictions. These results will inform the design of the next prediction contest, which will open to community participation in August 2024.
RESULTS
This section covers two components: first, we describe the experience of setting up and running the internal prediction contest. Second, we describe specific models that were developed and discuss their performance on the prediction tasks.
Running the prediction contest
Providing access to experimental data in a uniform fashion

Our experimental study is designed for a systems-level understanding of the immune responses induced by Tdap booster vaccination and closely mimics the design of previous studies from our group [7]. Briefly, individuals primed with aP or wP in infancy were boosted with Tdap, and blood was collected pre-booster and post-booster at days 1, 3, 7, and 14 (Figure 1A). Multiple assays, including (1) gene expression analysis (RNA sequencing [RNA-seq]) of bulk peripheral blood mononuclear cells (PBMCs), (2) plasma cytokine concentration analysis, and (3) cell frequency analysis of PBMC subsets, were performed before and after booster vaccination until day 14. In addition, (4) plasma antibodies against Tdap were measured at all time points. We do not include T cell response assay data in the current challenge but plan to incorporate T cell data in a future public CMI-PB challenge. Our overall goal is to make data from these studies available for analysis and to utilize them to build computational models that predict the vaccination outcomes of newly tested individuals. For the first CMI-PB challenge, we collected data from a total of 60 subjects (28 aP + 32 wP; Table 1), which can be used as a training dataset to develop predictive models. Additionally, we obtained data from a separate group of 36 newly tested subjects (19 aP + 17 wP), which can be utilized as test data for running predictions. To integrate experimental data generated at different time points into the centralized CMI-PB database, we created unique subject and sample (specimen) identifiers and provided consistent nomenclatures for the different readouts between training and test datasets. The data collected post vaccination from the test dataset were withheld and utilized for the purpose of challenge evaluation. We used a relational database management system with tables corresponding to entity categories, including subject and specimen information, experimental data, and ontology tables (the database schema is provided in Figure S1). We established different access modalities, including an application programming interface (API; https://www.cmi-pb.org/docs/api/) and bulk file downloads, and shared these access modalities with our internal userbase of contestants.
The total feature count for the training dataset was 58,659, whereas the feature count for the test dataset was 58,462 (Figure 2A). These large numbers of features were primarily derived from the PBMC gene expression assay dataset, which accounts for the vast majority of the measured features.
Formulating the prediction tasks
We formulated multiple prediction tasks in order to quantitatively compare different approaches to modeling immune responses to Tdap booster vaccination. For each prediction task, pre-booster data (except for aP vs. wP status) of each subject were used to predict post-vaccination variables and subsequently rank individuals. We selected biological readouts known to be changed by booster vaccination under the premise that they are likely to
capture meaningful heterogeneity across study participants, based on our previous work [7]. For instance, we have shown that the percentage of monocytes was significantly elevated on day 1 post booster vaccination compared to baseline (i.e., before booster vaccination), highlighting the role of monocytes in the Tdap vaccine response [7]. We created a first task in which the overall frequency of monocytes among PBMCs on day 1 post booster vaccination has to be predicted. Similarly, we have shown that plasma immunoglobulin (Ig) G1-4 levels significantly increased at day 7 post booster vaccination compared to baseline [7]. The second task consists of predicting plasma IgG levels against the pertussis toxin (PT) on day 14 post booster vaccination. The third task is based on our previous finding that a subset of aP-primed individuals showed increased expression of proinflammatory genes, including CCL3, on day 3 post booster vaccination [7]. This task consists of predicting the gene expression of CCL3 on day 3 post booster vaccination. Overall, the first challenge comprised 14 prediction tasks, described in Table S1, including 13 prediction tasks of readouts identified from previous work and a "sanity-check" task to predict the expression of the sex-specific XIST gene post booster vaccination per individual [35].

Choosing a metric to evaluate prediction performance

We set out to choose a metric to evaluate how different prediction methods performed. Specifically, we had three considerations: (1) we needed a metric that would produce a single numeric value as an output. This would allow us to compare and rank the performance of the prediction methods effectively.
(2) The chosen metric needed to be non-parametric because the different experimental assays utilized in the study produce analyte measurement outputs with non-normal distributions. (3) We wanted to avoid incorporating arbitrary cutoffs or thresholds that could introduce subjectivity or bias into the assessment process. Based on these considerations, we chose the Spearman rank correlation coefficient as our primary metric. The prediction tasks in our first challenge thus consisted of predicting the rank of individuals, from high to low, in specific immune response readouts after B. pertussis booster vaccination based on their pre-vaccination status.
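As a concrete illustration of the chosen metric, the following minimal sketch scores one task with SciPy; the rankings are invented placeholders, not challenge data.

```python
# Minimal sketch of the evaluation metric: Spearman rank correlation between
# a contestant's predicted ranking and the observed ranking for one task.
from scipy.stats import spearmanr

predicted_rank = [1, 2, 3, 4, 5, 6, 7, 8]  # contestant's ranking of subjects
observed_rank = [2, 1, 3, 5, 4, 8, 6, 7]   # ranking of the measured responses

rho, p = spearmanr(predicted_rank, observed_rank)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # non-parametric, cutoff-free
```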
Feedback from participants prior to data submission
We shared the prediction tasks, metrics, and data access instructions with our internal contest participants in order to test our anticipated approach. Two main points of feedback were made prior to receiving prediction results: (1) all users preferred the bulk file downloads over the custom API we had created. Upon questioning, most preferred to work with data hands-on rather than having to learn a new interface. Given that creating reliable APIs is resource intensive, this was identified as an area to down-prioritize going forward. (2) When inspecting the antibody titer data across years, contestants noticed significant variation in the averages of the baseline values for donors (subjects) between the test and training datasets. Those variations were due to a switch in the site where the assays were performed. We thus standardized the antibody data in each year by applying the baseline median as a normalization factor (https://github.com/CMI-PB/2021-Ab-titer-data-normalisation; Figures S2 and S3) and provided both the raw data and normalized data to the contestants.
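A minimal sketch of the baseline-median normalization described above is given below; the column names are assumed for illustration, and the repository linked above contains the actual code.

```python
# Hedged sketch: divide each analyte's titers by the median of its day-0
# (baseline) values within a dataset, so that datasets measured at different
# sites become comparable. Column names are assumptions for illustration.
import pandas as pd

def normalize_by_baseline_median(df: pd.DataFrame) -> pd.DataFrame:
    """df columns (assumed): subject_id, analyte, day, titer."""
    out = df.copy()
    for analyte, grp in df.groupby("analyte"):
        baseline_median = grp.loc[grp["day"] == 0, "titer"].median()
        out.loc[out["analyte"] == analyte, "titer"] /= baseline_median
    return out

# usage: train_norm = normalize_by_baseline_median(train_titers)
```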
Gathering and evaluating prediction results
A total of 34 computational models were developed by three independent teams in accordance with the theme of the challenge. Each team worked separately on its own set of models. The first team focused on identifying and constructing baseline prediction models based on the systems vaccinology literature (Figure 2B). The second and third teams, on the other hand, focused on constructing prediction models derived from multi-omics dimension-reduction techniques (Figures 2C and 2D). We established a deadline of 3 months for each team to submit their models, and, subsequently, the corresponding predictions were received for evaluation. A complete submission file contained 14 columns, one column per prediction task. We found that most prediction models focused on a subset of tasks. Furthermore, we found that, in some cases, predictions for individual donors were omitted. In those cases, we used the median rank calculated from the ranked list submitted by the contestant to fill in missing ranks, as sketched below. An overview of the prediction results is summarized in Figure 3.
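The median-rank fill just described can be expressed in a few lines of pandas; the data structures are assumed for illustration.

```python
# Sketch of the missing-prediction handling: subjects a contestant omitted
# are assigned the median rank of that contestant's submitted list.
import pandas as pd

def fill_missing_ranks(submitted: pd.Series, all_subjects: list) -> pd.Series:
    """submitted maps subject_id -> predicted rank for the covered subjects;
    subjects missing from the submission get the median submitted rank."""
    median_rank = submitted.median()
    full = submitted.reindex(all_subjects)  # missing subjects become NaN
    return full.fillna(median_rank)
```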
Model development and evaluation
Establishing baseline prediction models from the systems vaccinology literature

With the first team, we set out to identify existing models developed within the systems vaccinology field that aim to predict vaccination outcomes. With systematic keyword queries using PubMed and Google Scholar and by following citations, we identified 40 studies of potential interest [37][38][39][40][41][42]. None of these models were developed for B. pertussis; rather, they cover a wide range of vaccines, including those against influenza, hepatitis B, and the yellow fever virus. They employed a variety of methodologies, including classification-based (diagonal linear discriminant analysis, logistic regression, naive Bayes, random forest), regression-based (elastic net), and other approaches (gene signature and module scores). A summary of the literature review is depicted in Figures 2B and S4 for the 24 prediction methods that were implemented (Table S2). For each literature model, we adapted the output scores to our prediction tasks, as described in the STAR Methods. It has to be emphasized that these models were repurposed for our specific prediction tasks, and our work was not an evaluation of their performance in the areas for which they were intended. Rather, evaluating these adapted models sets a baseline of prediction performance and determines whether universal vaccine response predictors are readily available.

Establishing a harmonized dataset to train ML models

Many of the features evaluated by our assays have low information content, particularly in the transcriptomic assay, meaning that they have low analyte levels or analytes absent across specimens. Incorporating less informative features introduces various challenges in data analysis. Low analyte levels can be difficult to distinguish from background noise, missing data can skew statistical analyses, and such features tend to make it more challenging to identify a robust and accurate prediction model. To address these issues, we applied feature filtering on each assay in the training dataset, a widely adopted data pre-processing strategy [7]. For gene expression, we filtered out zero-variance and mitochondrial genes and removed lowly expressed genes (genes with transcripts per million [TPM] <1 in at least 30% of specimens). Similarly, we filtered features with zero variance from the cytokine concentration, cell frequency, and antibody assays. Subsequently, we removed features not measured for the test dataset and retained only those that overlapped between the training and test datasets. As a result, we were left with a total of 11,661 features in the harmonized dataset out of the original 58,420 overlapping features between the training and test datasets (Figure 2A).
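The gene-expression filtering rules described above translate directly into a few pandas operations. The sketch below assumes a genes x specimens TPM matrix and mitochondrial gene symbols prefixed with "MT-"; it is an illustration, not the project's actual preprocessing code.

```python
# Hedged sketch of the harmonization filters: drop zero-variance and
# mitochondrial genes, drop genes with TPM < 1 in >= 30% of specimens,
# then keep only features shared by the training and test sets.
import pandas as pd

def filter_gene_features(tpm_train: pd.DataFrame, tpm_test: pd.DataFrame):
    expr = tpm_train
    expr = expr[expr.var(axis=1) > 0]                # zero-variance genes out
    expr = expr[~expr.index.str.startswith("MT-")]   # mitochondrial genes out
    frac_low = (expr < 1).mean(axis=1)               # fraction of specimens with TPM < 1
    expr = expr[frac_low < 0.30]                     # lowly expressed genes out
    shared = expr.index.intersection(tpm_test.index) # harmonize train/test features
    return expr.loc[shared], tpm_test.loc[shared]
```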
Multi-omics data typically have many thousands of features, and direct model training on such data runs the risk of overfitting, where the model learns the noise in the training data rather than the underlying pattern. For this reason, feature selection techniques and/or domain knowledge are commonly employed to identify and focus on the most informative features, effectively reducing the problem's dimensionality. We developed two ML approaches based on the integration of multi-omics data. The harmonized datasets were utilized for training these ML approaches, as described below.
Establishing purpose-built models using JIVE

With the second team, we set out to build prediction models using the available CMI-PB training data. Given that this included data from different modalities, we wanted to utilize approaches that could leverage the CMI-PB dataset in an integrative fashion. We thus applied joint dimensionality reduction methods that discover patterns within a single modality and across modalities to reduce the number of dimensions. In particular, we applied the joint and individual variation explained (JIVE) method to reduce the dimensionality of our datasets before applying regression-based models to make predictions [43,44]. JIVE decomposes a multi-source dataset into three terms: a low-rank approximation capturing joint variation across sources, low-rank approximations for structured variation individual to each source, and residual noise [44]. This decomposition can be considered a generalization of principal-component analysis (PCA) for multi-source data [44]. For JIVE, the harmonized datasets for transcriptomics, cell frequencies, and cytokine concentrations were first intersected on subjects, which resulted in 13 individuals with complete data, and the decomposition was then applied, generating 10 factors per omic (Figure 2C). These factors were then used as input for five different regression-based methods to turn the JIVE results into predictive models for each specific task. These regression methods included linear regression, lasso, and elastic net with default parameters, plus two further variants of lasso and elastic net that involved an automatic hyperparameter search via cross-validation (CV; see Figure 2C).
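A sketch of the downstream regression step is shown below. The JIVE factor extraction itself is assumed to have been performed elsewhere; here a random subjects x factors matrix stands in for the factor scores, and the five regression variants named above are fit with scikit-learn.

```python
# Hedged sketch: turning JIVE factor scores into per-task regression models.
import numpy as np
from sklearn.linear_model import (LinearRegression, Lasso, ElasticNet,
                                  LassoCV, ElasticNetCV)

rng = np.random.default_rng(0)
X = rng.normal(size=(13, 10))  # 13 subjects x 10 JIVE factors (placeholder)
y = rng.normal(size=13)        # one prediction task, e.g., day-3 CCL3 (placeholder)

models = {
    "linear": LinearRegression(),
    "lasso": Lasso(),                   # default hyperparameters
    "elastic_net": ElasticNet(),
    "lasso_cv": LassoCV(cv=3),          # automatic alpha search via CV
    "elastic_net_cv": ElasticNetCV(cv=3),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, model.predict(X)[:3])   # predicted task values (first three)
```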
Figure 3. Evaluation of the prediction models submitted for the first CMI-PB challenge
Model evaluation was performed using Spearman's rank correlation coefficient between the ranks predicted by a contestant and the actual ranks for each of (A) the antibody titer, (B) immune cell frequency, and (C) transcriptomics tasks. The number denotes the Spearman rank correlation coefficient, while crosses mark correlations that are not significant (p ≥ 0.05). The baseline and MCIAplus models outperformed other models for most tasks.
Establishing purpose-built models using multiple co-inertia analysis
The third team worked on three different approaches to build prediction models (Figure 2D). The first approach (the baseline approach) utilized clinical features (age, infancy vaccination, biological sex) and baseline task values as predictors of individual tasks. The second approach (MCIAbasic) utilized 10 multi-omics factors constructed using multiple co-inertia analysis (MCIA) as predictors of individual tasks. Prior to implementing MCIA, the harmonized datasets were further processed to impute missing data in the baseline training set using the multiple imputation by chained equations (MICE) algorithm (Figure 2) [45]. The objective function in MCIA maximizes the covariance between each individual omic and a global data matrix consisting of the concatenated omic data blocks [46,47]. Finally, the third approach (MCIAplus) combined the first two approaches and utilized the clinical features, baseline task values, and the 10 MCIA factors identified through the MCIAbasic approach as predictors of individual tasks. Further, for all three approaches, we built a general linear model with lasso regularization for each task. We used the feature scores as input data and the prediction task values as response variables, generating separate predictive models for each task.
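The generic pieces of this pipeline can be sketched in Python as follows, with scikit-learn's IterativeImputer standing in for MICE-style chained-equations imputation and LassoCV for the regularized regression. The MCIA factor extraction itself is assumed to have been done elsewhere (it is commonly run in R, e.g., with the omicade4 package), and the data here are random placeholders.

```python
# Hedged sketch: impute missing baseline data, then fit a lasso model on a
# feature matrix combining factor scores, clinical features, and baseline
# task values, as in the MCIAplus approach.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 14))          # e.g., 10 MCIA factors + age, sex, priming, baseline value
X[rng.random(X.shape) < 0.1] = np.nan  # simulate missing baseline data
y = rng.normal(size=60)                # one prediction task (placeholder)

X_imp = IterativeImputer(random_state=0).fit_transform(X)  # chained-equations-style imputation
model = LassoCV(cv=5).fit(X_imp, y)
print("nonzero coefficients:", np.flatnonzero(model.coef_))
```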
Comparing model prediction performance
In total, 32 different model predictions were submitted across the tasks, including 24 models identified from the literature to address antibody-related tasks, as well as eight models derived from multi-omics dimension-reduction techniques, namely JIVE and MCIA. A heatmap visualization of Spearman's correlations for tasks versus models is presented in Figure 3. At least one of the prediction models showed significant correlations for 10 out of 12 prediction tasks, whereas no model showed significant correlations for the remaining two tasks.
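A Figure 3-style summary can be produced as in the hedged sketch below, which draws a small tasks x models heatmap of Spearman coefficients and marks non-significant cells with a cross; the matrix values are illustrative, not the challenge results.

```python
# Hedged sketch: tasks x models heatmap of Spearman coefficients with
# non-significant cells (p >= 0.05) marked by an "x".
import numpy as np
import matplotlib.pyplot as plt

rho = np.array([[0.62, 0.10], [0.48, -0.05]])   # tasks x models (illustrative)
pval = np.array([[0.01, 0.60], [0.03, 0.80]])   # matching p values (illustrative)

fig, ax = plt.subplots()
im = ax.imshow(rho, vmin=-1, vmax=1, cmap="RdBu_r")
for i in range(rho.shape[0]):
    for j in range(rho.shape[1]):
        mark = "x" if pval[i, j] >= 0.05 else f"{rho[i, j]:.2f}"
        ax.text(j, i, mark, ha="center", va="center")
ax.set_xticks([0, 1])
ax.set_xticklabels(["baseline", "MCIAplus"])
ax.set_yticks([0, 1])
ax.set_yticklabels(["IgG-PT day 14", "Monocytes day 1"])
fig.colorbar(im, label="Spearman rho")
plt.show()
```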
The 24 literature-based models were specifically designed to address antibody-related tasks, and 22 of the 24 gave non-significant correlations. The exceptions were two models (furman_2013_age and kotliarov_2020_TGSig), which together showed significant correlations for six of the seven antibody-related tasks. The most successful model (furman_2013_age) was derived from a previous study by Furman et al. [38], where the chronological age of an individual was used as the sole predictor of antibody response levels to influenza vaccination. Signature-based analyses, such as pathway and clustering analyses, are effective in capturing patterns within omics datasets that have a large number of variables, including transcriptomic data. However, when it comes to age-based signatures, a primary limitation arises from datasets where the range of ages of individuals is very limited. In our specific case, both our training and test cohorts have a median age of 23 years, with ranges of 18-51 and 19-47 years old, respectively. Despite this limitation, the furman_2013_age model successfully predicted tasks associated with PT-specific IgG and its subtypes IgG1 and IgG4 (Figure 3). Many studies have described biological age, measured through an individual's physiological state and overall health, as more accurate than chronological age for predicting the onset of disease and death [37,48]. We examined a derived model incorporating biological age from Fourati et al. [37] to test whether this model had similar performance to chronological age; it did not demonstrate significant correlations. These results suggest that chronological age has strong predictive potential to be utilized universally as a biomarker to predict antibody responses against different pathogens, in addition to influenza and B. pertussis. The second-best-performing model (kotliarov_2020_TGSig) employed signature analysis using blood transcription modules (BTMs), established sets of transcriptional modules designed to describe the changes in blood gene expression in response to different vaccines [49]. In the case of the kotliarov_2020_TGSig model, a specific BTM comprising B cell gene signatures was utilized as the predictor of antibody response levels to influenza vaccination in the original study, and it successfully predicted tasks related to filamentous hemagglutinin (FHA)-specific IgG and IgG1 responses. Overall, the majority of literature-based models exhibited non-significant performance, while the simplest model, relying solely on chronological age, demonstrated promising results.
JIVE-based submissions attempted 10 tasks, excluding the four antibody-related tasks that had missing samples within the harmonized dataset. For the cell-frequency tasks, we saw modest performance in predicting plasmablast levels on day 7, and, surprisingly, simple linear regression performed best. However, for the other cell-frequency tasks, there was no clear pattern of model performance. Within the gene-expression tasks, JIVE-based models performed best when predicting CCL3 levels on day 3, and, once again, models without hyperparameter tuning performed best. Hyperparameter tuning is a procedure that requires a set of candidate values for each hyperparameter; for lasso and elastic net, this means optimizing alpha and/or the L1 ratio. In ideal situations, models derived from hyperparameter tuning should perform best; however, given our low number of samples, this process may become unstable and lead to overfitting. Turning to IL6 on day 3, all JIVE-based models performed modestly, which suggests that this task may be harder than others. As for NFKBIA on day 7, predictions were poor for all JIVE-based models. The poor performance of JIVE-based models on some predictive tasks may be due to the limited number of subjects used (n = 13). A recent benchmark paper showed that JIVE performs reasonably well with a dataset of approximately 170 samples [50]. Another issue may be that the latent factors learned by JIVE do not necessarily capture correlates of the predictive tasks framed in this challenge without the inclusion of any clinical information and explicit use of baseline values for each task. Going forward, we will continue utilizing JIVE-based models but will try to improve them by utilizing more samples, more latent factors, and clinical information in future iterations of this challenge.
The baseline, MCIAbasic, and MCIAplus approaches were the only methods that submitted predictions for all 14 tasks. These three approaches outperformed the other teams' approaches. Specifically, both the MCIAplus and baseline approaches demonstrated significant correlations for 10 out of the 14 tasks, as illustrated in Figure 3. On the other hand, the MCIAbasic approach exhibited significant positive correlations for five out of the 14 tasks. When examining the antibody tasks, both the MCIAplus and baseline approaches showed robust performance, ranking first in five out of the seven tasks. The baseline approach showed significant correlations for all three cell-frequency tasks, whereas MCIAplus had similar performance to the baseline model for two tasks but not for predicting plasmablasts on day 7. The MCIAplus model showed significant correlations for three out of four gene-expression tasks, while the baseline model showed significant correlations for two out of four tasks. The MCIAbasic approach worked very well for three antibody-level and two cell-frequency tasks; however, it performed poorly for all four gene-expression tasks. When examining what factors led to the improved performance of the baseline and MCIAplus approaches as compared to the MCIAbasic approach, it was straightforward to deduce that clinical information and the baseline values of the prediction tasks were strong contributors for most tasks. However, there was one notable exception. Specifically, for the task related to NFKBIA on day 7, the MCIAplus approach exhibited a significant correlation, outperforming both the baseline and MCIAbasic models. This improvement in performance was attributed to a combination of MCIA factors and baseline features, highlighting their collective contribution to the predictive capabilities of the MCIAplus approach for this particular task. We noted that, while most contributing features were shared between the baseline and MCIAplus approaches for most tasks, there were certain instances where the MCIA factors exhibited a greater contribution. For example, in the prediction of IgG responses on day 14, common features were baseline levels of IgG and IgG1 responses against PT; beyond these two features, two MCIA factors were significant contributors to the MCIAplus model, whereas baseline levels of IgG1 responses against the FHA antigen were a significant contributor in the case of the baseline model. Overall, it is worth noting that clinical information and baseline values of known immune signatures significantly affected the prediction performance of the underlying models.
DISCUSSION
Here, we report on the first rigorous evaluation of multi-omics prediction tools on vaccine immune responses. This inaugural dry run constitutes an important step in the development and refinement of our future community prediction contest. Furthermore, all source code for the imputation, models, and assessment metrics is publicly available as part of our CMI-PB GitHub repository (https://github.com/CMI-PB). This will serve as an important resource and benchmark for future contestants.
Major lessons learned from our inaugural prediction contest include the importance of providing contestants with both original (raw) data and standardized computable matrices. Through this approach, we can simplify the process of data access and avoid contestants having to standardize their model inputs independently. Also highlighted was the importance of testing the compatibility across all data sources before announcing the challenge, as we realized that additional normalization was required for the antibody titer data. Critically, we also learned that clinical variables, such as age, can play a role in making successful predictions, and thus we have included all collected clinical information, including health-span-related characteristics such as chronic diseases and immune exposures, in all future challenge datasets. We are expanding the CMI-PB challenge to over 30 invited contestants to validate our approach a second time before opening the next round to the public. This second CMI-PB challenge has been designed to address some of the shortcomings identified during the first challenge. We expect to make additional adjustments informed by the second challenge to help ensure success in the initial public challenge. This iterative process aims to provide contestants with a rich user experience, allowing for smoother data access and a much less tedious prediction submission process.
The major goal of the first challenge was to develop and refine a pipeline that can assess methods for predicting the immune response to Tdap booster vaccination. The pipeline developed to run the first challenge provides a benchmark for models developed in future contests and code to evaluate the performance and significance of the results. In order to identify biomarkers that are generally important for a successful vaccination response, a large number of samples is needed, divided across multiple cohorts. In the coming years, additional datasets will become available within the CMI-PB resource. This will undoubtedly assist the development and tailoring of models specifically aimed at predicting the immune response outcomes of Tdap vaccination. With several Tdap vaccination cohorts, it should be possible to determine the components of the immune response that are consistently important for a good vaccine response.
The presented results based on literature models demonstrated that the majority of vaccine prediction methods found in the literature are inadequate for capturing the fundamental immunological features required for effective vaccination against B. pertussis. Several plausible explanations exist for the lack of generalizability and insufficient capture of underlying mechanisms in these prediction methods. One possibility is that vaccinations for distinct pathogens possess fundamentally unique characteristics. Another potential explanation is that these prediction methods may be overfitting the datasets used for their development, which is a well-known problem in ML models that require training data for prediction [29]. In the present study, the ability of transcriptional, clinical, and cell population-based signatures to predict vaccine responses was independently examined across multiple studies from the literature. The results showed that the immune signatures were only occasionally significant in a study from which they were not derived. This indicates that prediction methods developed on a single or a few vaccination studies usually do not generalize well. It is noteworthy that a model based on age inferred from the literature exhibited strong performance in predicting antibody-related tasks. Aging is characterized by a progressive loss of physiological integrity and an increased susceptibility to immunosenescence [51]. Age has been reported to be an important determinant of vaccine effectiveness in older adults [52]. Furthermore, we plan to incorporate more clinical factors, including immune exposures, time of vaccination, and health history attributes, into all future contests. This will aid contestants in constructing more refined prediction models.
The presented results, based on the JIVE and MCIA ML approaches, provide valuable insights into the importance of data imputation and model quality checks, and into the significant impact of incorporating clinical and pre-vaccination signatures on model performance. The baseline approach was the simplest modeling approach of all and attained notable performance for most tasks. This finding aligns with recent demonstrations that integrating prior immunological knowledge serves as an effective approach for reducing model complexity and improving robustness [53]. Further, it is worth noting that MCIA and JIVE are distinct extensions of PCA, each employing different algorithms to decompose information extracted from multi-omics datasets. It is important to clarify that our intention was not to compare these two models directly but rather to share our learnings from the two separate prediction approaches. With the JIVE approach, we opted to use complete assay information for model development with minimal data pre-processing. However, this approach yielded limited success, likely due to the limited sample size after requiring complete data for each subject, except for moderate performance in predicting two specific tasks. We are keen to further refine the JIVE approach in alignment with the MCIAplus approach in future challenges. Similarly, the MCIAbasic model implementation closely resembled JIVE, except for the utilization of imputed data. With the imputed data, this model achieved significant success as compared to JIVE. Unsupervised approaches such as MCIA hold a lot of potential for uncovering hidden patterns and relationships within complex immune profiles. Further, the MCIAplus approach, in which we integrated modeling, immunological insights, and clinical knowledge together, performed significantly well in predicting most tasks. We intend to reapply this model in future challenges and look forward to improving the MCIAplus approach with pre-vaccination immune signatures available within existing studies, such as utilizing BloodGen3 modules to identify pertussis booster pre-vaccination signatures [49]. Overall, our ML approaches pointed out that, beyond age, the inclusion of baseline responses was also a key determining factor in getting predictions right. In the next challenge cycle, we expect every contestant to recognize this and integrate it into their approach for improved results.
With the first challenge, we focused on a limited set of prediction approaches, including existing baseline models from the literature and two ML-based models. There are plenty of other approaches that have been utilized to elucidate the kinetics of vaccine-induced immune responses and the durability of vaccine effects. For instance, network-based and longitudinal modeling approaches have utilized dynamic patterns and temporal relationships between omics to predict vaccine responses [54,55]. Our cohort size will grow with the recruitment of study subjects for each future challenge, and we believe this will help prediction models perform better, as larger datasets provide a richer and more diverse pool of information, allowing models to capture more complex patterns and relationships, leading to improved predictive performance and generalization capabilities. This expanded data volume will also help mitigate issues such as overfitting and enhance the models' robustness and reliability. In addition, we are considering including T cell assay data, which will become available for future challenges. In the first challenge, our emphasis was primarily on the execution and evaluation of the contest pipeline rather than delving into the biological rationale underlying the top-performing models. We intend to study the top-performing models from upcoming contests closely, and we believe this will greatly aid in comprehending the influence of the various factors that contributed to the accurate prediction of existing Tdap booster vaccination signatures. The first challenge incorporated a total of 14 tasks, out of which contestants successfully generated significant predictions for 12 tasks. However, two tasks, pertaining to the IgG response to the pertactin (PRN) antigen on day 14 and IL6 expression on day 3, did not yield significant predictions. We will continue to evaluate which prediction tasks are the most meaningful, what the right data to evaluate them are, and how questions should be asked, such as asking for an absolute ranking of responses or a fold change compared to baseline.
We are committed to performing comparable experiments on a yearly basis that can be used to build a large set of consistent experimental data. The CMI-PB resource (1) provides access to systems vaccinology data from prior experiments by our group and others relevant to Tdap booster vaccination, (2) explains the nature of the experiments performed and the data generated and how to interpret them (which can be a hurdle for more computationally oriented scientists), and (3) invites visitors to participate in the prediction challenge, which asks participants to utilize baseline data from individuals prior to vaccination in order to predict how they rank in different vaccine response measurements. We believe that the open access to data and the ability to compare model performances will increase the quality and acceptance of computational models in systems vaccinology.
We believe that this collaborative and innovative approach will create a hub for immunologists to push for novel models of immunity against Tdap boost. We expect the resultant models will also be relevant for other vaccinology studies. Contestants from the research community who are interested in participating are encouraged to contact us via cmi-pb-contest@lji.org and check the website (www.cmi-pb.org) for upcoming contest information.

Data and code availability

- The training and test datasets used for the first challenge are accessible through a Zenodo repository at https://doi.org/10.5281/zenodo.10789473. The repository includes detailed information on the datasets, challenge tasks, submission format, submission files and evaluation code, descriptions, and access to the necessary data files that contestants used to develop their predictive models and make predictions.
- The codebase for normalizing antibody titer data is available at Zenodo (https://zenodo.org/records/10642152), while the code for standardizing data and generating computable matrices is available at Zenodo (https://doi.org/10.5281/zenodo.10642081).
STAR+METHODS
The code for all models submitted for the first CMI-PB challenge is available, including those identified from the literature. All 24 models derived from the literature-based survey are available at Zenodo (https://zenodo.org/records/10642081). The codebase for the JIVE models is available at Zenodo (https://zenodo.org/records/10642104), and the codebase for the MCIA-based models can be found at Zenodo (https://zenodo.org/records/10642081).
- Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
EXPERIMENTAL MODEL AND STUDY PARTICIPANT DETAILS
Human volunteers who were primed with either the aP or wP vaccine during childhood were recruited. The characteristics of all participants are summarized in Table S4. All participants provided written informed consent before donation and were eligible for Tdap (aP) booster vaccination containing tetanus toxoid (TT), diphtheria toxoid (DT), and acellular pertussis components, namely inactivated pertussis toxin (PT) and the Bordetella pertussis cell surface proteins filamentous hemagglutinin (FHA), fimbriae 2/3 (Fim2/3), and pertactin (PRN). Longitudinal blood samples were collected pre-booster vaccination (day 0) and post-booster vaccination after 1, 3, 7, and 14 days. This study was performed with approval from the IRB at the La Jolla Institute for Immunology, and written informed consent was obtained from all participants before enrollment.
METHOD DETAILS

PBMC and plasma extraction
Whole blood samples (with heparin) were centrifuged at 1850 rpm for 15 min with the brake off. Subsequently, the upper fraction (plasma) was collected and stored at −80°C. PBMCs were isolated by density gradient centrifugation using Ficoll-Paque PLUS (GE). Blood diluted in RPMI 1640 medium (RPMI, Omega Scientific) (35 mL) was slowly layered on top of 15 mL of Ficoll-Paque PLUS. Samples were spun at 1850 rpm for 25 min with the brake off. The PBMC layers were then aspirated, and the two PBMC layers per donor were combined in a new tube together with RPMI. Samples were spun at 1850 rpm for 10 min with a low brake. Cell pellets of the same donors were combined, washed with RPMI, and spun at 1850 rpm for 10 min with the brake off. Finally, PBMCs were counted using trypan blue and a hemocytometer and, after another spin, resuspended in FBS (Gemini) containing 10% DMSO (Sigma-Aldrich) and stored in a Mr. Frosty cell freezing container overnight at −80°C. The next day, samples were transferred to liquid nitrogen and stored until further use.
Plasma antibody measurements
Pertussis antigen-specific antibody responses were quantified in human plasma by performing an indirect serological assay with xMAP microspheres (details described in the xMAP Cookbook, Luminex, 5th edition). Pertussis, tetanus, and diphtheria antigens (PT, PRN, Fim2/3, TT, and DT, all from List Biological Laboratories, and FHA, Sigma), with ovalbumin (Sigma) as a negative control, were coupled to uniquely coded beads (xMAP MagPlex Microspheres, Luminex Corporation). PT was inactivated by incubation with 1% formaldehyde (PFA) at 4°C for 1 h. The 1% PFA-treated PT and TT were then purified using Zeba spin desalting columns (ThermoFisher). The antigens were coupled to their unique conjugated microspheres using the xMAP Antibody Coupling Kit (Luminex Corporation). Plasma was mixed with a mixture of the conjugated microspheres, and WHO International Standard Human Pertussis antiserum was used as a reference standard (NIBSC, 06/140). Subsequently, the mixtures were washed with 0.05% TWEEN 20 in PBS (Sigma-Aldrich) to exclude non-specific antibodies, and the targeted antibody responses were detected via anti-human IgG-PE, IgG1-PE, IgG2-PE, IgG3-PE, and IgG4-PE (all from SouthernBiotech) and anti-human IgE-PE (ThermoFisher). Samples were subsequently measured on a FLEXMAP 3D instrument (Luminex Corporation), and the log10 of the median fluorescence intensity (MFI) was calculated.
PBMC cell frequencies
Cryopreserved PBMCs were thawed by incubating cryovials at 37°C for 1 min and stained with the viability marker cisplatin. Subsequently, PBMCs were incubated with an antibody mixture for 30 min. After washing, PBMCs were fixed in PBS (Thermo Fisher) with 2% PFA (Sigma-Aldrich) overnight at 4°C. The next day, PBMCs were stained with an intracellular antibody mixture after permeabilization using a saponin-based Perm Buffer (eBioscience). After washing, cellular DNA was labeled with Cell-ID Intercalator-Ir (Fluidigm), and cell pellets were resuspended in 1:10 EQ Beads (Fluidigm) in 1 mL MilliQ water. Samples were measured using a Helios mass cytometer (Fluidigm). Twenty-one different PBMC cell subsets were identified using the unsupervised gating approach DAFi [60], with the exception of antibody-secreting cells (ASCs), which were manually gated as CD45+ Live+ CD14− CD3− CD19+ CD20− CD38+ cells. Gating was performed using FlowJo (BD, version 10.7.0).
Figure 1. Outline for establishing the CMI-PB resource
Figure 2. Data processing, computable matrices, and prediction model generation
(A) Generation of a harmonized dataset involved identifying shared features between the training and test datasets and filtering out low-information features. Literature-based models (team 1) used raw data from the database and applied the data-formatting methods specified by the existing models. The JIVE and MCIA approaches (teams 2 and 3) used the harmonized datasets to construct their models. (B) Flowchart illustrating the steps involved in identifying baseline prediction models from the literature, creating a derived model based on the original models' specifications, and performing predictions as described by the authors. (C) The JIVE approach created a subset of the harmonized dataset by including only subjects with data for all four assays. The JIVE algorithm was then applied to calculate 10 factors, which were subsequently used for making predictions; JIVE employed five different regression models for prediction. (D) The MCIA approach applied MICE imputation to the harmonized dataset and used these data for model construction. The MCIA method was applied to the training dataset to construct 10 factors; these 10 factors, together with feature scores from the test dataset, were used to construct global scores for the test dataset, and lasso regression was applied to make predictions. The MCIAplus model additionally incorporated demographic features, clinical features, and the 14 task values as factor scores, and likewise used lasso regression for prediction.
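For intuition, the pipeline in panels (C)/(D) — reduce the harmonized multi-assay matrix to roughly 10 latent factors, then regress a task value on the factor scores with lasso — can be mimicked in a few lines. PCA is used below purely as a stand-in for the JIVE/MCIA factorizations, and all matrices are synthetic; this is a sketch of the modeling pattern, not the teams' actual code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic stand-in for a harmonized training matrix (subjects x features)
# and one prediction task (e.g., a day-14 antibody titer).
X_train = rng.normal(size=(60, 500))
y_train = X_train[:, :3] @ np.array([1.0, -0.5, 0.25]) + rng.normal(scale=0.1, size=60)
X_test = rng.normal(size=(20, 500))

# Reduce to 10 latent factors (PCA here; JIVE/MCIA in the actual challenge),
# then fit a lasso regression on the factor scores and predict the test set.
factors = PCA(n_components=10).fit(X_train)
model = Lasso(alpha=0.05).fit(factors.transform(X_train), y_train)
y_pred = model.predict(factors.transform(X_test))
print(y_pred[:5])
```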
Table 1. Past and future CMI-PB annual prediction challenges
The training and test datasets used for the first challenge are accessible through a Zenodo repository at https://doi.org/10.5281/zenodo.10789473. The repository includes detailed information on the datasets, challenge tasks, and submission format.
Chinese Herbal Medicine for Myelosuppression Induced by Chemotherapy or Radiotherapy: A Systematic Review of Randomized Controlled Trials
Background. Myelosuppression is one of the major side effects of chemo- and radiotherapy in cancer patients, and there are currently no effective interventions to prevent it. Chinese herbal medicine (CHM) may be helpful due to its multidrug targets. Objectives. This study was designed to evaluate the effectiveness of CHM in preventing myelosuppression in patients undergoing chemo- or radiotherapy. Search Methods. Randomized controlled trials (RCTs) were retrieved from seven different databases from the date of database creation to April 2014. We assessed all included studies using the Cochrane Handbook for Systematic Reviews of Interventions 5.1.0 and performed statistical analysis using RevMan 5.2.1. Results. Eight RCTs were included (818 patients). Pooled data showed that the increase in white blood cells (WBCs) is greater with CHM plus chemotherapy/radiotherapy than with chemotherapy/radiotherapy alone. Neither CHM compared to placebo nor CHM combined with chemotherapy/radiotherapy compared to chemotherapy/radiotherapy alone showed significant differences in peripheral platelet, red blood cell (RBC), or hemoglobin changes. Conclusions. Our results demonstrated that CHM significantly protected peripheral blood WBCs from the decrease caused by chemotherapy or radiotherapy. There were no significant protective effects on peripheral RBCs, hemoglobin, or platelets, which may be related to the low quality and small samples of the included studies.
Introduction
Myelosuppression, also known as bone marrow suppression or myelotoxicity, is a decline in the activity of the bone marrow, resulting in decreased numbers of WBCs, platelets, and RBCs. Myelosuppression is one of the most commonly observed side effects of chemotherapy and radiotherapy, and it is a listed side effect of many chemotherapy drugs. Patients are usually given these medications anyway because dying from cancer poses a more immediate threat. Therefore, the possibility of myelosuppression must be considered and monitored when using a chemo- or radiotherapy treatment plan.
Once patients undergo myelosuppression, the bone marrow cannot produce normal levels of blood cells. Given that many blood cells have a very short life in the body, patients start to suffer medical complications almost immediately. These include anemia from a low number of RBCs, hemorrhage due to thrombopenia, and immunosuppression caused by a low number of WBCs. Patients will be at risk of developing fatal infections and will not be able to fight them off [1-8], which adversely affects survival in these malignancies.
It is crucial to avoid damaging nonmalignant cells during the clinical application of chemotherapy and radiotherapy, to reduce morbidity and mortality from infections due to myelosuppression. There have been many research attempts to find safe agents that can reduce myelosuppression and improve the immune response in chemotherapy- or radiotherapy-treated patients. One treatment that has become increasingly attractive in recent years is the use of alternative therapies, especially CHM, as an adjunctive treatment to prevent myelosuppression. Numerous studies have reported myelosuppression-reducing effects in cancer patients who received CHM during their chemotherapy or radiotherapy. These studies had variable designs and have generally reported inconclusive or conflicting results, making the clinical decision of whether to recommend or omit the use of CHM during chemotherapy/radiotherapy in cancer patients difficult [9-12].
It would be worthwhile to assess the quality and evaluate the efficacy of data from trials according to the principles and measurements of evidence-based medicine. There is no previously published systematic review examining the role of CHM to prevent myelosuppression caused by chemotherapy or radiotherapy. In the present study, we sought to perform a systematic review of RCTs on the use of CHM during chemotherapy or radiotherapy of cancer patients to generate a more precise estimate of the possible therapeutic value of CHM on preventing myelosuppression.
Study Design.
Our review was restricted to RCTs that compared CHM plus chemotherapy/radiotherapy with placebo plus chemotherapy/radiotherapy or with chemotherapy/radiotherapy alone.
Participant Characteristics.
We included all patients with any type of solid tumor or hematologic malignancy, who accepted chemotherapy or radiation therapy combined with CHM, irrespective of the patient's sex, age, ethnicity, and occupation. All appropriate definitions of myelosuppression included decreased peripheral blood WBCs, RBCs, platelets, or hemoglobin. Patients with serious medical conditions were excluded.
Types of Intervention.
Eligible interventions were all forms of CHM (herbal formula, single herb, herbal extract, or compounds including herbs and other supplements), administered either orally or intravenously, used alone or in combination with other herbs, for subjects in the treatment groups; control groups received placebo or no additional intervention except chemotherapy or radiotherapy.
Outcome Measures.
The outcome measures included changes in the peripheral blood WBCs as the primary outcome and changes in the peripheral blood RBCs, platelets, and hemoglobin as the secondary outcomes.
Methodological Quality Assessment.
The methodological quality of all included trials was independently assessed by two reviewers according to the "Risk of Bias" table recommended by the Cochrane Handbook 5.1.0. Reviewers were not blinded with respect to the authors, institution, and journal because they were familiar with the literature.
Two review authors (Youji Jia and Huihui Du) independently assessed the risk of bias with the criteria in the Cochrane Handbook for Systematic Reviews of Interventions 5.1.0 (http://www.cochrane-handbook.org). Random sequence generation (selection bias), blinding of participants and personnel (performance bias), allocation concealment (selection bias), blinding of outcome assessment (detection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias), and other sources of bias were scored as "yes," "no," or "unclear" according to the definitions of each of the criteria. Disagreements between review authors were resolved by discussion or with a third author (Xuejun Cui). The methodological quality assessment of the trials was used to exclude trials with fatal flaws, such as a dropout rate higher than 50%.
Exclusion Criteria.
Exclusions included case or experience reports, preclinical studies (e.g., in vitro and animal studies), reviews, systematic reviews, trials in which the treatment groups included interventions not considered CHM (such as acupuncture, massage, and exterior use), nonrandomized controlled trials, and publications without original outcome data.
Data Extraction. Two reviewers (Youji Jia and Huihui Du) independently extracted the study characteristic data from all eligible articles, including the authors, publication date, study type, participants, sample size, interventions, outcomes, baseline treatment, type of CHM, and follow-up. The authors were contacted for more information as needed. Two review authors (Min Yao and Xuejun Cui) checked and entered the data into Review Manager (RevMan 5.2.1).
Statistical Analysis.
Statistical analysis was performed using RevMan 5.2.1. The results were pooled, and continuous data were expressed as the weighted mean difference (WMD) or standardized mean difference (SMD) with a 95% CI.
The chi-square test (χ² test) and the I² statistic (I² stands for the percentage of variability owing to between-study variability) were used to evaluate the heterogeneity of intervention effects. Clinically and statistically homogeneous studies were pooled using the fixed-effect model if P > 0.05 (I² ≤ 50%), when homogeneity was considered acceptable. Clinically homogeneous but statistically heterogeneous studies were pooled using the random-effects model if P ≤ 0.05 (I² > 50%), when there was heterogeneity between studies. The results of the meta-analysis were displayed graphically using forest plots. Subgroup analysis was performed based on clinical heterogeneity, such as the type of CHM used.
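For readers unfamiliar with the pooling mechanics, the following sketch implements inverse-variance fixed-effect pooling of mean differences together with Cochran's Q and the I² statistic used above; the three trials' summary numbers are invented for illustration and are not the included studies' data.

```python
import numpy as np

def fixed_effect_pool(m_t, m_c, sd_t, sd_c, n_t, n_c):
    """Inverse-variance fixed-effect pooling of mean differences,
    with Cochran's Q and the I^2 heterogeneity statistic."""
    md = m_t - m_c
    var = sd_t**2 / n_t + sd_c**2 / n_c       # variance of each mean difference
    w = 1.0 / var                             # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (md - pooled) ** 2)        # Cochran's Q
    df = len(md) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2

# Invented summary data for three hypothetical trials (not the included studies).
wmd, ci, q, i2 = fixed_effect_pool(
    np.array([4.1, 3.8, 4.5]), np.array([3.5, 3.4, 3.9]),
    np.array([0.9, 1.1, 1.0]), np.array([1.0, 1.2, 0.9]),
    np.array([40, 55, 30]), np.array([42, 50, 31]))
print(f"WMD = {wmd:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), Q = {q:.2f}, I2 = {i2:.0f}%")
```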
Funnel plots were to be made to assess publication bias when at least 10 trials were included in the meta-analysis.
Description of the Included Studies.
In total, 646 articles were retrieved following the search strategy described above (259 in English and 387 in Chinese). Potential studies, including 14 in English and 303 in Chinese, were identified by title and abstract screening, excluding trials that were duplicates [13,14], reviews [15-17], animal studies [18-20], and case or experience reports [21,22]. By reading the full text, we excluded studies with incorrect randomization or lack of randomization [23-25], studies that lacked original outcome data [10,26-28], and RCTs using acupuncture, massage, ear acupoint, or medicine for exterior treatment [29-31]. Eight trials met the inclusion criteria (see the details in Figure 1) and were included in the final review. Two of the trials were published in English and six in Chinese. The included studies were published from 2001 to 2013.
These eight trials were all RCTs using CHM, and the duration of studies ranged from 1 to 3 years. Six of the studies were performed in mainland China, and two of them were conducted in Taiwan.
A total of 818 subjects (429 males and 389 females) were included in the eight trials. The number of patients in each study ranged from 58 to 235, with an average sample size of 103.5. Seven trials enrolled adult patients and one enrolled pediatric patients, covering a total of 13 different types of cancer: breast cancer, colon cancer, nasopharyngeal cancer, lung cancer, colorectal cancer, stomach cancer, leukemia, esophageal cancer, pancreatic cancer, prostate cancer, neuroblastoma, Wilms tumor, and hepatoblastoma. (Figure 1 details the screening flow; exclusions comprised duplicate publications (30), animal studies (119), in vitro studies (72), reviews (48), case studies (21), experience reports (39), incorrect or no randomization (154), no original outcome data (133), and RCTs not using CHM, e.g., ear acupoint (4) and medicine for exterior use (4).) The baselines of all eight randomized studies were compared between the treatment and control groups, and there were no statistically significant differences. The interventions varied noticeably across the trials. All eight trials included basic chemotherapy or radiotherapy in both the test and control groups, and five of the trials described the type of chemotherapy drugs used. Two of the trials [32,33] included a placebo in the control group, and the remaining six trials did not include any intervention other than basic chemotherapy or radiotherapy, allowing for comparison between the CHM (test) and control groups. For the test groups, three of the trials used a decoction of the CHM formula [34-36], three used Chinese patent medicine (particles or soluble granules) [37-39], and two [32,33] used extracts of CHM (Table 1).
All eight trials reported routine blood values, including WBC, RBC, hemoglobin (Hb), and platelet (PLT) counts.
Risk of Bias in the Included Studies.
The reports of all trials mentioned randomization, but only five described the method of randomization [35-39]. In addition, three trials mentioned double-blinding [32,33,38]. We assessed all included studies according to the Cochrane Handbook for Systematic Reviews of Interventions 5.1.0. Figure 2 shows the authors' judgment on each methodological quality item for each included study. One study was rated as medium quality and the others as low quality. One study [38] was obviously different from the others, while overall the increase in WBCs was higher in patients treated with CHM than with placebo, with a WMD of 0.59 (95% CI: 0.25 to 0.93) (Figure 3). When this study was excluded in a sensitivity analysis, the degree of heterogeneity dropped (χ² = 4.68, P = 0.32, I² = 14%), leaving 209 patients in the intervention arm and 211 in the control arm. The overall effect estimate continued to show a significant trend supporting CHM treatment, with a WMD of 0.46 (95% CI: 0.31 to 0.61) (Figure 4).
Effects of Chinese Herbal Medicine on Protecting Red Blood Cells from Decreasing in Response to Chemotherapy or Radiotherapy in Cancer Patients. Two trials that included an RBC examination conducted a placebo-controlled test and were combined in a meta-analysis (Figure 5), including a total of 86 patients in the intervention arm and 77 in the control arm. There was very little heterogeneity between the studies (χ² = 1.06, P = 0.30, I² = 5%), and there were no significant differences in RBCs between CHM and placebo when used during chemotherapy or radiotherapy in clinical cancer patients, with a WMD of −0.09 (95% CI: −0.26 to 0.08).
Only one included study investigated the change in RBCs between CHM combined with chemotherapy and chemotherapy alone, which also showed no statistically significant difference (P > 0.05).
Effects of Chinese Herbal Medicine on Protecting Platelets from Decreasing in Cancer Patients Undergoing Chemotherapy or Radiotherapy. Six reports with platelet measurements were divided into two subgroups. One subgroup included two studies that compared the effects of CHM versus placebo during chemotherapy or radiotherapy in clinical cancer patients [32,33], including a total of 86 patients in the intervention arm and 77 in the control arm. There was no heterogeneity between the two studies (χ² = 0.12, P = 0.73, I² = 0%), and our meta-analysis showed no significant differences in platelets between the CHM and placebo groups when used together with chemotherapy or radiotherapy in clinical cancer patients, with a WMD of 23.67 (95% CI: −1.95 to 49.30) (Figure 6, upper part). The other subgroup consisted of four studies that compared the effects of CHM combined with chemotherapy/radiotherapy versus chemotherapy/radiotherapy alone on platelet protection in clinical cancer patients [34,35,37,38], including a total of 235 patients in the intervention arm and 233 in the control arm. There was no heterogeneity among these studies (χ² = 1.89, P = 0.60, I² = 0%), and the meta-analysis revealed no significant differences in platelets between CHM combined with chemotherapy/radiotherapy and chemotherapy/radiotherapy alone, with a WMD of 3.96 (95% CI: −6.48 to 14.40) (Figure 6, lower part).
Effects of Chinese Herbal Medicine on Protecting Hemoglobin from Decreasing in Cancer Patients Undergoing Chemotherapy or Radiotherapy. Six studies included measurements of serum hemoglobin levels. Two of the studies compared the effects of CHM and placebo during chemotherapy or radiotherapy in clinical cancer patients [32,33]. These two studies were combined in a meta-analysis and included a total of 86 patients in the intervention arm and 77 in the control arm. There was heterogeneity between the two studies (χ² = 3.77, P = 0.05, I² = 73%), and the effect estimate did not support the CHM intervention, with a WMD of −2.29 (95% CI: −7.71 to 3.13) (Figure 7, upper part). The four remaining studies compared CHM combined with chemotherapy/radiotherapy versus chemotherapy/radiotherapy alone with respect to hemoglobin protection in clinical cancer patients [34,35,37,38]. These four studies were combined in a meta-analysis and included a total of 234 patients in the intervention arm and 229 in the control arm. There was little heterogeneity among these studies (χ² = 4.49, P = 0.21, I² = 33%), and our meta-analysis revealed no significant differences in hemoglobin between CHM combined with chemotherapy/radiotherapy and chemotherapy/radiotherapy alone, with a WMD of 0.08 (95% CI: −2.87 to 3.03) (Figure 7, lower part).
Publication Bias Assessment.
Funnel plots could not be performed due to the small number of studies evaluated.
Discussion
In this systematic review of articles published in English and Chinese, we identified eight randomized studies using CHM. A total of 818 subjects were included, and the duration of the studies ranged from 1 to 3 years. Six of these studies were performed in mainland China and two in Taiwan. The baselines of these eight randomized studies were compared between the treatment and control groups, and there was no significant difference. Although we searched both English and Chinese databases, we cannot guarantee that all relevant trials were found, so publication bias cannot be ignored.
We have tried to identify all RCTs on CHM for prevention of chemotherapy-or radiotherapy-induced myelosuppression, although this might be limited by incomplete citation tracking, as is the case with most systematic reviews. We were able to review studies performed and published in China and English-speaking countries, and a small number of studies performed in Japan and Korea were written in English. We could not include all trials from Korea or Japan written in their native language even though traditional Chinese medicine (TCM) is extensively used in these two countries.
Herbal formulae used in studies performed in China generally showed good tolerability, while CHM interventions used in studies performed outside China were likely to have more side effects [33]. These differences might be due to the more precise methodology of studies conducted outside mainland China. On the other hand, a lack of adherence to the principles of TCM during the selection of herbal formulae may be another reason. In China, the philosophy of TCM emphasizes "personalized therapy," and the categories of symptoms and signs judged by TCM doctors form the principle for herbal medicine selection. Therefore, even with the same clinical diagnosis, different patients may be given different TCM prescriptions depending on the collected symptoms and signs. Additionally, studies performed in China usually do not describe the reasons for dropout, the method of randomization, or information on blinding. These methodological limitations may contribute to the better tolerability and lower frequency of adverse effects of CHM reported in studies performed in China. In spite of these deficiencies, the overall data suggest that CHM was well tolerated [34,37-39]. Taken together, these preliminary outcomes can form the foundation for designing future trials to assess these therapeutic strategies, preferably by means of rigorous methodologies based on Western principles and selection criteria according to CHM theory.
CHM generally uses multiple herbs, which may produce complementary and antagonistic effects that balance benefits and adverse effects. Even so, some over-the-counter Chinese remedies have been used together with Western medications, which may increase the chance of side effects [40,41]. These results also underline the importance of quality control and the need to standardize the prescribing, dispensing, and administration of these "herbal remedies," which are often marketed as health supplements without adverse effects.
Given that the management of myelosuppression currently relies mainly on symptomatic therapy, the application of TCM is a possible strategy for this unmet therapeutic area. Myelosuppression is graded according to the WHO classificatory criteria for acute and subacute toxicity of anticancer drugs, as described in Table 2 [42].
The following describes the conventional therapeutic methods. Recombinant human erythropoietin (rhEPO), supplemented with iron agents (such as iron dextran), is used to promote erythropoiesis and remove obstacles to iron utilization in patients with anemia. RBCs or whole blood are transfused when hemoglobin is less than 85 g/L. For patients with thrombocytopenia and bleeding, movement is reduced, blood pressure is controlled, antiplatelet drugs are avoided, and interleukin-11 (IL-11) and rhTPO are used. Platelet components or whole blood are transfused when the platelet concentration is less than 20 × 10⁹/L or bleeding is severe. Prevention is preferred for leukopenia/neutropenia, fever, or infection, and conventional drugs can be used to increase WBCs or promote hematopoietic stem cell differentiation, enhancing the effects of therapy. Recombinant human granulocyte colony-stimulating factor (rhG-CSF) is applied to patients with severe symptoms, and antibiotics are used to control infection when necessary. Generally, patients with degree III or higher myelosuppression must be treated. However, there are currently no clear criteria for degree II or lower, and treatment mainly focuses on symptomatic therapy [43-45]. In short, there is a lack of effective intervention strategies for preventing myelosuppression in the clinic.
This systematic review is based on a number of clinical RCTs, and the quality of the included studies was strictly screened and controlled. In this meta-analysis, we concluded that CHM could effectively prevent radiotherapy- and chemotherapy-induced myelosuppression in cancer patients. The reduction in WBC counts during radiotherapy or chemotherapy was blocked by the administration of CHM, which helps control infection. Therefore, the use of CHM is recommended as a basic therapeutic remedy during radiotherapy and chemotherapy in cancer patients to prevent infection due to insufficient WBCs. Among the six included RCTs that studied the protective effects of CHM on WBCs affected by chemotherapy or radiotherapy, CHM appeared to have extremely strong protective effects on WBCs in Li et al.'s report [38], while no protective effects were reflected in Shi et al.'s paper [34], as shown in Figure 3. By comparing the information described in the papers, we found that the type of cancer, age of participants, chemotherapy protocols, and CHM interventions differed between these two RCTs. Most importantly, Li et al. studied adults with acute leukemia, while Shi et al.'s report included children with neuroblastoma, nephroblastoma, and hepatoblastoma; myelotoxic chemotherapy was used in both RCTs, but the forms of CHM used were different. At first glance, it seems that CHM is more effective in treating chemotherapy-damaged WBCs in adult patients with nonsolid tumors than in children with solid tumors. As we know, acute leukemia is a cancer of primitive WBCs in the bone marrow, characterized by the rapid overproduction of abnormal WBCs that accumulate in the bone marrow and interfere with the production of normal blood cells. The baseline of peripheral blood WBCs in acute leukemia patients differs from the normal level and from that of patients with solid tumors. Thus, leukemia data and solid tumor data are not comparable with respect to the protective effects of CHM on WBCs affected by chemotherapy or radiotherapy. Also, we could not exclude the possibility that leukemia cells were mistakenly included while counting WBCs, since these data showed greater heterogeneity (Figure 3). Therefore, this study was excluded in the subsequent sensitivity analysis of our meta-analysis evaluating the effects of CHM on preventing WBC loss in cancer patients undergoing chemotherapy or radiotherapy. On reading Li et al.'s paper carefully, we found that the authors stated that WBC counts in both groups declined after the intervention, while they remained higher in the CHM group, and the absolute decrease in WBCs was smaller in the CHM treatment group than in the control group; thus we concluded that CHM could protect chemotherapy-damaged WBCs. At the same time, similar outcome data in the treatment and control groups were provided by Shi et al., so no treatment effect appears in our analysis; however, the baseline data differed between the treatment and control groups reported by Shi et al., which in turn would have implied a treatment effect of CHM. This result is therefore also not credible.
After comparing the detailed information of the four convincing RCTs (Chen and Shen [37], Xu et al. [39], Chu [35], and Liu [36]), we found that common solid tumors of adult patients, such as colon cancer, gastric cancer, lung cancer, and breast cancer, were included in all of these studies. Astragalus membranaceus and Angelica sinensis were used in all of their CHM prescriptions, a classic CHM pairing for replenishing Qi and Blood. Moreover, kidney-tonifying CHM was used in three of the studies, including sealwort, glossy privet fruit, Radix Polygoni Multiflori, Radix Rehmanniae Preparata, and Fructus Psoraleae. This suggests that kidney-tonifying CHM may contribute to the treatment of leucopenia caused by myelosuppression.
The higher the quality of the included studies, the more scientific the conclusions that can be drawn by meta-analysis. Some studies in the literature have limitations: in some, the randomization method was not clear and blinding was not implemented. Some studies were performed in mainland China or Taiwan without international trial registration, and there was a lack of scientific quality control. The same group in Taiwan performed two of the studies, which may introduce performance bias. The included studies were heterogeneous in the type of cancer and the CHM used; as a result, subgroup analyses could not be conducted. These issues may, to some extent, limit the scientific validity of the analyzed results.
Conclusions
In conclusion, we demonstrated that CHM significantly prevented peripheral WBCs from being damaged by chemotherapy and radiotherapy in cancer patients by comparing CHM plus chemotherapy or radiotherapy with chemotherapy or radiotherapy alone. However, these results provide no convincing evidence for the efficacy of CHM in protecting platelets, red blood cells, and hemoglobin affected by chemotherapy and radiotherapy, which may be due to the small number, size, and methodological quality of the available RCTs that used CHM to prevent bone marrow suppression resulting from radiotherapy and chemotherapy. Further rigorous, multicenter RCTs with large sample sizes are necessary to examine these topics, and they must overcome the limitations present in the current publications. This will benefit patients with decreased bone marrow function.
Variations in Orf3a protein of SARS-CoV-2 alter its structure and function
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) rapidly spread worldwide and acquired multiple mutations in its genome. Orf3a, an accessory protein encoded by the genome of SARS-CoV-2, plays a significant role in viral infection and pathogenesis. In the present in-silico study, 15,928 sequences of Orf3a reported worldwide were compared to identify variations in this protein. Our analysis revealed mutations at 173 residues of the Orf3a protein. Subsequently, protein modelling was performed, revealing twelve mutations that can considerably affect the stability of Orf3a. Among these twelve, three mutations (Y160H, D210Y, and S171L) also lead to alterations in the secondary structure and protein disorder parameters of Orf3a. Further, we used predictive tools to identify five promising B-cell epitopes that reside in the mutated regions of Orf3a. Altogether, our study sheds light on the variations occurring in Orf3a that might contribute to alterations in protein structure and function.
Introduction
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the etiological agent of coronavirus disease 2019 (COVID-19), is an RNA virus that induces mild to severe respiratory distress in infected individuals [1-3]. The disease, which started in the wet seafood market area of Wuhan (China), has now affected 218 countries, leading to a global pandemic with severe implications for healthcare systems worldwide [4]. As of January 15, 2021, SARS-CoV-2 had already infected more than 90 million people worldwide and caused about two million deaths.
The genome of SARS-CoV-2 comprises a single-stranded positive-sense RNA about 30 kb in length [5]. It contains 29 open reading frames (Orfs) that encode four structural, sixteen non-structural, and nine accessory proteins [6]. Orf3a, at 275 amino acids, is the largest accessory protein of SARS-CoV-2 [7]; it is involved in critical steps of the viral infection cycle and is required for the viral replication and assembly that determine the virulence of SARS-CoV-2 [8]. Structurally, this protein is a multi-pass membrane protein that forms a homotetrameric viroporin with TRAF, ion channel, and caveolin binding domains [8]. Functionally, Orf3a has been demonstrated to impact the host immune system by activating pro-IL-1β gene expression as well as IL-1β secretion, which eventually activates NF-κB signalling and the NLRP3 inflammasome and contributes to the generation of a cytokine storm [9,10]. A recent analysis of the human protein interactome revealed that Orf3a interacts with TRIM59 (an E3 ubiquitin ligase) to regulate antiviral innate immune signalling [11]. Altogether, Orf3a is directly involved in the pathogenesis of SARS coronaviruses and also acts as an important immune modulator.
The global sequencing efforts for the SARS-CoV-2 genome from different countries revealed that its genome is rapidly evolving by acquiring mutations [12-14]. As the Orf3a protein plays a crucial role in virus infection and pathogenesis, it is quite intriguing to understand the structural and functional implications of Orf3a mutations. The present in-silico study was conducted to identify and characterize mutations in the Orf3a protein. We compared a total of 15,928 sequences of the Orf3a protein, reported worldwide until September 14, 2020, with the first reported sequence from Wuhan, China. Our study revealed 173 mutations in the Orf3a protein. The probable implications of these mutations for the structure and function of Orf3a are discussed.
Orf3a sequence retrieval
The Orf3a sequences were retrieved from the NCBI virus database, which held 15,928 Orf3a sequences deposited until September 14, 2020. All these sequences were downloaded from the database (listed in Supplementary Table 1), and the amino acid sequences of Orf3a were exported in FASTA format. Polypeptide sequences with characters other than the standard amino acids, such as 'X' (which represents sequencing errors), were excluded from the analysis. The Jalview visualization tool was used to identify and remove redundant sequences. After applying these exclusion criteria, the remaining Orf3a polypeptide sequences were used for mutational analyses. The reference (wild-type) sequence used in this study (accession ID: YP_009724391) was the first reported sequence of SARS-CoV-2 from Wuhan, China [5].
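The exclusion step can be sketched with Biopython as follows; the input and output file names are placeholders.

```python
from Bio import SeqIO

# Drop sequences containing non-standard characters such as 'X'
# and remove exact duplicates, as described above.
seen, kept = set(), []
for rec in SeqIO.parse("orf3a_ncbi.fasta", "fasta"):   # placeholder file name
    s = str(rec.seq).upper()
    if "X" in s or s in seen:
        continue
    seen.add(s)
    kept.append(rec)
SeqIO.write(kept, "orf3a_filtered.fasta", "fasta")
```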
Multiple sequence alignments (MSAs)
The MSAs were performed using Clustal Omega tool [15], and the first reported sequence Orf3a (accession ID: YP_009724391) from Wuhan, China was used as a reference sequence for comparison. First, the Orf3a fasta sequences were uploaded into the Clustal Omega webserver as an input to run the programe that utilizes HMM and pairwise alignment to generate the MSA data. The variations were recorded carefully and used for further analysis.
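A minimal sketch of extracting point mutations from the resulting alignment, assuming an aligned FASTA in which the first record is the Wuhan reference; the file name and layout are illustrative, and gaps and ambiguous residues are simply skipped.

```python
from Bio import AlignIO

aln = AlignIO.read("orf3a_aln.fasta", "fasta")   # placeholder file name
ref = str(aln[0].seq)                            # first record: YP_009724391

mutations = set()
for rec in aln[1:]:
    pos = 0
    for ref_aa, aa in zip(ref, str(rec.seq)):
        if ref_aa != "-":
            pos += 1                             # position in the reference
            if aa not in ("-", "X") and aa != ref_aa:
                mutations.add(f"{ref_aa}{pos}{aa}")   # e.g., "Q57H"

print("distinct point mutations:", len(mutations))
```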
Secondary structure prediction
To understand the implications of mutations for the secondary structure of Orf3a, the secondary structure prediction tool CFSSP was used. The CFSSP program, developed by Ashok et al. [16], predicts secondary structure from input polypeptide sequences. We uploaded the wild-type sequence and the corresponding Orf3a sequences containing the identified mutations as input to this webserver, and obtained the predicted secondary structures of the wild-type and mutant sequences as output. We then compared the secondary structures of the wild type and mutants, and any differences were marked.
Protein disorder prediction
The PONDR-VSL2 webserver was used to calculate the per-residue disorder distribution in the query sequences, as described elsewhere [17]. PONDR-VSL2 provides per-residue disorder predisposition scores on a scale from 0 to 1, where 0 represents fully ordered residues and 1 fully disordered residues. A score of 0.5 is the threshold above which residues are considered disordered. Residues are considered highly or moderately flexible if the disorder score ranges from 0.25 to 0.5 or from 0.1 to 0.25, respectively.
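The score bins described above translate directly into a small classifier; the example scores are arbitrary.

```python
def classify_disorder(score: float) -> str:
    """Bin a PONDR-VSL2 per-residue score using the thresholds above."""
    if score > 0.5:
        return "disordered"
    if score > 0.25:
        return "highly flexible"
    if score > 0.1:
        return "moderately flexible"
    return "ordered"

# Hypothetical scores for a short stretch of residues.
for i, s in enumerate([0.08, 0.2, 0.4, 0.7], start=1):
    print(i, s, classify_disorder(s))
```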
Protein modelling studies
The protein modelling studies were performed to understand the impact of mutations on the stability of the Orf3a protein. This analysis was conducted using the DynaMut program [18]. The solved structure of Orf3a (RCSB ID: 6XDC) [19] was used for the protein modelling studies. The effect of a mutation on the protein is expressed as the difference in free energy (ΔΔG): a positive ΔΔG indicates a stabilizing mutation, whereas a negative value indicates a destabilizing mutation. The DynaMut webserver can predict ΔΔG only for regions of the protein whose structure has been solved. Three regions (residues 1-39, 175-180, and 239-275) are unmodeled in the Orf3a structure [19]; therefore, mutations residing in these regions were not used for stability prediction.
Epitope predictions
B-cell epitope predictions were performed as described by Jespersen et al. [20] using the IEDB analysis resource. Parameters such as hydrophilicity, flexibility, accessibility, turns, exposed surface, polarity, and antigenic propensity of polypeptide chains have been correlated with the location of epitopes. The webserver uses these properties to predict epitopes from the provided input sequence. All prediction calculations are based on propensity scales for each of the 20 amino acids.
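The windowed-propensity idea behind such predictors can be illustrated with a simplified stand-in (not the IEDB implementation): smooth per-residue scores with a sliding window and report contiguous runs above a threshold.

```python
import numpy as np

def epitope_regions(scores, window=7, threshold=0.5):
    """Smooth per-residue antigenicity scores with a sliding window and
    return contiguous runs above the threshold (candidate B-cell epitopes)."""
    scores = np.asarray(scores, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.convolve(scores, kernel, mode="same")
    regions, start = [], None
    for i, v in enumerate(smoothed):
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            regions.append((start + 1, i))      # 1-based residue positions
            start = None
    if start is not None:
        regions.append((start + 1, len(smoothed)))
    return regions

# Arbitrary per-residue scores for illustration.
print(epitope_regions([0.2, 0.6, 0.7, 0.8, 0.6, 0.3, 0.2, 0.1, 0.55, 0.6, 0.6, 0.4]))
```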
Identification of mutations in Orf3a of SARS-CoV-2
Recently, the structure of Orf3a was solved [19], as represented by the cartoon (Fig. 1A). It is mainly comprised of helical regions and forms a channel-like structure in the membrane. A standalone Innovagen peptide calculator (https://pepcalc.com/) was used to assess the overall physicochemical properties of Orf3a. It estimates the physicochemical properties of the input molecule, including peptide molecular weight, extinction coefficient, net charge at neutral pH, isoelectric point, and water solubility. The color-coded display of amino acid classification and the peptide hydropathy plot of Orf3a are shown in Fig. 1B. To identify variations among Orf3a proteins, Clustal Omega-mediated multiple sequence alignments (MSA) were performed between the Orf3a protein sequences of SARS-CoV-2 reported until September 14, 2020. The analysis revealed as many as 173 point mutations, highlighted in red font (Fig. 1C); details of each mutation are given in Table 1.
Analysis of the effect of mutations on Orf3a stability
To assess the impact of mutations on Orf3a, protein modelling studies were performed using the DynaMut webserver [18]. This webserver calculates the change in free energy (ΔΔG) caused by a mutation in the target protein: a positive ΔΔG indicates an increase in stability, while a negative ΔΔG indicates a decrease. Our analysis revealed various mutations that alter the stability of the protein, as shown in Supplementary Table 1; both destabilizing and stabilizing mutations were found in the Orf3a structure. The top twelve mutations are shown in Table 2. The maximum positive ΔΔG (1.7 kcal/mol) was obtained for the G49V mutation, leading to an increase in stability, whereas the R126S mutation gave the maximum negative ΔΔG (−2.02 kcal/mol), leading to a decrease in the stability of Orf3a.
Secondary structure and protein disorder predictions due to mutations in Orf3a
Subsequently, the twelve mutations exhibiting the maximum variation in ΔΔG were characterized by predicting their effect on the secondary structure of the Orf3a protein. The CFSSP webserver was used to analyse variations in secondary structure at the sites where these mutations reside. The data revealed that only three of the twelve mutations led to a change in secondary structure (Fig. 2A, C, and E); the remaining nine positions exhibited no alteration (data not shown). Detailed analysis revealed that the Y160H mutation led to a shift from beta-sheet to coil structure (Fig. 2A). The turn structure is replaced by coil at S171L (Fig. 2C), while the D210Y mutation leads to replacement of a turn structure by beta-sheet (Fig. 2E).
The impact of these three mutations on protein disorder parameters was further analysed using the PONDR-VSL2 webserver. Our analysis revealed that Y160H (Fig. 2B) and D210Y (Fig. 2F) decreased protein disorder, while S171L (Fig. 2D) increased it. Altogether, both the secondary structure and the protein disorder of Orf3a were altered by these mutations.
Effect on B cell epitopes due to Orf3a mutations
B-cell epitopes were predicted using the IEDB analysis resource webserver [20]. The data are represented graphically (Fig. 3A); the yellow shaded areas correspond to high-scoring peptides that can act as potential B-cell epitopes. This tool provided five peptide sequences (B-cell epitopes), as shown in Fig. 3B. Subsequently, we compared these sequences with the mutations identified in this study. Our data revealed that peptide 1 was mutated at all three of its positions, while peptide 3 was mutated at all positions except one. Peptide 2 had five mutations out of nine positions. Similarly, peptides 4 and 5 were also found to harbour multiple mutations. It is plausible that, owing to these mutations, the respective epitopes will change and might help SARS-CoV-2 evade the immunogenic response of the host.
Discussion
Due to the rapid spread of SARS-CoV-2 worldwide, the WHO declared COVID-19 a global pandemic on March 11, 2020 [21]. As the virus spread to new locations, it acquired mutations, leading to the evolution of SARS-CoV-2 variants that can potentially affect the rate of viral spread, its pathogenicity, and its interactions with the host. In our study, 173 mutations in Orf3a were identified after analysing approximately 16,000 reported sequences of Orf3a. Our study also showed considerable alterations in stability and dynamics due to mutations at various positions, which might alter Orf3a function. These data were further supported by the protein disorder analysis and secondary structure predictions (Fig. 2). Previous studies revealed that Orf3a, a widely expressed protein, triggers inflammatory responses in host cells [22,23]. It is plausible that the mutations occurring in Orf3a strongly affect the function of this protein. To gain some insight into the altered function of Orf3a, in-silico analyses were performed to predict the possible B-cell epitopes generated by peptides of this protein. Our data support the notion that these mutations might help the virus evade the host immune system because of the loss of putative epitopes (Fig. 3).
The putative consequences of variations in Orf3a described in our study are in conformity with similar findings reported recently. In an analytical study, it was observed that the accumulation of nonsynonymous mutations in Orf3a of SARS-CoV-2 could be driving protein changes that mediate immune evasion and thus favour viral spread [24]. Epitope loss due to mutation in SARS-CoV-2 has also been reported experimentally: six putative epitopes in wild-type Orf3a were found to be replaced by five in mutant variants, and such loss of epitopes might allow the mutant to escape interaction with the host immune system [25]. Moreover, a novel missense mutation in the Orf3a gene has been found responsible for the global dissemination of SARS-CoV-2 [26]. Further, SARS-CoV-2 strains with Orf3a mutations are often found to carry a mutation in the S (spike) gene as well, facilitating interaction with ACE-2 receptors followed by viral entry into host cells [27]. Majumdar and Niyogi [28] have also observed an appreciable association of Orf3a mutations in SARS-CoV-2 with higher infection and mortality rates.

Fig. 2. Analysis of the secondary structure and intrinsic disorder predisposition of the unique mutations of SARS-CoV-2 Orf3a in comparison with the reference Orf3a protein (YP_009724391) from Wuhan, China. (A, C, and E) Secondary structure predictions: the amino acid sequences near each mutation site were uploaded to the CFSSP web tool. Each panel shows the secondary structure of the wild-type and mutated input sequences; panel (i) represents the wild-type (Wuhan) sequence, while panel (ii) represents the mutated Indian sequence. The mutation site is highlighted in the rectangular box. (B, D, and F) Protein disorder predictions, conducted using the PONDR-VSL2 algorithm. A disorder threshold is depicted at a score of 0.5; residues/regions with disorder scores >0.5 are considered disordered.
In summary, structural variations and residue composition of the Orf3a protein might be related to the rapid infection kinetics and spread of SARS-CoV-2. Mutational analysis studies are therefore highly pertinent for determining changes in the structure and function of viral proteins.
Conclusions
Altogether, this study identified several interesting mutations of Orf3a and characterised them, showing their probable effects on immune evasion. However, the data obtained here warrant validation to better understand the implications of these mutations for the function of Orf3a.
Declaration of competing interest
The authors declare no conflict of interest.

Fig. 3. Prediction of B-cell epitopes. (A) On the graphs, the Y-axis depicts the corresponding BepiPred score for each residue (averaged over the specified window), while the X-axis depicts the residue positions in the sequence. A larger score for a residue may be interpreted as a higher probability of being part of an epitope (those residues are colored yellow on the graphs). (B) The top five peptides of Orf3a with the highest scores. The sequences of all five peptides are shown; the number in parentheses represents the location of the peptide in the primary sequence of Orf3a, and red font shows the locations of mutant residues. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Construction of Discontinuous Enrichment Functions for Enriched FEM's for Interface Elliptic Problems in 1D
We introduce an enriched unfitted finite element method to solve 1D elliptic interface problems with discontinuous solutions, including those having implicit or Robin-type interface jump conditions. We present a novel approach for constructing a one-parameter family of discontinuous enrichment functions by finding an optimal order interpolating function for the discontinuous solutions. In the literature, an enrichment function is usually given beforehand, unrelated to the construction of an interpolation operator. Furthermore, we recover the well-known continuous enrichment function when the parameter is set to zero. To demonstrate its efficiency, the enriched linear and quadratic elements are applied to a multi-layer wall model for drug-eluting stents, in which zero-flux jump conditions and implicit concentration interface conditions are both present.
Introduction
Consider the interface two-point boundary value problem

(1) −(β(x)p′(x))′ + w(x)p(x) = f(x), x ∈ I = (a, b), p(a) = p(b) = 0,

where w(x) ≥ 0 and 0 < β ∈ C[a, α] ∪ C[α, b] is discontinuous across the interface α, with jump conditions (2)–(3) imposed on p and its flux q := βp′; here the jump quantity is [s]_α := s(α⁺) − s(α⁻), where s^± := s(α^±) := lim_{ε→0⁺} s(α ± ε). The primary variable p may stand for the pressure, temperature, or concentration in a medium with certain physical properties, and the derived quantity q := −βp′ is the corresponding Darcy velocity, heat flux, or concentration flux, which is equally important. The piecewise continuous β reflects a nonuniform material or medium property (we do not require β to be piecewise constant), and the function w(x) reflects the surroundings of the medium. The case λ = 0 is widely studied, while the case λ > 0 gives rise to a more difficult situation. For example, the case of rightward concentration flow [27,28,29] imposes

(4) [p]_α = λ(βp′)(α⁻), [βp′]_α = 0,

which generates an implicit condition, since the left-sided derivative is unknown. Implicit interface conditions abound in higher-dimensional applications [1,15,19,20]. For definiteness, we study a class of efficient enriched methods for problem (1) under the jump conditions (4), but our methods apply to problem (1) subject to the general conditions (2)–(3) with a well-posed weak formulation. After a simple calculation, it is easy to see that (4) is equivalent to a condition (5) that is indeed of the type (2)–(3). Numerical methods for the interface problem (1) under (4) generally use meshes that are either fitted or unfitted with the interface. A method allowing unfitted meshes is very efficient when one has to follow a moving interface [17] in a temporal problem. Among the unfitted methods there are geometrically unfitted finite element methods, typified in [7] and the references therein; the immersed finite element and finite difference methods [8,13,14,18,21,22,23]; and the stable generalized finite element methods (SGFEM) [5,3,4,10,31], among others. In an unfitted method, the mesh is made up of interface elements, which the interface intersects, and non-interface elements, where the interface is absent. On a non-interface element, one uses standard local shape functions, whereas on an interface element one uses specialized local shape functions reflecting the jump conditions. In an enriched method, the standard finite elements are enriched with enrichment functions that reflect the presence of the interfaces. This approach was originally designed to handle crack problems [6,12,24], but in recent years efforts have been made to generalize it to fluid problems; see [27] and the references therein.
The construction of the local shape basis of an immersed finite element or finite difference method uses information about the discontinuous β, while an enriched method does not. Thus an enriched method does not require the discontinuous diffusion coefficient to be piecewise constant, which is an advantage. On the other hand, it makes the choice of the enrichment function less intuitive and the error analysis arguably harder. The purpose of this paper is to propose an approach for constructing the enrichment function from an optimal order error analysis. The general idea is as follows. In the error analysis we use the principle that says, roughly, that the error in the finite element solution p_h should be bounded by the approximation error in the finite element space V_h:

(6) ‖p − p_h‖ ≤ C inf_{v ∈ V_h} ‖p − v‖.

Suppose that the consistency error is of optimal order; then the optimal order analysis is complete if we can exhibit an optimal order approximating piecewise polynomial from V_h. In an enriched finite element method, V_h takes the form V_h = S_h + span{ψ}, where S_h is a standard finite element space (e.g., P_k-conforming, k ≥ 1) and ψ is an enrichment function reflecting the jump conditions. In the literature, ψ is usually given beforehand and one then tries to find an optimal order interpolating polynomial to prove convergence. Our new approach is to connect the construction of ψ and of the interpolating polynomial, finding ψ through the error analysis. In this way we also obtain a unified theory for constructing enrichment functions for continuous and discontinuous finite element solutions. Let us mention how we were motivated to come up with the new approach. In view of (6), for standard continuous conforming finite element methods there are familiar interpolating polynomials that do the job [11]. For problem (1) with a continuous solution ([p]_α = 0), Deng [10] proved the convergence of finite element solutions in all P_i-conforming spaces (i ≥ 1) enriched by the same well-known hat function [5,3,4,10] (cf. Eq. (49) below). The crux of the proof was again the existence of a simple interpolating polynomial. However, for an enriched immersed or unfitted method approximating a discontinuous solution, it is impossible to find the same type of interpolation operator due to the finite jump [p]_α (cf. [9]). On the other hand, in [2], unaware of [10], we used a different interpolation operator to prove optimal order convergence. That approach was motivated by the additional presence of an interface deviation. In this paper we generalize the analysis of [2], modifying that operator (I^c_h of (15) below) to find the desired interpolating polynomial. The new family of enrichment functions is a result of this analysis, not given beforehand. Nevertheless, the formula for the enrichment function (cf. (8) below) is simple and intuitive, and can be used without knowing the details of the analysis.
The organization of this paper is as follows. In Section 2, we state the weak formulation for the implicit interface condition problem, define enrichment functions and spaces, and put their role in perspective in Remark 2.1. In Section 3, we carry out the error analysis and show how the construction of the enrichment function is related to it. Optimal order convergence in the broken H¹ and L² norms is given in Theorem 3.4. In addition, the second-order accuracy of p_h at the nodes is proven in Theorem 3.5. In Section 4, we provide numerical examples of a porous wall model to demonstrate the effectiveness of the present enriched finite element method and to confirm the convergence theory. Furthermore, following the viewpoint of the SGFEM [3,4,10], we compare the condition numbers of our (discontinuous solution) method with those of the continuous solution case [2], and numerically show that they are comparable for the same mesh sizes. Both linear and quadratic enriched elements are tested. Finally, in Section 5 we give some concluding remarks and discuss possible extensions of the present approach to multiple dimensions. Throughout, we work with the broken Sobolev space H¹_{α,0}(I) := {v : v|_{I⁻} ∈ H¹(I⁻), v|_{I⁺} ∈ H¹(I⁺), v(a) = v(b) = 0}, and we use conventional Sobolev norm notation. For example, |u|_{1,J} denotes the usual H¹-seminorm for u ∈ H¹(J), and ‖u‖²_{i,I⁻∪I⁺} = ‖u‖²_{i,I⁻} + ‖u‖²_{i,I⁺} for i = 1, 2.
Enrichment Functions and Spaces
The space H¹_{α,0}(I) is endowed with the ‖·‖_{1,I⁻∪I⁺} norm, and H²_α(I) with the ‖·‖_{2,I⁻∪I⁺} norm. With this in mind, the weak formulation of problem (1) under (4) is: given f ∈ L²(I), find p ∈ H¹_{α,0}(I) such that

(7) a(p, q) = (f, q) for all q ∈ H¹_{α,0}(I),

where

a(p, q) := ∫_{I⁻∪I⁺} β p′ q′ dx + ∫_I w p q dx + (1/λ)[p]_α [q]_α.

The weak formulation is easily derived by integration by parts and by (4). Since λ > 0, the bilinear form a(·,·) is coercive, and it is bounded owing to the Poincaré inequality. By the Lax–Milgram theorem, a unique solution p exists. Throughout the paper, we assume that the functions β, f, and w are such that the solution p ∈ H²_α(I).
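To make the enriched discretization concrete, here is a minimal, self-contained Python sketch under simplifying assumptions (w = 0, f = 1, piecewise-constant β) using the bilinear form written above. It is not the paper's method: the one-parameter enrichment family of Eq. (8) is not reproduced here, and a simple Heaviside-type enrichment ψ supported on the interface element is used as a stand-in, so only ψ carries a jump at α. All names and parameter values are illustrative.

```python
import numpy as np

# Enriched P1 sketch for (1) under (4): a(u,v) = ∫ β u'v' dx + (1/λ)[u]_α[v]_α.
# psi = 0 on [x_k, α] and psi(x) = (x_{k+1}-x)/h_k on (α, x_{k+1}], so psi
# vanishes at both endpoints of the interface element and jumps at α.
a0, b0, alpha, lam = 0.0, 1.0, 0.55, 0.1         # α assumed interior to an element
beta = lambda x: 1.0 if x < alpha else 5.0       # piecewise-constant for the demo
f = lambda x: 1.0
n = 8
x = np.linspace(a0, b0, n + 1)
k = int(np.searchsorted(x, alpha)) - 1           # interface element [x_k, x_{k+1}]

ndof = (n - 1) + 1                               # interior nodes + 1 enrichment dof
E = ndof - 1
A, F = np.zeros((ndof, ndof)), np.zeros(ndof)

for e in range(n):
    h = x[e + 1] - x[e]
    # split the interface element at α so β is smooth on each quadrature piece
    pts = [x[e], alpha, x[e + 1]] if e == k else [x[e], x[e + 1]]
    for xl, xr in zip(pts[:-1], pts[1:]):
        xm, L = 0.5 * (xl + xr), xr - xl         # midpoint rule (exact for P1 stiffness)
        dofs, grads, vals = [], [], []
        if e >= 1:                               # hat at the left node
            dofs.append(e - 1); grads.append(-1.0 / h); vals.append((x[e + 1] - xm) / h)
        if e <= n - 2:                           # hat at the right node
            dofs.append(e); grads.append(1.0 / h); vals.append((xm - x[e]) / h)
        if e == k and xm > alpha:                # enrichment acts only on (α, x_{k+1}]
            dofs.append(E); grads.append(-1.0 / h); vals.append((x[e + 1] - xm) / h)
        for i in range(len(dofs)):
            F[dofs[i]] += f(xm) * vals[i] * L
            for j in range(len(dofs)):
                A[dofs[i], dofs[j]] += beta(xm) * grads[i] * grads[j] * L

# jump penalty (1/λ)[u]_α[v]_α: only psi jumps, with [psi]_α = (x_{k+1}-α)/h_k
jump = (x[k + 1] - alpha) / (x[k + 1] - x[k])
A[E, E] += jump * jump / lam

u = np.linalg.solve(A, F)
print("interior nodal values:", u[:-1])
print("enrichment coefficient:", u[-1])
```

The jump penalty touches only the enrichment's diagonal entry because the hat functions are continuous across α; this is what keeps the enriched system as sparse as the standard P1 system plus one extra row and column.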
In other words, the continuous case is the limiting case of the discontinuous ones. Notice that the slopes m₁ and m₂ of the enrichment function are uniformly bounded; i.e., there exists a constant C > 0, independent of h and α, bounding them. The main gist of this paper is to obtain ψ as a natural consequence of our error analysis: the definition of m₂ results from zeroing out the blowing-up coefficient of [p′]_α in the error analysis (cf. Eq. (39)). Let us describe the enriched space associated with ψ. Let Ī = ∪_{i=0}^{n−1} [x_i, x_{i+1}] be a partition of Ī, with the interface α contained in the element [x_k, x_{k+1}], and let S_h be the conforming linear finite element space spanned by the Lagrange nodal basis (hat) functions φ_i. We define the enriched finite element space S̃_h := S_h + span{ψ}, and consider the enriched finite element method for problem (1):
Optimal Order Interpolating Polynomial I_h p. It is essential for the enriched space to have good approximation properties for the functions in H²_α(I) that satisfy the jump conditions (5). For p ∈ H²_α(I), let p_i, i = 1, 2, be the extensions to H²(I) of p restricted to I⁻ and I⁺, respectively [16]. Thus p′₂ − p′₁ is in H¹(I) ⊂ C(Ī) by the Sobolev inequality, and as a result the usual P₁-interpolation π_h(p′₂ − p′₁) ∈ S_h is well defined. To exhibit the approximation properties of S̃_h for functions in H²_α(I) that satisfy (5), we first define the interpolation operator I^c_h of (15), with the particular property (16). This interpolation operator was used successfully in [2], but our experience showed that it is not capable of handling the discontinuous case [p]_α ≠ 0; to emphasize this, we use the superscript c to indicate continuity. We now modify it with an added correction term to accommodate the case [p]_α ≠ 0, defining the interpolation operator I_h as in (17), where the correction involves a parameter δ as in (18). The δ-term is motivated by the error analysis in Lemma 3.1 below; its presence is to kill the jump term in p across α that may go to infinity as h goes to zero (see Eq. (25)).
Let $\chi_i$, $i = 1, 2$ be the characteristic functions of $I^-$ and $I^+$, respectively. Note that functions in the resulting broken space may be discontinuous at $\alpha$. Define the auxiliary interpolation $\bar{I}_h p = \pi_h p_1 \chi_1 + \pi_h p_2 \chi_2$. To derive a bound for the term $|p - I_h p|_{1,I^-\cup I^+}$ we split the error as in (19). By classical approximation theory, the first term on the right side of (19) is of optimal order. Thus it suffices to estimate the second term on the right side of (19), which is done in the following two lemmas. We mention in passing that all the constants in the estimates should be independent of the interface position as well. This fact is important if one wants to use the method for moving interface problems.
Construction of Enrichment Functions in Relation to the Error Analysis
Lemma 3.1. There exists a positive constant $C$ independent of $h$ and $\alpha$ bounding the contribution of the interval $[x_k, \alpha]$ to the second term on the right side of (19).

Proof. It suffices to carry out the detailed analysis on the interface element $[x_k, x_{k+1}]$. For the interval $[x_k, \alpha]$, the definition (15) of $I^c_h p$ and the addition and subtraction of the same quantity yield a splitting into two terms, $J_1$ and $J_2$. Combining the subtracted quantity with the second term in $J_1$, and using Taylor's expansion with integral remainder, gives a bound with constant $C = 2$, independent of $h$ and $\alpha$. Note that $J_1$ is the difference between a small quantity $J_3$ and a large quantity $[p]_\alpha/(x_{k+1} - x_k)$ as $h$ goes to zero. The latter is controlled by the $\psi'$ terms through the $\delta$-parameter, by the way we defined $\delta$ in (18). Next we show that $J_2$ is the difference between a small quantity and a large term we can control.
To avoid cluttering of expressions, let $\Delta$ denote the difference quantity under consideration; we also denote $\Delta'(x_k)$ by $\Delta'_k$ and $\Delta'(x_{k+1})$ by $\Delta'_{k+1}$, and use these notations when necessary. First note that with $\psi = m_1(x - x_k)$ on $[x_k, \alpha]$, and using (9), $J_2$ can be rewritten with a remainder $J_4$. Each of the $m_1 h_k^{-1} I_i$ terms in $J_4$ can be estimated similarly using the Cauchy-Schwarz inequality, with a constant $C$ independent of $h$ and $\alpha$. Combining these estimates with (25), and using (23), (26), and (18), the desired local bound on $[x_k, \alpha]$ follows.
Gathering all the local estimates and integrating, we obtain the claimed bound, where $C$ is independent of $h$ and $\alpha$. This completes the proof of Lemma 3.1.

Lemma 3.2. There exists a positive constant $C > 0$ independent of $h$ and $\alpha$ bounding the analogous contribution of the interval $[\alpha, x_{k+1}]$.

Proof. For the interval $[\alpha, x_{k+1}]$, the definition (15) of $I^c_h p$ and adding and subtracting the same quantity yield a splitting into $\tilde{J}_1$ and $\tilde{J}_2$. Noting that $\bar{I}_h p = \pi_h p_2 \chi_2$ for $x \in [\alpha, x_{k+1}]$, we decompose $\tilde{J}_1$ as the difference of a small term and a large term plus a finite term; we then do the same for $\tilde{J}_2$. First, with $\psi = m_2(x - x_{k+1})$, the last term can be estimated as before. The remaining term is estimated using (11): from (30), (31), and (5), and gathering all the above local estimates and integrating, we conclude that there exists a constant $C > 0$ independent of $h$ and $\alpha$ such that the claimed bound holds.

Using Lemmas 3.1 and 3.2, we obtain the optimal-order interpolation estimate of Theorem 3.3.

Theorem 3.4. Let $p$ and $p_h$ be the solutions of (7) and (14), respectively. Then there exists a constant $C > 0$ such that the optimal-order error bounds hold. The constant $C$ does not depend on $h$ and $\alpha$, but depends on the ratio $\rho := \beta^*/\beta_*$.

Proof. Using the boundedness and coercivity properties of the bilinear form $a(\cdot,\cdot)$, where $\beta^* = \sup_{x\in[a,b]}\beta(x)$ and $\beta_* = \inf_{x\in[a,b]}\beta(x)$, the broken $H^1$ estimate follows by Cea's lemma and Theorem 3.3; the usual duality argument then leads to the $L^2$ estimate. We note that the jump ratio $\rho := \beta^*/\beta_*$ is of moderate size for the wall model in the next section.
Theorem 3.5 (Second-order accuracy at nodes). Suppose that $\beta \in C^1(a,\alpha) \cap C^1(\alpha,b)$ and $0 \le w \in C[a,b]$. Let $p$ be the exact solution and $p_h$ the approximate solution of (7) and (14), respectively. Then there exists a constant $C > 0$ such that the nodal error is of second order, where $C$ depends on certain norms of the Green's function at $\xi$.
Numerical Examples
In this section, we test our method using the multi-layer porous wall model for drug-eluting stents [25] that has been studied using immersed finite element methods [27,28,29,30]. In this one-dimensional layered wall model, a drug is injected or released at an interface and gradually diffuses rightward. The concentration is thus discontinuous across the injection interface and continuous in the other layers. At all interface points, a zero-flux condition is imposed. We run tests on both enriched linear and quadratic finite element spaces.
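The convergence orders reported in tables of this kind are typically estimated from errors on successively refined meshes; a generic sketch (with made-up error values, not the paper's data):

```python
import math

def eoc(errors, hs):
    """Estimated order of convergence between successive meshes:
    p_k = log(e_k / e_{k+1}) / log(h_k / h_{k+1})."""
    return [math.log(errors[k] / errors[k + 1]) / math.log(hs[k] / hs[k + 1])
            for k in range(len(errors) - 1)]

# hypothetical second-order-looking errors on halved meshes:
hs = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
errors = [2.1e-2, 5.4e-3, 1.35e-3, 3.4e-4]
print([round(p, 2) for p in eoc(errors, hs)])  # approximately [1.96, 2.0, 1.99]
```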
Enriched Linear Elements.
In this subsection we test the efficiency of our method on three problems. In Problem 1, we place only one interface point to model the layer where the drug is delivered. In Problem 2, we place two interfaces to model the layers where the concentration has continuously spread. Finally, in Problem 3 we combine the previous two cases and place three interface points to simulate the full wall model. In all three problems, we confirm in Tables 1-3 the optimal order convergence in the broken $H^1$ and $L^2$ norms. In addition, the nodal errors are shown to be second order in all these tables as well. We are also interested in the behavior of the condition numbers of the associated stiffness matrices. Following the viewpoint of the SGFEM [3,4,10], we compare the condition numbers in Problem 1 and Problem 3 (discontinuous solutions) with those in Problem 2 (continuous solution) [2], displayed in Table 2. The condition numbers in Table 1 and Table 3 are comparable in order of magnitude to those in Table 2 for the same mesh sizes.
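A conditioning check of this kind can be reproduced in a few lines. The sketch below uses a plain P1 stiffness matrix as a stand-in for the enriched system, so the absolute numbers are illustrative only:

```python
import numpy as np

def p1_stiffness(n, beta=lambda x: 1.0):
    """P1 stiffness matrix for -(beta p')' on a uniform mesh of n elements,
    homogeneous Dirichlet BCs; an illustrative stand-in for the enriched system."""
    h = 1.0 / n
    A = np.zeros((n + 1, n + 1))
    for k in range(n):
        xm = (k + 0.5) * h
        A[k:k + 2, k:k + 2] += beta(xm) / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return A[1:-1, 1:-1]

for n in (16, 32, 64, 128):
    print(n, f"{np.linalg.cond(p1_stiffness(n)):.3e}")  # grows like O(h^-2)
```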
Based on an optimal-order error analysis, we derived a family of enrichment functions for the conforming $P_1$ finite element, and the resulting enriched method approximates discontinuous solutions with optimal order in the broken $H^1$ and $L^2$ norms. Encouraged by the preliminary numerical results for the quadratic element in Subsection 4.2, we hope to extend the same approach to all $P_i$, $i \ge 2$.
Extension of our approach to higher dimensions is highly desirable. The tools we used in one dimension include Taylor's expansion, extension operators in Sobolev spaces, and balancing the lower-order large terms resulting from the extension operators against the higher-order terms in the multipliers of the enrichment function. All of these have counterparts in higher dimensions. The new ingredient in higher dimensions is some contamination from the added geometric complexity near the interface. An analogous 1D parameter, called the interface deviation $\epsilon$, was introduced in [2] to mimic this geometric complexity (the enrichment function breaks at $\alpha - \epsilon$ instead of at the interface point $\alpha$). We wish to further investigate the effect of this parameter on the present method.
Perception of enhanced learning in medicine through integrating of virtual patients: an exploratory study on knowledge acquisition and transfer
Introduction: Virtual Patients (VPs) have been shown to improve various aspects of medical learning; however, research has scarcely delved into the specific factors that facilitate the gain and transfer of knowledge from the classroom to real-world applications. This exploratory study aims to understand the impact of integrating VPs into classroom learning on students' perceptions of knowledge acquisition and transfer.

Methods: The study was integrated into an elective course on "Personalized Medicine in Cancer Treatment and Care," employing a qualitative and quantitative approach. Twenty-two second-year medical undergraduates engaged in a VP session, which included role modeling, practice with various authentic cases, group discussion on feedback, and a plenary session. Student perceptions of their learning were measured through surveys and focus group interviews and analyzed using descriptive statistics and thematic analysis.

Results: Quantitative data show that students highly valued the role-modeling introduction, scoring it 4.42 out of 5, and acknowledged that practice with VPs enhanced their subject-matter understanding, with an average score of 4.0 out of 5. However, students' reflections on peer dialogue on feedback received mixed reviews, averaging a score of 3.24 out of 5. Qualitative analysis of the focus-group interviews unearthed four themes: 'Which steps to take in clinical reasoning', 'Challenging their reasoning to enhance deeper understanding', 'Transfer of knowledge', and 'Enhance reasoning through reflections'. The quantitative and qualitative data cohere.

Conclusion: The study demonstrates evidence for the improvement of learning by incorporating VPs into learning activities. This integration enhances students' perceptions of knowledge acquisition and transfer, thereby potentially elevating students' preparedness for real-world clinical settings. Key facets like expert role modeling and exposure to various authentic cases were valued for fostering deeper understanding and active engagement, though with some mixed responses toward peer feedback discussions. While the preliminary findings are encouraging, further research is needed to refine feedback mechanisms and to explore a broader spectrum of medical disciplines with larger sample sizes. This exploration lays groundwork for future endeavors aimed at optimizing VP-based learning experiences in medical education.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12909-024-05624-7.
Introduction
In medical education, a persistent challenge lies in bridging the gap between acquiring theoretical knowledge and applying it in real-world clinical scenarios. Many medical students struggle with translating their classroom learning into practical settings. The primary challenge lies in effectively translating the concepts students have learned into authentic patient interactions. This gap is particularly concerning because it affects the quality of patient care: medical students are not just learning to acquire knowledge but must be able to apply this knowledge in complex healthcare settings.

One approach to address this challenge is the use of Virtual Patients (VPs), computer-based simulations of real-life clinical scenarios with which students can train clinical skills [1]. Research has shown that using VPs in the classroom can effectively improve various aspects of learning, from core knowledge and clinical reasoning to decision-making skills and knowledge transfer [2][3][4][5]. VPs provide students with the opportunity to practice skills in a safe and controlled simulation environment.

Recent studies have focused on optimizing the design and arrangement of VPs as part of learning activities to facilitate both knowledge acquisition and retention [6][7][8]. For instance, Verkuyl, Hughes [8] demonstrated that using VPs as gamification tools can improve students' confidence, engagement, and satisfaction.

However, studies focusing on the specific factors that contribute to these improvements when integrating VPs into the classroom are limited, particularly regarding how to use VPs in the classroom to facilitate the transfer of the knowledge students gain in class to the subsequent stages of their education and eventual practice.

Acquisition and transfer of knowledge are critical factors in medical education, as medical students must be able to apply their knowledge and skills to real-world clinical scenarios [9]. Research suggests that, for effective transfer of knowledge, students should be immersed in authentic environments, enabling the transition of learned competencies to advanced stages [10][11][12][13].

Despite the consensus on the efficacy of VPs as a tool, there is a gap in understanding how to integrate VPs in the classroom to optimize students' learning, especially in facilitating the transfer of learning. The effectiveness of VPs lies not just in their use but also in how students use them to deepen their understanding of how to reason and make decisions about medical treatments when dealing with clinical cases. Without a clear and deep understanding, we risk underutilizing their potential and losing opportunities for medical students to become well prepared for real-world clinical scenarios.

Certain elements, such as role-modeling instruction [14][15][16], the use of various authentic cases [17][18][19], and engagement in peer discussions on feedback [20][21][22], emerge as potential key components that could be integrated to maximize knowledge acquisition via VPs. For instance, Stalmeijer, Dolmans [23] show how an expert, serving as a role model, facilitates student learning by demonstrating clinical skills and reasoning out loud. While there is ample evidence supporting the advantages of including VPs in education, there is not enough research focusing on the detailed aspects of effective instructional design techniques. This paper delves into these components, seeking to understand how VP integration influences students' learning and knowledge transfer. Figure 1 shows the theoretical framework of how integrating VPs in class affects students' learning and might impact the transfer of learning from a simulated VP environment to practice.
This exploratory study aims to investigate how instructional design elements such as role modeling, various authentic cases, and peer dialogue on feedback within VP sessions affect students' learning, from the learners' perspective. The core research question focuses on how the implementation of role modeling, various authentic cases, and peer dialogue on feedback in VPs influences learners' perceptions of knowledge gain and transfer in personalized medicine.
Setting
The study was conducted at Maastricht University in the elective course "Personalized Medicine in Cancer Treatment and Care". This course is open to second-year undergraduate medical students of Maastricht University.
Participants
Initially, 24 students enrolled in this course for the academic year 2022-2023, and 22 students participated in the Virtual Patient session. In total, 19 students voluntarily completed the survey designed to evaluate their experiences and perceptions of the Virtual Patient session. Thereafter, 9 of the 19 survey respondents voluntarily agreed to participate in three focus group interviews, with 2-4 students in each focus group. Students were informed that participation in this research study had no impact on their academic performance or their continuation in their studies.
Intervention
The instructional approach for the VP cases was structured in a specific format for the students; Figure 2 shows the instructional design for VP integration and gives an overview of the intervention steps. The first stage was a role-modeling phase, where an expert demonstrated the clinical reasoning process using VP Case A. This was followed by a practice session where students worked in pairs on two different VP cases (Cases B and C). After that, students formed two larger groups, each including 5 or 6 students, and discussed the system feedback provided by the VP platform. Finally, the expert summarized the session and addressed students' questions. The whole intervention lasted 120 min.
1. Role modeling (30 min): The intervention started with an expert, a clinician with teaching experience, demonstrating a clinical case (Case A) and showing the clinical reasoning process by thinking aloud. The expert served as a role model in showcasing the approach toward clinical problem-solving, provided supportive information, and demonstrated how to proceed through the case. The aim of the role-modeling session was to empower students to apply the insights and methodology gained from the expert in Case A to solve the subsequent cases (Case B and Case C). Although these cases shared similarities in underlying principles, they diverged in patient characteristics such as age, complications, and smoking history that can influence patient treatment outcomes.
2 and 3. Two VP pair tasks (20 min each): In this segment, the 22 participating students were paired, resulting in 11 pairs. These pairs were then divided into two groups: Group 1 (6 pairs) and Group 2 (5 pairs) alternated in going through Case B and Case C to account for the practice effect (a counterbalanced assignment of this kind is sketched after this list). These cases were variations of the clinical case introduced during the role-modeling demonstration, differing in patient characteristics such as age, complications, and smoking history to challenge the students' reasoning. Students were encouraged to work collaboratively.
4. Feedback discussion (30 min): Upon completion of the VP cases, automated feedback on the reasoning analysis is provided immediately. Participants were instructed to save this feedback for later discussion. After that, students were organized into groups of six, based on the sequence in which they engaged with the cases. For instance, those who first practiced with Case B and then proceeded to Case C formed Group 1; conversely, students who started with Case C and then moved on to Case B were assembled into Group 2. To foster meaningful dialogue, students engaged in discussions focused on the feedback generated by the Virtual Patient system, guided by a printed discussion guide distributed to each group (see Appendix 2). The discussion aimed to deepen students' understanding and enrich their conversations about the cases they had just completed.
5. Plenary (15 min): This part was hosted by the expert to summarize the session and address questions or doubts raised by students.
During the practice and discussion sessions, the expert circulated among the groups to offer additional guidance and support.
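For readers who want to reproduce this kind of counterbalanced assignment, the following sketch (a hypothetical helper, not the authors' actual procedure) pairs students and alternates the case order across pairs:

```python
import random

def counterbalanced_pairs(students, seed=0):
    """Pair students, then alternate case order (B->C vs C->B) across pairs
    to balance out practice effects. Assumes an even number of students."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled), 2)]
    orders = [("Case B", "Case C"), ("Case C", "Case B")]
    return [(pair, orders[i % 2]) for i, pair in enumerate(pairs)]

students = [f"S{i:02d}" for i in range(1, 23)]  # 22 students -> 11 pairs (6 + 5 per order)
for pair, order in counterbalanced_pairs(students):
    print(pair, "->", " then ".join(order))
```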
The virtual patient cases
Three Virtual Patient (VP) cases (Cases A, B, and C) were created to enhance students' comprehension of specific concepts, knowledge, and skills in clinical reasoning. The VP practice was developed on the P-Scribe (www.pscribe.nl) learning platform, a web-based e-learning system based in the Netherlands. The platform facilitates the design and implementation of text-based VP sessions (Appendix 4).
While these cases shared a foundation in authentic head and neck cancer treatment, they were characterized by varying patient characteristics in terms of age, gender, and medical history (anamnesis).
Within each VP case, students were presented with a scenario related to neck cancer. Figure 3 shows the flow chart of a VP case. Each case starts with an overview of the patient and their medical history, which students had to use to make an initial assessment. After this, students encountered a mix of multiple-choice and open-ended practice questions. These questions guided students in planning diagnostics, formulating a diagnosis, and devising a treatment plan tailored to the patient's specific needs. Immediate feedback was provided after students submitted each response, and comprehensive summative feedback was given at the conclusion of each case to foster understanding and learning from any potential misjudgments or oversights (see Appendix 4).
Measurement instruments
Learning-perception survey: The survey (Appendix 1) consisted of 20 items, structured into five primary sections: general experience, intended learning outcome, role modeling, practicing with various authentic cases, and reflection on peer dialogue around feedback. The first item asked about students' general experience throughout the whole session. The second item focused on their perception of the intended learning outcomes. Six items then focused on the students' perceptions of learning through role modeling, followed by 5 items addressing perceptions related to learning from practicing with authentic cases. The final seven items explored students' perceptions of learning from dialogue around feedback. Participants indicated their level of agreement with each statement using a 5-point Likert scale: 1 denoting "Strongly Disagree", 2 "Disagree", 3 "Neutral", 4 "Agree", and 5 "Strongly Agree". For interpretation, average scores below 3 were considered "in need of improvement", those of 4 or higher "good", and those between 3 and 4 "neutral".
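The interpretation rule for dimension means can be stated compactly; the following sketch encodes the thresholds above (the helper name is ours):

```python
def interpret_mean(score):
    """Map a dimension's mean Likert score to the study's interpretation bands:
    below 3 -> in need of improvement; 3 to below 4 -> neutral; 4 or higher -> good."""
    if score < 3:
        return "in need of improvement"
    if score < 4:
        return "neutral"
    return "good"

for m in (2.95, 3.24, 4.00, 4.38, 4.58):
    print(m, "->", interpret_mean(m))
```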
Focus group interviews: Three focus group interviews (Appendix 3) were conducted to dive deeper into students' perceptions of their learning experience, knowledge gain, and knowledge transfer to real-world settings. The focus groups took place after the survey, and the survey data did not affect the development of the focus group questions. Two students participated in focus group 1, two in focus group 2, and five in focus group 3. The interviews were structured around a series of questions that explored students' perceptions of their learning across specifically designed sections: role modeling, practice with various authentic cases, and dialogue around feedback. The structure aimed to capture students' perspectives on each key component of the learning sections.
Analysis
The analysis of the survey data was conducted by calculating the mean, standard deviation, and alpha coefficient for the responses pertaining to each of the five key dimensions of the survey. The mean score provided an indicator of the average student perception, while the standard deviation offered insight into the variability of the responses. The alpha coefficient, a measure of internal consistency, was computed to assess the reliability of the survey dimensions. Through these statistical measures, an overall understanding of the students' perceptions regarding the various aspects of the Virtual Patients was attained, facilitating a robust analysis aligned with the research objectives.
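For reference, the mean, standard deviation, and alpha coefficient of a Likert item matrix can be computed as follows; the data here are randomly generated for illustration, not the study's responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) Likert matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# hypothetical 1-5 Likert responses: 19 respondents x 6 items (e.g., Q3-Q8);
# alpha on random data is not meaningful; substitute real responses in practice
rng = np.random.default_rng(0)
data = rng.integers(3, 6, size=(19, 6))
print("mean:", round(data.mean(), 2), "sd:", round(data.std(ddof=1), 2))
print("alpha:", round(cronbach_alpha(data), 2))
```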
The focus-group interview data were analyzed following the thematic analysis procedure set out by Braun and Clarke [24]: (1) familiarize yourself with your data, (2) generate initial codes, (3) search for themes, (4) review themes, (5) define and name themes, and (6) produce the report. The interviews were guided by pre-existing frameworks and theories in medical education, ensuring the capture of the major aspects of the VP learning experience underscored in the existing literature: role modeling, using various authentic cases, and peer dialogue around feedback [16-18, 20, 21]. The focus group interviews were recorded, transcribed, and coded by three team members (Z.L., M.A., and X.L.) and ordered into initial themes. These themes were discussed with the larger team. We used a process of inductive and deductive analysis, with the three design principles of role modeling, practice with various authentic cases, and group discussion on feedback as sensitizing concepts [24]. Thereafter, the quantitative and qualitative analyses were collectively appraised, compared, and checked for inconsistencies. In this triangulation, the themes identified in the focus-group interviews were explanatory of the descriptive statistics of the survey.
Trustworthiness
Several measures were taken to enhance the study's trustworthiness. First, triangulation was achieved by employing multiple data collection methods, including surveys and focus group interviews. Interview data collection continued until saturation was reached, ensuring a comprehensive understanding of the students' experiences and perceptions. Second, the coding process followed an iterative approach: team members initially coded transcripts independently and then met to reach consensus before moving on to subsequent transcripts. Three researchers conducted the coding independently to minimize bias and enhance the validity of the findings. Finally, a member check among a sample of the focus group interviewees was conducted. In response to the question of whether they agreed with the summaries of preliminary results and had comments, confirmatory responses were received, along with some minor additional comments and clarifications; the latter were taken into account in the analysis and interpretation of the data.
Ethical approval
The Maastricht University Ethical Committee reviewed and approved this study. The approval number is FHML-REC/2023/021.
Results
The findings from both the survey data and the focus group interviews are presented to explore students' perceptions of the effectiveness of the Virtual Patient (VP) session in enhancing their clinical reasoning skills.
Survey data
The survey explored students' perceptions across five key dimensions: general experience, intended learning outcome, role modeling, practicing with various authentic cases, and students' reflection on peer dialogue around feedback. The students scored the VP session on 20 items (Table 1). The scores varied from M = 2.95 to M = 4.58 on a scale of 1-5.
For the general experience of the Virtual Patient session (Items Q1-Q2), the average score was M = 4.13 (SD = 0.70). Specifically, the overall experience was positively rated at M = 4.11, and the item assessing the improvement of clinical reasoning skills received an average score of M = 4.16.

Regarding the students' perception of learning from role modeling (Items Q3-Q8), the average score was M = 4.38 (SD = 0.61). Students agreed that the expert demonstration at the start of the session helped them understand the intended learning outcomes and was useful in guiding them through the Virtual Patient cases, with scores ranging from M = 4.26 to M = 4.58.

Students' perception of learning from practicing with various authentic cases (Items Q9-Q13) received an average score of M = 4.00 (SD = 0.86). These scores measured the students' perception of how well the provided Virtual Patient cases matched their current level of understanding, enhanced their comprehension of the subject matter, and helped them grasp the complexities inherent in real-world clinical scenarios.

For their perception of learning from peer dialogue around feedback (Items Q14-Q20), the average score was M = 3.24 (SD = 1.05). These scores measured the students' perception of the effectiveness of peer dialogue in enhancing understanding, generating strategies to address feedback, and prioritizing areas of improvement.
Focus group interview data
The interviews revealed four themes: 'Which steps to take in clinical reasoning', 'Asking challenging questions to enhance deeper understanding of knowledge', 'The variety in cases helps to enhance transfer to the real world', and 'Deeper understanding of reasoning through reflections'.
Which steps to take in clinical reasoning
Students acknowledged that the expert's initial demonstration helped them develop structured knowledge and gain an understanding of the clinical reasoning process.
I think it (Role modeling) helps to find a pattern in clinical reasoning as well. At first, it (the expert) explained to us. For example, are there possible lymph nodes? Yes or no. Then you need to do this and this…Then you can make kind of…pattern that differs for the diagnosis and the prognosis. So you can make kind of a diagram in your head. Which you can use later on. And your knowledge becomes more structured. (Focus Group 2, Student B)
Students also perceived that the integrated practice with Virtual Patients helped them anticipate the subsequent steps in clinical reasoning. They indicated that the patterns learned through practicing with Virtual Patients helped them understand the procedures they needed to follow to evaluate the patient.
I think now I know the steps which they (the procedural) followed to evaluate the patient, so first we can do this and then that. First, you determine the TNM (Tumour, Node, Metastasis) staging and do the endoscopy, then the TNM staging, and then you make the treatment plan. Now it's more clear how they do those steps. (Focus Group 1, Student A)

Moreover, students thought the pair work and dialogue helped them think and clarify with each other what steps they needed to take in clinical reasoning when they had different opinions.
Challenging their reasoning to enhance deeper understanding
Students reported how the course design differed from other blocks. According to the students, the VP practice was particularly beneficial in helping them integrate knowledge and make the knowledge their own. Students also indicated a preference for the structured approach of the VP session, where an initial demonstration by an expert sharing their clinical experience, followed by hands-on practice with VP cases, was perceived to enhance transfer to practice. This method, as described by the students, bridged the gap between theoretical knowledge and practical application. They thought this structure made the knowledge clear and helped them transfer their knowledge from theory to practice.
I think it's really valuable because you have already had an example about it (Demonstrating Case A). (Focus Group 1, Student A)
Students indicated that the diagnosis practice in the VP led them to realize how real-world scenarios differ. They said that while in the simulated environment it might seem easy to choose multiple diagnostic options, in the real world medical professionals must make more selective decisions due to limitations. They felt this experience taught them to think about prioritizing and decision-making in a realistic medical setting.
Yeah, maybe also there (in VP cases) were also a question about which imaging techniques you would use and then it was Echo or CT, MRI, there was also an option where you could listen to the lungs and some of the people also checked that one, but it isn't really necessary, so you think it only takes one minute, so why not, but in the real world there isn't always time to do everything, so it's also good
Discussion
The study demonstrated students' perception of learning and knowledge transfer when integrating VP cases with role-modeling introductions and peer dialogue around feedback, specifically in the context of personalized medicine in cancer treatment and care. The survey reflected a positive learning experience, and students reported that they gained a better understanding of the clinical reasoning process, as well as of which steps to take when dealing with a clinical case, through this specific course design with integrated VP cases. Qualitative data showed that the integration of VPs into the educational setting clearly shifted the students from being passive observers in a traditional lecture-based format to active participants in a simulated clinical environment. This shift is in line with previous research findings, which suggest that the use of VPs in clinical training actively engages learners and encourages the application of their knowledge [4].
The quantitative data revealed that students highly valued the role-modeling session, as indicated by the high average scores. The qualitative data explained that the role-modeling session enabled students not only to observe the clinical process being demonstrated but also to engage in active thinking by interacting with the expert. As discussed by Cruess, Cruess [15], role modeling not only consciously imparts knowledge but also unconsciously influences students' attitudes and behaviors, making the learning experience more relatable to the clinical environment. In this study, by sharing clinical reasoning and personal anecdotes during the class, the expert made the learning experience more relatable to the clinical environment that students will face in the future. This mirrors the role-modeling research by Morgenroth, Ryan [25], which emphasizes the importance of role models in shaping the self-concept and motivation of individuals. Moreover, the qualitative data showed that the demonstration by the expert serves as foundational prior knowledge that covers the knowledge gap and prepares students for the subsequent practice. This finding aligns with van Merrienboer's scaffolding concept, which emphasizes the importance of initial expert guidance in learning processes [16].
Following the role-modeling demonstration, students practiced on two VP cases in pairs and perceived that the VP practice enhanced their clinical reasoning skills and helped them understand the real-world clinical setting. The results showed that the variety and real-life complexity of cases in the VP sessions were perceived as essential for students' knowledge gain and transfer. The positive perception of various authentic cases aligns with previous research highlighting the importance of exposure to diverse and authentic scenarios in medical training [17,18]. Moreover, the hypothetical "what-if" scenarios further enhanced students' analytical abilities, preparing them for the multifaceted challenges they will encounter in real-world medical situations. Survey responses (Q10, mean = 4.37; Q13, mean = 4.05 in Table 1) indicated a consensus among students on the improvement this practice brought in understanding and applying knowledge. Our findings corroborate Jonassen and Hernandez-Serrano [26]'s emphasis on the importance of authentic learning environments for effective knowledge transfer.
After the practice, students discussed the feedback provided by the VP system. Despite its mixed quantitative reception, the peer dialogue on feedback was qualitatively found to be a vital component for promoting critical thinking, discussion, and reflection. The feedback from the VPs, both immediate and delayed, along with peer dialogue, emerged as a crucial element in students' learning process. In this study, students showed different preferences for receiving feedback: some preferred immediate feedback, whereas others preferred delayed feedback. How feedback was provided notably influenced peer interactions. Given that immediate feedback was dispensed upon submission of answers, peer dialogues started automatically when students noticed disparities or encountered obstacles. Such dialogues not only served to resolve ambiguities but also fostered collective reflection, enhancing comprehension of the subject. By vocalizing their thoughts and engaging in active discussions, students were able to solidify their understanding and uncover nuances they might otherwise have missed. This aligns with the importance of engaging in peer discussions on feedback as outlined in the theoretical background [20][21][22].
Regarding the integration of VP cases with this particular course design, students perceived that the expert demonstration, followed by VP practice and peer dialogue around feedback, fostered a comprehensive understanding, allowing them to integrate diverse clinical knowledge. The "watch-think-do-reflect" structure not only ensured better knowledge retention but also enhanced students' enthusiasm for the subject. Observing model demonstrations enabled students to assimilate clinical nuances and contemplate real-world applications. Subsequent hands-on practice with VP cases fortified their cognitive structures, honing their clinical reasoning. Ultimately, students perceived that reflective peer discussions on feedback solidified their learning, enhancing knowledge retention.
Limitations
This study employed a survey and focus group interviews that provided a comprehensive understanding of students' perceptions of learning. However, there are several limitations. The study had a small sample size and was conducted in the context of an elective course, which may limit the generalizability of the findings. Furthermore, the study was exploratory in nature and did not measure actual learning outcomes or long-term retention, which are critical aspects of educational impact.
Implications for future research
Future research should investigate whether integrating Virtual Patients (VPs) into classroom activities enhances student learning outcomes by incorporating learning assessments and involving larger and more diverse participant groups to validate our findings. Additionally, a deeper analysis of students' reasoning processes and interactions could provide insight into how and why knowledge gain and transfer are fostered or hindered. It is also important to understand the most beneficial moment for integrating VPs into educational settings to enhance transfer from a simulated to a real practice setting. This understanding could inform the development of more effective educational strategies and interventions.
Conclusion
The integration of Virtual Patients into classroom learning appears to offer a promising approach to enriching medical education. Key elements such as role modeling and various authentic cases, as well as peer dialogue on feedback, contribute positively to students' perception of learning. However, the approach to peer dialogue on feedback may need to be refined for more consistent benefits. Furthermore, studies with larger sample sizes and broader participant groups are essential to provide robust support for the efficacy of this educational approach and its components.
Fig. 1 Theoretical framework of how integrating VPs in class affects students' learning and transfer
Fig. 2 Instructional design for VP integration (overview of the intervention steps)
Fig. 3 VP case flow chart

Table 1 Survey results of students (n = 19)

General Experience of Virtual Patient Session (Q1-Q2)
Q1. My overall experience of the Virtual Patient session is positive.
Q2. The Virtual Patient session helps me improve my clinical reasoning skills.

Perception of learning from Role Modeling Session (Q3-Q8)
Q3. The demonstration by the experienced clinician at the start of the session enhanced my understanding of the intended learning outcomes of the Virtual Patient session.
Q4. The demonstration by the experienced clinician at the start of the session was useful in guiding me when going through the Virtual Patient cases myself.
Q5. The demonstration by the experienced clinician at the start of the session was useful for gaining a better understanding of how to reason when dealing with a similar patient case.
Q6. The reasoning-out-loud approach used by the clinician when going through the clinical case at the start enhanced my understanding of the reasoning behind the choices made when going through the clinical case.
Q7. The clinician demonstrated the specific clinical steps that are necessary to know when going through a Virtual Patient case.
Q8. The demonstration of the clinician at the start stimulated me to adopt a similar approach when working with the Virtual Patient cases myself.

Perception of learning from Practicing with Authentic Cases (Q9-Q13)
Q9. The provided two Virtual Patient cases fitted well with my current level of understanding.
Q10. Engaging with the two Virtual Patient cases enhanced my understanding of the subject matter.
Q11. Engaging with the two Virtual Patient cases enhanced my understanding of the complexities inherent in real-world clinical scenarios.
Q12. Discussing similarities and differences between the two Virtual Patient cases helped me to better understand variations in treatment approaches between different patients.
Q13. Engaging with these virtual patients will enable me to apply what I have learned to real clinical practice.

Perception of learning from Peer Dialogue around Feedback (Q14-Q20)
Q14. The peer dialogue on feedback enhanced my understanding of the subject matter.
Q15. The feedback provided by the Virtual Patient system is constructive.
Q16. The feedback provided by the Virtual Patient system enhanced meaningful discussion in our group.
Q17. The peer dialogue on feedback was effective in helping me understand the feedback provided by the Virtual Patient system.
Q18. The peer dialogue on feedback will enable me to take what I have learned into real practice.
Q19. The peer dialogue on feedback helped me generate specific strategies to address the feedback provided by the Virtual Patient.
Q20. The peer dialogue helped me prioritize the areas I still need to improve.
Furthermore, they emphasized that the questions asked by the expert challenged them to think, put the knowledge in their own words, and apply the knowledge with their own reasoning.

Yeah, she (the expert) gave examples and guided the reading of the tables for TNM (Tumor, Node, Metastasis) staging, and those were also in the Virtual Patient cases, but because she already used them once and explained how we have to use them, it became more clear to us, what these tables are for and how they are used. (Focus Group 1, Student B)

...yeah, this is the stomach or this is the heart, whatever, and now you need to look it up yourself and think about it yourself, what you see, so that really helps. (Focus Group 1, Student B)

Having cases that are closer to the real world, like the comorbidity we discussed, would make it more realistic. (For instance,) What if he also has obesity or diabetes? Those are the patients that we are going to see in the future. So it helps out a lot to have those different conditions as well. (Focus Group 2, Student B)

Students indicated that the sense of practical immersion was amplified by the "side information that you don't really need" (Focus Group 3, Student E) in the cases. They highlighted that the side information represented the interaction with real patients and made them think of clinical situations in real-world settings.

And then we do it ourselves. We had to find out what was wrong and go on. So I quite liked it. It gave me a deeper understanding. (Focus Group 3, Student A)

So for example, about age, it's more difficult to do a treatment above 70. (What if that patient) has things like smoking history and that kind of stuff.